Search Results

Search found 17317 results on 693 pages for 'memory upgrade'.


  • Ubuntu upgrade breaks Firefox

    - by Dennis
    I upgraded Ubuntu, but the upgrade broke Firefox and the Adobe Flash plugin, and corrupted the package database. I ran the suggested catalog repair, no change. I ran apt-get -f install, no change. I ran apt purge packageName, no change. How can I resolve this circular dependency? Here are some details:

        installArchives() failed:
        dpkg: dependency problems prevent configuration of adobe-flashplugin:
         firefox (12.0+build1-0ubuntu0.12.04.1) breaks adobe-flashplugin (<= 11.1.102.63-0precise1) and is installed.
         Version of adobe-flashplugin to be configured is 10.0.32.18-1intrepid1.
        dpkg: error processing adobe-flashplugin (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of adobe-flash-properties-gtk:
         adobe-flash-properties-gtk depends on adobe-flashplugin (= 11.2.202.235-0precise1); however:
         Version of adobe-flashplugin on system is 10.0.32.18-1intrepid1.
        dpkg: error processing adobe-flash-properties-gtk (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         adobe-flashplugin
         adobe-flash-properties-gtk
        Error in function:
        dpkg: dependency problems prevent configuration of adobe-flash-properties-gtk:
         adobe-flash-properties-gtk depends on adobe-flashplugin (= 11.2.202.235-0precise1); however:
         Version of adobe-flashplugin on system is 10.0.32.18-1intrepid1.
        dpkg: error processing adobe-flash-properties-gtk (--configure):
         dependency problems - leaving unconfigured

    Read the article

  • Game Object Factory: Fixing Memory Leaks

    - by Bunkai.Satori
    Dear all, this is going to be tough: I have created a game object factory that generates objects of my wish. However, I get memory leaks which I cannot fix. The leaks are produced by the return new Object(); in the bottom part of the code sample:

        static BaseObject * CreateObjectFunc()
        {
            return new Object();
        }

    How and where should the pointers be deleted? I wrote ReleaseClassTypes(), and although the factory works well, ReleaseClassTypes() does not fix the memory leaks:

        bool ReleaseClassTypes()
        {
            unsigned int nRecordCount = vFactories.size();
            for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++)
            {
                // if a factory is registered in this slot, delete the object it returns
                if (vFactories[nLoop] != NULL)
                    delete vFactories[nLoop]();
            }
            return true;
        }

    Before taking a look at the code below, let me explain: CGameObjectFactory creates pointers to functions that create a particular object type. The pointers are stored in the vFactories vector container. I chose this approach because I parse an object map file: I have object type IDs (integer values) which I need to translate into real objects. Because I have over 100 different object data types, I wanted to avoid traversing a very long switch statement. Therefore, to create an object, I call vFactories[nEnumObjectTypeID]() via CGameObjectFactory::create(). The position of the appropriate function in vFactories is identical to nObjectTypeID, so I can use indexing to access the function. So the question remains: how do I proceed with garbage collection and avoid the reported memory leaks?

        #ifndef GAMEOBJECTFACTORY_H_UNIPIXELS
        #define GAMEOBJECTFACTORY_H_UNIPIXELS

        //#include "MemoryManager.h"
        #include <vector>

        template <typename BaseObject>
        class CGameObjectFactory
        {
        public:
            // cleanup and release registered object data types
            bool ReleaseClassTypes()
            {
                unsigned int nRecordCount = vFactories.size();
                for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++)
                {
                    // if a factory is registered in this slot, delete the object it returns
                    if (vFactories[nLoop] != NULL)
                        delete vFactories[nLoop]();
                }
                return true;
            }

            // register new object data type
            template <typename Object>
            bool RegisterClassType(unsigned int nObjectIDParam)
            {
                if (vFactories.size() < nObjectIDParam)
                    vFactories.resize(nObjectIDParam);
                vFactories[nObjectIDParam] = &CreateObjectFunc<Object>;
                return true;
            }

            // create new object by calling the pointer to the appropriate factory function
            BaseObject* create(unsigned int nObjectIDParam) const
            {
                return vFactories[nObjectIDParam]();
            }

            // resize the vector containing the pointers to factory functions
            bool resize(unsigned int nSizeParam)
            {
                vFactories.resize(nSizeParam);
                return true;
            }

        private:
            //DECLARE_HEAP;
            template <typename Object>
            static BaseObject * CreateObjectFunc()
            {
                return new Object();
            }

            typedef BaseObject*(*factory)();
            std::vector<factory> vFactories;
        };

        //DEFINE_HEAP_T(CGameObjectFactory, "Game Object Factory");
        #endif // GAMEOBJECTFACTORY_H_UNIPIXELS
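
    One observation worth adding (a sketch of mine, not from the original post): the factory never sees the objects it hands out again, so ReleaseClassTypes() cannot free them -- delete vFactories[nLoop](); actually creates one more object and deletes that one. A common fix, assuming C++11 is available, is to keep the same design but hand ownership to the caller via std::unique_ptr, so no manual delete is needed anywhere:

        #include <memory>
        #include <vector>

        // Sketch only: same factory layout, but create() transfers ownership
        // to the caller, so each object is deleted automatically when its
        // unique_ptr goes out of scope and ReleaseClassTypes() is unnecessary.
        template <typename BaseObject>
        class CGameObjectFactorySketch
        {
        public:
            template <typename Object>
            bool RegisterClassType(unsigned int nObjectIDParam)
            {
                // note: registering ID n requires size n + 1
                if (vFactories.size() <= nObjectIDParam)
                    vFactories.resize(nObjectIDParam + 1);
                vFactories[nObjectIDParam] = &CreateObjectFunc<Object>;
                return true;
            }

            std::unique_ptr<BaseObject> create(unsigned int nObjectIDParam) const
            {
                return vFactories[nObjectIDParam]();
            }

        private:
            template <typename Object>
            static std::unique_ptr<BaseObject> CreateObjectFunc()
            {
                return std::unique_ptr<BaseObject>(new Object());
            }

            typedef std::unique_ptr<BaseObject> (*factory)();
            std::vector<factory> vFactories;
        };

    A caller would write auto obj = factory.create(nID); and the object is released when obj goes out of scope, which is exactly the bookkeeping the raw-pointer version was missing.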

    Read the article

  • Unable to mount an LVM Hard-drive after upgrade

    - by Bruce Staples
    I imagine this is a basic gotcha, but I can't see it. I have a system with two physical hard drives. The boot system (/dev/sda) was running 10.04 and the second drive (/dev/sdb) was just a mounted filesystem. I did a clean load of Ubuntu 12.04 overwriting /dev/sda (not an upgrade) and now cannot mount the second drive, so I do not know what to enter into fstab. I had expected to use:

        /dev/sdb /tera ext4 defaults 0 2

    But even manual mounting fails (I have also tried various "-t" options on the off chance!):

        sudo mount -t ext4 /dev/sdb1 /tera
        mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Output from disk queries indicates that it is a Linux LVM and still a healthy disk:

        sudo lshw -C disk
        *-disk:0
             description: ATA Disk
             product: WDC WD5000AACS-0
             vendor: Western Digital
             physical id: 0
             bus info: scsi@2:0.0.0
             logical name: /dev/sda
             version: 01.0
             serial: WD-WCASU1401098
             size: 465GiB (500GB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5 signature=00015a55
        *-disk:1
             description: ATA Disk
             product: WDC WD10EADS-00L
             vendor: Western Digital
             physical id: 1
             bus info: scsi@3:0.0.0
             logical name: /dev/sdb
             version: 01.0
             serial: WD-WCAU47836304
             size: 931GiB (1TB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5

        sudo fdisk -l
        Disk /dev/sda: 500.1 GB, 500106780160 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976771055 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00015a55

        Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *     2048  972580863  486289408  83  Linux
        /dev/sda2    972582910  976769023    2093057   5  Extended
        /dev/sda5    972582912  976769023    2093056  82  Linux swap / Solaris

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

        Device Boot  Start         End     Blocks   Id  System
        /dev/sdb1        1  1953525167  976762583+  8e  Linux LVM

    LVM doesn't appear to be an option for mount or fstab. ... and here's a SMART data screenshot from Disk Utility.

    Read the article

  • Java JRE 7 Automatic Upgrade and Demantra Requirements - Action Required

    - by user702295
    The following applies to ALL Demantra, EBS and Demantra Oracle integrations: all EBS desktop administrators must disable JRE Auto-Update for their end-users immediately. See this externally-published article:

        URGENT BULLETIN: Disable JRE Auto-Update for All E-Business Suite End-Users
        https://blogs.oracle.com/stevenChan/entry/bulletin_disable_jre_auto_update

    Why is this required? If you have Auto-Update enabled, your JRE 1.6 version will be updated to JRE 7. This may happen as early as July 3, 2012, and will definitely happen after Sept. 7, 2012, after the release of 1.6.0_35 (6u35). Oracle Forms is not compatible with JRE 7 yet, and JRE 7 has not been certified with Oracle E-Business Suite yet. Oracle E-Business Suite functionality based on Forms -- e.g. Financials -- will stop working if you upgrade to JRE 7.

    Related news: Java 1.6.0_33 is certified with Oracle E-Business Suite. See this externally-published article:

        Java JRE 1.6.0_33 Certified with Oracle E-Business Suite
        https://blogs.oracle.com/stevenChan/entry/jre_1_6_0_33

    Read the article

  • VirtualBox upgrade

    - by Husni
    I upgraded VirtualBox from 4.1 to 4.2. Whenever I want to load my Windows XP vdi, it gives me the following error:

        Kernel driver not installed (rc=-1908)
        The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or
        there is a permission problem with /dev/vboxdrv. Please reinstall the
        kernel module by executing '/etc/init.d/vboxdrv setup' as root. If it
        is available in your distribution, you should install the DKMS package
        first. This package keeps track of Linux kernel changes and recompiles
        the vboxdrv kernel module if necessary.

    I ran the suggested step to reinstall the kernel module, and the log file is as follows:

        Makefile:181: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again.  Stop.
        Makefile:181: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again.  Stop.
        Makefile:181: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again.  Stop.

    I am still unable to re-run my Windows XP vdi file. Does anyone have a clue?

    Read the article

  • SD card won't appear after upgrade to 13.10

    - by Pixel
    My SD card won't mount when I put it into my laptop; everything was fine before the upgrade. The information about the SD card appears just fine when I type sudo fdisk -l, it just says that the card doesn't have a valid partition table. When I type sudo blkid I get the following answer:

        /dev/sda1: UUID="CCA8-9030" TYPE="vfat"
        /dev/sda2: UUID="8a1d135b-384b-432d-b608-64dcf09ada24" TYPE="ext2"
        /dev/sda3: UUID="7s6PtU-kj2Z-N8XD-0mzl-840i-i3HG-enlbAf" TYPE="LVM2_member"
        /dev/sr0: LABEL="Bamboo CD" TYPE="iso9660"
        /dev/mapper/ubuntu--vg-root: UUID="c9b521c8-7c9f-493b-95c8-a7d79c465318" TYPE="ext4"
        /dev/mapper/ubuntu--vg-swap_1: UUID="7f155ab6-e1b9-485b-a2bc-443c0622284d" TYPE="swap"

    When I use lsusb:

        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 003: ID 13d3:5710 IMC Networks UVC VGA Webcam
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 002: ID 046d:c52f Logitech, Inc. Unifying Receiver
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    I've read the other threads and couldn't really find any good answers. My card reader was compatible with the previous version of Ubuntu, so technically it should still be compatible with the next version. Also, I can't erase what's on the card; it contains important data which I need... :/ If you need any more information just ask, I'll give it as soon as I can. Pixel.

    Read the article

  • dist-upgrade runs and completes but current kernel stays the same

    - by jaguare22
    I think that my system is staying at an older kernel version. It seems to update when I run dist-upgrade but the current kernel version doesn't change. Is it possible the system is set to install new kernel updates but only load an older version at start up?

        $ uname -a
        Linux HTPC 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:32:50 UTC 2012 i686 athlon i386 GNU/Linux

        $ dpkg --list | grep linux-image
        ii  linux-image-3.2.0-32-generic      3.2.0-32.51  Linux kernel image for version 3.2.0 on 32 bit x86 SMP
        ii  linux-image-3.2.0-32-generic-pae  3.2.0-32.51  Linux kernel image for version 3.2.0 on 32 bit x86 SMP
        ii  linux-image-3.2.0-32-virtual      3.2.0-32.51  Linux kernel image for version 3.2.0 on 32 bit x86 Virtual Guests

        $ dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d'
        linux-headers-3.2.0-36
        linux-headers-3.2.0-36-generic-pae
        linux-headers-3.2.0-37
        linux-headers-3.2.0-37-generic-pae
        linux-headers-3.2.0-38
        linux-headers-3.2.0-38-generic-pae
        linux-headers-3.2.0-39
        linux-headers-3.2.0-39-generic-pae
        linux-headers-3.2.0-40
        linux-headers-3.2.0-40-generic-pae
        linux-headers-3.2.0-41
        linux-headers-3.2.0-41-generic-pae
        linux-headers-3.2.0-43
        linux-headers-3.2.0-43-generic-pae
        linux-headers-3.2.0-44
        linux-headers-3.2.0-44-generic-pae
        linux-headers-3.2.0-45
        linux-headers-3.2.0-45-generic-pae
        linux-headers-3.2.0-48
        linux-headers-3.2.0-48-generic-pae

    Read the article

  • Desktop goes unusable after upgrade to 12.04

    - by Tom Nail
    I have multiple Ubuntu systems connected to a KVM, one of which I recently upgraded from Ubuntu 10 to 12.04. After the upgrade, this system's desktop works fine until it is allowed to go idle (i.e., I've switched to another system on the KVM and it locks its desktop). When I come back to it, the screen is garbled and paging across at a rate seemingly determined by the mouse. Although no pointer is visible, I can get the screen to stop paging (and just be garbled) by moving the mouse left and right: the paging will slow down and come to a stop, if I can align things carefully enough. This condition persists even when I try to go to a CLI-based login (e.g., Ctrl+Alt+F1) and will continue until I reboot the machine. Unfortunately, I'm not very familiar with the Unity desktop, so I don't know where to find things to troubleshoot. A restart of lightdm doesn't change anything, so I'm wondering if this might be more hardware based (although this machine hasn't given me any trouble previously in the same setup). The .xsession-errors file has some issues with compiz, nautilus and GConf listed, but I'm not sure those are actually germane to the issue. Thanks for any help, -=Tom

    Read the article

  • How do I upgrade from Ubuntu 9.10 to 12.10 on my Acer Aspire 3000?

    - by 770
    I had my Acer Aspire 3000 as a dual boot XP/Ubuntu 9.10 a couple of years ago. I recently blew the dust off it and wanted to upgrade to 7/Ubuntu 12.10, so I began by formatting the Ubuntu side of the partition and apparently damaged the MBR, as I could only get a black screen with the error message:

        GRUB loading.
        error: no such partition
        grub rescue>

    I then slaved the hdd to my Win7 desktop and formatted the entire drive, both sides of the partition, then reinstalled it in the Acer and tried to install Win7. Upon starting the Acer I got the same error message:

        GRUB loading.
        error: no such partition
        grub rescue>

    I then tried to reinstall Ubuntu 9.10, as I have an Ubuntu-produced installation CD. Same result. The next day I received a new battery I had ordered for the Acer. I plugged it and the power supply in and hit the power button, just to see if I could at least charge the battery, but to my surprise Ubuntu 9.10 began to install, so I let it, and it did. Now the hard drive shows 58 GB and 2.5 GB partitions, neither of which is formatted NTFS for/by Windows. I am guessing that the GRUB/MBR was repaired somehow by the Ubuntu reinstallation. My question, should you choose to accept it: how can I get to my goal of dual boot Win7/Ubuntu 12.10? I am a beginner and don't know much about Linux or the terminology. Thank you for your thoughts and help.

    Read the article

  • No Unity after ubuntu 12.10 upgrade

    - by Aivaras
    After I upgraded to Ubuntu 12.10 from 12.04, I got low graphics and no Unity -- just the mouse and the wallpaper. So I got into the terminal via Ctrl+Alt+T, launched Chrome and searched for a solution. As a result, I tried this:

        sudo sh amd-driver-installer-12.6-legacy-x86.x86_64.run

    It did not work. Then I tried this:

        sudo add-apt-repository ppa:makson96/fglrx
        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get install fglrx-legacy

    That did not work either. I removed the repository, rolled the xorg version back to 1.13, and tried this:

        sudo sh /usr/share/ati/fglrx-uninstall.sh
        sudo apt-get remove --purge fglrx fglrx_* fglrx-amdcccle* fglrx-dev* xorg-driver-fglrx
        sudo apt-get remove --purge xserver-xorg-video-ati xserver-xorg-video-radeon
        sudo apt-get install xserver-xorg-video-ati
        sudo apt-get install --reinstall libgl1-mesa-glx libgl1-mesa-dri xserver-xorg-core

    It did bring the screen resolution back, but still no Unity. Is there something else I could do? My graphics card is:

        lspci | grep VGA
        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV620 [Mobility Radeon HD 3400 Series]

    Read the article

  • Error code: /var/cache/apt/archives/linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb

    - by user286682
        $ sudo apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Calculating upgrade... Done
        The following packages will be upgraded:
          linux-image-3.8.0-19-generic
        1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        6 not fully installed or removed.
        Need to get 0 B/47.8 MB of archives.
        After this operation, 142 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 164064 files and directories currently installed.)
        Preparing to replace linux-image-3.8.0-19-generic 3.8.0-19.29
        (using .../linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb) ...
        Done.
        Unpacking replacement linux-image-3.8.0-19-generic ...
        dpkg: error processing /var/cache/apt/archives/linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb (--unpack):
         trying to overwrite '/lib/modules/3.8.0-19-generic/kernel/arch/x86/kvm/kvm-intel.ko',
         which is also in package linux-image-extra-3.8.0-19-generic 3.8.0-19.29
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.8.0-19-generic /boot/vmlinuz-3.8.0-19-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.8.0-19-generic /boot/vmlinuz-3.8.0-19-generic
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    How can I get this update to work?

    Read the article

  • Is it possible to share a C struct in shared memory between apps compiled with different compilers?

    - by Joseph Garvin
    I realize that in general the C and C++ standards give compiler writers a lot of latitude. But they do guarantee that POD types like C struct members have to be laid out in memory in the same order that they're listed in the struct's definition, and most compilers provide extensions letting you fix the alignment of members. So if you had a header that defined a struct and manually specified the alignment of its members, then compiled two apps with different compilers using that header, shouldn't one app be able to write an instance of the struct into shared memory and the other app be able to read it without errors? I am assuming, though, that the size of the types contained is consistent across the two compilers on the same architecture (it has to be the same platform already, since we're talking about shared memory). I realize that this is not always true for some types (e.g. long vs. long long in GCC and MSVC 64-bit), but nowadays there are uint16_t, uint32_t, etc. types, and float and double are specified by IEEE standards.
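
    A minimal sketch of how such a shared-memory ABI header often looks in practice (the struct name and fields here are made up for illustration, and #pragma pack is a vendor extension, though GCC, Clang and MSVC all accept this form). Fixed-width types pin the sizes, packing pins the offsets, and compile-time assertions make each compiler prove it agrees with the layout:

        #include <cstdint>
        #include <cstddef>

        // Pack to 1-byte boundaries so both compilers agree on every offset.
        #pragma pack(push, 1)
        struct SharedFrame {
            uint32_t sequence;   // 4 bytes at offset 0
            uint16_t width;      // 2 bytes at offset 4
            uint16_t height;     // 2 bytes at offset 6
            double   timestamp;  // 8 bytes at offset 8 (IEEE 754 on both sides)
        };
        #pragma pack(pop)

        // Both apps compile these checks; if either compiler disagrees on the
        // layout, the build fails instead of silently corrupting shared memory.
        static_assert(sizeof(SharedFrame) == 16, "unexpected struct size");
        static_assert(offsetof(SharedFrame, timestamp) == 8, "unexpected member offset");

    Each process maps the same region and casts the pointer to SharedFrame*; since both run on the same machine, byte order is not an issue, so the layout checks are the whole battle.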

    Read the article

  • Six in Six - SQL Server 2012 Webinars

    - by JustinL
    We're running six webinars over the next six months covering our experiences with SQL Server 2012 and customer deployments. I'm presenting the first, on upgrading to SQL Server 2012, next month; subsequent sessions will be delivered by colleagues:

    NOVEMBER: SQL Server 2012 Upgrade Approach and Considerations - Friday 23rd November 12:00-13:00
    Presents approaches for upgrade testing, managing risk and rollback. The session will include details on minimizing downtime and upgrading from SQL Server 2000, 2005, and 2008 including.... More details and register.

    DECEMBER: Delivering Mission Critical BI with SQL Server 2012 - Friday 14th December 12:00-13:00
    Information is the lifeblood of many organisations and the availability of timely, accurate information is critical to strategic decision making. This session covers the features and capabilities... More details and register.

    JANUARY: Architecting Highly Available Solutions with SQL Server 2012 - Friday 18th January 12:00-13:00
    Overview and comparison of the high availability features available within SQL Server 2012. The session considers business requirements for availability and recoverability and presents a number of alternative solution designs to meet... More details and register.

    FEBRUARY: Private Cloud Deployments with SQL Server 2012 - Friday 15th February 12:00-13:00
    Cloud based technology provides cost effective scale and flexibility. This session provides an overview of the benefits organisations can realise through private cloud... More details and register.

    MARCH: Visualising Data Patterns with SQL Server 2012 - Friday 22nd March 12:00-13:00
    This webinar demonstrates the ease of delivering business insight by exploring information and identifying trends through data visualisation. SQL Server 2012 provides new capability with enhanced performance and... More details and register.

    APRIL: Architecting Highly Available Solutions with SQL Server 2012 - Friday 26th April 12:00-13:00
    Customers are increasingly interested in leveraging the benefits of cloud based solutions to provide scalable and flexible infrastructure to host their applications. This session looks at common design patterns and workloads... More details and register.

    Justin Langford - Coeo Ltd
    SQL Server Consultants | SQL Server Remote DBA

    Read the article

  • Can't (re)install VLC (removed by update, again)

    - by David matthews
    I use VLC a lot, and when 2.0 came out Ubuntu did not update to that version; the repo still had the older version even months later. So I added the daily repo http://ppa.launchpad.net/videolan/stable-daily/ubuntu and that worked for a while. A few months later I received a 'distribution upgrade' and when I installed it, it removed VLC. When I tried to re-install, it gave me a bunch of unmet dependencies, so I disabled the source, ran apt-get update, and tried to install the older VLC; that did not work either. I eventually found a web page that helped me get it working, and I was also able to get the 'stable daily' build working too. But last night I got another 'distro upgrade' and it uninstalled VLC again. When I try to reinstall from daily I get:

        The following packages have unmet dependencies:
         vlc : Depends: fonts-freefont-ttf but it is not installable
               Depends: vlc-nox (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
               Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
               Recommends: vlc-plugin-notify (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
               Recommends: vlc-plugin-pulse (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    and from the default source:

        vlc : Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
              Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
              Recommends: vlc-plugin-notify (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
        vlc-plugin-pulse : Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
                           Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    (And yes, I ran apt-get update after turning off daily.) Any ideas? (Ubuntu 12.04 64-bit)

    Read the article

  • How can I validate if a 13.10 update was complete?

    - by James
    I attempted to upgrade Ubuntu 13.04 Server to 13.10 tonight via the standard Linux text console and it ran into some trouble. The machine boots and displays 13.10, but I am unsure exactly what or how much was successfully upgraded. Is there some command I can run which will validate that the system has all standard binaries upgraded to the 13.10 release? As for the issue: everything seemed to be going along OK until the screen displayed some kind of menu option regarding local edits to the samba config file. There was a prompt requesting the root password, or Ctrl-D to continue, but it would not take any input. From another terminal screen I tried killing the process displaying this samba message, and then some screen/SCREEN processes. The hard drive activity picked up for a while and then all the processes on that pty were gone. As I said, the reboot was OK, but I have no idea if everything was upgraded. The machine seems to be acting like normal, except that the upgrade killed my openvpn process, which I'll need to reload. Thanks

    Read the article

  • How do you cope with change in open source frameworks that you use for your projects?

    - by Amy
    It may be a personal quirk of mine, but I like keeping code in living projects up to date - including the libraries/frameworks that they use. Part of it is that I believe a web app is more secure if it is fully patched and up to date. Part of it is just a touch of obsessive compulsiveness on my part. Over the past seven months, we have done a major rewrite of our software. We dropped the Xaraya framework, which was slow and essentially dead as a product, and converted to Cake PHP. (We chose Cake because it gave us the chance to do a very rapid rewrite of our software, and enough of a performance boost over Xaraya to make it worth our while.) We implemented unit testing with SimpleTest, and followed all the file and database naming conventions, etc. Cake is now being updated to 2.0. And, there doesn't seem to be a viable migration path for an upgrade. The naming conventions for files have radically changed, and they dropped SimpleTest in favor of PHPUnit. This is pretty much going to force us to stay on the 1.3 branch because, unless there is some sort of conversion tool, it's not going to be possible to update Cake and then gradually improve our legacy code to reap the benefits of the new Cake framework. So, as usual, we are going to end up with an old framework in our Subversion repository and just patch it ourselves as needed. And this is what gets me every time. So many open source products don't make it easy enough to keep projects based on them up to date. When the devs start playing with a new shiny toy, a few critical patches will be done to older branches, but most of their focus is going to be on the new code base. How do you deal with radical changes in the open source projects that you use? And, if you are developing an open source product, do you keep upgrade paths in mind when you develop new versions?

    Read the article

  • Why does Eclipse keep crashing and dpkg erroring out?

    - by Sanju Sony Kurian
    I am getting the error "E: Sub-process /usr/bin/dpkg returned an error code (1)" while trying to upgrade my PC. I use Eclipse and it keeps on crashing... Please help. I attach the error:

        sanju@sanju-Dell-System-XPS-L502X:~$ upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          aisleriot gir1.2-rb-3.0 gir1.2-totem-1.0 gnome-disk-utility gnome-keyring
          libgck-1-0 libgcr-3-1 libtotem0 linux-generic linux-headers-generic
          linux-image-generic rhythmbox rhythmbox-data rhythmbox-mozilla
          rhythmbox-plugin-cdrecorder rhythmbox-plugin-magnatune
          rhythmbox-plugin-zeitgeist rhythmbox-plugins seahorse totem totem-common
          totem-mozilla totem-plugins
        0 upgraded, 0 newly installed, 0 to remove and 23 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Setting up grub-pc (1.99-21ubuntu3.1) ...
        /var/lib/dpkg/info/grub-pc.config: 35: /etc/default/grub: Syntax error: EOF in backquote substitution
        dpkg: error processing grub-pc (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         grub-pc
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • What are some of the best JavaScript memory detection tools?

    - by Philip Fourie
    Our team is faced with a slow but serious JavaScript memory leak. We have read up on the usual causes of memory leaks in JavaScript (e.g. closures and circular references). We tried to avoid those pitfalls in the code, but it is likely we still have unknown mistakes left in our code. I started searching for available tools, but would like input from people with actual experience with these tools. Some of the tools I found so far (though I have no idea how good or useful they would be for our problem):

        Sieve
        Drip
        JavaScript Memory Leak Detector

    Our search is not limited to free tools; that would be a bonus, but more importantly we want something that will get the job done. We do the following in our JavaScript code:

        AJAX calls to a .NET WCF back-end that sends back JSON data
        Manipulate the DOM
        Keep a fairly sized object model in the JavaScript to store current state

    Read the article

  • Ubuntu 13.04 to 13.10: Filesystem check or mount failed

    - by SamHuckaby
    I attempted to upgrade from Ubuntu 13.04 to 13.10 today, and mid-upgrade the system started flaking out and eventually locked up entirely. I was forced to restart the computer, and am now unable to get the computer to boot at all. When I boot currently, it takes me to the GRUB menu, and I can choose to boot normally, or boot an older version. I have tried several things, which I list below, but no matter what, when I try to finish booting into Ubuntu, I receive the following error:

        Filesystem check or mount failed.
        A maintenance shell will now be started.
        CONTROL-D will terminate this shell and continue booting after re-trying
        filesystems. Any further errors will be ignored
        root@ubuntu-computername:~#

    I have run fsck -f and everything appears correct; no errors are reported and it passes all 5 checks. If I run fdisk -l then I get the following information:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 4096 bytes / 4096 bytes
        Disk identifier: 0x00010824

        Device Boot   Start        End     Blocks  Id  System
        /dev/sda1   *  2048  608456703  304227328  83  Linux
        /dev/sda2  608458750  625141759    8341505   5  Extended
        Partition 2 does not start on physical sector boundary.
        /dev/sda5  608458752  625141759    8341504  82  Linux swap / Solaris

        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0fb4b7e8

        Device Boot  Start        End     Blocks  Id  System
        /dev/sdb1     8192  625139711  312565760   7  HPFS/NTFS/exFAT

    I am considering just installing a new OS on the other disk, which currently has nothing on it, and then attempting to scrape my data off the old disk (thankfully I didn't encrypt the files). Really my question is this: can I salvage this Ubuntu install, or should I give up and just reinstall?

    Read the article

  • Can the SHA-1 algorithm be computed on a stream, with a low memory footprint?

    - by raoulsson
    I am looking for a way to compute SHA-1 checksums of very large files without having to load them fully into memory at once. I don't know the details of the SHA-1 implementation and therefore would like to know if it is even possible to do that. If you know the SAX XML parser, then what I am looking for is something similar: computing the SHA-1 checksum by only loading a small part into memory at a time. All the examples I found, at least in Java, always depend on fully loading the file/byte array/string into memory. If you know of implementations (any language), please let me know!
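
    For what it's worth, SHA-1 processes its input in 512-bit blocks while carrying only a small fixed-size running state, so streaming is possible by design. A minimal sketch of mine (assuming OpenSSL's classic SHA1_Init/SHA1_Update/SHA1_Final API is available; the 64 KB chunk size is arbitrary) that hashes a file of any size with constant memory:

        #include <openssl/sha.h>
        #include <cstdio>

        // Computes the SHA-1 of a file while holding at most one 64 KB buffer
        // in memory. Returns true on success.
        bool sha1_of_file(const char* path, unsigned char digest[SHA_DIGEST_LENGTH])
        {
            FILE* f = fopen(path, "rb");
            if (!f) return false;

            SHA_CTX ctx;
            SHA1_Init(&ctx);                 // initialise the running state

            unsigned char buf[64 * 1024];
            size_t n;
            while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
                SHA1_Update(&ctx, buf, n);   // fold one chunk into the state

            bool ok = !ferror(f);
            fclose(f);
            SHA1_Final(digest, &ctx);        // apply final padding and length
            return ok;
        }

    The same pattern works in Java with java.security.MessageDigest: call update(buffer, 0, n) in a read loop and digest() at the end, so nothing ever requires the whole file in memory.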

    Read the article

  • How do I force my std::map to deallocate memory used?

    - by monkeyking
    I'm using a std::map, and I can't seem to free the memory back to the OS. It looks like:

        int main() {
            aMap m;
            while (keepGoing) {
                while (fillUpMap) {
                    // populate m
                }
                doWhatIwantWithMap(m);
                m.clear();
                // flush some buffered values into the map for the next iteration
                flushIntoMap(m);
            }
        }

    Each fillUpMap pass allocates around a gig, so I'm very much interested in getting this memory back to the system before it eats up all my memory. I've experienced the same with std::vector, but there I could force it to free by swapping with an empty std::vector. This doesn't work with map. When I use valgrind it says that all memory is freed, so it's not a problem with a leak; everything is cleaned up nicely after a run.
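
    For reference, the swap idiom does compile for std::map as well -- a minimal sketch of mine, assuming aMap is a typedef for a std::map specialisation:

        #include <map>

        typedef std::map<int, int> aMap;   // stand-in for the question's aMap

        void shrink(aMap& m)
        {
            aMap().swap(m);   // swap m with a temporary empty map; the
                              // temporary (now holding all the old nodes)
                              // is destroyed at the end of the statement
        }

    Unlike vector, though, map has no spare capacity to shed: clear() already destroys every node, so the swap frees nothing extra. Whether that freed memory is returned to the OS is up to the heap allocator; glibc's malloc, for example, keeps freed chunks around for reuse, and malloc_trim(0) (from <malloc.h>) can ask it to hand unused pages back.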

    Read the article

  • How do you make your Java application memory efficient?

    - by Boune
    How do you optimize the heap usage of an application that has a lot (millions) of long-lived objects? (A big cache, loading lots of records from a DB.)

        Use the right data type
        Avoid java.lang.String to represent other data types
        Avoid duplicated objects
        Use enums if the values are known in advance
        Use object pools
        String.intern() (good idea?)
        Load/keep only the objects you need

    I am looking for general programming or Java-specific answers. No funky compiler switches. Edit: optimize the memory representation of a POJO that can appear millions of times in the heap. Use cases:

        Load a huge CSV file into memory (converted into POJOs)
        Use Hibernate to retrieve millions of records from a database

    Summary of answers:

        Use the flyweight pattern
        Copy on write
        Instead of loading 10M objects with 3 properties each, is it more efficient to have 3 arrays (or another data structure) of size 10M? (Could be a pain to manipulate the data, but if you are really short on memory...)
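
    A minimal sketch of that last idea -- the structure-of-arrays layout -- shown in C++ purely to illustrate the memory shape (the question is about Java, where the analogue is three primitive arrays such as int[] in place of 10M boxed objects, each of which also pays a per-object header; the Record fields here are invented):

        #include <cstdint>
        #include <vector>

        // One object per record: simple to handle, but in Java each instance
        // also carries an object header and a reference to reach it.
        struct Record { int32_t id; int32_t price; int32_t quantity; };

        // Column layout: three flat arrays, one allocation each, no
        // per-record overhead, at the cost of clumsier record handling.
        struct RecordColumns {
            std::vector<int32_t> id;
            std::vector<int32_t> price;
            std::vector<int32_t> quantity;
        };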

    Read the article

  • Which is faster in memory, ints or chars? And file-mapping or chunk reading?

    - by Nick
    Okay, so I've written a (rather unoptimized) program before to encode images to JPEGs; however, now I am working with MPEG-2 transport streams and the H.264 encoded video within them. Before I dive into programming all of this, I am curious what the fastest way to deal with the actual file is. Currently I am file-mapping the .mts file into memory to work on it, although I am not sure if it would be faster to (for example) read 100 MB of the file into memory in chunks and deal with it that way. These files require a lot of bit-shifting and such to read flags, so I am wondering, when I reference some of the memory, whether it is faster to read 4 bytes at once as an integer or 1 byte as a character. I thought I read somewhere that x86 processors are optimized to a 4-byte granularity, but I'm not sure if this is true... Thanks!
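
    For the flag-reading part, one pattern sidesteps the alignment question entirely (a sketch of my own, not from the question): read single bytes and compose them with shifts, which is byte-order-safe and lets the compiler pick the load width. Shown here on the 188-byte MPEG-2 TS packet header, whose field layout is public:

        #include <cstdint>

        // Header fields of one 188-byte MPEG-2 TS packet. Byte-wise reads
        // avoid unaligned multi-byte loads and endianness issues.
        struct TsHeader {
            bool     transport_error;
            bool     payload_unit_start;
            uint16_t pid;            // 13 bits spanning bytes 1-2
            uint8_t  continuity;     // low 4 bits of byte 3
        };

        bool parseTsHeader(const uint8_t* p, TsHeader& h)
        {
            if (p[0] != 0x47)                       // sync byte
                return false;
            h.transport_error    = (p[1] & 0x80) != 0;
            h.payload_unit_start = (p[1] & 0x40) != 0;
            h.pid        = static_cast<uint16_t>(((p[1] & 0x1F) << 8) | p[2]);
            h.continuity = p[3] & 0x0F;
            return true;
        }

    Whether the buffer comes from a file mapping or from 100 MB chunked reads, the parsing code is identical, so the two I/O strategies can be benchmarked against each other without rewriting the parser.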

    Read the article

  • How much memory is reserved when I declare a string?

    - by Bhagya
    What exactly happens, in terms of memory, when I declare something like:

        char arr[4];

    How many bytes are reserved for arr? How is the NUL terminator accommodated when I strcpy a string of length 4 into arr? I was writing a socket program, and when I tried to put a NUL at arr[4] (i.e. the 5th memory location), I ended up overwriting the values of some other variables of the program (an overflow) and got into a big mess. Any descriptions of how compilers (gcc is what I used) manage memory?
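
    A short illustration of the arithmetic involved (my own sketch, not from the question): char arr[4] reserves exactly four bytes, no hidden extra for a terminator, so the longest C string it can hold is three characters plus the closing '\0'. Copying a length-4 string needs five bytes, and the fifth write (arr[4]) lands in whatever the compiler placed next:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char arr[4];                               /* exactly 4 bytes reserved */
            printf("sizeof arr = %zu\n", sizeof arr);  /* prints 4 */

            strcpy(arr, "abc");        /* ok: 3 chars + '\0' = 4 bytes */
            /* strcpy(arr, "abcd"); */ /* undefined behaviour: needs 5 bytes, so
                                          the '\0' is written to arr[4], i.e. one
                                          past the end -- the overflow described
                                          in the question */
            printf("%s\n", arr);
            return 0;
        }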

    Read the article

  • Which of FILE* or ifstream has better memory usage?

    - by Viet
    I need to read a fixed number of bytes at a time from files whose sizes are around 50 MB. To be more precise, I read a frame from YUV 4:2:0 CIF/QCIF files (~25 KB to ~100 KB per frame). Not a huge amount, but I don't want the whole file to be in memory. I'm using C++; in such a case, which of FILE* or ifstream has better (less/minimal) memory usage? Please kindly advise. Thanks! EDIT: I read a fixed number of bytes: 25 KB or 100 KB (depending on QCIF/CIF format). The reading is in binary mode and forward-only. No seeking needed. No writing needed, only reading. EDIT: If identifying the better of the two is hard, which one does not require loading the whole file into memory?
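
    Worth noting: neither API loads the whole file. Both FILE* and ifstream pull data through a small internal buffer (typically a few KB) into whatever buffer the caller supplies, so resident memory is essentially one frame either way. A sketch of the forward-only, binary frame loop with ifstream (the frame size and file name are placeholders, not from the question):

        #include <fstream>
        #include <vector>

        int main()
        {
            const std::size_t kFrameBytes = 25 * 1024;       // placeholder frame size
            std::vector<char> frame(kFrameBytes);            // the only large buffer

            std::ifstream in("clip.yuv", std::ios::binary);  // placeholder file name
            while (in.read(frame.data(), frame.size()))
            {
                // process one frame here; memory use stays at one frame
                // plus the stream's small internal buffer
            }
            return 0;
        }

    The FILE*/fread version of the same loop has the same memory profile, so the choice between the two comes down to taste and error-handling style rather than footprint.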

    Read the article
