Search Results

Search found 8997 results on 360 pages for 'apt cache'.

  • Why does Hibernate 2nd level cache only cache queries within a session?

    - by Synesso
    Using a named query in our application with ehcache as the provider, it seems that the query results are tied to the session within the cache. Any attempt to access the value from the cache a second time results in a LazyInitializationException. We have set lazy="true" on the synonyms set in the following mapping because this object is also used by another part of the system which does not require the reference, and we want to keep it lean.

        <class name="domain.ReferenceAdPoint" table="ad_point" mutable="false" lazy="false">
            <cache usage="read-only"/>
            <id name="code" type="long" column="ad_point_id">
                <generator class="assigned" />
            </id>
            <property name="name" column="ad_point_description" type="string"/>
            <set name="synonyms" table="ad_point_synonym" cascade="all-delete-orphan" lazy="true">
                <cache usage="read-only"/>
                <key column="ad_point_id" />
                <element type="string" column="synonym_description" />
            </set>
        </class>

        <query name="find.adpoints.by.heading">
            from ReferenceAdPoint adpoint
            left outer join fetch adpoint.synonyms
            where adpoint.adPointField.headingCode = ?
        </query>

    Here's a snippet from our hibernate.cfg.xml:

        <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</property>
        <property name="hibernate.cache.use_query_cache">true</property>

    It doesn't seem to make sense that the cache would be constrained to the session. Why are the cached queries not usable outside of the (relatively short-lived) sessions?
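
    For reference, Hibernate's query cache is opt-in per query and stores only entity identifiers; the entities themselves are re-hydrated through the second-level cache, and lazy state still needs an open session to initialize. A minimal sketch of exercising the named query with caching (the sessionFactory, headingCode, and getSynonyms() accessor are illustrative assumptions, not from the original post):

        import java.util.List;
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;

        public class AdPointQuery {
            @SuppressWarnings("unchecked")
            public static List<ReferenceAdPoint> findByHeading(SessionFactory sessionFactory, long headingCode) {
                Session session = sessionFactory.openSession();
                try {
                    List<ReferenceAdPoint> adPoints = session
                            .getNamedQuery("find.adpoints.by.heading")
                            .setCacheable(true)   // hibernate.cache.use_query_cache alone does nothing per query
                            .setParameter(0, headingCode)
                            .list();
                    // Touch lazy state while the session is still open; accessing it
                    // after session.close() is what raises LazyInitializationException.
                    for (ReferenceAdPoint adPoint : adPoints) {
                        adPoint.getSynonyms().size();
                    }
                    return adPoints;
                } finally {
                    session.close();
                }
            }
        }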

    Read the article

  • iOS - is it possible to cache CGContextDrawImage?

    - by woot586
    I used the timing profile tool to identify that 95% of the time is spent calling the function CGContextDrawImage. In my app a lot of duplicate images are repeatedly being chopped from a sprite map and drawn to the screen. I was wondering if it was possible to cache the output of CGContextDrawImage in an NSMutableDictionary, so that if the same sprite is requested again it can just be pulled from the cache rather than doing all the work of clipping and rendering it again. This is what I've got, but I have not been too successful:

    Definitions:

        if (cache == NULL)
            cache = [[NSMutableDictionary alloc] init];
        // Identifier based on the name of the sprite and location within the sprite.
        NSString* identifier = [NSString stringWithFormat:@"%@-%d", filename, frame];

    Adding to the cache:

        CGRect clippedRect = CGRectMake(0, 0, clipRect.size.width, clipRect.size.height);
        CGContextClipToRect(context, clippedRect);
        // Create a rect equivalent to the full size of the image, offset by the
        // X and Y we want to start the crop from, in order to cut off anything
        // before them.
        CGRect drawRect = CGRectMake(clipRect.origin.x * -1, clipRect.origin.y * -1, atlas.size.width, atlas.size.height);
        // Draw the image to our clipped context using our offset rect.
        CGContextDrawImage(context, drawRect, atlas.CGImage);
        [cache setValue:UIGraphicsGetImageFromCurrentImageContext() forKey:identifier];
        UIGraphicsEndImageContext();

    Rendering a cached sprite. There is probably a better way to render a CGImage, which is my ultimate caching goal, but at the moment I'm just looking to render the cached image out successfully; so far this has not worked:

        UIImage* cachedImage = [cache objectForKey:identifier];
        if (cachedImage) {
            NSLog(@"Cached %@", identifier);
            CGRect imageRect = CGRectMake(0, 0, cachedImage.size.width, cachedImage.size.height);
            if (NULL != UIGraphicsBeginImageContextWithOptions)
                UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0);
            else
                UIGraphicsBeginImageContext(imageRect.size);
            // Use draw for now just to see if the image renders out OK.
            CGContextDrawImage(context, imageRect, cachedImage.CGImage);
            UIGraphicsEndImageContext();
        }

    Read the article

  • Cached data accessed by reference?

    - by arthurdent510
    I am running into an odd problem, and this is the only thing I can think of. I'm storing a list in cache, and I am randomly losing items from my list as users use the site. I have a class that either returns the list from cache, or, if the cached copy is older than a certain age, goes to the database and refreshes the cache. So when I pull the data from cache, this is what it looks like:

        results = (List<Software>)cache["software"];

    I then return results, do some processing, filter for security, and eventually it winds up on the screen. Each Software record can have multiple resources attached to it, and depending on how the security check goes, users may see some, all, or none of the records. So the security check will remove some of those resources from the software record. My question is: when I return my results list, is it a reference directly to the cached object? So when I remove a resource from the software object, is it really being removed from the cache as well? If that is the case, is there any way to not return it as a reference? Thanks!
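
    For what it's worth, in-memory caches (including ASP.NET's) hand back a reference to the stored object, so filtering the returned list does mutate the cached data. A shallow copy of the list is not enough here, because the security check removes resources from the Software objects themselves; each record has to be copied too. A rough sketch of the idea in Java (the question is C#/ASP.NET, and the Software copy constructor below is an assumption, not from the original):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;

        public class SoftwareCacheReader {
            @SuppressWarnings("unchecked")
            static List<Software> getSoftwareCopy(Map<String, Object> cache) {
                List<Software> cached = (List<Software>) cache.get("software");
                List<Software> copy = new ArrayList<Software>(cached.size());
                for (Software s : cached) {
                    copy.add(new Software(s)); // deep-ish copy: a new list alone would
                                               // still share the Software instances
                                               // whose resources get trimmed
                }
                return copy;
            }
        }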

    Read the article

  • Can't access my accelerated hard disk from msdos after installing linux on ssd cache

    - by Chibueze Opata
    I mistakenly installed Ubuntu on my SSD (I forgot my PC actually came with one). When the installer detected a ~31 GiB disk that it wanted to install to, I was a bit confused, since I had freed up 30 GB on my primary disk for it, but I clicked continue. After installation I tried to boot back into Windows, and it brought up some Intel RAID disk utility saying I should disable acceleration on a disk because something couldn't be found. I cancelled it, but whatever I tried (recovery tools, setups, etc.) I just couldn't access the drive, which was apparently using the SSD as cache. Since then I've been stuck. I tried setting the 'raid' flag on the disk from GParted; still nothing. I tried the diskraid utility from the Windows recovery disk, and it said it couldn't detect any RAID. diskpart sees the partition but doesn't see the volume; when I remove the raid flag, it sees the volume as one of raw type, and I can't access anything. I can, however, mount the drive from a terminal in Ubuntu and access my files, but I don't have any backup media at the moment, so I can't simply do a factory re-install. Please, how do I go about solving the issue? Precisely, I would like to know how to boot into the drive again. Thanks!

    Read the article

  • Naming a class that decides to retrieve things from cache or a service + architecture evaluation

    - by Thomas Stock
    Hi, I'm a junior developer and I'm working on a pet project that I want to learn as much as possible from. I have the following scenario: there's a WCF service that I use to retrieve and update data, let's say Cars. It's called CarWCFService, has GetCars(), SaveCar(), and so on, and implements the interface ICarService. This isn't the actual WCF service but more like a wrapper around it. Upon retrieving data from the service, I want to store it in local memory as a cache. I have made a class for this called CarCacheService, which also implements ICarService (I will explain later why it does). I don't want client code calling these implementations directly. Instead, I want to create a third implementation of ICarService that tries to read from the CarCacheService before calling the CarWCFService, stores retrieved data in the CarCacheService, etc. Three questions:

    1. How do I name this third class? I was thinking of something as simple as CarService, but that does not really say what the service does.
    2. Is the naming of the other classes good? Would this naming and architecture be obvious to future programmers? This is my biggest concern.
    3. Does this architecture make sense?

    The reason I implement ICarService on the CarCacheService is mainly that it allows me to fake the WCF service while debugging. I can store dummy data in a CarCacheService instance and pass it to the CarService, together with an(other) empty CarCacheService. If I made CarCacheService and CarWCFService public, I could let client code decide whether to drop the caching and work directly on the CarWCFService.
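
    On the naming question, what the third class does has a conventional name: it is a caching decorator over the service. A minimal sketch of the shape (in Java for brevity; the original is C#/WCF, and the cache helper methods here are assumptions):

        import java.util.List;

        // Car, ICarService, and CarCacheService are the types from the post.
        public class CachingCarService implements ICarService {
            private final ICarService remote;      // e.g. the CarWCFService wrapper
            private final CarCacheService cache;

            public CachingCarService(ICarService remote, CarCacheService cache) {
                this.remote = remote;
                this.cache = cache;
            }

            @Override
            public List<Car> getCars() {
                List<Car> cars = cache.getCars();
                if (cars == null) {                // or: cache.isStale(), a hypothetical helper
                    cars = remote.getCars();
                    cache.store(cars);             // hypothetical helper
                }
                return cars;
            }
        }

    A name like CachingCarService (or CarServiceCacheDecorator) says more than a plain CarService about what the class adds on top of the wrapped service.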

    Read the article

  • Why doesn't Firefox cache my images and CSS

    - by Richard A
    I am using IIS7 and have already set up the following, but when I run Firefox it seems not to cache any of my images, even with "remember history" set.

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <staticContent>
              <clientCache cacheControlCustom="public" cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
            </staticContent>
          </system.webServer>
        </configuration>

    However, Firebug still indicates that Firefox is not caching images and CSS. Response headers:

        Cache-Control       public,max-age=604800
        Content-Type        text/css
        Content-Encoding    gzip
        Last-Modified       Mon, 27 Jun 2011 03:53:22 GMT
        Accept-Ranges       bytes
        Etag                "507968c27d34cc1:0"
        Vary                Accept-Encoding
        Server              Microsoft-IIS/7.5
        X-Powered-By        ASP.NET
        Date                Mon, 27 Jun 2011 13:06:41 GMT
        Content-Length      5067

    Request headers:

        Host                www.xx.com
        User-Agent          Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
        Accept              text/css,*/*;q=0.1
        Accept-Language     en-us,en;q=0.5
        Accept-Encoding     gzip, deflate
        Accept-Charset      ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive          115
        Connection          keep-alive
        Referer             http://www.xx.com/
        Cookie              __utma=62996397.135679654.1309106351.1309159743.1309164158.8; __utmz=62996397.1309106351.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=62996397

    Read the article

  • NFS server generating "invalid extent" on EXT4 system disk?

    - by Stephen Winnall
    I have a server running Xen 4.1 with Oneiric in the dom0 and in each of the 4 domUs. The system disks of the domUs are LVM2 volumes built on top of an mdadm RAID1. All the domU system disks are EXT4 and were created from snapshots of the same original template. Three of them run perfectly, but one (called s-ub-02) keeps being remounted read-only. A subsequent e2fsck results in a single "invalid extent" diagnosis:

        e2fsck 1.41.14 (22-Dec-2010)
        /dev/domu/s-ub-02-root contains a file system with errors, check forced.
        Pass 1: Checking inodes, blocks, and sizes
        Inode 525418 has an invalid extent
            (logical block 8959, invalid physical block 0, len 0)
        Clear<y>? yes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        /dev/domu/s-ub-02-root: 77757/655360 files (0.3% non-contiguous), 360592/2621440 blocks

    The console typically shows the following errors for the system disk (xvda2):

        [101980.903416] EXT4-fs error (device xvda2): ext4_ext_find_extent:732: inode #525418: comm apt-get: bad header/extent: invalid extent entries - magic f30a, entries 12, max 340(340), depth 0(0)
        [101980.903473] EXT4-fs (xvda2): Remounting filesystem read-only

    I have created new versions of the system disk, and the same thing always happens. This, and the fact that the disk is ultimately on a RAID1, leads me to rule out a hardware disk error. The only obvious distinguishing feature of this domU is the presence of nfs-kernel-server, so I suspect that. Its exports file looks like this:

        /exports/users          192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
        /exports/media/music    192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
        /exports/media/pictures 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
        /exports/opt            192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)

    /exports/users and /exports/opt are LVM2 volumes from the same volume group as the system disk; /exports/media is an EXT2 volume. (There is a separate issue where clients see /exports/media/pictures as a read-only volume, which I mention for completeness.) With the exception of the read-only problem, the NFS server appears to work correctly under light load for several hours before the "invalid extent" problem occurs. There are no helpful entries in /var/log. All of a sudden no more files are written, so you can see when the disk was remounted read-only, but there is no indication of what the cause might be. Can anyone help me with this problem? Steve

    Read the article

  • dpkg crashing while trying to install a package

    - by Jonathan
    While attempting to install a package via apt-get, the following happens. The first error I get is:

        E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

    And if I run that command, the box spins out of control and I get the following in /var/log/syslog:

        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398546] ------------[ cut here ]------------
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398552] WARNING: at /build/buildd/linux-3.0.0/arch/x86/xen/multicalls.c:182 xen_mc_flush+01c0()
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398561] Modules linked in: acpiphp
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398568] Pid: 31063, comm: java Tainted: G D W 3.0.0-14-virtual #23-Ubuntu
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398576] Call Trace:
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398580] [<c0648265>] ? printk+0x2d/0x2f
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398586] [<c0150462>] warn_slowpath_common+0x72/0xa0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398593] [<c0104883>] ? xen_mc_flush+0x1b3/0x1c0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398599] [<c0104883>] ? xen_mc_flush+0x1b3/0x1c0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398605] [<c01504b2>] warn_slowpath_null+0x22/0x30
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398611] [<c0104883>] xen_mc_flush+0x1b3/0x1c0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398617] [<c0104e7a>] ? xen_extend_mmu_update+0x4a/0x70
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398624] [<c0106565>] xen_set_pud_hyper+0x75/0x80
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398630] [<c01065b9>] xen_set_pud+0x49/0x60
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398636] [<c0132105>] pud_populate+0x45/0x60
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398642] [<c0208a24>] __pmd_alloc+0x74/0x90
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398648] [<c0208cb7>] handle_mm_fault+0x277/0x2c0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.398655] [<c065f45b>] do_page_fault+0x15b/0x4a0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.401923] [<c020ba24>] ? remove_vma+0x44/0x60
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.401923] [<c020d9b6>] ? sys_mmap_pgoff+0x106/0x1c0
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.401923] [<c065f300>] ? vmalloc_fault+0x190/0x190
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.401923] [<c065c79f>] error_code+0x67/0x6c
        Aug 29 20:21:08 ip-10-202-191-4 kernel: [20571563.401923] ---[ end trace 0b105e2a179ad013 ]---

    Read the article

  • Java - System design with distributed Queues and Locks

    - by sunny
    Looking for input to evaluate a design for a (Java) system which would have a distributed queue serving several (but not too many) nodes. These nodes would process objects present in the distributed queue and would occasionally require a distributed lock across the cluster on arbitrary (distributed) data structures. These data structures could potentially live in a distributed cache. Eliminating Terracotta (DSO), Hazelcast, and Akka, what could the alternative choices be? Currently considering ZooKeeper as a distributed locking mechanism. Since the recommendation is that a znode not exceed 1 MB in size, the understanding is that ZooKeeper should not be used as a distributed queue; see also Netflix Curator tech note 4. So should a distributed cache, say memcached or Redis, be used to emulate a distributed queue? That is, the distributed queue would be stored in the cache and locked cluster-wide via ZooKeeper. Are there potential pitfalls with this high-level approach? The objects don't need to be taken off the queue. Each object passes through a lifecycle which determines its removal from the queue. There would be about 10k+ objects in a queue at a given time, changing states, and any node could service one stage of an object's lifecycle. (Although not strictly necessary; i.e. one node could serve the entire lifecycle if that is more efficient.) Any suggestions/alternatives? Side note: new to ZooKeeper, Redis, etc.
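
    For what it's worth, the combination described here is expressible with stock libraries: Curator for the ZooKeeper lock, a Redis list for the queue. A rough sketch under those assumptions (connection strings, key names, and the batch size are made up; a real worker should use an acquire timeout):

        import java.util.List;
        import org.apache.curator.framework.CuratorFramework;
        import org.apache.curator.framework.CuratorFrameworkFactory;
        import org.apache.curator.framework.recipes.locks.InterProcessMutex;
        import org.apache.curator.retry.ExponentialBackoffRetry;
        import redis.clients.jedis.Jedis;

        public class QueueWorker {
            public static void main(String[] args) throws Exception {
                CuratorFramework zk = CuratorFrameworkFactory.newClient(
                        "zk1:2181,zk2:2181,zk3:2181",
                        new ExponentialBackoffRetry(1000, 3));
                zk.start();
                InterProcessMutex lock = new InterProcessMutex(zk, "/locks/work-queue");
                Jedis redis = new Jedis("redis-host");

                lock.acquire();
                try {
                    // Objects stay on the queue through their lifecycle,
                    // so read a window rather than popping.
                    List<String> batch = redis.lrange("work-queue", 0, 99);
                    for (String id : batch) {
                        redis.hset("object-state", id, "PROCESSING");
                    }
                } finally {
                    lock.release();
                    redis.close();
                    zk.close();
                }
            }
        }

    One pitfall to weigh: a single cluster-wide mutex serializes all the nodes, and ZooKeeper locks evaporate on session expiry, so the state updates inside the critical section should be written to tolerate a lock holder dying mid-batch.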

    Read the article

  • Appcache and jquery mobile on a CMS powered site?

    - by user793011
    Has anyone used the cache manifest to make a CMS site work offline? I've made a demo with static HTML files which seems to work fine, so I'm assuming it wouldn't be too hard to achieve the same thing with a CMS. The way you tell browsers that files have changed (and so need to be downloaded again) is by modifying the cache manifest file, usually via a comment, so that its bytes change. I'm not quite sure how to do this with a CMS, but maybe some sort of server cron job could run periodically? Personally I'm more interested in having a site that works offline than in achieving ideal performance, so if the file were modified every hour rather than when content actually changed, that would be fine for me. If anyone has used appcache with a CMS, has anyone done so with jQuery Mobile at the same time? What I'm after is a fully native feel to a site that's accessible offline; in other words, I want to mimic a native app. My static demo does this perfectly with jQuery Mobile, so again I would have thought this would be achievable with a CMS.
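
    For the versioning part, the manifest only needs any byte to change, so a CMS template (or an hourly cron job) can simply regenerate a comment line. A sketch of such a manifest (the paths are placeholders):

        CACHE MANIFEST
        # revision: 2012-05-01 13:00 -- rewrite this comment from the CMS's
        # last-modified timestamp, or from a periodic cron job

        CACHE:
        /css/jquery.mobile.min.css
        /js/jquery.mobile.min.js
        /index.html

        NETWORK:
        *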

    Read the article

  • Tomcat6 Manager Webapp is 404 on apt-get install on Ubuntu 10.10

    - by Noel
    http://localhost:8080/manager/html gives a 404 error on an apt-get install of tomcat6 (6.0.28 on JVM 1.6.0_20-b20 on 2.6.35-27-generic amd64). http://localhost:8080/host-manager/html works and lists one host name, localhost.

        $ cat /usr/share/tomcat6/conf/tomcat-users.xml
        <tomcat-users>
          <role rolename="admin"/>
          <role rolename="manager" />
          <user username="tomcatuser" password="Password1" roles="admin,manager"/>
        </tomcat-users>

        $ cat /usr/share/tomcat6/conf/Catalina/localhost/manager.xml
        <Context path="/manager" docBase="/usr/share/tomcat6-admin/manager" antiResourceLocking="false" privileged="true" />
        <role name="manager" />
        <user name="manager" password="Password1" roles="manager" />
        <user name="tomcatuser" password="Password1" roles="manager" />

    Those two files are the only documentation I've seen on how to set up the Manager webapp, and they seem to comply with the requirements.
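
    One detail worth checking (an observation, not a verified fix): <role> and <user> elements belong only in tomcat-users.xml, not in a Context descriptor, and a malformed descriptor can keep the manager app from deploying, which would produce exactly this 404. A cleaned-up manager.xml would contain only the Context element:

        <Context path="/manager"
                 docBase="/usr/share/tomcat6-admin/manager"
                 antiResourceLocking="false" privileged="true" />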

    Read the article

  • upgrade glibc on RHEL4 without breaking anything

    - by SpliFF
    I have a static version of wkhtmltopdf which requires glibc 2.4:

        wkhtmltopdf: /lib/tls/libc.so.6: version `GLIBC_2.4' not found (required by wkhtmltopdf)

    I have apt installed with the DAG repos. Other than that the server is pretty stock-standard, except for ColdFusion MX7. My question is: is it safe to just "apt update glibc"? Will the updated glibc clobber the old one, or will they co-exist? Should I "apt upgrade" the whole server? I'm pretty sure everything else (Apache 2, Postgres 8, etc.) will handle the upgrade, but ColdFusion concerns me due to its proprietary nature.

    Read the article

  • Nothing happens when trying to upgrade from Linux Mint 12 to 13

    - by Ares
    I am trying to upgrade from Linux Mint 12 to 13 using apt-get. After running the following commands, nothing seems to happen. I am fairly new to Linux; what am I doing wrong?

        user@olympus /etc/default $ sudo apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Calculating upgrade... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

        user@olympus /etc/default $ sudo apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
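
    Nothing happens because apt-get only upgrades within whatever release the sources.list points at, and with Mint 12 ("lisa") repositories there is simply nothing newer to fetch. Moving to Mint 13 ("maya") means repointing the sources first, roughly like this (a sketch assuming the stock repository layout; Mint itself recommended a fresh install between major versions):

        # /etc/apt/sources.list: replace 'lisa' with 'maya' and the Ubuntu
        # base 'oneiric' with 'precise', for example:
        deb http://packages.linuxmint.com/ maya main upstream import
        deb http://archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse

        sudo apt-get update
        sudo apt-get dist-upgrade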

    Read the article

  • apt-get install phpmyadmin on debian doesn't install /etc/phpmyadmin/apache.conf

    - by Christian Nikkanen
    I'm trying to install phpmyadmin on my webserver, using this guide: http://www.howtoforge.com/ubuntu_debian_lamp_server. I did that once before, and it worked like a dream, but I hated the looks of phpmyadmin (maybe the oldest layout ever) and decided to delete it. I didn't know that removal is done with apt-get remove phpmyadmin, so I just ran rm * in the phpmyadmin directory and thought that was that. Now, since I can't find the Debian build of phpmyadmin anywhere, I want to install it again, but when I add Include /etc/phpmyadmin/apache.conf to /etc/apache2/apache2.conf and restart Apache, it gives me this error:

        apache2: Syntax error on line 73 of /etc/apache2/apache2.conf: Could not open configuration file /etc/phpmyadmin/apache.conf: No such file or directory
        Action 'configtest' failed.
        The Apache error log may have more information.
        failed!

    No matter what I try, I always get this error, and phpmyadmin isn't there.
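
    A likely way out, assuming dpkg still lists phpmyadmin as installed: dpkg deliberately does not recreate configuration files that an administrator has deleted, so a plain reinstall leaves /etc/phpmyadmin missing. Purging first clears that record (a sketch, not a verified fix):

        sudo apt-get purge phpmyadmin      # forgets the deleted config files
        sudo apt-get install phpmyadmin    # regenerates /etc/phpmyadmin/apache.conf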

    Read the article

  • apticron, apt-get dist-upgrade and aptitude

    - by Kai
    I'm confused. On my Debian server I get the daily "updates available" message from apticron, and I normally just use aptitude to install the upgrades. Today I got a message which shows two upgrades, but they don't show up in aptitude. When I do an apt-get dist-upgrade they show up as "NEW" packages to be installed; aptitude dist-upgrade seems to ignore them. Can anyone explain to me why this is happening and how to get rid of the messages? (It doesn't seem like I really need the new packages.)

    Read the article

  • can't install sun jdk on ubuntu karmic 9.10

    - by pstanton
    I've just started using a new EC2 install of Ubuntu Karmic 9.10, and my first task was to install the JDK. When running apt-get install sun-java6-jdk I get the following:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package sun-java6-jdk is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package sun-java6-jdk has no installation candidate

    I've tried apt-get update as well as adding the following to /etc/apt/sources.list:

        deb http://us.archive.ubuntu.com/ubuntu/ karmic universe

    Any ideas, fellas?
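
    For what it's worth, on 9.10 the Sun JDK was published in the multiverse component rather than universe (it only moved to the partner repository in 10.04), so a line like the following, plus another apt-get update, may be what's missing (an assumption based on karmic's archive layout):

        deb http://us.archive.ubuntu.com/ubuntu/ karmic multiverse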

    Read the article

  • nginx 1.2.3 installed but remains at 1.1.19

    - by Nyxynyxx
    I've installed nginx 1.2.3 by adding a new PPA:

        sudo add-apt-repository ppa:nginx/stable
        sudo apt-get update
        sudo apt-get install nginx

    However, nginx -v still gives me 1.1.19. What happened? Output:

        The following packages will be upgraded:
          nginx
        1 upgraded, 0 newly installed, 0 to remove and 46 not upgraded.
        Need to get 61.8 kB of archives.
        After this operation, 3,072 B of additional disk space will be used.
        Get:1 http://ppa.launchpad.net/nginx/stable/ubuntu/ precise/main nginx all 1.2.3-0ubuntu0ppa3~precise [61.8 kB]
        Fetched 61.8 kB in 0s (89.7 kB/s)
        (Reading database ... 79914 files and directories currently installed.)
        Preparing to replace nginx 1.1.19-1 (using .../nginx_1.2.3-0ubuntu0ppa3~precise_all.deb) ...
        Unpacking replacement nginx ...
        Setting up nginx (1.2.3-0ubuntu0ppa3~precise) ...
        root@precise64:/var/www/apadment# nginx -v
        nginx version: nginx/1.1.19
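
    One detail in that output suggests a cause: the fetched package is "nginx all", i.e. the architecture-independent metapackage, so only the wrapper was upgraded while the package containing the actual binary stayed at 1.1.19. The nginx/stable PPA splits the server into flavours, so explicitly upgrading one of them may fix it (a guess based on that packaging split):

        sudo apt-get install nginx-full    # or nginx-light / nginx-extras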

    Read the article

  • Minimal Linux distribution with sshd and apt

    - by Sergey Mikhanov
    When I signed up for my Debian Linux VPS hosting and first logged on and ran ps, the only user process running was sshd. As far as I can see, this was a minimal Linux with only two things installed and configured: sshd and apt (plus all dependencies, of course). I want to build (or use an existing) similar Linux distro. Any advice on how to build (or pick) one? Googling "minimum linux" or "linux with sshd only" usually brings up Debian's netinstall, which is not what I want. Thanks in advance.
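
    One reproducible way to build exactly this is Debian's own debootstrap with the minbase variant, which installs little beyond apt and dpkg, plus an SSH server. A sketch (the release name and target path are placeholders):

        sudo debootstrap --variant=minbase --include=openssh-server \
            squeeze /srv/minimal http://deb.debian.org/debian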

    Read the article

  • Failed to download repository information Check your Internet connection

    - by Luca Brazza
    I need to check if I have updates for Ubuntu. I think it is 11.05. As you can see, this is what it says:

        Failed to download repository information
        Check your Internet connection.

    Details:

        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/main/binary-amd64/Packages
          Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/restricted/binary-amd64/Packages
          Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/main/binary-i386/Packages
          Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/restricted/binary-i386/Packages
          Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch http://ppa.launchpad.net/ferramroberto/java/ubuntu/dists/precise/main/source/Sources 404 Not Found
        W:Failed to fetch http://ppa.launchpad.net/ferramroberto/java/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found
        E:Some index files failed to download. They have been ignored, or old ones used instead.
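
    The two failure groups have separate causes: the cdrom: lines come from installing off the CD and fail once the disc is gone, and the ferramroberto/java PPA returns 404 because it carries no packages for precise. Commenting out the former and removing the latter is the usual cleanup (a sketch; the PPA's list file name may differ, and the PPA line might instead live in sources.list itself):

        sudo sed -i '/^deb cdrom:/s/^/#/' /etc/apt/sources.list
        sudo rm /etc/apt/sources.list.d/ferramroberto-java-*.list
        sudo apt-get update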

    Read the article

  • Rosegarden plugin list is empty

    - by Wes
    I've just installed Rosegarden through apt. I've also installed JACK and a low-latency kernel through a PPA (https://launchpad.net/~abogani/+archive/ppa). I've started the JACK server, and through the control panel I can see it wiring things through Rosegarden. I've also tried installing qsynth and dssi, both through apt. However, I can't see any plugins in the synth plugin list, so I'm unable to test whether this works. I've tried launching qsynth before Rosegarden, and I've tried a few other things, but I just can't see any plugins. Does anyone know how to get this to work? I'm using Ubuntu 11.04 or 11.10, I think.

        sudo apt-get install rosegarden -o APT::Install-Suggests=true
        sudo apt-get install synaptic
        sudo synaptic
        sudo apt-add-repository ppa:abogani/ppa
        sudo apt-get install linux-lowlatency
        sudo apt-get install dssi
        sudo apt-get install alsa-firmware-loaders alsa-tools alsa-tools-gui alsa-firmware
        sudo apt-get install alsa-firmware-loaders alsa-tools alsa-tools-gui
        sudo apt-get install blop caps cmt fil-plugins rev-plugins swh-plugins tap-plugins
        sudo apt-get install blepvco mcp-plugins omins

    Read the article

  • Help finding missing mumble-server dependencies

    - by Otoris
    I'm trying to install the mumble-server package with apt-get install mumble-server on Ubuntu 11.10 Server Edition on Rackspace Cloud. The problem is that it can't find dependencies that it should be able to find, since they exist on launchpad.net. The dependencies message:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         mumble-server : Depends: libavahi-compat-libdnssd1 (>= 0.6.16) but it is not installable
                         Depends: libprotobuf7 but it is not installable
                         Depends: libqt4-dbus (>= 4:4.5.3) but it is not installable
                         Depends: libqt4-network (>= 4:4.5.3) but it is not installable
                         Depends: libqt4-sql (>= 4:4.5.3) but it is not installable
                         Depends: libqt4-xml (>= 4:4.5.3) but it is not installable
                         Depends: libqtcore4 (>= 4:4.7.0~beta1) but it is not installable
                         Depends: libqt4-sql-sqlite but it is not installable
        E: Unable to correct problems, you have held broken packages.

    Any ideas on whether I might be missing sources? I've been googling around and haven't found anyone else in this situation, or anyone else unable to install the aforementioned packages. Thanks for your time! sources.list:

        deb http://mirror.rackspace.com/ubuntu/ oneiric restricted
        deb-src http://mirror.rackspace.com/ubuntu/ oneiric restricted

        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://mirror.rackspace.com/ubuntu/ oneiric-updates restricted
        deb-src http://mirror.rackspace.com/ubuntu/ oneiric-updates restricted

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://mirror.rackspace.com/ubuntu/ oneiric universe
        deb-src http://mirror.rackspace.com/ubuntu/ oneiric universe
        deb http://mirror.rackspace.com/ubuntu/ oneiric-updates universe
        deb-src http://mirror.rackspace.com/ubuntu/ oneiric-updates universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://mirror.rackspace.com/ubuntu/ oneiric multiverse
        deb-src http://mirror.rackspace.com/ubuntu/ oneiric multiverse
        deb http://mirror.rackspace.com/ubuntu/ oneiric-updates multiverse
        deb-src http://mirror.rackspace.com/ubuntu/ oneiric-updates multiverse

        ## Uncomment the following two lines to add software from the 'backports'
        ## repository.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        # deb http://us.archive.ubuntu.com/ubuntu/ oneiric-backports restricted universe multiverse
        # deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-backports restricted universe multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository. This software is not part of Ubuntu, but is
        ## offered by Canonical and the respective vendors as a service to Ubuntu
        ## users.
        # deb http://archive.canonical.com/ubuntu oneiric partner
        # deb-src http://archive.canonical.com/ubuntu oneiric partner

        deb http://security.ubuntu.com/ubuntu oneiric-security restricted
        deb-src http://security.ubuntu.com/ubuntu oneiric-security restricted
        deb http://security.ubuntu.com/ubuntu oneiric-security universe
        deb-src http://security.ubuntu.com/ubuntu oneiric-security universe
        deb http://security.ubuntu.com/ubuntu oneiric-security multiverse
        deb-src http://security.ubuntu.com/ubuntu oneiric-security multiverse

        # Cool Kid Webmin/Usermin Here Brah
        deb http://download.webmin.com/download/repository sarge contrib
        deb http://webmin.mirror.somersettechsolutions.co.uk/repository sarge contrib
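
    One thing stands out in this sources.list (an observation, not a guaranteed fix): none of the Ubuntu lines include the main component, and every unmet dependency above (the Qt libraries, libprotobuf7, libavahi-compat-libdnssd1) is published in main, which would make them "not installable" exactly as shown. Adding main alongside restricted and updating may be all that's needed:

        deb http://mirror.rackspace.com/ubuntu/ oneiric main restricted
        deb http://mirror.rackspace.com/ubuntu/ oneiric-updates main restricted
        deb http://security.ubuntu.com/ubuntu oneiric-security main restricted

        sudo apt-get update
        sudo apt-get install mumble-server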

    Read the article

  • Upgrading 10.04LTS -> 10.10 using custom sources

    - by Boatzart
    I'm trying to upgrade to 10.10 from 10.04 LTS using a custom sources.list file that points to an unofficial mirror.* The mirror does have maverick, but I get the following output when upgrading:

        boatzart@somecomputer: > sudo do-release-upgrade
        Checking for a new ubuntu release
        Done Upgrade tool signature
        Done Upgrade tool
        Done downloading
        extracting 'maverick.tar.gz'
        authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg'
        tar: Removing leading `/' from member names
        Reading cache
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Updating repository information
        WARNING: Failed to read mirror file
        No valid mirror found

        While scanning your repository information no mirror entry for the
        upgrade was found. This can happen if you run a internal mirror or
        if the mirror information is out of date.

        Do you want to rewrite your 'sources.list' file anyway? If you choose
        'Yes' here it will update all 'lucid' to 'maverick' entries.
        If you select 'No' the upgrade will cancel.

        Continue [yN] y

        WARNING: Failed to read mirror file
        96% [Working]
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade

        An unresolvable problem occurred while calculating the upgrade:
        The package 'update-manager-kde' is marked for removal but it is
        in the removal blacklist.

        This can be caused by:
         * Upgrading to a pre-release version of Ubuntu
         * Running the current pre-release version of Ubuntu
         * Unofficial software packages not provided by Ubuntu

        If none of this applies, then please report this bug against the
        'update-manager' package and include the files in
        /var/log/dist-upgrade/ in the bug report.

        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done

    Here is the relevant section from /var/log/dist-upgrade/main.log:

        2010-11-18 14:05:52,117 DEBUG The package 'update-manager-kde' is marked for removal but it's in the removal blacklist
        2010-11-18 14:05:52,136 ERROR Dist-upgrade failed: 'The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.'
        2010-11-18 14:05:52,136 DEBUG abort called

    *I'm located inside USC, and for some crazy reason any sustained download to anywhere outside the university is throttled down to 5 kbps inside my lab. Because of this I need to use the following sources.list:

        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-security main restricted universe multiverse

    I've tried adding four more entries to the sources.list with s/lucid/maverick/, but that didn't help. Does anyone know how to fix this? Thanks!
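
    A commonly reported workaround for this exact message (an assumption, not verified against this mirror setup): update-manager-kde was dropped after lucid, so the upgrade calculator wants to remove it but is blocked by its own blacklist. Removing the package by hand and re-running the upgrade usually gets past the abort:

        sudo apt-get remove update-manager-kde
        sudo do-release-upgrade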

    Read the article

  • LRU cache design

    - by user297850
    A Least Recently Used (LRU) cache discards the least recently used items first. How do you design and implement such a cache class? The design requirements are as follows:

    1) Find the item as fast as we can.
    2) Once the cache misses and the cache is full, replace the least recently used item as fast as possible.

    How would you analyze and implement this in terms of design patterns and algorithm design?
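
    The standard answer combines a hash map (O(1) lookup) with a doubly linked list ordered by recency (O(1) move-to-front and eviction). In Java the two are already fused in LinkedHashMap, so a minimal sketch looks like this:

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class LruCache<K, V> extends LinkedHashMap<K, V> {
            private final int capacity;

            public LruCache(int capacity) {
                super(16, 0.75f, true);   // true = iterate in access order, not insertion order
                this.capacity = capacity;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity; // evict the LRU entry after each insert
            }
        }

    Hand-rolling the same thing (hash map from key to list node, plus an intrusive doubly linked list) is the usual follow-up exercise; the design-pattern answer is simply composition, with eviction as a policy hook on insert.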

    Read the article
