Search Results

Search found 8997 results on 360 pages for 'apt cache'.


  • How do I force apt-get to install a package that will not install due to a bug introduced by Ubuntu 11.10?

    - by Hemm
    Ubuntu 11.10 separated python-profiler out of the Python standard library due to licensing philosophies. (According to what I could Google; correct me if I'm wrong.) This has been an active bug for 11.10 since October. I have Python 2.7.2 installed, so the dependency errors are wrong. apt-get check does not resolve the problem. What is the best way to resolve this? Thank you.

        sudo apt-get install python-profiler
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming. The following information may help
        to resolve the situation:

        The following packages have unmet dependencies:
         python-profiler : Depends: python (>= 2.5) but it is not going to be installed
                           Depends: python (< 2.8) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.
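    A minimal diagnostic sketch, using only stock apt tooling; the version string passed to apt-get is a placeholder to be read off the apt-cache output:

        # See which python candidate apt believes is installable, and from where
        apt-cache policy python python-profiler
        # Check whether any packages are on hold
        dpkg --get-selections | grep hold
        # If the candidate version looks sane, try installing both together,
        # pinning python to the candidate version apt-cache reported:
        sudo apt-get install python-profiler python=<candidate-version>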


  • sudo apt-get update error

    - by Kapil Anand
    I got the following error:

        Reading package lists... Done
        W: GPG error: http://extras.ubuntu.com oneiric Release: Unknown error executing gpgv
        W: GPG error: http://archive.ubuntu.com oneiric-updates Release: Unknown error executing gpgv

    After googling, I found and followed this advice:

        sudo -i
        apt-get clean
        cd /var/lib/apt
        mv lists lists.old
        mkdir -p lists/partial
        apt-get clean
        apt-get update

    but it caused an error of its own:

        kapil@ubuntu:/var/lib/apt$ sudo mv lists lists.old
        mv: cannot move `lists' to `lists.old/lists': Directory not empty

    and running the update command once again gives the same error. Please help me: what should I do?
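    A sketch of one way past the mv failure, assuming a half-finished earlier attempt left a stale lists.old behind (double-check the paths before deleting anything):

        # mv nests lists inside lists.old when lists.old already exists,
        # so clear out the stale copy from the earlier attempt first:
        sudo rm -rf /var/lib/apt/lists.old
        sudo mv /var/lib/apt/lists /var/lib/apt/lists.old
        sudo mkdir -p /var/lib/apt/lists/partial
        sudo apt-get clean
        sudo apt-get update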


  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP's streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that.

    For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days, as shown in the diagram below.

    Application Input

    Stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache of all credit card transactions. The cache is configured in the EPN as shown below:

        <wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
        <wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent"
                     key-properties="cardID, transactionTime"
                     caching-system="CohCacheSystem">
        </wlevs:cache>

    Application Output

    The output that must be produced by the application is a fraud warning event. This event is configured in the spring file as shown below. The source for the cardHistory property can be seen here.

        <wlevs:event-type type-name="FraudWarningEvent">
            <wlevs:properties type="tuple">
                <wlevs:property name="cardID" type="CHAR"/>
                <wlevs:property name="transactionTime" type="BIGINT"/>
                <wlevs:property name="transactionAmount" type="DOUBLE"/>
                <wlevs:property name="cardHistory" type="OBJECT"/>
            </wlevs:properties>
        </wlevs:event-type>

    Cache Data Aggregation using Java Cartridge

    In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a Java cartridge method. This method uses Coherence's query API on the credit card transactions cache to get the required information. Therefore, the Java cartridge method requires a reference to the cache. This may be set up by configuring it in the spring context file as shown below:

        <bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
            <property name="cache" ref="CCTransactionsCache"/>
        </bean>

    This is used by the Java class to set a static property:

        public void setCache(Map cache)
        {
            s_cache = (NamedCache) cache;
        }

    The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistoryData object is calculated in a similar manner. The complete source of this class can be found here. To find out more about using Coherence's API to query a cache, please refer to the Coherence Developer's Guide.
        public static CardHistoryData execute(String cardID)
        {
            …
            Filter filter = QueryHelper.createFilter(
                "cardID = :cardID and transactionTime >= :transactionTime", map);
            CardHistoryData history = new CardHistoryData();
            Double sum = (Double) s_cache.aggregate(filter,
                new DoubleSum("getTransactionAmount"));
            history.setTotalAmount(sum);
            …
            return history;
        }

    The Java cartridge method is used from CQL as seen below:

        select cardID, transactionTime, transactionAmount,
               CCTransactionsAggregator.execute(cardID) as cardHistory
        from inputChannel
        where transactionAmount > 1000

    This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.


  • Why use apt-get upgrade instead of apt-get dist-upgrade?

    - by jimirings
    I usually use apt-get update && apt-get upgrade to run my updates and upgrades instead of the GUI, because it seems to run more quickly. However, I've noticed lately that I often get a message that one of my upgrades has been held back. I then usually run dist-upgrade to push it through, and it works fine. As far as I can tell after reading this question and its answers, dist-upgrade does all the same things and then some. So, my question is: why use apt-get upgrade at all? Why not use apt-get dist-upgrade all the time? Why does apt-get upgrade even exist?
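    A short sketch of the behavioural difference, using only standard apt-get options:

        # 'upgrade' only upgrades packages that are already installed; it will
        # never install new packages or remove existing ones, so any upgrade
        # that needs a new dependency is "kept back":
        sudo apt-get update && sudo apt-get upgrade
        # 'dist-upgrade' may add or remove packages to satisfy changed
        # dependencies, which is why it clears the kept-back list:
        sudo apt-get dist-upgrade
        # -s simulates, showing what dist-upgrade would change without doing it:
        sudo apt-get -s dist-upgrade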


  • The clock hands of the buffer cache

    - by Tony Davis
    Over a leisurely beer at our local pub, the Waggon and Horses, Phil Factor was holding forth on the esoteric, but strangely poetic, language of SQL Server internals, riddled as it is with 'sleeping threads', 'stolen pages', and 'memory sweeps'. Generally, I remain immune to any twinge of interest in the bowels of SQL Server, reasoning that there are certain things that I don't and shouldn't need to know about SQL Server in order to use it successfully. Suddenly, however, my attention was grabbed by his mention of the 'clock hands of the buffer cache'. Back at the office, I succumbed to a moment of weakness and opened up Google. He wasn't lying.

    SQL Server maintains various memory buffers, or caches. For example, the plan cache stores recently-used execution plans. The data cache in the buffer pool stores frequently-used pages, ensuring that they may be read from memory rather than via expensive physical disk reads. These memory stores are classic LRU (Least Recently Used) buffers, meaning that, for example, the least recently used pages in the data cache become candidates for eviction (after first writing the page to disk if it has changed since being read into the cache). SQL Server clearly needs some mechanism to track which pages are candidates for being cleared out of a given cache when it is getting too large, and it is this mechanism that is somewhat more labyrinthine than I previously imagined.

    Each page that is loaded into the cache has a counter, a miniature "wristwatch", which records how recently it was last used. This wristwatch gets reset to "present time" each time a page gets updated, and then, as the page 'ages', it ticks down towards zero, at which point the page can be removed from the cache. But what if SQL Server is suffering memory pressure and urgently needs to free up more space than is represented by zero-counter pages (or plans, etc.)? This is where our 'clock hands' come in.

    Each cache has associated with it a "memory clock". Like most conventional clocks, it has two hands: one "external" clock hand, and one "internal". Slava Oks is very particular in stressing that these names have "nothing to do with the equivalent types of memory pressure". He's right, but the names do, in that peculiar Microsoft tradition, seem designed to confuse. The hands do relate to memory pressure; the cache "eviction policy" is determined by both global and local memory pressures on SQL Server. The "external" clock hand responds to global memory pressure, in other words pressure on SQL Server to reduce the size of its memory caches as a whole. Global memory pressure (which, just to confuse things further, seems sometimes to be referred to as physical memory pressure) can be either external (from the OS) or internal (from the process itself, e.g. due to limited virtual address space). The internal clock hand responds to local memory pressure, in other words the need to reduce the size of a single, specific cache. So, for example, if a particular cache, such as the plan cache, reaches a defined "pressure limit", the internal clock hand will start to turn and a memory sweep will be performed on that cache in order to remove plans from the memory store.

    During each sweep of the hands, the usage counter on the cache entry is reduced in value, effectively moving its "last used" time further into the past (in effect, setting back the wristwatch on the page a couple of hours) and increasing the likelihood that it can be aged out of the cache.
    There is even a special Dynamic Management View, sys.dm_os_memory_cache_clock_hands, which allows you to interrogate the passage of the clock hands. Frequently turning hands equate to excessive memory pressure, which will lead to performance problems.

    Two hours later, I emerged from this rather frightening journey into the heart of SQL Server memory management, fascinated but still unsure if I'd learned anything that I could put to any practical use. However, I certainly began to agree that there is something almost Tolkienian in the language of the deep recesses of SQL Server. Cheers, Tony.
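    A hedged sketch of peeking at that DMV from a command line; the sqlcmd flags are standard, and the column list assumes the documented view definition:

        # Rising rounds_count / removed_all_rounds_count values indicate the
        # hands are sweeping frequently, i.e. sustained memory pressure:
        sqlcmd -S localhost -E -Q "SELECT name, type, clock_hand, clock_status, rounds_count, removed_all_rounds_count FROM sys.dm_os_memory_cache_clock_hands"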


  • PSQL 64-bit driver error

    - by Alex Holsgrove
    I have an Ubuntu 12.04 64-bit server set up under Hyper-V. I have installed the Pervasive 64-bit SQL drivers so that a stock-updater script can run daily (it updates an external MySQL database from another local server running Exchequer software on a PSQL database). These drivers seem to conflict, as I found out when trying to run any apt-get commands:

        apt-get update
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by apt-get)
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by apt-get)
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by apt-get)
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12)
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12)
        apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12)

    Any help would be great.
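    A diagnostic sketch, assuming the usual cause: the Pervasive install directory is on the dynamic-loader path, so its bundled (older) libstdc++ shadows the system copy that apt-get needs:

        # Confirm which libstdc++ apt-get actually resolves:
        ldd /usr/bin/apt-get | grep libstdc++
        # If LD_LIBRARY_PATH is the culprit, clearing it for one command confirms it:
        LD_LIBRARY_PATH= apt-get --version
        # If the path was registered system-wide instead, look for an entry:
        grep -r psql /etc/ld.so.conf /etc/ld.so.conf.d/
        # ...and after removing or reordering it, rebuild the loader cache:
        sudo ldconfig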


  • Debian LTS: how to keep packages up to date with the latest security fixes using apt

    - by Quentin
    What's the best way to keep packages up to date (i.e. with the latest security fixes) without worrying about major version updates? For instance, apache2 for squeeze is 2.2.16 (https://packages.debian.org/source/squeeze/apache2). However, the latest apache2 version in the 2.2.x branch is 2.2.27. The testing repository can't be used, since it carries the 2.4.x versions and I'd like to stick with 2.2.x (to avoid migration issues). How would you handle this situation, and how can I update to 2.2.27?
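    A sketch, with the caveat that Debian backports security fixes into the shipped version rather than tracking new upstream releases, so a fully patched squeeze apache2 stays at a 2.2.16-6+squeezeNN revision and never becomes 2.2.27. Staying current therefore means enabling the security and LTS sources (mirror hostnames may vary):

        # /etc/apt/sources.list entries for squeeze security support:
        deb http://security.debian.org/ squeeze/updates main
        deb http://http.debian.net/debian squeeze-lts main
        # then pull in just the security updates:
        apt-get update && apt-get upgrade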


  • How to reduce 3rd party repository priority in apt

    - by carlosz
    I'm using Debian Testing together with the Deb Multimedia (previously Debian Multimedia) repository for testing. I want to reduce the priority of the deb-multimedia packages so that only certain packages are installed from it. I've tried:

        Package: *
        Pin: release o="Unofficial Multimedia Packages"
        Pin-Priority: 10

    and

        Package: *
        Pin: origin "mirror.home-dn.net"
        Pin-Priority: 10

    but neither works; the packages still have the default priority (500). The Release file from the repository looks like this:

        Archive: testing
        Version: None
        Component: main
        Origin: Unofficial Multimedia Packages
        Label: Unofficial Multimedia Packages
        Architecture: amd64

    What am I doing wrong?

    Edit: It worked when I used the Version information instead:

        Package: *
        Pin: release v=None
        Pin-Priority: 10

    But I still don't know why the other filters didn't work.
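    A debugging sketch using apt's own view of the repository; the policy output names the fields apt will actually match against:

        # Each source line in the output ends with o=<Origin>, l=<Label>,
        # v=<Version> etc., exactly as apt parsed them from the Release file:
        apt-cache policy
        # And per package, which version wins and at what priority:
        apt-cache policy some-package

    One thing to try based on that output is pinning on the Label field instead (same preferences-file format as above):

        Package: *
        Pin: release l=Unofficial Multimedia Packages
        Pin-Priority: 10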


  • Get changes to local config files with apt

    - by tstenner
    When I'm updating a package and a new configuration file is shipped, I'm asked whether I want to keep my version, install the new one, or view the differences. As I have to document an older server, I'd like to start by explaining what I did (and why). How can I get a list of changes to local config files?
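    A sketch using only dpkg's own records: dpkg stores the pristine md5sum of every conffile it ships, so locally modified ones can be listed by checking those sums against the files on disk:

        dpkg-query -W -f='${Conffiles}\n' '*' \
          | awk 'NF==2 {printf "%s  %s\n", $2, $1}' \
          | md5sum -c 2>/dev/null \
          | grep -v ': OK$'
        # For the actual diffs, compare against the file extracted from the
        # original .deb, or keep /etc under version control with etckeeper.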


  • Disable Linux read and write file cache on a partition

    - by complistic
    How do I disable the Linux file cache on an XFS partition (both read and write)? We have an XFS partition on a hardware RAID that stores our RAW HD video. Most of the shoots are 50-300 GB each, so the Linux cache has a hit rate of 0.001%. I have tried the sync option but it still fills up the cache when copying the files (about 30x over per shoot :P). /etc/fstab:

        /dev/sdb1  /video  xfs  sync,noatime,nodiratime,logbufs=8  0  1

    I'm running Debian Lenny, if it helps.
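    A sketch of two workarounds, since Linux offers no per-mount switch to disable the page cache: bypass it per transfer with O_DIRECT, or drop the cached pages afterwards. Both assume root and standard coreutils/procfs; the file paths are illustrative:

        # Copy footage without polluting the cache (O_DIRECT on both ends;
        # the block size should suit the RAID stripe):
        dd if=/video/shoot01.raw of=/backup/shoot01.raw bs=4M iflag=direct oflag=direct
        # Or evict already-cached pages after a copy finishes (as root):
        sync && echo 1 > /proc/sys/vm/drop_caches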


  • Error headers: ap_headers_output_filter() after putting cache header in htaccess file

    - by Brad
    Receiving error:

        [debug] mod_headers.c(663): headers: ap_headers_output_filter()

    after I included this within the htaccess file:

        # 6 DAYS
        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Cache-Control "max-age=518400, public"
        </FilesMatch>
        # 2 DAYS
        <FilesMatch "\.(xml|txt)$">
        Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>
        # 2 HOURS
        <FilesMatch "\.(html|htm)$">
        Header set Cache-Control "max-age=7200, must-revalidate"
        </FilesMatch>

    Any help is appreciated as to what I could do to fix this?
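    For what it's worth, a [debug] line from mod_headers is a trace message rather than a failure: it appears because the server's LogLevel is set to debug. A sketch for checking and quieting it (config paths assume a Debian-style layout):

        # Find where the log level is set:
        grep -ri "^LogLevel" /etc/apache2/
        # Set it back to the default in the main config, then reload:
        #   LogLevel warn
        sudo /etc/init.d/apache2 reload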


  • Nginx proxy cache (proxy_pass $request_uri;)

    - by imastar
    I need to create a web proxy using nginx. If I access http://myweb.com/http://www.target.com/, the proxy_pass target should be http://www.target.com/. Here is my configuration:

        location / {
            proxy_pass $request_uri;
            proxy_cache_methods GET;
            proxy_set_header Referer "$request_uri";
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_ignore_headers Cache-Control;
            proxy_hide_header Pragma;
            proxy_hide_header Set-Cookie;
            proxy_set_header Cache-Control Public;
            proxy_cache cache;
            proxy_cache_valid 200 10h;
            proxy_cache_valid 301 302 1h;
            proxy_cache_valid any 1h;
        }

    Here is the log error:

        2013/02/05 12:58:51 [error] 2118#0: *8 invalid URL prefix in "/http://www.target.com/", client: 108.59.8.83, server: myweb.com, request: "HEAD /http://www.target.com/ HTTP/1.1", host: "myweb.com"
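    A sketch of one possible fix: $request_uri always begins with "/", but proxy_pass needs an absolute URL, so the embedded target has to be captured out of the path first. With variables in proxy_pass, nginx also needs a resolver for DNS. The variable names are mine and the resolver address is illustrative:

        resolver 8.8.8.8;
        location ~ ^/(?<target_scheme>https?)://(?<target_rest>.+)$ {
            proxy_pass $target_scheme://$target_rest;
        }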



  • ISA caching with no cache-related info in response header

    - by Mike M. Lin
    From the documentation, I can't figure out what criteria an ISA server uses to decide whether a cached file is valid when no cache-related info is in the response header. Let's say I got this header in my response on Thu, 13 Jan 2011 18:43:35 GMT:

        HTTP/1.1 200 OK
        Date: Thu, 13 Jan 2011 18:43:35 GMT
        Server: Apache/2.2.3 (Red Hat)
        Content-Language: en
        X-Powered-By: Servlet/2.5 JSP/2.1
        Keep-Alive: timeout=15
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=ISO-8859-1

    There's no cache directive, no Last-Modified field, no Expires field. How will the ISA server decide for how long to cache this response?


  • Safari cache on Mac OS X lasts beyond Empty Cache

    - by Mitch
    So, I broke a website with some server changes, oops. I rolled back the changes I made, hit Cmd-R, and oh noes, it is still broken. But I relaxed, thinking there must be something held in Safari's cache, so I pressed the handy 'Empty Cache' button. Hit Cmd-R to refresh: it is still broken. I was really worried that I'd gone and broken something big-time. But first I decided to check on a handy Win XP computer, and voila, it works. So the question is: how do you "really" clear the cache without restarting Safari? I have many browser windows open; a restart every time I make a server-side change will ruin me. Any suggestions? Thanks!
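    A hedged sketch for flushing the on-disk cache without restarting Safari; the path is the usual per-user cache location on Mac OS X of that era, and Safari may still serve some entries from memory:

        rm -rf ~/Library/Caches/com.apple.Safari/*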


  • How to find video in Firefox cache?

    - by Alegro
    I'm trying to find a YouTube video (just watched) in the Firefox cache folder, but I can't find the folder. Win XP SP3, Firefox 16.1. I tried:

        C:\Documents and Settings\eDIN\Local Settings\Application Data\Mozilla\Firefox\Profiles\xp44aixq.default\Cache

    and also:

        C:\Documents and Settings\eDIN\Local Settings\Application Data\Mozilla\Firefox\Profiles\49mvq84u.default\Cache

    In this folder I found the PNG thumbnail of the visited YouTube page:

        C:\Documents and Settings\eDIN\Local Settings\Application Data\Mozilla\Firefox\Profiles\49mvq84u.default\thumbnails

    But there is no video file. I also searched all the files and folders around (Default User, All Users, etc.). There is only one Windows user.
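    A sketch of a usually easier route: Firefox can list its own cache entries, including the on-disk file each one maps to, via a built-in address typed into the location bar:

        about:cache?device=disk

    Cached media files are stored under opaque names with no extension, which is why searching the Cache folder by name or file type turns up nothing.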


  • Do Chrome and Firefox share a browser cache?

    - by Davy8
    So I was having a problem with some stuff on SO not working, similar to this post on Meta, which recommended clearing the browser cache. Now, it wasn't working in both Chrome and Firefox, and I had never logged in to SO on FF before today, so FF couldn't have cached the file itself. I tried hitting refresh multiple times and even tried Ctrl-F5, with no luck. I cleared the browser cache in Chrome only, and after that it started working in both Chrome and FF. How could clearing the cache of one browser affect the other?


  • Windows cache not working as it should?

    - by piotrektt
    I run Windows Server 2012 Datacenter. The setup has 60 GB of RAM. I have one file shared from a VHD, and when I copy this file locally the RAM cache is all used up, but when multiple computers connect to the share the cache is not used. The network is 8 Gb. The whole network is around 200 computers that need to read that one file, but on this setup just 10 connections kill the server. Is there any way to check what is going on? What other solution can I use to manage the cache in Windows?


  • Make nginx avoid cache if response contains Vary Accept-Language

    - by gioele
    The cache module of nginx version 1.1.19 does not take the Vary header into account. This means that nginx will serve the same cached response even if the content of one of the fields specified in the Vary header has changed. In my case I only care about the Accept-Language header; all the others have been taken care of. How can I make nginx cache everything except responses that have a Vary header containing Accept-Language? I suppose I should have something like:

        location / {
            proxy_cache cache;
            proxy_cache_valid 10m;
            proxy_cache_valid 404 1m;
            if ($some_header ~ "Accept-Language") {  # WHAT IS THE HEADER TO USE?
                set $contains_accept_language ...    # HOW SHOULD THIS VARIABLE BE SET?
            }
            proxy_no_cache $contains_accept_language;
            proxy_http_version 1.1;
            proxy_pass http://localhost:8001;
        }

    but I do not know the variable name for "the Vary header received from the backend".
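    A sketch of one way to do this without if: $upstream_http_vary exposes the Vary header of the proxied response, and proxy_no_cache is evaluated when nginx decides whether to store that response, so the two can be wired together through a map (map blocks live at http level; $skip_cache_lang is a name I made up):

        map $upstream_http_vary $skip_cache_lang {
            default             0;
            ~*Accept-Language   1;
        }

        location / {
            proxy_cache cache;
            proxy_cache_valid 10m;
            proxy_cache_valid 404 1m;
            proxy_no_cache $skip_cache_lang;
            proxy_http_version 1.1;
            proxy_pass http://localhost:8001;
        }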


  • How do I get the source code of packages installed through apt-get?

    - by dustyprogrammer
    I am assuming that all applications installed through apt-get are open source; for those that are, where can I get the source code for these applications so that I can update them? I have a couple of applications I use regularly that aren't being actively developed any longer, and I would like to add features. Where would I go to get the rights to update these applications? Mainly hellanzb, in my case. Please and thank you.
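    A sketch of the standard workflow, assuming deb-src lines are enabled in /etc/apt/sources.list:

        sudo apt-get update
        apt-get source hellanzb          # fetch and unpack the source package
        sudo apt-get build-dep hellanzb  # install its build dependencies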


  • How do I open "apt" links in Firefox on Lubuntu?

    - by cipricus
    Many answers on Ask Ubuntu direct to links like this, which in Xubuntu opened in Ubuntu Software Center. In Lubuntu I get an error message instead. In Firefox, under Preferences > Applications, I cannot see anything resembling "apt" to associate with a program. Opening the same link in Chromium or Opera, I get a prompt; clicking "I'm running Ubuntu" results in an error message like the one in Firefox. What's the remedy? Can I install Ubuntu Software Center?
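    A sketch, on the assumption that the missing piece is the apt: URL handler rather than the browser itself; apturl is the small helper that registers and services apt: links, and it can be installed with or without the full Software Center:

        sudo apt-get install apturl
        # or, for the complete Ubuntu Software Center:
        sudo apt-get install software-center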

