Search Results

Search found 8611 results on 345 pages for 'apt fast'.

  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing too fast, so I wrote a little test script. The script changes an attribute of a CouchDB document 1200 times and takes the size of the database after each change. After performing these 1200 write steps, the database runs a compaction step and the size is measured again. Finally the script plots the database size against the revision numbers. The benchmark is run twice: the first time with the default number of document revisions (_revs_limit = 1000), and the second time with the number of document revisions set to 1. (The two resulting plots are not reproduced here.)

    For me this is quite unexpected behavior. In the first run I would have expected linear growth, as every change produces a new revision; once the 1000 revisions are reached, the size should stay constant as older revisions are discarded, and after compaction the size should fall significantly. In the second run, the first revision should produce a certain database size that is then kept during the following write steps, as every new revision leads to the deletion of the previous one. I could understand a little overhead for managing the changes, but this growth behavior seems weird to me. Can anybody explain this phenomenon, or correct the assumptions that led to my wrong expectations?
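
    For anyone reproducing the experiment, a minimal sketch of the two CouchDB HTTP calls involved (shown in C# for consistency with the other examples in this listing; the host and database names are illustrative, while _revs_limit and _compact are CouchDB's standard per-database endpoints):

        using System;
        using System.Net.Http;
        using System.Text;

        var http = new HttpClient { BaseAddress = new Uri("http://localhost:5984/") };

        // Lower the per-document revision limit (default is 1000).
        await http.PutAsync("testdb/_revs_limit",
            new StringContent("1", Encoding.UTF8, "application/json"));

        // Trigger compaction; the database size can then be read
        // back from the "disk_size" field of GET testdb/.
        await http.PostAsync("testdb/_compact",
            new StringContent("", Encoding.UTF8, "application/json"));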

  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device that is capable of sending 24 voice streams at the same time. The device is connected via USB, using an FTDI chip (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a 4.5 Mbit transfer). The application basically consists of three threads:

    - Main thread: GUI, control, etc.
    - Bus reader: continuously reads data from the device and saves it to a file buffer (there is no logic in this thread)
    - Data interpreter: reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files

    The reason I used a file buffer is that I wanted to be sure I wouldn't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. It works fine, except that the buffered wave file generation is pretty slow: for 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the hard drive from this process would speed it up considerably. The second problem is that the file buffer gets really heavy for long records, and I can't clean it up until the end of data processing (that would slow the process down even more). For a RAM buffer I need at least 1 GB to make it work properly.

    What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory from two threads, so a fast synchronization mechanism is needed. I'm thinking about a circular buffer: one big array, where the bus reader saves the data and the data interpreter reads it. What do you think about that?

    [edit] For buffering I currently use the BinaryReader and BinaryWriter classes over a file.
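
    A minimal sketch of such a circular (ring) buffer, assuming a single writer thread and a single reader thread; the class and method names are illustrative. One large byte array is allocated up front, and Monitor blocks the writer when the buffer is full and the reader when it is empty:

        using System;
        using System.Threading;

        sealed class RingBuffer
        {
            readonly byte[] buf;
            int head, tail, count;              // all guarded by 'gate'
            readonly object gate = new object();

            // e.g. new RingBuffer(1 << 30) for a 1 GB buffer
            public RingBuffer(int size) { buf = new byte[size]; }

            public void Write(byte[] src, int len)   // bus reader thread
            {
                lock (gate)
                {
                    // assumes len <= buf.Length, or this waits forever
                    while (count + len > buf.Length) Monitor.Wait(gate);
                    for (int i = 0; i < len; i++)
                    {
                        buf[head] = src[i];
                        head = (head + 1) % buf.Length;
                    }
                    count += len;
                    Monitor.PulseAll(gate);
                }
            }

            public int Read(byte[] dst, int len)     // data interpreter thread
            {
                lock (gate)
                {
                    while (count == 0) Monitor.Wait(gate);
                    int n = Math.Min(len, count);
                    for (int i = 0; i < n; i++)
                    {
                        dst[i] = buf[tail];
                        tail = (tail + 1) % buf.Length;
                    }
                    count -= n;
                    Monitor.PulseAll(gate);
                    return n;
                }
            }
        }

    In production code the element-by-element copies would be replaced with two Buffer.BlockCopy calls (one per wrap-around segment), but the locking structure stays the same.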

  • Looking for a fast, compact, streamable, multi-language, strongly typed serialization format

    - by sanity
    I'm currently using JSON (compressed via gzip) in my Java project, in which I need to store a large number of objects (hundreds of millions) on disk. I have one JSON object per line and disallow line breaks within a JSON object. This way I can stream the data off disk line by line without having to read the entire file at once. It turns out that parsing the JSON (using http://www.json.org/java/) is a bigger overhead than either pulling the raw data off disk or decompressing it (which I do on the fly). Ideally I'd like a strongly typed serialization format where I can specify "this object field is a list of strings" (for example); because the system knows what to expect, it can deserialize quickly, and I can describe the format to someone else just by giving them its "type". It would also need to be cross-platform: I use Java, but work with people using PHP, Python, and other languages. So, to recap, it should be:

    - Strongly typed
    - Streamable (i.e. read a file bit by bit without having to load it all into RAM at once)
    - Cross-platform (including Java and PHP)
    - Fast
    - Free (as in speech)

    Any pointers?
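
    Whatever typed format is chosen, the streamability requirement usually comes down to length-prefixed records: write each serialized object preceded by its byte length, then read one record at a time. A minimal sketch of that framing (shown in C# for consistency with the other examples in this listing; the payload bytes come from whichever encoder is picked):

        using System.IO;

        // Write: 4-byte length prefix, then the record's bytes.
        static void WriteRecord(BinaryWriter w, byte[] payload)
        {
            w.Write(payload.Length);
            w.Write(payload);
        }

        // Read: one record at a time; the file never has to fit in RAM.
        static byte[] ReadRecord(BinaryReader r)
        {
            int len = r.ReadInt32();
            return r.ReadBytes(len);
        }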

  • Perl's Devel::LeakTrace::Fast Pointing to blank files and evals

    - by kt
    I am using Devel::LeakTrace::Fast to debug a memory leak in a Perl script designed as a daemon; it runs an infinite loop with sleeps until interrupted. I am having trouble both reading the output and finding documentation to help me understand it; the perldoc doesn't contain much information on the output. Most of it makes sense, such as the entries pointing to globals in DBI. Intermingled with the output, however, are several lines of the form

        leaked SV(<LOCATION>) from (eval #) line #

    where the numbers are numbers and <LOCATION> is a location in memory. The script itself does not use eval at any point (I have not investigated each used module to see if evals are present). Mostly what I want to know is how to find these evals (if possible). I also find the following entry repeated over and over again:

        leaked SV(<LOCATION>) from line #

    where line # is always the same number. Not very helpful in tracking down which file that line is in.

  • Fast way to manually mod a number

    - by Nikolai Mushegian
    I need to be able to calculate (a^b) % c for very large values of a and b (which individually push the limits of ulong and cause overflow errors when you try to calculate a^b). For small enough numbers, using the identity (a^b) % c = ((a % c)^b) % c works, but if c itself is too large this doesn't really help. I wrote a loop to do the mod operation manually, one factor of a at a time:

        private static ulong no_Overflow_Mod(ulong num_base, ulong num_exponent, ulong mod)
        {
            ulong answer = 1;
            for (ulong x = 0; x < num_exponent; x++)
            {
                answer = (answer * num_base) % mod;  // can still overflow when mod > 2^32
            }
            return answer;
        }

    but this takes a very long time. Is there any simple and fast way to do this operation without actually having to take a to the power of b AND without time-consuming loops? If all else fails, I can make a bool array to represent a huge data type and figure out how to do this with bitwise operators, but there has to be a better way.
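
    The standard fix is exponentiation by squaring, which needs only O(log b) multiplications instead of b of them. A sketch using System.Numerics.BigInteger for the intermediate products, so the multiplication itself cannot overflow even when mod exceeds 2^32 (BigInteger.ModPow does the same thing in a single call):

        using System.Numerics;

        static ulong PowMod(ulong b, ulong e, ulong m)
        {
            // Square-and-multiply: halve the exponent each step.
            BigInteger result = 1, baseVal = b % m;
            while (e > 0)
            {
                if ((e & 1) == 1) result = result * baseVal % m;
                baseVal = baseVal * baseVal % m;
                e >>= 1;
            }
            return (ulong)result;
        }

        // Or simply: (ulong)BigInteger.ModPow(b, e, m)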

  • C# / Silverlight / WPF / Fast rendering lots of circles

    - by Walt W
    I want to render a lot of circles or small graphics (around 1000-10000) within either Silverlight or WPF, as fast and as frequently as possible. If I have to go to DirectX or OpenGL, that's fine, but I'm wondering about doing this within either of those two frameworks first (read: it's OK if an answer is WPF-only or Silverlight-only). Also, if there is a way to access DirectX through WPF and render onto a surface that way, I would be interested in that as well. So, what's the fastest way to draw a load of circles? They can be as plain as necessary, but they do need to have a radius. Currently I'm using a DrawingVisual and a DrawingContext.DrawEllipse() call for each circle, then rendering the visual to a RenderTargetBitmap, but it becomes very slow as the number of circles rises. By the way, these circles move every frame, so caching isn't really an option unless you're going to suggest caching the individual circles... but their sizes are dynamic, so I'm not sure that's a great approach.
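
    One common WPF pattern, sketched below with illustrative names: skip the RenderTargetBitmap round-trip and draw directly in OnRender of a custom element, reusing one frozen brush for every circle; a single InvalidateVisual per frame then redraws the whole layer in one pass:

        using System.Collections.Generic;
        using System.Windows;
        using System.Windows.Media;

        sealed class CircleLayer : FrameworkElement
        {
            // (center, radius) pairs, updated by the simulation each frame
            public List<(Point Center, double Radius)> Circles { get; } = new();

            protected override void OnRender(DrawingContext dc)
            {
                Brush fill = Brushes.SteelBlue;  // Brushes.* are frozen: no change tracking overhead
                foreach (var (center, radius) in Circles)
                    dc.DrawEllipse(fill, null, center, radius, radius);
            }

            public void Redraw() => InvalidateVisual();  // call once after moving the circles
        }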

  • Way to store a large dictionary with low memory footprint + fast lookups (on Android)

    - by BobbyJim
    I'm developing an Android word game app that needs a large (~250,000 word) dictionary available. I need:

    - Reasonably fast lookups, e.g. constant time preferable. I need to do maybe 200 lookups a second on occasion to solve a word puzzle, and maybe 20 lookups within 0.2 seconds more often to check words the user just spelled. EDIT: lookups are typically asking "is this word in the dictionary?". I'd like to support up to two wildcards in the word as well, but this is easy enough by just generating all possible letters the wildcards could have been and checking the generated words (i.e. 26 * 26 lookups for a word with two wildcards).
    - As it's a mobile app, using as little memory as possible and requiring only a small initial download for the dictionary data is top priority.

    My first naive attempt used Java's HashMap class, which caused an out-of-memory exception. I've looked into using the SQLite databases available on Android, but this seems like overkill. What's a good way to do what I need?
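
    The classic low-footprint answers here are a trie or DAWG packed into a flat file, but even a sorted flat word list with binary search comes close to the stated requirements. A sketch of that approach (in C# for consistency with the other examples in this listing; on Android the same idea applies with Arrays.binarySearch over a sorted String[]):

        using System;
        using System.IO;

        // One word per line, pre-sorted with ordinal ordering; the array
        // holds plain string references, far lighter than a hash map of
        // entry objects.
        string[] words = File.ReadAllLines("dictionary.txt");

        bool Contains(string w) =>
            Array.BinarySearch(words, w, StringComparer.Ordinal) >= 0;

        // ~log2(250,000) = 18 comparisons per lookup, so a few hundred
        // lookups per second is well within budget.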

  • Fast find object by string property

    - by Andrew Kalashnikov
    Hello, colleagues. I've got the task of quickly finding an object by its string property. The object:

        class DicDomain
        {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
        }

    For storing my objects I currently use a List<DicDomain>. I've got 5-10 such lists, each containing about 500-20,000 items. The task is to find objects by Name. I use this code now:

        List<DicDomain> entities = dictionary.FindAll(s => s.Name.Equals(word, StringComparison.OrdinalIgnoreCase));

    I've got some questions:

    - Is my search speed optimal? I think not.
    - Data structure: is a List good for this task? What about a hashtable or something sorted?
    - The Find method: maybe I should use string interning?

    I haven't much experience with these tasks. Can you give me good advice for increasing performance? Thanks.
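
    If the lists are reused across many lookups, building a hash index once turns each O(n) FindAll scan into an O(1) lookup. A sketch (grouping handles duplicate names; the case-insensitive comparer matches the original query):

        using System.Collections.Generic;
        using System.Linq;

        // Build once per list, after loading the data.
        var byName = dictionary
            .GroupBy(d => d.Name, StringComparer.OrdinalIgnoreCase)
            .ToDictionary(g => g.Key, g => g.ToList(),
                          StringComparer.OrdinalIgnoreCase);

        // Each lookup is now a single hash probe.
        List<DicDomain> entities =
            byName.TryGetValue(word, out var hit) ? hit : new List<DicDomain>();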

  • I need a fast runtime expression parser

    - by Chris Lively
    I need to locate a fast, lightweight expression parser. Ideally I want to pass it a list of name/value pairs (e.g. variables) and a string containing the expression to evaluate. All I need back from it is a true/false value. The expressions should be along the lines of:

        varA == "xyz" and varB == 123

    Basically, just a simple logic engine whose expression is provided at runtime.

    UPDATE: At minimum it needs to support ==, !=, >, >=, <, <=. Regarding speed, I expect roughly 5 expressions to be executed per request, and we'll see somewhere in the vicinity of 100 requests a second. Our current pages tend to execute in under 50 ms. Usually there will only be 2 or 3 variables involved in any expression; however, I'll need to load approximately 30 into the parser prior to execution.

    UPDATE 2012/11/05: An update about performance. We implemented NCalc nearly 2 years ago. Since then we've expanded its use such that we average 40+ expressions covering 300+ variables on post backs. There are now thousands of post backs occurring per second with absolutely zero performance degradation. We've also extended it to include a handful of additional functions, again with no performance loss. In short, NCalc met all of our needs and exceeded our expectations.
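
    For reference, a minimal usage sketch of NCalc, the library the final update settled on (the parameter names are illustrative):

        using NCalc;

        var e = new Expression("varA == 'xyz' && varB == 123");
        e.Parameters["varA"] = "xyz";
        e.Parameters["varB"] = 123;

        bool result = (bool)e.Evaluate();  // true

    The Expression object can be kept and re-evaluated with new parameter values, so per-request work is just the parameter assignments and the evaluation itself.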

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not!

    I have already taken into account the following possibilities for speeding up performance:

    - Pre-normalize all the term vectors
    - Calculate a term vector for each group (n ~= 10,000) rather than for each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know!

    I have a few questions:

    - If a new e-mail has a term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive.
    - Is there some clever way to only consider vectors which are likely to be close to the query document?
    - Is there some way to be more frugal about the amount of memory I'm using for all these vectors?

    Thanks!
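
    For concreteness, a sketch of the cosine computation over sparse term vectors (in C# for consistency with the other examples in this listing). With vectors pre-normalized to unit length, as suggested above, the cosine reduces to a dot product over shared terms only:

        using System.Collections.Generic;

        static double Cosine(Dictionary<string, double> a,
                             Dictionary<string, double> b)
        {
            // Iterate the smaller vector, probe the larger one.
            var (small, large) = a.Count <= b.Count ? (a, b) : (b, a);
            double dot = 0;
            foreach (var (term, w) in small)
                if (large.TryGetValue(term, out var v)) dot += w * v;
            return dot;  // equals the cosine, given unit-length inputs
        }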

  • Fast serialization/deserialization of structs

    - by user256890
    I have a huge amount of geographic data represented in a simple object structure consisting only of structs. All of my fields are of value type:

        public struct Child
        {
            readonly float X;
            readonly float Y;
            readonly int myField;
        }

        public struct Parent
        {
            readonly int id;
            readonly int field1;
            readonly int field2;
            readonly Child[] children;
        }

    The data is chunked up nicely into small portions of Parent[]s; each array contains a few thousand Parent instances. I have way too much data to keep it all in memory, so I need to swap these chunks to disk back and forth (one file would be approx. 200-300 KB). What would be the most efficient way of serializing/deserializing a Parent[] to a byte[] for dumping to disk and reading back? Concerning speed, I am particularly interested in fast deserialization; write speed is not that critical. Would a simple binary serializer be good enough? Or should I hack around with StructLayout (see accepted answer)? I am not sure if that would work with the array field Parent.children.

    UPDATE: Response to comments - yes, the objects are immutable (code updated), and indeed the children field is not a value type. 300 KB doesn't sound like much, but I have zillions of files like that, so speed does matter.
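
    One approach that usually beats reflection-based serializers on millions of tiny structs is hand-rolled field-by-field I/O. A sketch, assuming the structs above gain constructors (and the writer access to the fields, e.g. via internal accessors), so the names here follow the question but the accessibility is an assumption:

        using System.IO;

        static void Write(BinaryWriter w, in Parent p)
        {
            w.Write(p.id); w.Write(p.field1); w.Write(p.field2);
            w.Write(p.children.Length);
            foreach (var c in p.children)
            {
                w.Write(c.X); w.Write(c.Y); w.Write(c.myField);
            }
        }

        static Parent Read(BinaryReader r)
        {
            int id = r.ReadInt32(), f1 = r.ReadInt32(), f2 = r.ReadInt32();
            var children = new Child[r.ReadInt32()];
            for (int i = 0; i < children.Length; i++)
                children[i] = new Child(r.ReadSingle(), r.ReadSingle(),
                                        r.ReadInt32());
            return new Parent(id, f1, f2, children);
        }

    Field-by-field I/O avoids the type metadata and reflection overhead that dominate on small structs, and the read side allocates exactly one Child[] per Parent.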

  • C code Error: free(): invalid next size (fast):

    - by user1436057
    I got an error from my code, but I'm not sure where to fix it. Here's what my code does: it reads an input file and stores each line as a string in an array. The first line of the input file is a number telling me how many lines to read and store. Here's my code, with the culprit marked:

        int main(int argc, char *argv[])
        {
            FILE *fp;
            char **path;
            int num, i, ch;
            ...
            /* after reading the first line and storing the value in num */
            path = malloc(num * sizeof(char *));  /* was: sizeof(char) -- that allocates
                                                     num bytes instead of num pointers,
                                                     so the stores below corrupt the heap
                                                     and free() later aborts with
                                                     "invalid next size (fast)" */
            i = 0;
            while (!feof(fp)) {
                char buffer[500];
                int length = 0;
                for (ch = fgetc(fp); ch != EOF && ch != '\n'; ch = fgetc(fp)) {
                    buffer[length++] = ch;
                }
                if (ch == '\n') {
                    buffer[length] = '\0';
                    path[i] = malloc(strlen(buffer) + 1);
                    strcpy(path[i], buffer);
                    i++;
                }
            }
            ...
            free(path);  /* note: each path[i] also needs its own free() */
        }

    After running the code I got:

        *** glibc detected *** free(): invalid next size (fast)

    I had searched around and knew this was a malloc/free error, but not exactly how to fix it. Any help would be great. Thanks!

  • Generating equals / hashcode / toString using annotation

    - by Bruno Bieth
    I believe I read somewhere about generating equals/hashCode/toString methods at compile time (using APT) by identifying which fields should be part of the hash/equality test. I couldn't find anything like that on the web (I might have dreamed it?). It could be done like this:

        public class Person {
            @Id @GeneratedValue
            private Integer id;
            @Identity
            private String firstName, lastName;
            @Identity
            private Date dateOfBirth;
            //...
        }

    for an entity (where we want to exclude some fields, like the id), or, like a Scala case class, i.e. a value object:

        @ValueObject
        public class Color {
            private int red, green, blue;
        }

    Not only does the file become more readable and easier to write, it also helps ensure that all the intended attributes are part of equals/hashCode (in case you add another attribute later on without updating the methods accordingly). I've heard APT isn't very well supported in IDEs, but I wouldn't see that as a major issue; after all, tests are mainly run by continuous integration servers. Any idea if this has been done already, and if not, why? Thanks

  • Winlibre - An Aptitude-Synaptic for Windows. Would that be useful?

    - by acadavid
    Hi everyone. Last year, in the 2009 GSoC, I participated with an organization called Winlibre. The basic idea is a project similar to Aptitude (or apt-get) with a GUI like Synaptic, but for Windows, and initially holding only open source software. The project went OK; we finished what we considered a good starting point, but unfortunately, due to the developers' other occupations, it has been idle almost since GSoC finished. Now I have some energy, time and interest to try to continue this development. The project was divided into three parts: a repository server (which I worked on, and which was going to store and serve packages and files), a package creator for developers, and the main app, which is the apt-get equivalent and its GUI.

    I have been thinking about the project, and the first question that comes to mind is: is this project actually useful for developers and Windows users? Keep in mind that the idea is to solve dependency problems and install packages "cleanly". I'm not a Windows developer, just a casual user, so I really don't have much experience with how things are handled there, but as far as I have seen, each installer bundles and handles its own dependencies. Would Windows developers be willing to switch from installers to a package-based way of handling installations of open source software? Or would it be enough to create packages for already existing installers? The package concept is basically the same as .deb or .rpm files.

    I still have some other questions, but basically I would like to make sure that this is in some way useful to users and Windows developers, and that developers would find the project interesting. If you have any questions, feedback, suggestions or criticisms, please don't hesitate to post them. Thanks!!

  • How to Make the Kindle Fire Silk Browser *Actually* Fast!

    - by The Geek
    Not that long ago, we reviewed the Kindle Fire, and one of our biggest complaints was how lousy the browser is, but we've discovered the trick to making it actually fast. Here's how to fix it.

  • How do I deal with the problems of a fast side-scroller?

    - by Ska
    I'm making a side-scrolling airplane game, and when it gets very fast I begin to experience some problems as a player:

    - Elements are not distinguishable, e.g. power-ups from bullets
    - I start to feel dizzy and uncomfortable
    - There isn't enough time to see what's coming

    How can I sort this out? Should I use less detail in all the graphics? Tiny Wings has the same horizontal movement speed as my game, but it doesn't suffer from these problems. Are there any other really fast side-scrollers I could take as a reference?

  • Fast block placement algorithm, advice needed?

    - by James Morris
    I need to emulate the window placement strategy of the Fluxbox window manager. As a rough guide, visualize randomly sized windows filling up the screen one at a time, where the rough size of each results in an average of 80 windows on screen without any window overlapping another. It is important to note that windows will close, and the space that closed windows previously occupied becomes available once more for the placement of new windows. The placement strategy has three binary options:

    - windows build horizontal rows or vertical columns (potentially)
    - windows are placed from left to right or right to left
    - windows are placed from top to bottom or bottom to top

    Why is the algorithm a problem? It needs to meet the deadlines of a real-time thread in an audio application. At this moment I am only concerned with getting a fast algorithm; don't concern yourself with the implications of real-time threads and all the hurdles that brings. So far I have built loose prototypes for two approaches:

    1) A port of the Fluxbox placement algorithm into my code. The problem with this is that the client (my program) gets kicked out of the audio server (JACK) when I try placing the worst-case scenario of 256 blocks using the algorithm. This algorithm performs over 14,000 full (linear) scans of the list of already-placed blocks when placing the 256th window.

    2) My alternative approach, only partially implemented. This approach uses a data structure for each rectangular area of free, unused space (the list of windows can be entirely separate, and is not required for testing this algorithm). The data structure acts as a node in a doubly linked list (with sorted insertion) and contains the coordinates of the top-left corner plus the width and height. Each block data structure also contains four links which connect to each immediately adjacent (touching) block on each of the four sides. IMPORTANT RULE: each block may only touch one block per side.

    The problem with this approach is its complexity. I have implemented the straightforward cases: 1) space is removed from one corner of a block; 2) neighbouring blocks are split so that the IMPORTANT RULE is adhered to. The less straightforward case, where the space to be removed can only be found within a column or row of boxes, is only partially implemented: if one of the blocks to be removed is an exact fit for width (i.e. column) or height (i.e. row), problems occur. And don't even mention that this only checks columns one box wide and rows one box tall.

    I've implemented this algorithm in C, the language I am using for this project (I've not used C++ for a few years and am uncomfortable with it after having focused all my attention on C development; it's a hobby). The implementation is 700+ lines of code (including plenty of blank lines, brace lines, comments, etc.) and only works for the horizontal-rows + left-right + top-bottom placement strategy. So I've either got to add some way of making those 700+ lines of code work for the other seven placement options, or I'm going to have to duplicate them seven times. Neither of these is attractive: the first, because the existing code is complex enough; the second, because of bloat.

    The algorithm is not even at a stage where I can use it in the real-time worst case, because of missing functionality, so I still don't know if it actually performs better or worse than the first approach. What else is there? I've skimmed over and discounted:

    - Bin packing algorithms: their emphasis on optimal fit does not match the requirements of this algorithm.
    - Recursive bisection placement algorithms: sounds promising, but these are for circuit design, where the emphasis is optimal wire length.

    Both of these, especially the latter, expect all elements to be placed/packed to be known before the algorithm begins; I need an algorithm that works incrementally, placing whatever it is given when it is told to. What are your thoughts on this? How would you approach it? What other algorithms should I look at? Or even: what concepts should I research, seeing as I've not studied computer science/software engineering? Please ask questions in comments if further information is needed.

    [edit] If it makes any difference, the units for the coordinates will not be pixels. The units are unimportant, but the grid where windows/blocks/whatever can be placed will be 127 x 127 units.
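
    For comparison, the free-rectangle bookkeeping that approach 2 describes is close to what the packing literature calls a guillotine split. A deliberately simplified sketch of the placement half (in C# for consistency with the other examples in this listing; coalescing freed space back in, which the question identifies as the hard part, is omitted here):

        using System.Collections.Generic;

        record Rect(int X, int Y, int W, int H);

        class FreeSpace
        {
            // Free rectangles, initially the whole 127 x 127 grid.
            readonly List<Rect> free = new() { new Rect(0, 0, 127, 127) };

            // First fit: take the first free rectangle that holds w x h,
            // place at its top-left corner (top-bottom / left-right
            // strategy), then guillotine-split the leftover into at most
            // two new free rectangles -- O(free.Count) per placement.
            public Rect Place(int w, int h)
            {
                for (int i = 0; i < free.Count; i++)
                {
                    Rect r = free[i];
                    if (w > r.W || h > r.H) continue;
                    free.RemoveAt(i);
                    if (r.W > w) free.Add(new Rect(r.X + w, r.Y, r.W - w, h));
                    if (r.H > h) free.Add(new Rect(r.X, r.Y + h, r.W, r.H - h));
                    return new Rect(r.X, r.Y, w, h);
                }
                return null;  // no free rectangle fits
            }
        }

    The split keeps the free list small (at most two new rectangles per placement), which is what bounds the per-placement scan; the real cost, as the question observes, is merging rectangles when a window closes.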

  • Using fink to install PyODE

    - by None
    I installed fink and ran this command from the terminal: sudo apt-get install python-pyode. I found the command on the internet and expected it to work, since I had just installed fink. Am I using the wrong package name? Is there a different command to use on Mac OS X (10.5)?

  • Installing MySQL on Ubuntu 12 fails on a clean installation

    - by Keenora Fluffball
    I have the problem that even if I uninstall MySQL completely and restart, it still won't install. This is the error I get (output translated from German):

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          libdbd-mysql-perl libmysqlclient18 mysql-client-5.5 mysql-client-core-5.5
          mysql-common mysql-server-5.5 mysql-server-core-5.5
        Suggested packages:
          tinyca mailx
        The following NEW packages will be installed:
          libdbd-mysql-perl libmysqlclient18 mysql-client-5.5 mysql-client-core-5.5
          mysql-common mysql-server mysql-server-5.5 mysql-server-core-5.5
        0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
        Need to get 26.2 MB of archives.
        After this operation, 94.2 MB of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Get:1 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-common all 5.5.28-0ubuntu0.12.10.1 [13.4 kB]
        Get:2 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main libmysqlclient18 amd64 5.5.28-0ubuntu0.12.10.1 [949 kB]
        Get:3 http://de.archive.ubuntu.com/ubuntu/ quantal/main libdbd-mysql-perl amd64 4.021-1 [97.7 kB]
        Get:4 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-client-core-5.5 amd64 5.5.28-0ubuntu0.12.10.1 [1,941 kB]
        Get:5 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-client-5.5 amd64 5.5.28-0ubuntu0.12.10.1 [8,332 kB]
        Get:6 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-server-core-5.5 amd64 5.5.28-0ubuntu0.12.10.1 [5,983 kB]
        Get:7 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-server-5.5 amd64 5.5.28-0ubuntu0.12.10.1 [8,842 kB]
        Get:8 http://de.archive.ubuntu.com/ubuntu/ quantal-updates/main mysql-server all 5.5.28-0ubuntu0.12.10.1 [11.6 kB]
        Fetched 26.2 MB in 1min 5s (399 kB/s)
        Preconfiguring packages ...
        Selecting previously unselected package mysql-common.
        (Reading database ... 68073 files and directories currently installed.)
        Unpacking mysql-common (from .../mysql-common_5.5.28-0ubuntu0.12.10.1_all.deb) ...
        [libmysqlclient18, libdbd-mysql-perl, mysql-client-core-5.5, mysql-client-5.5
         and mysql-server-core-5.5 are selected and unpacked the same way]
        Processing triggers for man-db ...
        Setting up mysql-common (5.5.28-0ubuntu0.12.10.1) ...
        Selecting previously unselected package mysql-server-5.5.
        (Reading database ... 68251 files and directories currently installed.)
        Unpacking mysql-server-5.5 (from .../mysql-server-5.5_5.5.28-0ubuntu0.12.10.1_amd64.deb) ...
        Selecting previously unselected package mysql-server.
        Unpacking mysql-server (from .../mysql-server_5.5.28-0ubuntu0.12.10.1_all.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ureadahead ...
        Setting up libmysqlclient18:amd64 (5.5.28-0ubuntu0.12.10.1) ...
        Setting up libdbd-mysql-perl (4.021-1) ...
        Setting up mysql-client-core-5.5 (5.5.28-0ubuntu0.12.10.1) ...
        Setting up mysql-client-5.5 (5.5.28-0ubuntu0.12.10.1) ...
        Setting up mysql-server-core-5.5 (5.5.28-0ubuntu0.12.10.1) ...
        Setting up mysql-server-5.5 (5.5.28-0ubuntu0.12.10.1) ...
        AppArmor parser error for /etc/apparmor.d/usr.sbin.mysqld in /etc/apparmor.d/usr.sbin.mysqld at line 9: could not open >>abstractions/mysql<<
        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.5; however:
          Package mysql-server-5.5 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        No apport report written because the error message indicates it is a follow-up error from a previous failure.
        Errors were encountered while processing:
         mysql-server-5.5
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Do you have any clue what's going on here?

  • Debian - "WARNING: untrusted versions of the following packages will be installed!"

    - by user1794469
    When I try to install or update any package, I get:

        Untrusted packages could compromise your system's security. You should
        only proceed with the installation if you are certain that this is
        what you want to do.

    I strongly suspect this is related to the errors I get on update:

        $ sudo aptitude update
        Get: 1 http://ftp.us.debian.org wheezy InRelease [208 kB]
        Get: 2 http://debian.lcs.mit.edu wheezy InRelease [208 kB]
        Ign http://ftp.us.debian.org wheezy InRelease
        Hit http://ftp.us.debian.org wheezy/main amd64 Packages/DiffIndex
        Hit http://ftp.us.debian.org wheezy/contrib amd64 Packages/DiffIndex
        Hit http://ftp.us.debian.org wheezy/non-free amd64 Packages/DiffIndex
        Hit http://ftp.us.debian.org wheezy/contrib Translation-en
        Hit http://ftp.us.debian.org wheezy/main Translation-en
        Hit http://ftp.us.debian.org wheezy/non-free Translation-en
        Get: 3 http://debian.lcs.mit.edu wheezy-updates InRelease [116 kB]
        Ign http://debian.lcs.mit.edu wheezy InRelease
        Ign http://debian.lcs.mit.edu wheezy-updates InRelease
        Hit http://debian.lcs.mit.edu wheezy/main Sources/DiffIndex
        Hit http://debian.lcs.mit.edu wheezy/main amd64 Packages/DiffIndex
        Hit http://debian.lcs.mit.edu wheezy/main Translation-en
        Ign http://ftp.us.debian.org wheezy/contrib Translation-en_US
        Ign http://debian.lcs.mit.edu wheezy-updates/main Sources/DiffIndex
        Ign http://debian.lcs.mit.edu wheezy-updates/main amd64 Packages/DiffIndex
        Ign http://ftp.us.debian.org wheezy/main Translation-en_US
        Ign http://ftp.us.debian.org wheezy/non-free Translation-en_US
        Hit http://debian.lcs.mit.edu wheezy-updates/main Sources
        Hit http://debian.lcs.mit.edu wheezy-updates/main amd64 Packages
        Ign http://debian.lcs.mit.edu wheezy/main Translation-en_US
        Ign http://debian.lcs.mit.edu wheezy-updates/main Translation-en_US
        Ign http://debian.lcs.mit.edu wheezy-updates/main Translation-en
        Fetched 531 kB in 1s (304 kB/s)
        W: GPG error: http://ftp.us.debian.org wheezy InRelease: Unknown error executing gpgv
        W: GPG error: http://debian.lcs.mit.edu wheezy InRelease: Unknown error executing gpgv
        W: GPG error: http://debian.lcs.mit.edu wheezy-updates InRelease: Unknown error executing gpgv

    I have tried reinstalling the keyring with sudo aptitude reinstall debian-archive-keyring (which, surprisingly, doesn't cause a warning).

  • Want to install FlightGear but 'libapr1' dependency cannot be satisfied

    - by Jonathan Reno
    When I want to install the flightgear:amd64 package, it asks to install the simgear2.8.0:amd64 package onto my Linux Mint 13 KDE 64-bit system. But I cannot install simgear2.8.0:amd64: I could not find it in the GetDeb repositories, could not install it from PlayDeb, and could not find a .deb file online. So I tried to install simgear2.8.0:i386 instead, but that wants me to install (read: reinstall) libapr1, which is already installed properly with its dependencies. By the way, libapr1 is necessary for my Apache installation. Could you help me fix this?

  • Regarding Reinstalling PostgreSQL

    - by Vivalavista
    I was using PostgreSQL 8.4. I tried removing it through Synaptic Package Manager and then installing 9.1, but I still had version 8.4, so I deleted all the files associated with PostgreSQL. Now I am unable to install any version of PostgreSQL; when I try, I get this error:

        Setting up postgresql-9.1 (9.1.3-1~lucid) ...
        .: 12: Can't open /usr/share/postgresql-common/maintscripts-functions
        dpkg: error processing postgresql-9.1 (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of postgresql:
         postgresql depends on postgresql-9.1; however:
          Package postgresql-9.1 is not configured yet.
        dpkg: error processing postgresql (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         postgresql-9.1
         postgresql
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Please tell me how to remove PostgreSQL completely so I can install a fresh version.

  • Port binding conflicts with "switch user" on Windows 7

    - by C-dizzle
    We are using the switch-user function in Windows 7 on an Active Directory network. One application in particular gives us this error:

        Only one usage of each socket address (protocol/network address/port)
        is normally permitted. bind Port 10001

    Are there any other ports that can only be used by one session at a time and might have an adverse effect on the other user? We try to encourage our users to log off instead of switching users, but that doesn't always happen. As an alternative, is it possible to disable the 'switch user' button on our machines?

  • Turning down the sound on a switched off user

    - by soandos
    If one switches users (i.e., the first user's session stays logged in in the background while another user logs in) and there is sound playing in the first session, that sound will continue to play for the second user. If the second user has admin rights, they can guess which program is causing the sound and kill it, but that is very clumsy and far more than what "needs" to be done. When I open the mixer, it just shows that sound is playing. How can I stop it?
