Search Results

Search found 10328 results on 414 pages for 'behavior tree'.

  • Can decoupling hurt maintainability in certain situations?

    - by Ceiling Gecko
    Can the fact that business logic is mapped to interfaces instead of implementations actually hinder the maintenance of an application in certain situations? A naive example with Java's Hibernate framework: suppose I don't have the whole code base in my head, the project structure is a mess, and classes have arbitrary names. If I wish to see what a certain DAO is doing, to check whether it actually does what it's supposed to do, then instead of traversing up the call tree from the point where the data service is invoked (the tree ends at an interface with no implementation details whatsoever apart from the signature), I first have to go and look in a configuration XML file to see which class is mapped to that interface as the implementation, before I can access the actual implementation details. Are there any situations where having loose coupling can actually hurt maintainability?
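
    To make the indirection concrete, here is a minimal Java sketch (all names hypothetical): nothing at the call site names the implementing class; only an external wiring entry such as <bean id="customerDao" class="com.example.HibernateCustomerDao"/> reveals it.

        class Customer {}

        interface CustomerDao {
            Customer findById(long id);
        }

        // Bound to CustomerDao only in external configuration (e.g. an XML
        // mapping), so "go to definition" from a call site that uses the
        // interface stops at the interface above, not at this class.
        class HibernateCustomerDao implements CustomerDao {
            @Override
            public Customer findById(long id) {
                return new Customer(); // stand-in for a real Hibernate session lookup
            }
        }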

    Read the article

  • What's the term describing this system for generating user interfaces?

    - by mjfgates
    So, there's this idea, which you already know: Define the layout of your UI by creating a tree of panels. The leaf nodes on the tree are what we used to call 'controls' way back in the day-- the things that the user interacts with, radio buttons and listboxes and such. The internal nodes are mostly concerned with layout; this kind of panel stacks its child panels vertically, that kind puts its children into a grid, etc. It's COMMON. Most of the UI-generating systems I've seen in the past twenty years are implementations of this, and the ones that aren't borrow from it. What's the word for this idea?
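
    For illustration, a minimal Java sketch of the structure being described (class names hypothetical): internal nodes handle layout, leaf nodes are the interactive controls.

        import java.util.ArrayList;
        import java.util.List;

        abstract class Panel {}

        // Internal nodes: concerned only with arranging their children.
        class VerticalStack extends Panel {
            final List<Panel> children = new ArrayList<>();
        }

        class GridPanel extends Panel {
            final List<Panel> children = new ArrayList<>();
            int columns;
        }

        // Leaf nodes: the controls the user actually interacts with.
        class RadioButton extends Panel {}
        class ListBox extends Panel {}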

    Read the article

  • Grouping IP Addresses based on ranges [on hold]

    - by mustard
    Say I have 5 different categories based on IP address ranges for monitoring my user base. What is the best way to categorize a list of input IP addresses into one of the 5 categories, depending on which range each falls into? Would a segment tree structure be efficient for the lookups? Specifically, I'm looking to see whether there are more efficient ways to sort IP addresses into groups or ranges than using a segment tree. Example: I have a list of IP address ranges per country from http://dev.maxmind.com/geoip/legacy/geolite/ and I am trying to group incoming user requests on a website per country for demographic analysis. My current approach is to build a segment tree over the IP address ranges and use lookups in that structure to identify which range a given IP address belongs to. I would like to know if there is a better way of accomplishing this.
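
    One common alternative, sketched below under the assumption that the ranges are non-overlapping (as per-country ranges are): key a sorted map by each range's numeric start and use a floor lookup, which gives the same O(log n) query as a segment tree with far less bookkeeping.

        import java.util.Map;
        import java.util.TreeMap;

        public class IpRangeLookup {
            // Keyed by the numeric start of each range; value is the category.
            private final TreeMap<Long, String> startOfRange = new TreeMap<>();

            public void addRange(String startIp, String category) {
                startOfRange.put(ipToLong(startIp), category);
            }

            // Greatest range start <= address, in O(log n). A real version
            // would also check the range's end to catch gaps between ranges.
            public String categorize(String ip) {
                Map.Entry<Long, String> e = startOfRange.floorEntry(ipToLong(ip));
                return e == null ? "unknown" : e.getValue();
            }

            static long ipToLong(String ip) {
                long value = 0;
                for (String octet : ip.split("\\.")) {
                    value = (value << 8) | Integer.parseInt(octet);
                }
                return value;
            }

            public static void main(String[] args) {
                IpRangeLookup lookup = new IpRangeLookup();
                lookup.addRange("10.0.0.0", "internal");
                lookup.addRange("81.0.0.0", "Europe");
                System.out.println(lookup.categorize("81.12.34.56")); // Europe
            }
        }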

    Read the article

  • Is it better to cut and store all sprites needed from a spritesheet in memory, or cut them out just-in-time?

    - by xLite
    I'm not sure what best practice is here, as I have little experience with this. Essentially I am asking whether it is better to take your single PNG with all your different sprites on it, cut out every sprite at startup and store them in memory, and then access the already-cut-out sprites from memory quickly, or to keep only the single PNG with all the different sprites in memory and, when you need, for example, a tree, cut the tree out of the PNG at that moment and then use it as normal. I imagine the former is more CPU friendly than the latter but less memory friendly, and vice versa. I want to know what the norm is for game dev. This is a pixel-based game using 2D art. Each PNG is actually an avatar's sprite sheet, with each body part separated and later joined to form the full body of the avatar.
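
    A middle ground, sketched here with Java's standard imaging classes (the cache keys are hypothetical): cut each sprite out the first time it is requested and cache the result, so the cutting cost is paid at most once per sprite without cutting everything up front. Note that BufferedImage.getSubimage returns a view backed by the sheet's pixel data rather than a copy, so the memory overhead of caching these views is small.

        import java.awt.image.BufferedImage;
        import java.util.HashMap;
        import java.util.Map;

        public class SpriteCache {
            private final BufferedImage sheet;
            private final Map<String, BufferedImage> cache = new HashMap<>();

            public SpriteCache(BufferedImage sheet) {
                this.sheet = sheet;
            }

            // Cut on first request, serve from the cache afterwards.
            public BufferedImage get(String key, int x, int y, int w, int h) {
                return cache.computeIfAbsent(key,
                        k -> sheet.getSubimage(x, y, w, h));
            }
        }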

    Read the article

  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior. One of the key topics to understand is the difference between directly using database user and password values versus mapping from a WLS user and password to the associated database values. The direct use of database credentials is relatively new to WLS, added based on customer feedback. Some of the trade-offs are covered in this article.

    Credential Mapping vs. Database Credentials

    Each WLS data source has a credential map, a mechanism used to map a key, in this case a WLS user, to security credentials (user and password). By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and converted to a database user and password using the credential map associated with the data source. If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used. Because of this defaulting mechanism, you should be careful what permissions are granted to the default user. Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users).

    To create an entry in the credential map:
    1) First, create a WLS user. In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.
    2) Second, create the mapping. In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information.

    The advantages of using the credential mapping are that:
    1) You don't hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password, and
    2) It provides a layer of abstraction between WLS security and database settings, such that many WLS identities can be mapped to a smaller set of DB identities, thereby requiring only middle-tier configuration updates when WLS users are added or removed.

    You can cut down the number of users that have access to a data source to reduce the user maintenance overhead. For example, suppose that a servlet has the one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call. Every WebLogic user can reap the specific DBMS access coded into the servlet, but none has to have general access to the data source. For instance, there may be a 'Sales' DBMS which needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access, and a servlet is built that hard-wires the specific data source access credentials in its connection request. It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source. This is the approach that many large applications take and is the reasoning behind the default mapping behavior in WLS.

    The disadvantages of using the credential map are that:
    1) It is difficult to manage (create, update, delete) with a large number of users; it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries.
    2) You can't share a credential map between data sources, so the entries must be duplicated.

    Some applications prefer not to use the credential map. Instead, the credentials passed to getConnection(user, password) should be treated as database credentials and used to authenticate with the database for the connection, avoiding the credential map entirely. This is enabled by setting "use-database-credentials" to true. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help.

    Use Database Credentials is not currently supported for Multi Data Source configurations. When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes:
    1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0).
    2. oracle-proxy-session (this interaction is first available in 10.3.6.0).
    3. set client identifier (this interaction is available by patch in 10.3.6.0). Note that in the data source schema, the set client identifier feature is poorly named "credential-mapping-enabled"; the documentation and the console refer to it as Set Client Identifier.

    To review the behavior of credential mapping and using database credentials:
    - If using the credential map, there needs to be a mapping from each WLS user to a database user for those users that will have access to the database; otherwise the default user for the data source will be used. If you always specify a user/password when getting a connection, you only need credential map entries for those specific users.
    - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used. If you specify a user/password when getting a connection, that user will be used for the credentials. WLS users are not involved at all in the data source connection process.
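
    For reference, a minimal sketch of the getConnection(user, password) call that both schemes hang off of (the JNDI name is hypothetical). Under the default behavior the pair names a WLS user that is mapped through the credential map; with use-database-credentials set to true it is passed to the database as-is.

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class DataSourceCredentialsExample {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                // Hypothetical JNDI name for the data source.
                DataSource ds = (DataSource) ctx.lookup("jdbc/SalesDS");

                // With credential mapping (the default), "wlsUser" is validated
                // as a WLS user and mapped to a database user; with
                // use-database-credentials=true it is sent to the database as-is.
                try (Connection conn = ds.getConnection("wlsUser", "wlsPassword")) {
                    // ... use the connection ...
                }
            }
        }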

    Read the article

  • VMware Player and Ubuntu 12.04 - Full Screen

    - by DotNetStudent
    I have installed VMware Player 4.0.2 under Ubuntu 12.04 (Final) and, apart from having to patch the modules, everything went smoothly. However, there's an irritating behavior when toggling full-screen mode: after toggling full screen (using Virtual Machine > Toggle Full Screen or Ctrl + Alt + Return), minimizing the player and maximizing it again changes the guest's resolution to some strange value, and the player gets "nested" under GNOME 3's taskbar like every other native Ubuntu window. To switch to full screen again I have to press Ctrl + Alt + Return twice. Can anyone please tell me if this is the normal, expected behavior? Is there any way of "correcting" it? The host operating system is Ubuntu 12.04 (Final) and the guest is Windows 7 (both 64-bit).

    Read the article

  • Cloudflare displaying cached CSS in-line?

    - by esqew
    I recently enabled CloudFlare on my domain, and when I write HTML like this: <link href="css/main.css" rel="stylesheet" /> the CSS gets inlined instead, like this: <style>body{padding-top:40px}span.light{font-weight:lighter!important}span.title{font-size:60px;line-height:1;letter-spacing:-1px;color:inherit}</style> When I update the file via FTP, the changes are not reflected, which leads me to believe that this is a caching issue. Is this due to CloudFlare? If so, how do I disable the behavior? EDIT: I also came to the conclusion that caching is behind the behavior, since I can see the changes in the page after renaming the CSS file.

    Read the article

  • What has 'rm -r ~' done to my home directory?

    - by GUI Junkie
    gedit creates hidden backup files ending with '~'. I wanted to do a recursive cleanup of my directory tree. The command rm *~ will delete all files in the current directory ending with '~'. I thought rm -r *~ . would delete all such files in the whole tree, but I typo-ed rm -r ~. There was a message that some directory could not be deleted, and I aborted the command. The question is: what have I been deleting? I did notice that my Filezilla configuration was gone. Does this command delete all hidden directories from the home dir?

    Read the article

  • Google Analytics: Track user usage and flow

    - by Quintin Par
    Can someone help me query Google Analytics to track a specific user's behavior and usage pattern? Currently I pass user ids to GA as _setCustomVar(2, 'id', id, 1); This is session based. But I have yet to work out how I can use this to view the usage pattern and behavior for the passed id. Say I need to understand the visitor flow for one id, or the page view count for that id, etc. Rephrasing: can I filter all existing reports for a specific id that I can select?

    Read the article

  • Writing a spell checker similar to "did you mean"

    - by user888734
    I'm hoping to write a spellchecker for search queries in a web application - not unlike Google's "Did you mean?" The algorithm will be loosely based on this: http://catalog.ldc.upenn.edu/LDC2006T13 In short, it generates correction candidates and scores them on how often they appear (along with adjacent words in the search query) in an enormous dataset of known n-grams - Google Web 1T - which contains well over 1 billion 5-grams. I'm not using the Web 1T dataset, but building my n-gram sets from my own documents - about 200k docs, and I'm estimating tens or hundreds of millions of n-grams will be generated. This kind of process is pushing the limits of my understanding of basic computing performance - can I simply load my n-grams into memory in a hashtable or dictionary when the app starts? Is the only limiting factor the amount of memory on the machine? Or am I barking up the wrong tree? Perhaps putting all my n-grams in a graph database with some sort of tree query optimisation? Could that ever be fast enough?
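
    A rough way to sanity-check the in-memory option before building anything, sketched in Java (the byte figures are back-of-the-envelope assumptions, not measurements): on a 64-bit JVM, a HashMap entry holding an average 30-character n-gram string and an Integer count costs very roughly 150-200 bytes once object headers, references, and the char data are added up, so 100 million n-grams would want something on the order of 15-20 GB of heap. That suggests plain strings in a HashMap are feasible for tens of millions of n-grams but tight beyond that without a more compact encoding, such as hashing each n-gram down to a long.

        import java.util.HashMap;
        import java.util.Map;

        public class NgramModel {
            private final Map<String, Integer> counts = new HashMap<>();

            public void add(String ngram) {
                counts.merge(ngram, 1, Integer::sum);
            }

            public int frequency(String ngram) {
                return counts.getOrDefault(ngram, 0);
            }

            public static void main(String[] args) {
                NgramModel model = new NgramModel();
                model.add("the quick brown fox jumps");
                // Score a correction candidate by its n-gram frequency.
                System.out.println(model.frequency("the quick brown fox jumps")); // 1
            }
        }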

    Read the article

  • Samba: map domain group to local one

    - by user285467
    I have a problem with mapping a pure domain group to one existing on a UNIX system. When I map an NT domain account, by default Samba picks the local SID - the one that can be acquired via the command net getlocalsid - instead of the SID that comes from the domain (net getdomainsid). This is the behavior that I do not understand. I can explicitly set the SID to the domain one, e.g.: net groupmap add sid=[DOMAIN SID]-[RID] ntgroup=[DOMAIN group] unixgroup=[UNIX group] type=l However, the command getent group | grep 'DOMAIN group' indicates this group to be a domain one - its GID created according to the RID backend in use, not the GID of 'UNIX group' as expected. Worth mentioning: I use winbind. The strange thing is that I already have such a mapping in place for another group, 'DOMAIN group2', which getent group reports with the GID of the local UNIX group and with all the members of 'DOMAIN group2'. Now the question is: how do I get the same behavior for my other groups?

    Read the article

  • Testing complex compositions

    - by phlipsy
    I have a rather large collection of classes which check and mutate a given data structure. They can be composed via the composite pattern into arbitrarily complex tree-like structures. The final product contains a lot of these composed structures. My question is now: how can I test those? Although it is easy to test every single unit of these compositions, it is rather expensive to test the whole compositions, in the following sense:
    - testing the correct layout of the composition tree results in a huge number of test cases, and
    - changes in the compositions result in a very laborious review of every single test case.
    What is the general guideline here?
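
    For reference, a minimal Java sketch of the kind of structure in question (interface and class names hypothetical): the leaf checks are trivially unit-testable; the hard part is asserting that a deep production tree is composed as intended.

        import java.util.List;

        interface Check<T> {
            boolean apply(T data);
        }

        // Composite node: passes only if every child passes.
        class AllOf<T> implements Check<T> {
            private final List<Check<T>> children;

            AllOf(List<Check<T>> children) {
                this.children = children;
            }

            @Override
            public boolean apply(T data) {
                return children.stream().allMatch(c -> c.apply(data));
            }
        }

        // Leaf node: a single, easily unit-tested rule.
        class NonEmpty implements Check<String> {
            @Override
            public boolean apply(String data) {
                return data != null && !data.isEmpty();
            }
        }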

    Read the article

  • What is this paradigm/style called?

    - by McMannus
    I have the following situation: I'm developing an add-in for a UML modeling tool. The models created by the user are stored inside the main application, and limited access to them is given through its API. However, the add-in has a lot of callbacks for events that are triggered by the main application when the user changes the model. Since the models are already stored once in the main application, I considered it impractical to duplicate them in the add-in, which means the add-in contains only behavior rather than state. This behavior is mainly expressed by static functions organized into functionally cohesive classes. The callbacks for the events always carry references to the model elements relevant to the specific event that occurred. At first this seemed to me to be a procedural style in general, but procedural style doesn't account for events/callbacks, so this boils down to the question: what is this programming style called?
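
    A minimal Java sketch of the style being described (all names hypothetical): no add-in state, just stateless handlers in cohesive utility classes, reacting to model elements handed over by the host's events.

        public final class RenameHandlers {
            private RenameHandlers() {} // no instances: behavior only, no state

            // Invoked by the host tool's callback; everything the handler
            // needs arrives through the event's parameters.
            public static void onElementRenamed(Object modelElement, String newName) {
                // validate or react using only the passed-in model element
            }
        }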

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups go onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this. The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync but the time it takes to perform a cp -al of the tree locally on the backup machine. It takes more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the bottleneck is the disk (top shows the cp using a lot of RAM but not a lot of CPU, and mostly in uninterruptible-sleep state). We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance, because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.

    Read the article

  • Setup of HP ProCurve 2810-24G for iSCSI?

    - by 3molo
    Hi, I have a pair of ProCurve 2810-24G that I will use with a Dell EqualLogic SAN and VMware ESXi. Since ESXi does MPIO, I am a little uncertain about the configuration of the links between the switches. Is a trunk the right way to go between the switches? I know that the ports for the SAN and the ESXi hosts should be untagged, so does that mean that I want the VLAN tagged on the trunk ports? This is more or less the configuration:

        trunk 1-4 Trk1 Trunk
        snmp-server community "public" Unrestricted
        vlan 1
           name "DEFAULT_VLAN"
           untagged 24,Trk1
           ip address 10.180.3.1 255.255.255.0
           no untagged 5-23
           exit
        vlan 801
           name "Storage"
           untagged 5-23
           tagged Trk1
           jumbo
           exit
        no fault-finder broadcast-storm
        stack commander "sanstack"
        spanning-tree
        spanning-tree Trk1 priority 4
        spanning-tree force-version RSTP-operation

    The EqualLogic PS4000 SAN has two controllers, with two network interfaces each. Dell recommends connecting each controller to each of the switches. The VMware documentation seems to recommend creating one vmkernel port per pNIC. With MPIO, this could allow more than 1 Gbps of throughput.

    Read the article

  • Mail.app doesn't detect sender in Address Book

    - by CoreSandello
    Hi there. I don't understand how 'smart addresses' in Mail.app work. Recently I noticed that for some emails I don't see the person's full name in the 'From' column. I started to dig into this behavior and found out that I have a few contacts in my Address Book that are not recognized by Mail.app. Here is how it looks: I have a person in the Address Book with a filled-in email entry and a filled-in first/last name (localized). I have an incoming email from that person (from the email address specified in the Address Book), but the first/last name in the email itself doesn't match the ones specified in the Address Book (e.g. the 'From' field in the email looks like 'John [work] <[email protected]>' while the Address Book entry is 'John Smith' (localized, in Russian)). And Mail.app doesn't recognize that this mail originates from that person in the Address Book: if I click on the 'From' field, it suggests adding the sender to the Address Book, while for other emails I get a 'Show in Address Book' menu entry (especially for ones with a full localized name in the 'From' field). I'm wondering: is that behavior correct, or am I missing something? I'm using Snow Leopard & Mail 4.0; my system language is set to English, if that matters. I'd like some clarification on that Mail.app behavior: whether it is fixable or not (and if it is fixable, I'd like to see a fix). By the way, is it possible to match the sender's address against an Address Book entry in filter rules? That would be great, since then I could create rules like 'move all mail from that person to that folder' without specifying the exact source address. Thanks, Ivan.

    Read the article

  • Windows file locks allowing multiple users to write to open file over network

    - by JPbuntu
    I have 6 Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard file-locking behavior, but I can't get it to work. The way it is right now, a file can be open by two users, and changed and saved by either one of them; the last file saved overwrites whatever changes the other user made. At first I thought this was a Samba configuration problem, but I get this behavior even between two Windows machines. So far I have only tested:

        Windows XP <-> Windows Vista
        Windows XP <-> Samba <-> Windows Vista

    and both give the same behavior. When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this: close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance. EDIT: I tested an Excel file again, and it is now working properly in both above-mentioned cases; I am also no longer getting the above-mentioned error. I don't know what happened; perhaps a restart fixed it? (It also works with strict locking = no.) I still need to find a solution for the CAD/CAM files we use, though; the software is Vector and it does not seem to use file locks. Is there any software that I can use to manage these files, so two people can't open/edit them at the same time? Maybe a Windows application that forces file locks? Or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)

    Read the article

  • SATA HDD stuttering on reads (LED on)

    - by jdehaan
    I hope I can get some new ideas to solve the issue. What I have:

        AMD Phenom II X4 905e Box, Socket AM3
        Gigabyte GA-MA790XTA-UD4, AMD 790X, AM3 ATX
        WD Caviar GreenPower 1.5 TB SATA II
        4 GB Kit OCZ DDR3 PC3-10666 Platinum Low-Voltage CL7
        HIS HD 4670 iSilence4 DDR3 1024 MB Native HDMI Dual-DVI, PCI-Express

    Behavior: everything works normally except the HDD accesses. The drive seems to get stuck reading data (the HDD LED driven by the mainboard stays on for a while), then recovers and goes on. Sometimes there is a delay of more than 30 s; often it is a matter of 5-10 s. This does not seem related to the workload of the disk. What I tried so far:

        Used a different SATA port: I switched from the SATA3 port to SATA2, then to the SATA ports driven by the CPU; no difference.
        Turned AHCI off and drove the SATA in IDE mode: same behavior. Now I felt like the HDD had problems.
        Deactivated energy-saving settings (BeQuiet, ...) and disabled 3 CPUs (msconfig), supposing some race condition or trouble with changing frequencies: same trouble.
        With Windows 7 Professional 64-bit, I installed the hotfix http://support.microsoft.com/?scid=kb%3Ben-us%3B976418&x=8&y=9 with no success. The problem description matches what I experienced, so I felt lucky, but was quite disappointed afterwards.
        Burned the DLG 5.04f Data Lifeguard CD from WD to check the drive. All tests passed, to my surprise!
        Installed Ubuntu (i386 flavour) on the same drive, still in IDE mode: exactly the same behavior. So the OS does not matter.

    What I did not try yet:

        http://www.gigabyte.eu/Support/Motherboard/BIOS_Model.aspx?ProductID=3263 - I have an F2 BIOS and there is an F3a beta. Has anyone had similar issues that were solved by this update? It is a beta, so I fear upgrading a bit, and I could find no release notes at all on the net; not really serious work, maybe they had issues translating from Chinese?
        Using another HDD.

    Does anyone have other hints or a similar issue? Maybe it is neither the HDD nor the mainboard/BIOS?

    Read the article

  • SELinux blocking Samba directory listing

    - by Sean M
    I am running Samba on a CentOS server, and I am experiencing a problem where it allows me to connect to the server and see a share, but shows the share as an empty directory. I find this behavior strange. Here is the stanza in my smb.conf for the given share:

        [seanm]
        path = /home/seanm
        writeable = yes
        valid users = seanm, root
        read only = No

    Here's what I see on the server side:

        [seanm@server ~]$ ls -l
        -rw-r--r-- 1 seanm seanm 40 Jan 4 13:45 pangram.txt

    And yet:

        [seanm@client ~]$ smbclient //server/seanm -U seanm -W WORKGROUP
        Enter seanm's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.33-3.29.el5_5.1]
        smb: \> ls
          .                    D        0  Fri Jan  7 10:08:55 2011
          ..                   D        0  Fri Jan  7 07:58:31 2011
        58994 blocks of size 262144. 50356 blocks available

    This behavior is present on both a Windows client and a Linux client system. The behavior is present with the firewall on and with the firewall off, so it's not that. Neither /var/log/messages nor /var/log/secure has any complaints about Samba. I doubt that SELinux is the problem; just in case, here are the relevant settings:

        [root@server ~]# getsebool -a | grep samba
        samba_domain_controller --> off
        samba_enable_home_dirs --> on
        samba_export_all_ro --> off
        samba_export_all_rw --> off
        samba_share_fusefs --> off
        samba_share_nfs --> off
        use_samba_home_dirs --> on
        virt_use_samba --> off

    What am I doing wrong here, and what can I do to fix it? Edit: SELinux probably is the problem, judging by the fact that the issue goes away when I set SELinux to "permissive" or issue setsebool -P samba_export_all_rw on - both of which are unacceptable for production environments. What the heck kind of context does a directory need to have on it for Samba users to actually get files from it? I consider rolling your own rules and/or context to be deeply sub-optimal.

    Read the article

  • Windows 7 task bar stuck in hiding, how to fix?

    - by Rainer Blome
    In Windows 7, I use the "Auto-hide the task bar" feature. Usually, it works fine: as soon as the pointer touches the screen bottom, the task bar pops up. However, sometimes it refuses to rise. Pressing the "Windows" key (or Ctrl-ESC) makes the start menu appear, forcing the task bar out of hiding as well. Once I've done this, the task bar auto-rises again. This is annoying; it interrupts flow. Has anyone else noticed this? How do I avoid it? Searching for "Windows 7 task bar auto-raise" shows that at least one other person has experienced this problem: http://answers.microsoft.com/en-us/windows/forum/windows_7-desktop/how-can-i-fix-the-taskbars-auto-hide/8cdf6369-7354-4d29-9249-b7096ed0e28b?msgId=6dac3361-9d0f-4a9e-8642-b91a72826ba4 To answer the question posed by the "helpful" support engineer on the above page: of course I am running some apps when this happens, usually Explorer, Firefox, Eclipse, Cygwin/X, Xterm, Emacs, Notes, a VPN client, and a firewall. If my memory serves correctly, I have seen this behavior on earlier versions of Windows as well, XP at least. To reproduce this behavior, I tried switching between apps and making apps open other windows, but I am unable to reproduce it so far. It appears to happen out of the blue, sometimes multiple times a day. Looks like a bug to me: the task bar should rise no matter what.

    Read the article

  • Unexplained cache RAM drops on Linux machine

    - by FunkyChicken
    I run a CentOS 5.7 64-bit machine with 24 GB of RAM, running kernel 2.6.18-274.12.1.el5. This machine runs only Nginx, php-fpm and Xcache as extra applications. For about 3 weeks the memory behavior of this machine has been different, and I cannot explain why. There are no crons running which flush anything like this. There are also no large numbers of files being deleted or changed during these drops. The 'cached' memory gets dropped about every few hours, but it's never a fixed interval between flushes, which indicates to me that some bottleneck is being reached instead. It also always seems to happen when total memory usage gets to about 18 GB, but again, not always exactly 18 GB. This is a graph of my memory usage: as you can see in the graph, the 'buffers' always stay more or less the same; it is mainly the 'cache' that gets dropped. Running vmstat -m, I have output the memory usage just before and just after a memory drop. The output is here: http://pastebin.com/diff.php?i=hJqZqztm - 'old version' being before, 'new version' being after a drop. About 3 weeks ago my server crashed during a heavy DDoS attack; after I rebooted the machine this odd behavior started. I have checked a bunch of logs and restarted the machine again, but I cannot find any indication of what changed. During these 'cache' memory drops, my inode usage drops at the same time. Does anyone have any idea what might be causing this behavior? Clearly my RAM isn't full, so I am curious why this could be happening.

    Read the article

  • Can't install xclip on Ubuntu 10.10

    - by wildster
    I'm trying to load an SSH key to GitHub from a new machine, and this command is not working:

        sudo apt-get install xclip
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package xclip is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package xclip has no installation candidate

    When I try:

        sudo aptitude install xclip
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        No candidate version found for xclip
        No candidate version found for xclip
        The following partially installed packages will be configured:
          synaptics-dkms
        0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B of archives. After unpacking 0B will be used.
        Writing extended state information... Done
        Setting up synaptics-dkms (1.1.1) ...
        Loading new synaptics-1.1.1 DKMS files...
        Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz.
        File does not exist.
        dpkg: error processing synaptics-dkms (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         synaptics-dkms
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        A package failed to install. Trying to recover:
        Setting up synaptics-dkms (1.1.1) ...
        Loading new synaptics-1.1.1 DKMS files...
        Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz.
        File does not exist.
        dpkg: error processing synaptics-dkms (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         synaptics-dkms
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done

    Any idea how I can install this? Mucho thanks in advance

    Read the article
