Search Results

Search found 8380 results on 336 pages for 'manage py'.


  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files around between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. We'd also like to avoid any kind of local storage (sharing a disk on a desktop or something), since we're a geographically distributed team. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements:
    1. Transport security via SSL to the bucket
    2. Encryption of bucket contents
    3. Bi-directional syncing
    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, but the protocol looks like plain old HTTP: http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ). Many scripts use GPG to encrypt files. This seems like it could work, however I have to make sure that each OS X client is able to use the same key to encrypt and decrypt files (key management is left to me). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts do one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository", they fail #3. Whew. So, I'd love a single tool that does all this, but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
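    As a rough sketch of the GPG + S3-over-SSL half of this (not the bi-directional part), something like the following could be cobbled together; the bucket, recipient address and folder names are made up, and it assumes s3cmd is configured with use_https = True in ~/.s3cfg:

        #!/bin/sh
        # encrypt everything in the shared folder with a shared team key, then push to S3 over HTTPS
        SRC="$HOME/shared"                          # hypothetical local folder
        STAGE="$HOME/.shared-enc"                   # staging area for the encrypted copies
        BUCKET="s3://example-team-bucket/shared/"   # hypothetical bucket
        mkdir -p "$STAGE"
        for f in "$SRC"/*; do
            [ -f "$f" ] || continue
            # --recipient names the shared team key; every client needs it in its keyring
            gpg --batch --yes --encrypt --recipient "team@example.com" \
                --output "$STAGE/$(basename "$f").gpg" "$f"
        done
        # one-way push; a reverse pass (s3cmd sync "$BUCKET" ... plus gpg --decrypt) would be
        # needed for the other direction
        s3cmd sync "$STAGE/" "$BUCKET"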


  • Tasks not appearing in Mac Outlook 2011

    - by Tama
    My current workplace uses Macs and my old workplaces used Windows. In my old workplaces I heavily used Outlook's Task functionality to manage my workload. I understand that the Task functionality in Outlook 2011 for Mac is heavily limited, so I was very pleased to find this useful "how-to" on making the most of Tasks. My problem is that my tasks don't appear in the Task folder, or anywhere else for that matter. Even if I search for the title of a task I've recently created I still can't find it. After some Googling I found a forum thread suggesting it may be a problem with the Outlook database, which points to a Microsoft KB article. So I went through all of the recommended steps on rebuilding/adding a new identity using the "Microsoft Database Utility" - the theory being that if I create a new identity I can test task creation using a "blank slate" identity. Even after changing the default identity to my newly created identity using the Microsoft Database Utility (which requires restarting the computer), task creation still doesn't work. Any ideas appreciated - I really miss the task functionality in Outlook 2010 for Windows.


  • New Dell PE R710 - Storage Question

    - by rihatum
    Hi All, Dell PE R710, received from Dell in the following state:
        Windows Disk 0: 1800GB (Volumes C & D)
        Windows Disk 1: 526GB (Volume E)
        Perc6i Integrated RAID Controller
        6 x 500GB Nearline SAS 7200RPM HDDs
        RAID 5 configuration with two virtual disks
    I have installed Dell OpenManage and it shows the following:
        Virtual Disk 0 - State: Background Initialization (7%)
        Virtual Disk 1 - State: Background Initialization (25%)
    Now when I click on Virtual Disk 0 it shows me all 6 disks, and the same happens when I click on Virtual Disk 1 - it displays all 6 disks. But when I click on Storage > Perc6i > Connector 0 I get 4 physical disks with the following numbers:
        Physical Disk 0:0:0
        Physical Disk 0:0:1
        Physical Disk 0:0:2
        Physical Disk 0:0:3
    When I click on Storage > Perc6i > Connector 1 I get 2 physical disks listed in the following way:
        Physical Disk 1:0:4
        Physical Disk 1:0:5
    I am a little confused by this numbering - does 1:0:4 translate to Controller 1, Disk 4? Does this integrated RAID card have two controllers coming out of it? Also, when I first switched on the machine, the boot partition was showing 1GB available out of 40GB; now it's showing 38GB available out of 40GB. Is this because the virtual disks are still initializing? Any recommendations or suggestions? Also, this server has 6 x 500GB Nearline SAS hard drives - what would be a good RAID config? We are planning to use it for Hyper-V with quite a few (7 or 8) virtual servers, so your suggestions would be helpful. Also, while the virtual disks are in an initialization state, can I destroy and re-create the RAID configuration? Would I have to do it at the BIOS (Ctrl-M)? Thanks and Regards
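    If the OpenManage GUI is ambiguous, OMSA's command line shows the same information in one place; a quick sketch, assuming omreport (part of OpenManage Server Administrator) is installed and the PERC is controller 0:

        omreport storage controller              # confirms one PERC 6/i with its connectors
        omreport storage vdisk controller=0      # state/progress of both virtual disks
        omreport storage pdisk controller=0      # physical disks, with their connector:enclosure:slot IDs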


  • IIS7 bulk bindings in VBScript, how to remove a binding

    - by minus4
    I have a script that manages adding over a thousand domains to a single site's bindings. This has gone fine, but now the client wants about 20 of them removed. Microsoft's programmers didn't think it would be nice to sort the bindings alphabetically, so does anyone know the code to remove the domains (an array list) in bulk, please? This is what I am using:

        Set adminManager = createObject("Microsoft.ApplicationHost.WritableAdminManager")
        adminManager.CommitPath = "MACHINE/WEBROOT/APPHOST"
        Set sitesSection = adminManager.GetAdminSection("system.applicationHost/sites", "MACHINE/WEBROOT/APPHOST")
        Set sitesCollection = sitesSection.Collection

        siteElementPos = FindElement(sitesCollection, "site", Array("name", "microsites"))
        If siteElementPos = -1 Then
            WScript.Echo "Element not found!"
            WScript.Quit
        End If

        on error resume next

        Set siteElement = sitesCollection.Item(siteElementPos)
        Set bindingsCollection = siteElement.ChildElements.Item("bindings").Collection

        Dim arrFileNames : arrFileNames = Array("list of domains")
        Dim objDict : Set objDict = CreateObject("Scripting.Dictionary")
        Dim strFileName, strTemp

        For Each strFileName In arrFileNames
            Set bindingElement = bindingsCollection.CreateNewElement("binding")
            bindingElement.Properties.Item("protocol").Value = "http"
            bindingElement.Properties.Item("bindingInformation").Value = "192.168.100.19:80:" & strFileName
            bindingsCollection.AddElement(bindingElement)
        Next

        adminManager.CommitChanges()
        WScript.Echo "Job Completed"
        WScript.Quit

        Function FindElement(collection, elementTagName, valuesToMatch)
            For i = 0 To CInt(collection.Count) - 1
                Set element = collection.Item(i)
                If element.Name = elementTagName Then
                    matches = True
                    For iVal = 0 To UBound(valuesToMatch) Step 2
                        Set property = element.GetPropertyByName(valuesToMatch(iVal))
                        value = property.Value
                        If Not IsNull(value) Then
                            value = CStr(value)
                        End If
                        If Not value = CStr(valuesToMatch(iVal + 1)) Then
                            matches = False
                            Exit For
                        End If
                    Next
                    If matches Then
                        Exit For
                    End If
                End If
            Next
            If matches Then
                FindElement = i
            Else
                FindElement = -1
            End If
        End Function

    So as you can see it is easy to add, but I can find no code, manual or instructions for the removal. I can't seem to run appcmd either. At first I tried creating a batch file using appcmd, but this never worked, saying appcmd could not be found. Thanks


  • Moving from WDS to MDT + WDS - Prestaged Computer Name

    - by MSCF
    We previously used just WDS to deploy our images. WDS was set up to request approval for new machines. We used the "Name and Approve" option to name the machines as we added them. If a machine was pre-existing, it would just use the existing computer name from AD. Then in our unattend.xml file we had ComputerName=%MACHINENAME%. This picked up the name we gave it during approval and set the computer name accordingly. We are now implementing MDT to manage our images and drivers. But upon testing, we noticed it would assign random computer names. I went into the Unattend.xml for the deploy task sequence and added that value under Specialize > amd64_Microsoft-Windows-Shell-Setup_neutral: ComputerName=%MACHINENAME%. But when we try applying the image, it errors out at that point of the install. How can an MDT deployment be configured to leverage the pre-staged name? Some additional info:
    Error message during the imaging process:
        Windows could not parse or process the unattend answer file for pass [specialize]. The settings specified in the answer file cannot be applied. The error was detected while processing settings for component [Microsoft-Windows-Shell-Setup].
    setuperr.log:
        2014-07-22 14:02:13, Error [setup.exe] [Action Queue] : Unattend action failed with exit code 4
        2014-07-22 14:02:13, Error [setup.exe] Execution of unattend GCs failed; hr = 0x0; pResults->hrResult = 0x8030000b


  • Trying to run an ASP.NET MVC application using Mono on Apache with FastCGI

    - by Arda Xi
    I have a hosting account with DreamHost and I would like to use the same account to run ASP.NET applications. I have an application deployed in a subdomain, with a .htaccess containing a handler like this:

        # Define the FastCGI Mono launcher as an Apache handler and let
        # it manage this web-application (its files and subdirectories)
        SetHandler monoWrapper
        Action monoWrapper /home/arienh4/<domain>/cgi-bin/mono.fcgi virtual

    My mono.fcgi is set up as such:

        #!/bin/sh
        #umask 0077
        exec >>/home/arienh4/tmp/mono-fcgi.log
        exec 2>>/home/arienh4/tmp/mono-fcgi.err
        echo $(date +"[%F %T]") Starting fastcgi-mono-server2
        cd /
        chmod 0700 /home/arienh4/tmp/mono-fcgi.sock
        echo $$>/home/arienh4/tmp/mono-fcgi.pid
        # stdin is the socket handle
        export PATH="/home/arienh4/mono/bin:$PATH"
        export LD_LIBRARY_PATH="/home/arienh4/mono/lib:$LD_LIBRARY_PATH"
        export TMP="/home/arienh4/tmp"
        export MONO_SHARED_DIR="/home/arienh4/tmp"
        exec /home/arienh4/mono/bin/mono /home/arienh4/mono/lib/mono/2.0/fastcgi-mono-server2.exe \
            /logfile=/home/arienh4/logs/fastcgi-mono-web.log /loglevels=All \
            /applications=/:/home/arienh4/<domain>

    I took this from the Mono site for CGI. I'm not sure if I'm doing it correctly though. This code is resulting in this error:

        Request exceeded the limit of 10 internal redirects due to probable configuration error.
        Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.

    I have no idea what's causing this. As far as I can see, Mono isn't even hit (no log files are created).
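    One way to confirm whether Mono itself is ever reached is to take Apache and the .htaccess handler out of the picture and run the FastCGI server by hand on a TCP socket; a rough sketch (the port is arbitrary, and the /socket=tcp: option is how the fastcgi-mono-server is usually started standalone, so double-check it against the local build):

        export PATH="/home/arienh4/mono/bin:$PATH"
        export LD_LIBRARY_PATH="/home/arienh4/mono/lib:$LD_LIBRARY_PATH"
        fastcgi-mono-server2 "/applications=/:/home/arienh4/<domain>" \
            /socket=tcp:127.0.0.1:9000 /loglevels=All \
            /logfile=/home/arienh4/logs/fastcgi-mono-web.log &
        # then watch whether anything ever shows up here when a request comes in
        tail -f /home/arienh4/tmp/mono-fcgi.err /home/arienh4/logs/fastcgi-mono-web.log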


  • Where to get glib-config for Kubuntu?

    - by Carl Smotricz
    I'm trying to compile Midnight Commander on a Kubuntu 9.10 (Karmic) box with no root access. I've set up a directory under $HOME, downloaded the mc source package and various stuff required for building, such as autotools. I've unpacked the CONTENTS of all those packages into this working directory such that I have the usual ./usr, ./lib, ./etc hierarchy. I manage to get configure through a lot of tests, but I can't seem to fool it into finding glib:

        checking for glib-2.0...
        checking for glib-config... no
        checking for glib12-config... no
        checking for glib-config... no
        checking for GLIB - version >= 1.2.6... no
        *** The glib-config script installed by GLIB could not be found
        *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in
        *** your path, or set the GLIB_CONFIG environment variable to the
        *** full path to glib-config.
        configure: error: Test for glib failed. GNU Midnight Commander requires glib 1.2.6 or above.

    My system has glib installed:

        /lib/libglib-2.0.so.0
        /lib/libglib-2.0.so.0.2200.3

    ... and I've also downloaded and unpacked the glib package into my working directory:

        libglib2.0-0_2.22.2-0ubuntu1_i386.deb
        libglib2.0-dev_2.22.2-0ubuntu1_i386.deb

    ... but still the elusive glib-config is nowhere to be found. It's not in any debian package for Karmic, either. So I'd appreciate any help getting over this hurdle. Please note, again, that I don't have root, so I can't just merrily apt-get stuff.
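    Since glib 2.x ships a pkg-config file rather than glib-config, one route (assuming the mc version being built knows how to use pkg-config for glib-2.0, and that the libglib2.0-dev contents were unpacked under ~/build) is roughly:

        # make pkg-config see the unpacked glib-2.0.pc
        export PKG_CONFIG_PATH="$HOME/build/usr/lib/pkgconfig:$PKG_CONFIG_PATH"
        pkg-config --modversion glib-2.0    # should print 2.22.2 if the .pc file is found
        # the unpacked .pc file still says prefix=/usr, so point the compiler and linker
        # at the unpacked headers/libs explicitly as well
        export CFLAGS="-I$HOME/build/usr/include/glib-2.0 -I$HOME/build/usr/lib/glib-2.0/include"
        export LDFLAGS="-L$HOME/build/usr/lib"
        ./configure --prefix="$HOME/mc"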


  • Mac OS X - User home directories shared via NFS

    - by Hugh
    I've run into some problems with how I've got user home directories set up on our system here. Our server is an XServe, using Open Directory to manage the user accounts. The majority of our workstations are OS X, but there are a few running Linux (CentOS 5.3), and, as time goes on, we expect the proportion of Linux workstations to increase (at some point, we expect to move the server side over to Linux too, but for now we're running with what we've already got). To ensure that the Linux and OS X workstations both see users' home directories in the same place, I shared the home directories using NFS. On the server end, the home directories are stored in /Volumes/data/company_users; this is mounted on the workstations at /mount/company_users. This works fine on the Linux workstations, but there is some weirdness under OS X. For the user who is logged in through the GUI, it all works just fine. However, if a user tries to SSH into a machine that they are not the primary user on, they often have no access to their own home directory. It looks as though OS X is trying to do something else to the user home directories mount point when you log in through the GUI.... For example, on this machine (nv001), I (hugh) am logged into the GUI:

        Last login: Mon Mar 8 18:17:52 on ttys011
        [nv001:~] hugh% ls -al /mount/company_users
        total 40
        drwxrwxrwx   26 hugh  wheel   840 27 Jan 19:09 .
        drwxr-xr-x    6 admin admin   204 19 Dec 18:36 ..
        drwx------+ 128 hugh  staff  4308 27 Feb 23:36 hugh
        drwx------+  26 matt  staff   840  4 Dec 14:14 matt
        [nv001:~] hugh%

    So Matt's home directory is accessible to him. However, if I try to switch to him:

        [nv001:~] hugh% su - matt
        Password:
        su: no directory
        [nv001:~] hugh%

    Or:

        [nv001:~] hugh% su matt
        Password:
        tcsh: Permission denied
        tcsh: Trying to start from "/mount/company_users/matt"
        tcsh: Trying to start from "/"
        [nv001:/] matt%

    Does anyone have any idea why it might be doing this? It's causing me all sorts of problems at the moment... The only machine where I can successfully switch users at the moment is the server that the user directories are stored on, where /mount/company_users is actually just a symlink to /Volumes/data/company_users.


  • Cygwin - Repo with Separate Git/Working Dir Doesn't Work

    - by Kyle Lacy
    Since I've switched to OS X and Vim, I've found it easiest to manage all of my 'dotfiles' (all of my configuration files and miscellaneous scripts) with Git. Having already set up my dotfiles in a repo following this tutorial, I figured it would also be easy enough to migrate all of my settings into my Cygwin setup on my Windows partition. Already having the repo setup on Github, I simply clone'd the repo, and moved all of the files over to my home directory, making it a mirror of my OS X home directory. Unfortunately, I cannot seem to use the actual repo any further within Cygwin. The problem is that I cannot use my dotfiles repo with git within Cygwin. The setup is unique from most normal git repos, in that the working directory and the git directory are in different locations. Specifically, the working directory is $HOME (/Users/kyle on OS X, /home/kyle in Cygwin), and the git repo is $HOME/.dotfiles.git. So, if I wanted to get the status of the repo, for example, I would type the following command (which I alias to reduce typing, of course): git --work-tree=$HOME --git-dir=$HOME/.dotfiles.git status -uno While this works fine on OS X, this refuses to work within Cygwin. Regardless of whether or not I use my alias, or whether or not I substitute $HOME by hand, I get the following git error: fatal: Not a git repository: /home/Kyle/dotfiles/.git/modules/.build/git I don't understand where this error comes from, but the path /home/Kyle/dotfiles was the original location of the git repo when I initially cloned it. Additionally, it's important to note that the repo relies heavily on submodules. If specifics are necessary, the repo in question can be found on GitHub. The commands I ran to setup the repo in Cygwin can also be found within the Readme file.
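    One thing worth checking from the Cygwin shell is whether the submodule .git link files still carry absolute gitdir paths from the original clone location; a sketch (the old /home/Kyle/dotfiles path comes from the error message, the replacement path is an assumption about where the repo now lives):

        # list submodule .git link files and what they point at
        find "$HOME" -name .git -type f -print -exec grep -H "gitdir:" {} \; 2>/dev/null
        # if they still show gitdir: /home/Kyle/dotfiles/.git/..., rewrite that prefix
        # to the repository's real location
        find "$HOME" -name .git -type f \
            -exec sed -i "s|/home/Kyle/dotfiles/.git|$HOME/.dotfiles.git|" {} \; 2>/dev/null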


  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because:
    - Blazing insert performance
    - Document oriented
    - Ability to let the engine drop inserts when needed for performance
    But there is this big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80-150GB of raw data (we currently run on a system with 16GB RAM). Sooooo, for us to have 500GB or a TB of data, we would need 50GB or 80GB of RAM. Yes, I know this is possible. We can add servers and use Mongo sharding. We can buy a special server box that can take 100 or 200 GB of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?) Bottom line: MongoDB is FOSS, but we gotta spend $$$$$$$ on hardware to run it? We would rather buy commercial SW! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!!
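    A quick way to put a number on "the index must fit in RAM" for the current data set (the database name below is a placeholder):

        # total index size for the log database, in GB (mongo 2.x shell)
        mongo --quiet --eval 'var s = db.getSiblingDB("logs").stats(); print(s.indexSize / 1024 / 1024 / 1024)'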


  • How do you host multiple public facing websites on a VPS?

    - by pedroarvy
    We host about 30 websites using typical shared hosting plans using ASP.NET and SQL 2000/2005/2008. I am now wondering about hosting all of these websites using our own virtual private server. This is clearly cheaper but comes with a lot of questions I need answers to: Is the risk of having to keep this VPS server up and running worth it? Until now, the host provider has managed the server and we have not had to worry about crashes, downtime, software patches etc. We are not server administrators, we are programmers, so this is not really our expertise. On the other hand, it may not be hard to learn. When we make a website live, we log in to a domain management control panel and change the primary and secondary name servers to point to our shared web host: Eg ns1.sharedwebhost.com and ns2.sharedwebhost.com These name servers are going to have to change when we have a VPS. I don’t understand anything about how to set this up. Is there some useful info anyone could direct me to? Or is there software we need to install to make the primary and secondary name servers work on our VPS? The control panel we have for shared hosting comes with DNS management like this: http://www.yart.com.au/stackoverflow/dns.png What software would I need to install to create this for each site we host at a VPS? The control panel we have for shared hosting also comes with a POP email interface that allows email addresses to be added easily by our customers. Is this something that can be easily set up at a VPS so clients can manage their own email addresses? Is there software we need to install to make this work?


  • How do I enable the confluence-users group?

    - by M. Joanis
    I've got an issue with Atlassian Confluence. Normal users can't log in, but administrators can... Details below! I manage users using an Apple Open Directory (LDAP). I created two groups: "confluence-administrators" and "confluence-users". I've added team leaders and managers to both groups, and I've added some users to "confluence-users". Everyone in "confluence-administrators" can log in easily. People in "confluence-users" can't log in at all. When I look at the user list (in Confluence) and select a user to examine the list of groups he or she belongs to, I can see that the Confluence Administrators are indeed members of the "confluence-administrators" group, but not a single user is a member of the "confluence-users" group. Not even the Confluence Administrators, who are members of both groups! So I tried to have one of the "confluence-users" log in while watching the Confluence logs. Here's the result:

        2012-07-05 14:50:19,698 ERROR [http-8090-11] [core.event.listener.AutoGroupAdderListener] handleEvent Could not auto add user to group: Group <confluence-users> is read-only and cannot be updated
            at com.atlassian.crowd.directory.DbCachingRemoteDirectory.addUserToGroup(DbCachingRemoteDirectory.java:461)
        ...

    So it says the group is read-only... I'm not sure why that is a problem - "confluence-administrators" is read-only too, and it doesn't complain. Some things I don't think are part of the problem:
    - I've synchronized Confluence with LDAP many, many times.
    - I have verified many times that I didn't make a typo while setting up the groups on the LDAP server.
    - LDAP synchronization goes well. No errors in the logs (only INFO level log messages).
    - The user exists. Errors in the logs are different when a user doesn't exist.
    Any help is most welcome!


  • How to scope access to a service to set of users, using OpenLDAP, and only OUs

    - by JDS
    Okay, here goes. Solving this will solve several problems for me (as I can reapply this knowledge to several extant, similar problems), but luckily I have a very specific, concise problem to describe. Enough preamble. Our hosting partner is setting up VPN access for us and is connecting it to our LDAP server. They are using Cisco VPN; the docs on setting this up are here: http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00808c3c45.shtml#maintask1 Specifically, note the screenshot in (5), under "ASDM". Now, I do NOT want to provide access to all of our users. I only want to provide access to our IT group. But I do not see a configuration option for LDAP groups in that web reference for the Cisco VPN. We are using:
    - OpenLDAP 2.4
    - Static groups (i.e. "Group has the following members...")
    - A single user OU, "ou=users,dc=mycompany,dc=com"
    Is it possible to provide an alias of some kind in OpenLDAP that creates another OU, "itusers", say, and lets me alias the members of that OU somehow? Something like: "cn=Jeff Silverman,ou=itusers,dc=mycompany,dc=com" is an alias for "cn=Jeff Silverman,ou=users,dc=mycompany,dc=com" and is NOT a separate, unique user account. Alternatively, should I just create a separate OU and manage it separately? It is a pain, but only 12-15 users would have to be managed that way, with two separate user accounts. But I hate this option - messy, unmanageable, unscalable. You know what I mean. I am open to any options. I've searched and read all over but I can't quite find a directly analogous example. I can't possibly be the only one who's had this problem! Thanks!
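    For what it's worth, a couple of ldapsearch probes can show whether a group-based filter is even answerable by the directory as it stands; the group DN and server hostname below are assumptions, and the memberOf-style filter only works if the memberof overlay is enabled on OpenLDAP:

        # does a static IT group already hold the right member DNs?
        ldapsearch -x -H ldap://ldap.mycompany.com \
            -b "cn=itgroup,ou=groups,dc=mycompany,dc=com" member
        # if the VPN side can take a search filter instead of a group name, something
        # like this would scope it to group members (needs the memberof overlay)
        ldapsearch -x -H ldap://ldap.mycompany.com -b "ou=users,dc=mycompany,dc=com" \
            "(memberOf=cn=itgroup,ou=groups,dc=mycompany,dc=com)" dn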


  • Windows or Linux for VPN-VPN Bridge

    - by James
    I have the following network layout:
        Network1 ----VPN1-----Network2----VPN2----Network3
    I can administer everything in Network1 only, and my goal is to get to a box on Network3. I've been told by the admins of Network2 that it's not possible for them to route traffic from Network1 to Network3. I've finally been authorised to host a box in Network2 and I'm hoping with this I can set something up to resolve the issue. My question is whether I should set this up as a Windows or a Linux box. My initial thought was to use iptables to reroute requests, but with my lack of experience with Windows Server (used for something or other in Network2) I'm not sure if this will work. My head's full of questions like:
    - Can I get an IP without logging in to a Windows domain?
    - If I do get an IP, do Windows Servers manage routing through the VPN?
    - Can I make a Linux box authenticate with Windows Server to log on to the domain?
    - Would it just be easier to set up a Windows box?
    - Is it possible to configure a Windows box to do routing from Network1 to Network3?
    Has anyone done anything like this before? Had experience managing Windows Server? Authenticated (or not, as the case may be) to a Windows domain? I'd really appreciate your advice. It might be worth mentioning that the overall objective is to establish a telnet connection from a box on Network1 to a box on Network3.
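    If the Network2 box ends up being Linux, the relay itself can be done with nothing but iptables; a minimal sketch, where 10.3.0.50 stands in for the Network3 target (a hypothetical address):

        # on the Network2 relay box: forward inbound telnet on to the Network3 host
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING  -p tcp --dport 23 -j DNAT --to-destination 10.3.0.50:23
        iptables -t nat -A POSTROUTING -p tcp -d 10.3.0.50 --dport 23 -j MASQUERADE
        # from Network1, telnet to the relay's Network2 address; the connection is
        # handed on to 10.3.0.50 and the replies come back via the relay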


  • Unable to connect to APNS with java-apns

    - by Mac
    I've got a Java program running on a firewalled server that is intended to send push notifications to my iPhone app by using java-apns. Problem is, whenever I try to send a notification the library fails to connect to the APNS server. From the stack trace, it seems that when creating the required SSL connection, the connection is being refused at some point (a java.net.ConnectException with a detail message of "connection refused" is being thrown when the library calls SSLSocketFactory's createSocket method). It would not surprise me at all if the firewall is blocking the connection, but unfortunately as I do not manage the server I am unable to verify that that is indeed the case. The fact that the program works fine from my (non-firewalled) desktop seems to support the theory. My question is, does anyone know of any method by which I can find the root cause of the problem, and/or can anyone tell me what I should tell the server admin to change to get things to work (if it is indeed the firewall that's the problem)? For reference, the server is a Linux box and I'm using version 0.1.2 of java-apns.
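    One way to separate a firewall problem from a Java/library problem is to attempt the same TLS handshake from the server with a plain command-line client; at the time, the binary APNS gateway was gateway.push.apple.com on port 2195 (2196 for the feedback service):

        # run this on the firewalled server; "connection refused" or a hang here means
        # the admin needs to open outbound TCP 2195/2196 to Apple's gateways
        openssl s_client -connect gateway.push.apple.com:2195
        # compare against the same command run from the desktop where the program works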


  • Mapping printers using Group Policy Preferences; works on Windows XP, not on Windows 7 x64

    - by Graeme Donaldson
    I'm trying to use Group Policy Preferences to manage user connections to shared printers. The print server is Windows Server 2003 R2 Std edition. Several printers are installed, and I've added x64 editions of all the drivers to the print server as well. I've created a new GPO containing the printer preference settings. Printer mappings are targeted based on AD security group membership. I log on to a Windows XP PC with the Group Policy CSEs installed and the printer maps perfectly. I log on to a Windows 7 x64 PC and it doesn't map. If I manually connect to the shared printer, I get a prompt which asks me to confirm if I trust the server before installing the driver, and then it works perfectly. I have domain admin rights and my UAC settings have not been changed from the default, i.e. UAC is enabled and the default level is selected. Is the printer mapping failing because it's unable to prompt me to install the driver, or is there something else afoot?


  • Hyper V Server 2012 Remote Management Using Workgroup

    - by Chris Kolenko
    I'm trying to remotely manage Hyper-V Server 2012 from a Windows 8 PC; both client and server are in a workgroup. I've spent about 3-4 hours trying to get this working with no luck so far, trying the following:
    - Creating a new administrator on the server with the same details as the client, i.e. username/password.
    - Adding an entry to my hosts file to point to the remote IP by server name.
    - Trying HVRemote.
    - Disabling both firewalls.
    The error that I'm getting is "RPC Service Unavailable". How can I accomplish what I'm trying to do?
    Update: Some of the operations in Hyper-V Manager work, e.g. Virtual Switch works, and I can open the New VM wizard. I run into an error when creating a new virtual hard disk though. I've tried creating a VM without a hard disk, which works. Using the new hard disk wizard does not work either. I still can not see any virtual machines:
        RPC server unavailable. Unable to establish communication between 'ServerName' and 'ClientName'


  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    Hi All, I have a heavy application running on a CentOS server and I'm seeing some strange memory behavior. Here is a snapshot of a munin graph: As you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is actually memory that has been freed up but not yet cleaned by the OS and put back in the free memory pool. It seems that running out of memory is actually caused by this lack of clean-up, but I may be wrong. Can you give some tips to find the cause of the problem and/or cause CentOS to reclaim the inactive memory? Thanks. Some extra info:
    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).
    2) cat /proc/meminfo (at a later stage than the image) gives:
        MemTotal:     14371428 kB
        MemFree:       1207108 kB
        Buffers:         35440 kB
        Cached:        4276628 kB
        SwapCached:     785316 kB
        Active:        9038924 kB
        Inactive:      3902876 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     14371428 kB
        LowFree:       1207108 kB
        SwapTotal:    10223608 kB
        SwapFree:      6438320 kB
        Dirty:          627792 kB
        Writeback:           0 kB
        AnonPages:     7844560 kB
        Mapped:          49304 kB
        Slab:           146676 kB
        PageTables:      27480 kB
        NFS_Unstable:        0 kB
        Bounce:              0 kB
        CommitLimit:  17409320 kB
        Committed_AS: 16471488 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    275852 kB
        VmallocChunk: 34359462007 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB
    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/ ) and a Tomcat-based Java servlet to manage things.
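    Two quick things that can be checked from a shell here; the drop_caches write needs root, is harmless apart from some re-reading of cached data, and cannot shrink tmpfs (tmpfs pages can only move to swap, not be dropped):

        df -h /tmp      # how much of the tmpfs is actually in use
        free -m
        sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
        free -m
        # if "Inactive" barely moves after the drop, the pages are most likely tmpfs or
        # anonymous memory waiting for swap, not reclaimable page cache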


  • Bind9 DNS help with pseudo domains

    - by Tempname
    I have set up a DNS server on my home network to manage some apps that I have written for home. Currently I have 3 "domains" that I am using: controller, devserver and fileserver. The first issue that I am having is that when I attempt to ping the parent domain of any of these 3 I am unable to - I simply get "ping: unknown host controller". I can, however, ping any of the subdomains I have set up for these 3 parent domains. The second issue is that I am unable to ping any of the 3 parent domains or any child domains from my Windows machines. I have verified that these domains work on other devices in my house (iPod touch, iPad, cell phone). Any help with this is greatly appreciated. Here is the BIND data file for my parent domain controller:

        ;
        ; BIND data file for local loopback interface
        ;
        $TTL    604800
        @       IN      SOA     controller. admin.controller. (
                                9
                                604800
                                86400
                                2419200
                                604800 )
        ;
        @       IN      NS      controller.
        @       IN      A       192.168.1.104
        controller          IN  A   192.168.1.194
        admin.controller.   IN  A   192.168.1.104
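    A couple of quick checks that separate what BIND serves from what each client's resolver does with it; this assumes the server is listening on 192.168.1.104, the address the zone gives for the apex:

        # query the server directly; the trailing dot stops the resolver from applying
        # its search list, so this tests the zone itself rather than the client config
        dig @192.168.1.104 controller. A +short
        dig @192.168.1.104 admin.controller. A +short
        # on the Windows machines the equivalent check would be:
        #   nslookup controller. 192.168.1.104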


  • How do you guys handle custom yum repositories?

    - by luckytaxi
    I have a bunch of tools (nagios, munin, puppet, etc...) that get installed on all my servers. I'm in the process of building a local yum repository. I know most folks just dump all the rpms into a single folder (broken down into the correct path) and then run createrepo inside the directory. However, what would happen if you had to update the rpms? I ask because I was going to give each piece of software its own folder. Example one, put all packages inside one folder (custom_software):
        /admin/software/custom_software/5.4/i386
        /admin/software/custom_software/5.4/x86_64
        /admin/software/custom_software/4.6/i386
        /admin/software/custom_software/4.6/x86_64
    What I'm thinking of:
        /admin/software/custom_software/nagios/5.4/i386
        /admin/software/custom_software/nagios/5.4/x86_64
        /admin/software/custom_software/nagios/4.6/i386
        /admin/software/custom_software/nagios/4.6/x86_64
        /admin/software/custom_software/puppet/5.4/i386
        /admin/software/custom_software/puppet/5.4/x86_64
        /admin/software/custom_software/puppet/4.6/i386
        /admin/software/custom_software/puppet/4.6/x86_64
    This way, if I had to update to the latest version of puppet, I can manage the files accordingly. I wouldn't know which rpms belong to which software if I threw them into one big folder. Makes sense?
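    One way to wire that per-product layout up, sketched below; createrepo is simply re-run per directory after new rpms are dropped in, and each product gets its own client-side .repo file (the baseurl host is a placeholder):

        # regenerate metadata for each product/branch after adding or replacing rpms
        for pkg in nagios puppet; do
            createrepo --update "/admin/software/custom_software/$pkg/5.4/x86_64"
        done
        # client-side repo definition for one product (single-quoted so $basearch stays literal)
        printf '%s\n' '[custom-nagios]' 'name=Custom nagios packages' \
            'baseurl=http://repo.example.com/custom_software/nagios/5.4/$basearch' \
            'enabled=1' 'gpgcheck=0' > /etc/yum.repos.d/custom-nagios.repo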


  • ipvsadm lists a few hosts by IP only, rest by name

    - by dmourati
    We use keepalived to manage our Linux Virtual Server (LVS) load balancer. The LVS VIPs are set up to use an FWMARK as configured in iptables:

        virtual_server fwmark 300000 {
            delay_loop 10
            lb_algo wrr
            lb_kind NAT
            persistence_timeout 180
            protocol TCP
            real_server 10.10.35.31 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.31"
                    misc_timeout 30
                }
            }
            real_server 10.10.35.32 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.32"
                    misc_timeout 30
                }
            }
            real_server 10.10.35.33 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.33"
                    misc_timeout 30
                }
            }
            real_server 10.10.35.34 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.34"
                    misc_timeout 30
                }
            }
        }

    http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.fwmark.html

        [root@lb1 ~]# iptables -L -n -v -t mangle
        Chain PREROUTING (policy ACCEPT 182G packets, 114T bytes)
         190M  167G MARK  tcp  --  *  *  0.0.0.0/0  w1.x1.y1.4  multiport dports 80,443 MARK set 0x493e0
          62M   58G MARK  tcp  --  *  *  0.0.0.0/0  w1.x1.y2.4  multiport dports 80,443 MARK set 0x493e0

        [root@lb1 ~]# ipvsadm -L
        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
        FWM  300000 wrr persistent 180
          -> 10.10.35.31:0                Masq    24     1          0
          -> dis2.domain.com:0            Masq    24     3          231
          -> 10.10.35.33:0                Masq    24     0          208
          -> 10.10.35.34:0                Masq    24     0          0

    At the time the realservers were set up, there was a misconfigured DNS for some hosts in the 10.10.35.0/24 network. We have since fixed the DNS. However, those hosts continue to show up as only their IP numbers (10.10.35.31, 10.10.35.33, 10.10.35.34) above.

        [root@lb1 ~]# host 10.10.35.31
        31.35.10.10.in-addr.arpa domain name pointer dis1.domain.com.

    OS is CentOS 6.3. Ipvsadm is ipvsadm-1.25-10.el6.x86_64. Kernel is kernel-2.6.32-71.el6.x86_64. Keepalived is keepalived-1.2.7-1.el6.x86_64. How can we get ipvsadm -L to list all realservers by their proper hostnames?
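    ipvsadm does its reverse lookups at display time, so the usual suspects are the resolver and nscd's cache on the director itself; a few things to try (the nscd flush assumes nscd is actually running):

        # what does the director resolve these addresses to right now?
        getent hosts 10.10.35.31 10.10.35.33 10.10.35.34
        # if nscd cached the old (missing) PTR answers, invalidate its hosts cache
        nscd -i hosts
        # or sidestep name resolution entirely and always list numerically
        ipvsadm -L -n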


  • Subversion and Quickbooks Files

    - by Jorge Fernandez
    I currently have a large problem on one of the file servers I manage for an accounting firm. QuickBooks has a tendency to create multiple files of the same thing over and over to prevent data loss. This is a good thing when you handle just a few files, but at an accounting firm it becomes a problem. Some of the older clients have 5-10 files in their respective folders, each with a different cut-off date. Because of user error, some of these files aren't labeled properly with their correct cut-off dates. This is where Subversion came to mind. Using the revision system would allow for 1 file to be the master and have all of its revisions. Has anyone ever tried this with QuickBooks files? I've only used SVN with application code, where each file is much smaller. How does SVN stand up with larger files like 10-25MB? I'm not exactly sure how SVN handles revisions - does it keep a duplicate of the files and so double the disk space needed?
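    SVN's FSFS backend stores binary deltas between revisions rather than full copies, so a second commit of a slightly changed 25MB file normally adds far less than 25MB; a quick way to see this for yourself (repository path and file name are hypothetical):

        svnadmin create /tmp/qb-repo
        svn checkout file:///tmp/qb-repo /tmp/qb-wc
        cp ClientA-2010.QBW /tmp/qb-wc/
        svn add /tmp/qb-wc/ClientA-2010.QBW
        svn commit -m "initial copy" /tmp/qb-wc
        du -sh /tmp/qb-repo/db/revs     # size after revision 1
        # open/modify the file in QuickBooks, copy it back over the working copy, then:
        svn commit -m "after a small change" /tmp/qb-wc
        du -sh /tmp/qb-repo/db/revs     # growth reflects the delta, not a second full copy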


  • Does any Certificate Authority support both SAN and wildcards?

    - by nicholas a. evans
    My basic quandary is that wildcard certificates don't support subdomains of subdomains, nor do they help with alternate domain names. Basically, if my CN is example.com, I want a Subject Alternative Name field that looks roughly like so:
        DNS:example.com
        DNS:*.example.com
        DNS:*.beta.example.com
        DNS:example.net
        DNS:*.example.net
        DNS:*.beta.example.net
    Using a self-signed cert, I verified that the browsers will work just fine with this. Unfortunately, none of the Certificate Authorities that I looked into (Thawte, GoDaddy, Verisign, Digicert) seemed to support both wildcard certs and Subject Alternative Names (sometimes referred to as "Multiple Domain UCC"). I even called up GoDaddy tech support to confirm. Is there a CA (trusted by 99% of browsers) that supports wildcards in the Subject Alternative Name? One little restriction: I'm saddled with Amazon EC2's single-Elastic-IP-per-instance limitation. Here are what I see as my backup plans:
    - Set up three extra EC2 instances, each configured for a different IP address and cert, and nginx reverse proxy from them into the app server(s). Introduces latency(?), and even the cheapest EC2 instance isn't that cheap.
    - Instead of dedicated reverse proxy instances, set up four or more almost identical EC2 app servers, with nginx using the port to determine which cert to deliver, and use haproxy to distribute the traffic amongst themselves. Complicated to configure and manage? I'm not using the cheapest EC2 instance type for my app servers, so if I don't need 4+ app servers for the load, it raises the cost.
    - Set up an external server (outside of EC2) that doesn't have EC2's Elastic IP address restrictions, set up all of the alternate IP addresses and certificates on that server, and nginx reverse proxy from that server into the EC2 app servers. Extra IP addresses are almost free (still need to pay for the server of course), but they don't come with the robust "elasticity" that Amazon's Elastic IPs provide. Even more latency than in the first scenario.
    Are these approaches crazy or reasonable? Do you have another one to suggest?
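    Whichever CA ends up being used, the CSR itself can carry the whole SAN list; a sketch of generating one with openssl (the openssl.cnf path varies by distro, e.g. /etc/ssl/openssl.cnf on Debian-flavoured systems, and the key/CSR file names are arbitrary):

        openssl req -new -newkey rsa:2048 -nodes -keyout multi.key -out multi.csr \
            -subj "/CN=example.com" -reqexts SAN \
            -config <(cat /etc/pki/tls/openssl.cnf; printf "[SAN]\nsubjectAltName=DNS:example.com,DNS:*.example.com,DNS:*.beta.example.com,DNS:example.net,DNS:*.example.net,DNS:*.beta.example.net\n")
        # verify the SAN section made it into the request before submitting it to a CA
        openssl req -in multi.csr -noout -text | grep -A1 "Subject Alternative Name"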


  • Why does bash sometimes think my $HOME isn't the correct directory?

    - by Adam Yanalunas
    Like the title says it seems that bash sometimes misidentifies my $HOME. This cropped up after a seemingly unique series of events that I will now replay in broad strokes. Running OS X 10.6 with normal, local account Work binds my account to Active Directory Much time passes with no issues Set up rvm to manage Ruby installs (this becomes important later) Upgraded to OS X 10.7 a few days ago After successful install, attempted to log in, was presented with "Must reset password" dialog that never allowed a password to be reset. Would simply shake the box after new password was entered. Much googling was done. Much more googling was done. Swearing was had. Logged in as root, created new account, set as admin, deleted /Users/[new account], renamed /Users/[old account] to /Users/[new account] Logged out of root, logged into new account with no issues After OS X asking for a my account password a few times to update Keychain and other system-level stuff it was back to business as usual. Opened Terminal, cd to project folder, tried "rails server" and was presented with: /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:247:in to_specs': Could not find rails (>= 0) amongst [] (Gem::LoadError) from /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:256:into_spec' from /usr/local/lib/ruby/1.9.1/rubygems.rb:1210:in gem' from /usr/local/bin/rails:18:in' Ran through a few exercises, decided to rm -rf ~/.rvm and reinstall. Running a --trace on the rvm installer shows it dies on this line: mkdir: /Users/[old account]: Permission denied Scrolling back through the --trace log I see many more mentions of /Users/[old account]. When inspect the install script the offending line is looking at "${HOME}/.rvm" as it tries to run the mkdir. To my confusion I also see mentions of /Users/[new account] in the log. I've tried exporting a new HOME in my .bash_profile to no luck. Can anyone guess why /Users/[old account] would still be kicking around?
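    Two quick checks that can show whether the renamed home directory is the culprit; the find is just looking for leftovers owned by the old account's UID (both commands are generic, not specific to this rvm failure):

        # compare what Directory Services thinks the home is with what the shell inherited
        dscl . -read "/Users/$USER" NFSHomeDirectory
        echo "$HOME"
        # anything under the renamed home still owned by the old account will break
        # scripts (like the rvm installer) that try to mkdir or write there
        sudo find "$HOME" -maxdepth 2 ! -user "$USER" 2>/dev/null | head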

