Search Results

Search found 18139 results on 726 pages for 'private cloud'.


  • JavaScript inheritance, with private vars and methods too

    - by Totty
    Hi, I need to inherit one class from another. The parent has a private var "_data", a private method "_test", and a public method "add". The child has a public method "add" that needs to use the parent's private method "_test" and private var "_data". How do I do this?

        var Parent = function () {
            var _data = {};
            var _test = function () { alert('test'); };
            this.add = function () { alert('add from parent'); };
        };

        var Child = function () {
            this.add = function () {
                // here it should use _data and _test from the Parent class
                alert('add from child');
            };
        };

        // something to inherit Child from Parent
        // instantiate Child and use the add method

    I think I'm missing the prototype concept (http://stackoverflow.com/questions/892467/javascript-inheritance).
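
    A minimal sketch of one common workaround (any names beyond the question's own are illustrative): truly private closure variables cannot be reached from a subclass at all, so the parent has to expose privileged accessors, and the child runs the parent constructor on itself:

        var Parent = function () {
            var _data = {};                                   // truly private: only the closures below see it
            var _test = function () { alert('test'); };
            this._getData = function () { return _data; };    // privileged accessors (hypothetical names)
            this._runTest = function () { _test(); };
            this.add = function () { alert('add from parent'); };
        };

        var Child = function () {
            Parent.call(this);                 // gives this instance the privileged methods
            this.add = function () {
                this._runTest();               // reaches _test indirectly
                this._getData().x = 1;         // reaches _data indirectly
                alert('add from child');
            };
        };
        Child.prototype = new Parent();        // era-appropriate prototype wiring

        new Child().add();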

    Read the article

  • UEC - Unable to SSH into instance, asks for password

    - by nijanthanh
    I am attempting to set up a private cloud using Ubuntu 10.04.4 Server, following the tutorial at http://cssoss.wordpress.com/2010/11/26/eucabook-v1-1/. Every step works except that I am unable to SSH into the running instances. I created the instance with a public key and try to SSH in using the private key, but it asks for a password. I am able to ping the instance, though. I have tried both pre-built images from the store and custom-built images. Several people seem to have the same problem, but there does not seem to be any working solution. Thanks in advance.
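
    Two hedged checks that usually narrow this down (the key path, user name, and instance ID are placeholders, adjust to your image):

        # -v shows whether the key is offered and why it is rejected
        ssh -v -i mykey.priv ubuntu@<instance-ip>

        # a password prompt usually means the public key never reached the
        # instance; look for key-injection errors in the console output
        euca-get-console-output <instance-id> | less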

    Read the article

  • Iterator not accessible because of private inheritance

    - by Bo Tian
    I've created a new class that composes std::deque via private inheritance, i.e.:

        class B : private std::deque<A> { ... };

    In my source code I tried to use B's iterator, i.e., B::iterator it, and the compiler error is:

        error C2247: 'std::deque<_Ty>::iterator' not accessible because 'B' uses 'private' to inherit from 'std::deque<_Ty>'

    So the question is: how can I make the iterator accessible?
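
    A minimal sketch of the usual fix, assuming B really is meant to publish the container's iterator: re-export the member name with a using-declaration in B's public section (a typedef works too in pre-C++11 code):

        #include <deque>

        struct A { };

        class B : private std::deque<A> {
        public:
            using std::deque<A>::iterator;   // re-exposes just this name
            using std::deque<A>::begin;      // plus whatever members callers need
            using std::deque<A>::end;
        };

        int main() {
            B b;
            for (B::iterator it = b.begin(); it != b.end(); ++it) { }
        }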

    Read the article

  • Should developers *really* have private offices?

    - by Aron Rotteveel
    We will probably be moving within a year, so we have to make some decisions regarding office layout. At the moment, our company is basically one big office. When our developers don't want to be disturbed at all, we all have our own headphones to mute the outside world. Still, it seems a lot of people feel that private offices are no doubt the way to go. From Joel's article Private Offices Redux:

    "Not every programmer in the world wants to work in a private office. In fact quite a few would tell you unequivocally that they prefer the camaraderie and easy information sharing of an open space. Don't fall for it. They also want M&Ms for breakfast and a pony. Open space is fun but not productive."

    Even though I can understand the benefit to productivity, does having a private office really result in more net productivity? There seem to be plenty of companies that create wide open spaces and still maintain good productivity. Or so it seems. (I should mention many of them use cubicles, though.) What is your opinion on this? What does your company do? Is there some middle ground? Some more related information on this matter: Private Offices Redux, The new Fog Creek office, A Field Guide to Developers, and the Gmail recruitment page. I found this last one somewhat remarkable, since the Gmail recruitment page promotes the "wide open space" idea.

    Read the article

  • removing dependency of a private function inside a public function using Rhino Mocks

    - by L G
    Hi all, I am new to mocking and have started with Rhino Mocks. My scenario is like this: in my class library I have a public function, and inside it a call to a private function which gets output from a service. I want to remove the dependency on the private function.

        public class Employee
        {
            public virtual string GetFullName(string firstName, string lastName)
            {
                string middleName = GetMiddleName();
                return string.Format("{0} {2} {1}", firstName, lastName, middleName);
            }

            private string GetMiddleName()   // (private members cannot be virtual in C#)
            {
                // Some call to the service
                return "George";
            }
        }

    This is not my real scenario, though; I just want to know how to remove the dependency on the GetMiddleName() function and return some default value while unit testing. Note: I won't be able to change the private function here or introduce an interface. Keeping the functions as such, is there any way to mock this? Thanks.
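
    For reference, Rhino Mocks builds its mocks as dynamic subclasses, so it can only intercept members that are virtual and visible to the generated proxy; a truly private method cannot be stubbed as-is. A hedged sketch of the closest workable pattern, assuming the constraint can be relaxed to protected internal virtual (plus an InternalsVisibleTo attribute for the test assembly; the test names are illustrative):

        using NUnit.Framework;
        using Rhino.Mocks;

        public class Employee
        {
            public virtual string GetFullName(string firstName, string lastName)
            {
                return string.Format("{0} {2} {1}", firstName, lastName, GetMiddleName());
            }

            protected internal virtual string GetMiddleName()  // relaxed from private so the proxy can override it
            {
                return "George";  // stands in for the real service call
            }
        }

        [TestFixture]
        public class EmployeeTests
        {
            [Test]
            public void GetFullName_UsesStubbedMiddleName()
            {
                // partial mock: GetFullName runs its real code, GetMiddleName is stubbed
                var employee = MockRepository.GeneratePartialMock<Employee>();
                employee.Stub(e => e.GetMiddleName()).Return("TestMiddle");

                Assert.AreEqual("First TestMiddle Last", employee.GetFullName("First", "Last"));
            }
        }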

    Read the article

  • C++ private inheritance and static members/types

    - by WearyMonkey
    I am trying to stop a class from being able to convert its 'this' pointer into a pointer to one of its interfaces. I do this by using private inheritance via a middle proxy class. The problem is that I find private inheritance makes all public static members and types of the base class inaccessible to all classes under the inheriting class in the hierarchy.

        class Base {
        public:
            enum Enum { value };
        };

        class Middle : private Base { };

        class Child : public Middle {
        public:
            void Method() {
                Base::Enum e = Base::value;  // doesn't compile -- BAD!
                Base* base = this;           // doesn't compile -- GOOD!
            }
        };

    I've tried this in both VS2008 (the required version) and VS2010; neither works. Can anyone think of a workaround, or a different approach to stopping the conversion? I am also curious about the behavior: is it just a side effect of the compiler implementation, or is it by design? If by design, then why? I always thought private inheritance meant that nobody knows Middle inherits from Base. However, the exhibited behavior implies private inheritance means a lot more than that; in fact, Child has less access to Base than code in any namespace outside the class hierarchy!
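
    A hedged sketch of one workaround: inside Child, the unqualified name Base resolves to the injected-class-name inherited (privately) through Middle, which is why even Base::Enum is rejected; qualifying the name from the global scope sidesteps that lookup path, while the pointer conversion itself stays blocked:

        class Base {
        public:
            enum Enum { value };
        };

        class Middle : private Base { };

        class Child : public Middle {
        public:
            void Method() {
                ::Base::Enum e = ::Base::value;  // OK: names Base directly, not via the private path
                // ::Base* base = this;          // still an error: the conversion remains inaccessible
                (void)e;
            }
        };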

    Read the article

  • Static member class - declare class private and class member package-private?

    - by Helper Method
    Consider the following class:

        public class OuterClass {
            ...
            private static class InnerClass {
                int foo;
                int bar;
            }
        }

    I think I've read somewhere (but not in the official Java Tutorial) that if I declared the static member class's attributes private, the compiler would have to generate some sort of accessor methods so that the outer class can actually access the attributes of the static member class (which is effectively a package-private top-level class). Any ideas on that?
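
    For what it's worth, that matches how classic javac behaves (JDK 11's nest-based access control later removed the need): when the outer class touches a private member of a nested class, the compiler emits package-private synthetic access$NNN methods into the nested class. A hedged way to observe it (names and exact output are approximate):

        public class OuterClass {
            private static class InnerClass {
                private int foo;      // private now, so javac must bridge the access
            }

            int readFoo(InnerClass inner) {
                return inner.foo;     // compiled as a call to a synthetic InnerClass.access$NNN(inner)
            }
        }

        // javac OuterClass.java && javap -p 'OuterClass$InnerClass'
        // lists something like:  static int access$000(OuterClass$InnerClass);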

    Read the article

  • Filezilla/Puttygen doesn't recognize private key file

    - by devzoner
    I have generated a key for an Ubuntu virtual machine running on Azure Cloud Services (http://www.windowsazure.com/en-us/manage/linux/how-to-guides/ssh-into-linux/):

        openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem

    When loading the private key into Filezilla, it asks me to convert the format, but the conversion fails. The same happens with puttygen from the Linux console, using:

        puttygen myPrivateKey.key -o myKey.ppk

    In both cases I get the following error:

        puttygen: error loading `myPrivateKey.key': unrecognised key type

    By the way, this key doesn't have a passphrase. I found an old thread about it, but I'm using version 0.6.3, which is newer than what the thread recommends: http://fixunix.com/ssh/541874-puttygen-unable-import-openssh-key.html. I've managed to work around the issue by using another GUI client, Fugu for Mac, but one of my co-workers uses Windows and I still have to figure this out. Since Filezilla is the de facto FTP client, I thought it would be easier to solve it there. Thanks
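
    One hedged thing to try: recent openssl builds write keys as PKCS#8 ("-----BEGIN PRIVATE KEY-----"), and older puttygen only recognises the traditional "-----BEGIN RSA PRIVATE KEY-----" encoding, so re-encoding the key before importing may be all that's needed:

        # rewrite the key in traditional RSA PEM format
        openssl rsa -in myPrivateKey.key -out myPrivateKey-rsa.key

        # then retry the conversion
        puttygen myPrivateKey-rsa.key -o myKey.ppk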

    Read the article

  • Big Success for the 2012 edition of the Oracle EMEA Cloud CRM Partner Community Forum!

    - by Richard Lefebvre
    The 2012 edition of the Oracle EMEA Cloud Partner Community Forum took place on March 28-29 in Madrid. 100 participants from all over Europe had a chance to interact with the Oracle Cloud CRM Product Management team on multiple subjects, such as Oracle Cloud CRM and social network solutions strategy, the RightNow acquisition, and Fusion CRM business opportunities for partners. During his opening keynote, Anthony Lye (Oracle Senior Vice President and head of Oracle CRM) presented the current Fusion CRM business status, disclosed the overall Oracle CRM product strategy, and responded to many questions from the audience. Later that day, eight Oracle ISVs presented their Oracle Cloud CRM add-ons and highlighted the value that system integrators can gain from them as part of a Cloud CRM project. After a very friendly networking dinner in a Spanish restaurant, the second day was dedicated to Fusion CRM, with a deeper dive into its major components (Sales, Sales Planning, Marketing), including the Fusion Composers. Briefings on Oracle Consulting Services' dedicated Fusion CRM offerings and on Fusion CRM partner programs concluded the day and the event. All participants rated the event "good" to "excellent" and mentioned that it helped them plan their Oracle Cloud CRM based business for the near future. We look forward to organising a similar event next year and to welcoming even more partners! Richard Lefebvre

    Read the article

  • How to let MAAS's cloud-init client select an internal mirror?

    - by Michael
    Our MAAS LAN can't access the internet, and we have an internal apt mirror at 192.168.3.6. I changed the mirror settings in the MAAS server's snippets/maas_proxy file like this:

        d-i mirror/country string manual
        d-i mirror/http/hostname string 192.168.3.6
        d-i mirror/http/directory string /ubuntu
        d-i mirror/http/proxy string

    I deployed two MAAS nodes successfully; the dashboard shows both nodes' state as Ready. But each node's cloud-init client rewrote apt's sources.list like this:

        ## Note, this file is written by cloud-init on first boot of an instance
        ## modifications made here will not survive a re-bundle.
        ## if you wish to make changes you can:
        ## a.) add 'apt_preserve_sources_list: true' to /etc/cloud/cloud.cfg
        ##     or do the same in user-data
        ...
        deb http://archive.ubuntu.com/ubuntu precise main
        deb-src http://archive.ubuntu.com/ubuntu precise main
        ...

    When I install a node directly with Cobbler (without MAAS), the node's sources.list looks like:

        ...
        deb http://192.168.3.6/ubuntu precise main
        deb-src http://192.168.3.6/ubuntu precise main
        ...

    My question is: how do I set user-data in MAAS so that I can point cloud-init's mirror URL at 192.168.3.6, or prevent cloud-init from changing the mirror URL? Also, each MAAS node's /home/ubuntu/.ssh/authorized_keys file is empty. Is that caused by the mirror setup?
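
    For reference, a hedged sketch of cloud-config user-data that should steer a precise-era cloud-init toward an internal mirror (the directive names are from the cloud-init docs of that period; verify against your version):

        #cloud-config
        # point apt at the internal mirror instead of archive.ubuntu.com
        apt_mirror: http://192.168.3.6/ubuntu

        # or, alternatively, keep the preseeded sources.list untouched
        apt_preserve_sources_list: true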

    Read the article

  • Looking ahead at 2011-with Gartner

    - by andrea.mulder
    Speaking of forecasting the future: Gartner has highlighted the top 10 technologies and trends that will be strategic for most organizations in 2011. While Gartner's predictions are not specific to CRM, you just cannot help but notice some common themes in store for 2011. The top 10 strategic technologies for 2011 are:

    1. Cloud Computing
    2. Mobile Applications and Media Tablets
    3. Social Communications and Collaboration
    4. Video
    5. Next Generation Analytics
    6. Social Analytics
    7. Context-Aware Computing
    8. Storage Class Memory
    9. Ubiquitous Computing
    10. Fabric-Based Infrastructure and Computers

    Read the article

  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
    After recently migrating an important new website to use Windows Azure "Web Roles", I wanted an easier way to deploy new versions to the Azure staging environment, as well as a reliable process to roll back deployments to a certain "known good" source control commit checkpoint. By configuring our JetBrains TeamCity CI server to utilize the Windows Azure PowerShell cmdlets to create new automated deployments, I'll show you how to take control of your Azure publish process.

    Step 0: Configuring your Azure Project in Visual Studio

    Before we can start looking at automating the deployment, we should make sure manual deployments from Visual Studio are working properly. Detailed information for setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or by doing some quick Googling, but the basics are as follows:

    1. Install the prerequisite Windows Azure SDK
    2. Create an Azure project by right-clicking on your web project and choosing "Add Windows Azure Cloud Service Project" (or by manually adding that project type)
    3. Configure your Role and Service Configuration/Definition as desired
    4. Right-click on your Azure project and choose "Publish," create a publish profile, and push to your web role

    You don't actually have to do step #4 and create a publish profile, but it's a good exercise to make sure everything is working properly. Once your Windows Azure project is set up correctly, we are ready to move on to understanding the Azure publish process.

    Understanding the Azure Publish Process

    The actual Windows Azure project is fairly simple at its core: it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files:

    ServiceConfiguration.Cloud.cscfg: the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString and other Setting information.

    ProjectName.Azure.cspkg: the package file that contains the guts of your deployment, including all deployable files.

    When you package your Azure project, these two files will be created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/. If you want to build your Azure project from the command line, it's as simple as calling MSBuild on the "Publish" target:

        msbuild.exe /target:Publish

    Windows Azure PowerShell Cmdlets

    The last pieces of the puzzle that make CI automation possible are the Azure PowerShell cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx). These cmdlets are what will let us create deployments without Visual Studio or other user intervention.

    Preparing TeamCity for Azure Deployments

    Now we are ready to get our TeamCity server set up so it can build and deploy Windows Azure projects, which, as we now know, requires the Azure SDK and the Windows Azure PowerShell cmdlets. Installing the Azure SDK is easy enough: just go to https://www.windowsazure.com/en-us/develop/net/ and click "Install". Once the SDK is installed, I recommend running a test build to make sure your project is building correctly. You'll want to set up your build step using MSBuild with the "Publish" target against your solution file. Assuming the build was successful, you will now have the two *.cspkg and *.cscfg files within your build directory. If the build was red (failed), take a look at the build logs and keep an eye out for "unsupported project type" or other build errors, which will need to be addressed before the CI deployment can be completed.

    With a successful build we are now ready to install and configure the Windows Azure PowerShell cmdlets:

    1. Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the cmdlets and configure PowerShell.
    2. After installing the cmdlets, you'll need to get your Azure subscription info using the Get-AzurePublishSettingsFile command.
    3. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script.

    Scripting the CI Deploy Process

    Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity "PowerShell" build runner. Before we look at any code, here's a breakdown of our deployment algorithm:

    1. Set up your variables, including the location of the *.cspkg and *.cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/).
    2. Import the Windows Azure PowerShell cmdlets.
    3. Import and set your Azure subscription information (this is basically your authentication/authorization step, so protect your settings file).
    4. Look for a current deployment; if you find one, upgrade it, else create a new deployment.

    Pretty simple and straightforward. Now let's look at the code (also available as a gist here: https://gist.github.com/3694398):

        $subscription = "[Your Subscription Name]"
        $service = "[Your Azure Service Name]"
        $slot = "staging" # staging or production
        $package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg"
        $configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg"
        $timeStampFormat = "g"
        $deploymentLabel = "ContinuousDeploy to $service v%build.number%"

        Write-Output "Running Azure Imports"
        Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1"
        Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings"
        Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription

        function Publish() {
            $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue

            if ($a[0] -ne $null) {
                Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment."
            }

            if ($deployment.Name -ne $null) {
                # Update the deployment in place (usually faster, cheaper, won't destroy the VIP)
                Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $service. Upgrading deployment."
                UpgradeDeployment
            } else {
                CreateNewDeployment
            }
        }

        function CreateNewDeployment() {
            write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
            Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"

            $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service

            $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
            $completeDeploymentID = $completeDeployment.deploymentid

            write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
            Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID"
        }

        function UpgradeDeployment() {
            write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
            Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"

            # perform Update-Deployment
            $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force

            $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
            $completeDeploymentID = $completeDeployment.deploymentid

            write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
            Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
        }

        Write-Output "Create Azure Deployment"
        Publish

    Creating the TeamCity Build Step

    The only thing left is to create a second build step, after your MSBuild "Publish" step, with the build runner type "PowerShell". Set your script to "Source Code," set the script execution mode to "Put script into PowerShell stdin with '-Command' arguments," and then copy/paste in the above script (replacing the placeholder sections with your values).

    Wrap Up

    After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step which utilizes the Azure PowerShell cmdlets, we have a fully deployable build configuration in TeamCity. You can configure this step to run whenever you'd like using build triggers; for example, you could even deploy whenever a new commit to the master branch comes in and passes all required tests. In the script I've hardcoded that every deployment goes to the staging environment on Azure, but you could deploy straight to production if you want to, or even set up a deployment configuration variable and set it as desired.

    Whenever you click the "Run" button, all of your code will be compiled, published, and deployed to Windows Azure! One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to "Run." This brings up a dialog where you can select the last change to use for your deployment. Since Azure Web Role deployments don't have any rollback functionality, this is a critical feature.

    Enjoy!

    Read the article

  • I'm creating my own scalable, rapid prototyping web server. How should I design it?

    - by Mike Willliams
    I'm going to create my own web server that focuses on scalability, rapid prototyping, and the use of JavaScript as the server's scripting language, much like node.js. It will use a Model-View-Controller design pattern so a web application can support more concurrent users just by adding hardware, without having to redesign the software. Basically, I'm aiming to produce a framework that allows fast and easy development of cloud applications without the need to write lots of boilerplate code. I've got some questions about this...

    How hard will it be to put MySQL in the cloud? How could I go about implementing this and make the resulting product free? Will I have to write my own engine or modify an existing one, and if I do, what should I watch out for?

    To make this scalable, I need to go from one server to hundreds of servers, which means the servers must be load balanced. How should I do this? If I balance based on the workload per server, I need a gateway to handle all the incoming requests. Is it the right idea to have all the servers check into the gateway and update their status (see the sketch below)? If the servers run through a gateway and the gateway dies, all incoming requests are dropped. I'm thinking that if all the servers maintain a list of each other, or at least a few, I could rebuild the list of servers and establish a new gateway. Is it worth it? Or should I have a backup gateway that could be switched in? Should I let the user choose?

    How should I pick which server handles the database and which handles the page serving? Should I spread the database so that queries are performed on multiple servers, which would theoretically improve performance? The servers would need to mirror the database at least once so that if a server goes down the database isn't corrupted. That raises another question: should I broadcast SQL queries so that all the servers can take a bit of the workload? If I do it that way, wouldn't a query clog up the network so that other queries couldn't be performed? What are my alternatives?

    Finally, is there a free solution already out there that might need a little modification to suit my needs?
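
    To make the check-in idea concrete, a hypothetical sketch in Node-style JavaScript (the proposed scripting language): workers send periodic heartbeats to the gateway, and the gateway only routes to workers heard from recently, so a crashed worker drops out on its own:

        var http = require('http');

        var workers = {};              // "host:port" -> time of last heartbeat
        var TIMEOUT_MS = 10000;

        http.createServer(function (req, res) {
            if (req.url === '/heartbeat') {
                // a worker announcing itself (its port passed in a custom header here)
                var key = req.connection.remoteAddress + ':' + req.headers['x-worker-port'];
                workers[key] = Date.now();
                res.end('ok');
                return;
            }
            // prune workers that have gone quiet, then pick one
            // (round-robin or least-loaded in a real system)
            var now = Date.now();
            var alive = Object.keys(workers).filter(function (w) {
                return now - workers[w] < TIMEOUT_MS;
            });
            res.end(alive.length ? 'would proxy to ' + alive[0] : 'no workers available');
        }).listen(8080);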

    Read the article

  • What should I do to scale out a high-traffic website?

    - by makerofthings7
    What best practices should be undertaken for a website that needs to "scale out" to handle increased capacity? This is especially relevant now that people are considering the cloud, but may be missing out on the fundamentals. I'm interested in hearing about anything you consider a best practice, from development-level tasks to infrastructure to management. Use your best judgement when posting multiple answers, since it may make sense to post them separately for voting purposes. (Hint: you'll likely get more reputation points for many small answers than one large answer.)

    Read the article

  • Tunnelblick cannot load private key file

    - by Patrick
    I got a certificate from my network administrator along with its passphrase, and put everything in the Tunnelblick configuration folder, but I always get this error:

        2010-11-20 13:22:10 Cannot load private key file vpn-pass.key:
        error:06065064:digital envelope routines:EVP_DecryptFinal:bad decrypt:
        error:0906A065:PEM routines:PEM_do_header:bad decrypt:
        error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib

    Everything was copy & paste, and it works on a Windows machine. How can I get this to work?
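
    "bad decrypt" generally means the key could not be decrypted with the passphrase that was supplied (or none was asked for). A hedged way to test the key and passphrase outside Tunnelblick, and to produce an unencrypted copy if the passphrase itself checks out:

        # prompts for the passphrase; succeeds only if it matches the key
        openssl rsa -in vpn-pass.key -check -noout

        # if that works, write a decrypted copy and reference it from the
        # config instead (and protect it: chmod 600 vpn-pass-plain.key)
        openssl rsa -in vpn-pass.key -out vpn-pass-plain.key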

    Read the article

  • Dynamic DNS on D-Link DWR-112 3G router uses a private IP address

    - by user270151
    I'm using a D-Link DWR-112 3G router to connect to the internet via a Celcom broadband plug-in. How can I forward ports to my server? I have configured DynDNS correctly, but it always gets updated with a local private address in the 10.x.x.x range instead of a public address. My router's address is 192.168.1.1 and the server's address is 192.168.1.5. Can someone give me some guidance on this issue?

    Read the article

  • Virtual Private Hosting DNS configuration

    - by Ciel
    I did a great deal of reading here before posting this because I didn't want to post a duplicate, but I'm on a bit of a deadline and getting frustrated, so here goes... I very sincerely apologize if this is long-winded or hard to read. Please just ask for any information or clarification and I will give it as quickly as I possibly can. This has become very frustrating to me, and this is the last place I know to turn. I have no experience with setting up DNS, no experience with nameservers, and no peers to go to for help, so this is kind of my last-ditch effort.

    The task of setting up a private server has, through circumstances beyond my control, fallen into my lap. I own a domain (hereafter referred to as yyy.com) and have always used shared hosting: I buy a package and just point it at the nameservers they give me. It's always been simple. yyy.com is registered with Network Solutions. Now I have purchased a Virtual Private Hosting package from GoDaddy.com, which comes with Plesk 11, and I have no earthly idea how to get the right nameserver for yyy.com. I have gone through the instructions and have wound up exceedingly frustrated.

    I have two IP addresses from GoDaddy for the server (obviously hidden for privacy): IP 1 is XX.XX.XX.XX and IP 2 is YY.YY.YY.YY. After going through the documented setup and waiting a few days, this is what I have, and so far it does not appear to be working (since propagation takes so long, it is extremely hard for me to test):

        Host                Record type    Value
        XX.XX.XX.XX / 24    PTR            yyy.com.
        yyy.com.            NS             ns1.yyy.com.
        yyy.com.            A              XX.XX.XX.XX
        yyy.com.            MX (10)        mail.yyy.com.
        ftp.yyy.com.        CNAME          yyy.com.
        ipv4.yyy.com.       A              XX.XX.XX.XX
        mail.yyy.com.       A              XX.XX.XX.XX
        mssql.yyy.com.      A              XX.XX.XX.XX
        ns1.yyy.com.        A              XX.XX.XX.XX
        ns2.yyy.com.        A              YY.YY.YY.YY
        webmail.yyy.com.    A              XX.XX.XX.XX
        www.yyy.com.        CNAME          yyy.com.

    yyy.com is pointing to both ns1.yyy.com and ns2.yyy.com. Can anyone give me some assistance here? This is a learning experience for me, and days of documentation have left me very confused.

    Read the article

  • Discovering proxy servers on a private network

    - by AIB
    Suppose that I am in a private network of computers (say each has an IP address of the form 192.168.x.x). Some of the machines in the network (we have no information regarding their IPs or names, and no physical access to the servers) are connected to the internet and run an HTTP proxy on some port, say 3128. Is there a program that can be run on Windows or Linux which will give me the list of machines (IP addresses and ports, if possible) acting as proxy servers?
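
    One hedged approach, assuming nmap and curl are available and scanning the LAN is permitted: sweep the subnet for hosts listening on common proxy ports, then confirm each hit actually proxies:

        # find hosts with an open port commonly used by HTTP proxies
        nmap -p 3128,8080,8118 --open 192.168.0.0/16

        # confirm a candidate really proxies by fetching headers through it
        curl -x http://192.168.1.42:3128 -I http://example.com/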

    Read the article

  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or Process Explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column), the total comes to 1.6GB. I know which process seems to be causing this (sqlservr.exe), as when I kill the process, the commit charge drops dramatically. However, the process in question is consuming only ~220MB of Private Bytes, yet killing it drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor, and how can I view its impact in Process Explorer?

    Note: I claim that I understand the difference between reserved and committed memory already; my investigations above relate specifically to Private Bytes, which includes only committed memory and excludes reserved memory. The Virtual Size of the process in this case is over 4GB, but this should be irrelevant: Virtual Size in procexp represents reserved, not committed, memory and should not contribute to the commit charge. I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, any process potentially could.

    Further investigations: I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB, despite procexp's reported Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows® Sysinternals Administrator's Reference" states:

        Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap's definition of "Private Data" is more granular than that of Process Explorer's "private bytes." Procexp's "private bytes" includes all private committed memory belonging to the process.

    i.e., VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes". Also, in the 'Process committed memory' section of Mark Russinovich's excellent Pushing the Limits of Windows: Virtual Memory, he highlights two cases which won't show up in Private Bytes: file mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files), and pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections).
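
    For anyone wanting to reproduce the pagefile-backed case, a hedged sketch in C (Win32; as I understand the accounting): creating a large pagefile-backed section raises the system commit charge immediately, while the creating process's Private Bytes barely move, because the pages are shareable rather than process-private:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* 1 GB pagefile-backed section; SEC_COMMIT is the default, so the
               commit charge should rise as soon as this call succeeds. */
            HANDLE h = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                          PAGE_READWRITE,
                                          0, 1 << 30,   /* size: high, low */
                                          NULL);
            if (h == NULL) {
                fprintf(stderr, "CreateFileMapping failed: %lu\n", GetLastError());
                return 1;
            }
            printf("Section created; now compare commit charge vs. Private Bytes.\n");
            getchar();          /* hold the section open while you look */
            CloseHandle(h);
            return 0;
        }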

    Read the article

  • How to set up a software VPN when moving a server to the cloud

    - by Neal L
    I work in a small company with one office in Dallas and another in Los Angeles. We run a Fedora server at our Dallas location and use a Linksys RV042 at each location to create a VPN connection between the sites. Every time the power or internet goes out in Dallas, our server is inaccessible, so the entire company goes down. Because of this, we would like to use a shared server in the cloud (something like Linode) to avoid this problem. As a relative novice with VPN configurations, I would like to know if it is possible to set up a software VPN on the cloud server and connect our local networks in Dallas and LA to that VPN. I've read about OpenVPN and SSH VPNs, but I don't know which is the best option. Could anyone with some experience point me in the right direction on the right combination of software VPN and hardware for this? We're open to new hardware to make this happen. Thanks!
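
    For what it's worth, OpenVPN can do exactly this, with the cloud host acting as a hub and each office running a client that advertises its LAN. A hedged sketch of the hub-side config (subnets and client names are assumptions):

        # /etc/openvpn/server.conf on the cloud host (the hub)
        dev tun
        server 10.8.0.0 255.255.255.0        # VPN transit subnet
        route 192.168.1.0 255.255.255.0      # Dallas LAN, reached via its client
        route 192.168.2.0 255.255.255.0      # LA LAN, reached via its client
        client-config-dir ccd
        client-to-client                     # let the two sites reach each other

        # ccd/dallas must contain:  iroute 192.168.1.0 255.255.255.0
        # ccd/la must contain:      iroute 192.168.2.0 255.255.255.0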

    Read the article

  • RHEL 6.x on Rackspace Cloud and Dedicated hardware experiencing Redis Timeouts

    - by zhallett
    I just recently set up a mixture of RHEL 6.1 Rackspace cloud hosts and RHEL 6.2 dedicated hosts using RackConnect. I am experiencing intermittent Redis timeouts from within our Rails 3.2.8 app, with Redis 2.4.16 running on the RHEL 6.2 dedicated hosts. There is no network latency or packet loss, and there are no errors on any interfaces on our cloud or dedicated servers, or on the managed firewall from Rackspace. When Redis times out, nothing is logged within Redis even though it is set up to do debug logging. The only error we receive is from Airbrake saying there was a Redis timeout.

    Network topology:

        RHEL 6.1 cloud hosts <--> Alert Logic IDS <--> Cisco ASA 5510 <--> RHEL 6.2 dedicated hosts
        (web nodes)                                    (two-way NAT)       (db hosts running redis)

    Ping from db host to web host:

        64 bytes from 10.181.230.180: icmp_seq=998 ttl=64 time=0.520 ms
        64 bytes from 10.181.230.180: icmp_seq=999 ttl=64 time=0.579 ms
        64 bytes from 10.181.230.180: icmp_seq=1000 ttl=64 time=0.482 ms
        --- web1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999007ms
        rtt min/avg/max/mdev = 0.359/0.535/5.684/0.200 ms

    Ping from web host to db host:

        64 bytes from 192.168.100.26: icmp_seq=998 ttl=64 time=0.544 ms
        64 bytes from 192.168.100.26: icmp_seq=999 ttl=64 time=0.452 ms
        64 bytes from 192.168.100.26: icmp_seq=1000 ttl=64 time=0.529 ms
        --- data1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999017ms
        rtt min/avg/max/mdev = 0.358/0.499/6.120/0.201 ms

    Redis config:

        daemonize yes
        pidfile /var/run/redis/6379/redis_6379.pid
        port 6379
        timeout 0
        loglevel debug
        logfile /var/lib/redis/log
        syslog-enabled yes
        syslog-ident redis-6379
        syslog-facility local0
        databases 16
        save 900 1
        save 300 10
        save 60 10000
        rdbcompression yes
        dbfilename dump-6379.rdb
        dir /var/lib/redis
        maxclients 10000
        maxmemory-policy volatile-lru
        maxmemory-samples 3
        appendfilename appendonly-6379.aof
        appendfsync everysec
        no-appendfsync-on-rewrite no
        auto-aof-rewrite-percentage 100
        auto-aof-rewrite-min-size 64mb
        slowlog-log-slower-than 10000
        slowlog-max-len 1024
        vm-enabled no
        vm-swap-file /tmp/redis.swap
        vm-max-memory 0
        vm-page-size 32
        vm-pages 134217728
        vm-max-threads 4
        hash-max-zipmap-entries 512
        hash-max-zipmap-value 64
        list-max-ziplist-entries 512
        list-max-ziplist-value 64
        set-max-intset-entries 512
        zset-max-ziplist-entries 128
        zset-max-ziplist-value 64
        activerehashing yes

    redis-cli info:

        redis_version:2.4.16
        redis_git_sha1:00000000
        redis_git_dirty:0
        arch_bits:64
        multiplexing_api:epoll
        gcc_version:4.4.6
        process_id:4174
        uptime_in_seconds:79346
        uptime_in_days:0
        lru_clock:1064644
        used_cpu_sys:13.08
        used_cpu_user:19.81
        used_cpu_sys_children:1.56
        used_cpu_user_children:7.69
        connected_clients:167
        connected_slaves:0
        client_longest_output_list:0
        client_biggest_input_buf:0
        blocked_clients:6
        used_memory:15060312
        used_memory_human:14.36M
        used_memory_rss:22061056
        used_memory_peak:15265928
        used_memory_peak_human:14.56M
        mem_fragmentation_ratio:1.46
        mem_allocator:jemalloc-3.0.0
        loading:0
        aof_enabled:0
        changes_since_last_save:166
        bgsave_in_progress:0
        last_save_time:1352823542
        bgrewriteaof_in_progress:0
        total_connections_received:286
        total_commands_processed:507254
        expired_keys:0
        evicted_keys:0
        keyspace_hits:1509
        keyspace_misses:65167
        pubsub_channels:0
        pubsub_patterns:0
        latest_fork_usec:690
        vm_enabled:0
        role:master
        db0:keys=6,expires=0

    edit 1: added redis-cli info output
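
    Since the slowlog is already enabled in that config (slowlog-log-slower-than 10000, i.e. 10 ms), one cheap check worth running right after a timeout (a hedged suggestion): ask Redis for its recent slow commands. If the slowlog is empty while clients time out, the stall is more likely somewhere in the network path between the two tiers than inside Redis itself:

        # show the ten most recent commands that exceeded slowlog-log-slower-than
        redis-cli -h 192.168.100.26 -p 6379 slowlog get 10

        # and how many slow entries have accumulated in total
        redis-cli -h 192.168.100.26 -p 6379 slowlog len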

    Read the article
