Search Results

Search found 8849 results on 354 pages for 'cloud hosting'.

  • Solaris 10 opencsw git package issue with bitbucket git hosting

    - by zephyrus00jp
    Has anyone tried using `git' from the opencsw package to work with the bitbucket source hosting service (under Solaris 10)? I tried to use git as the bitbucket documentation explains, and under Debian GNU/Linux it worked flawlessly as described, but under Solaris 10 I got an Authentication Failed message. I even ran truss to see if anything was suspicious, but could not find any smoking gun under Solaris as to why it failed. Running ldd on the git binary didn't show anything suspicious either (except for the libcrypt library, which made me wonder about export restrictions. Have they shipped an incompatible version? But since the password is typed into an https connection, I suspect it is only a matter of web-level cryptography and should be universal these days.) I am now tempted to compile the git suite under Solaris 10 myself, but I have found people who seem to be using git with bitbucket under Solaris 10, so I am wondering what could be wrong.
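
    A few commands that may help narrow this down (a hedged sketch; the git-remote-https path below is only a guess at where opencsw installs it, adjust to your system):

        # see which curl/OpenSSL libraries the opencsw git HTTP helper is linked against
        ldd /opt/csw/libexec/git-core/git-remote-https
        # watch the HTTP exchange git performs against bitbucket's smart-HTTP endpoint
        GIT_CURL_VERBOSE=1 git ls-remote https://username@bitbucket.org/username/repo.git
        # compare with a plain curl request using the same credentials
        curl -v -u username "https://bitbucket.org/username/repo.git/info/refs?service=git-upload-pack"

    If curl authenticates but git does not, the problem is more likely in the curl/SSL libraries git was built against than in the credentials themselves.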

    Read the article

  • Setup secure shared hosting (Apache, PHP, MySQL)

    - by Apaz
    So I'm setting up shared hosting with Apache, PHP and MySQL, and the biggest question mark is what to do with PHP, since there are a million options out there for configuring it securely. The plan is: chroot for MySQL (built-in support for chroot); chroot for Apache (mod_security); each user executing their PHP scripts as their own user (see below); set open_basedir; disable all "evil" PHP functions and settings (allow_url_fopen, system, exec, and so on). I've looked at suexec and suphp but they seem very slow; http://blog.stuartherbert.com/php/2007/12/18/using-suexec-to-secure-a-shared-server/ http://blog.stuartherbert.com/php/2008/01/18/using-suphp-to-secure-a-shared-server/ So I've looked some more and found some other solutions: apache2-mpm-itk + mod_php(?), mod_fcgid + php-fpm, mod_fastcgi + php-fpm. I've tried a simple setup with mod_fastcgi + php-fpm and it seems to work, runs as the correct user and so on, but the protection against directory traversal is still open_basedir(?). One solution for that could be to use php-fpm's chroot option, but that causes a lot of other issues, such as domain name resolution and sending mail not working inside the chroot. Tips?
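
    For the "each user as their own user" part, a per-account PHP-FPM pool is one way to express most of this plan in one place. A minimal sketch, assuming PHP-FPM 5.3.3 or later and a hypothetical user "alice" with hypothetical paths:

        ; /etc/php-fpm.d/alice.conf -- one pool per hosting account
        [alice]
        user = alice
        group = alice
        listen = /var/run/php-fpm-alice.sock
        listen.owner = www-data
        listen.group = www-data
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2
        php_admin_value[open_basedir] = /home/alice/www:/tmp
        php_admin_flag[allow_url_fopen] = off
        php_admin_value[disable_functions] = exec,passthru,shell_exec,system,proc_open,popen

    Each pool runs under its own UID/GID, so ordinary file permissions (rather than open_basedir alone) become the main isolation boundary between accounts.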

    Read the article

  • SSL certificates with password encrypted key at hosting provider

    - by Jurian Sluiman
    We are a software company and offer hosting to our clients. We have a VPS at a large Dutch datacenter. For some of the applications we need an SSL certificate, and we'd like to protect its key file with a password. Our VPS reboots now and then because of updates and the like, but that means our Apache doesn't start right away, because the passwords have to be entered. This results in downtime and is of course a really big problem. We can give the passwords to our VPS datacenter, or create certificates based on key files without passwords. Neither solution seems ideal, because both compromise the security of our certificates. What's the best solution for this issue?
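
    The two usual compromises, sketched here with hypothetical file names, are either to keep an unencrypted copy of the key with tight file permissions, or to keep the passphrase and have mod_ssl fetch it from a program at startup:

        # option 1: write a passphrase-free copy of the key (keep the protected original elsewhere)
        openssl rsa -in server.key -out server.key.insecure
        chmod 600 server.key.insecure

        # option 2: in the Apache SSL configuration, let a script print the passphrase at startup
        SSLPassPhraseDialog exec:/usr/local/sbin/ssl-passphrase.sh

    Note that a passphrase script readable by root on the same box protects little more than an unencrypted key file does; the passphrase mainly helps for keys that end up in backups or copies off the server.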

    Read the article

  • Hosting services on ubuntu server VM

    - by Trevor Hartman
    I've got OS X Server running on a MacBook, and I'm looking to run an Ubuntu Server VM on it via Parallels. I'm thinking about hosting all my Apache sites inside Linux, and possibly some other services. I'm curious what a viable config would be, having not done this before. I need to use bridged networking, right? How do I direct web traffic to the VM instead of OS X? I haven't got my head wrapped around how this works yet, so any help would be appreciated.

    Read the article

  • best way to quickly share multiple photos without permanently hosting them

    - by dsollen
    I find that I'm often asked to share lots of photos with someone, enough that uploading each one individually gets tedious when I would like to drag and drop the whole bunch. I could put them on Photobucket, but some of them are semi-private; private enough that I don't want them to be easily found on image hosting sites. Are there any convenient ways of sharing these photos quickly while still being able to remove them from the inter-webs afterwards (without too much hassle)? I have found that the full version of Yahoo Messenger has great photo-sharing options, but not everyone has it and I can't expect people to download it just to see some photos.

    Read the article

  • How to bypass Forefront TMG for downloading from Adobe Cloud

    - by user1006272
    I hope that this question has not been asked as I've spent a couple of days googling around trying to find a solution. I have one computer that needs to download from Adobe Cloud to install applications like Photoshop etc... The issue I'm having is that Adobe uses a download manager program (AdobeApplicationManager.exe) that just keeps incrementing the time left on the download of any app like Photoshop. Is there a way to allow just the download manager from that one computer to bypass any filtering settings in Forefront TMG 2010? I have very little knowledge of servers / ISA servers / Forefront TMG and have been thrown into this position by luck I guess. Any help with this would be highly appreciated. Thanks in advance.

    Read the article

  • FreePBX: Asterisk in the Cloud (EC2) Audio Problems

    - by neezer
    Please pardon the newbie question, but I can't seem to figure this out. I followed Voxilla's tutorial to a tee: http://voxilla.com/2009/10/15/voxill...p-by-step-1457 But when making calls, my softphones connect, yet there is no audio (in either direction). I know from poking around the forums that this is generally caused by two factors: NAT and audio codecs. Being new to the arena, however, I don't know which. I believe I have Asterisk and the clients restricted to just ulaw, and I also believe I have the correct ports open and my externip set correctly (I think the Voxilla AMI does this automatically, since it's in the cloud). I'm a bit lost. I'd be happy to post whatever configuration files might help, provided you tell me where they are on the filesystem. But like I said before, this is effectively a vanilla install of Voxilla's own FreePBX AMI. I'd appreciate any help or guidance here. Thanks!
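
    Calls that connect with no audio on EC2 are usually the NAT side: SIP signalling gets through but the RTP media does not. A hedged sketch of the settings involved (the IP addresses are placeholders; on a FreePBX box custom SIP general settings usually go in sip_general_custom.conf rather than sip.conf itself):

        ; /etc/asterisk/sip_general_custom.conf
        externip=203.0.113.10          ; the instance's public/elastic IP
        localnet=10.0.0.0/255.0.0.0    ; the instance's private network
        nat=yes
        disallow=all
        allow=ulaw

    Besides UDP 5060 for SIP, the EC2 security group also needs the RTP port range open (UDP 10000-20000 by default, as configured in rtp.conf); leaving it closed produces exactly the "connected but silent" symptom described.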

    Read the article

  • VirtualBox: VBoxManage modifyhd hosting on mac os x resize not supported

    - by dwstein
    I am a complete newbie. I'm hosting a VM on OS X using VirtualBox. I'm trying to resize the virtual hard drive by using the following command in the terminal: VBoxManage modifyhd "<absolute path including name and extension>" --resize 20480 I used a disk size of 25480 (I'm not really sure how to pick the correct size) and I got the following error: Progress state: VBOX_E_NOT_SUPPORTED VBoxManage: error: Resize hard disk operation for this format is not implemented yet! This is VirtualBox version 4.1.18. I don't really even know what to ask. What am I doing wrong?
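
    That error usually means the disk is in a format whose resize is not implemented in that VirtualBox release (typically VMDK). A common workaround, sketched with placeholder file names, is to clone the disk to VDI, resize the clone, and attach it in place of the original:

        # clone to VDI, which does support --resize
        VBoxManage clonehd "disk.vmdk" "disk.vdi" --format VDI
        # --resize takes the new size in MB, so 25480 is roughly a 25 GB disk
        VBoxManage modifyhd "disk.vdi" --resize 25480

    After attaching the new .vdi to the VM (via the Storage settings or VBoxManage storageattach), the partition inside the guest still has to be grown with the guest OS's own tools.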

    Read the article

  • VPS hosting for a social network

    - by Jana
    Hi, I've developed a social network and I've been using shared hosting for it since it was launched. With that I wasn't able to send emails in bulk for things like newsletters and invitations to join my site. Plus, most importantly, most of the mails I sent ended up in users' spam folders. I'm planning to move to a VPS as it may not have such limits. I'm wondering what the cheapest VPS host available is. I'm not very familiar with Linux commands and am looking for cPanel to do the work for me. Will the following configuration suit a "new" social network like mine, which has less load? 1000Mhz Guaranteed 512MB Guaranteed RAM 20GB (RAID) Disk Space 1000GB/month Bandwidth 2 IP(s) & 5 Backups Semi Managed Thanks in advance

    Read the article

  • Setting up IIS7 to mimic a GoDaddy shared hosting plan

    - by NerdFury
    I host multiple domains on a GoDaddy shared hosting account. I would like to set up a website locally in IIS 7 that mimics the setup of my hosted account so that I can test and debug applications locally before deploying, since debugging after deploying, or discovering there are issues after deploying, is frustrating. I have created a folder WebRoot and put my main application in that folder. I created a website in IIS 7 and pointed it at that folder. I set up bindings with a fake domain, and created a matching entry in my hosts file to make the fake domain point at 127.0.0.1. I then created a folder www.otherdomain.com under WebRoot. I then created an application underneath my website and pointed it at this folder. I can't find how I can add bindings to the web application to have it referenced as a different fake domain, rather than as a subdirectory under my root domain. What would be the proper way to set up IIS to best simulate the environment on the GoDaddy servers?
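
    In IIS, bindings belong to sites, not to applications, so the usual way to mimic an addon domain is to create a second site whose physical path is the subfolder and give it its own host-header binding (plus a matching hosts-file entry). A sketch with hypothetical names and paths:

        %windir%\system32\inetsrv\appcmd add site /name:"otherdomain-local" ^
            /bindings:http/*:80:www.otherdomain.local ^
            /physicalPath:"C:\inetpub\WebRoot\www.otherdomain.com"

    Because GoDaddy maps the addon domain directly onto that subfolder, a separate site pointed at the same folder reproduces the behaviour more closely than a child application does, while the folder still sits under WebRoot just as it does on the shared host.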

    Read the article

  • Transferred SSL Certificate to Rackspace Cloud Server - Occasional Errors

    - by ngl5000
    Okay, I recently transferred my Comodo SSL certificate from my previous Bluehost account to my new Rackspace cloud server (LAMP stack). Basically I just copy-pasted the server cert and key and checked to make sure it was properly installed, which it was. Now I am running into some issues: occasionally I will hear from people that they are getting an 'Untrusted Connection' error, while others are not getting this error at all. Recently someone sent me a screenshot of their error and it said: "This certificate is not trusted because no issuer chain was provided." The browser they noticed this on was Safari, so I cleared all my history data in Safari and opened the site, but I am not seeing that error. Does anyone have any idea how to fix something like this? Thanks!
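
    "No issuer chain was provided" points at a missing intermediate certificate: browsers that already have the Comodo intermediates cached show no error, while the rest fail. A hedged Apache sketch (file names are placeholders; the chain file is the CA bundle Comodo ships alongside the certificate):

        SSLEngine on
        SSLCertificateFile      /etc/ssl/certs/example_com.crt
        SSLCertificateKeyFile   /etc/ssl/private/example_com.key
        SSLCertificateChainFile /etc/ssl/certs/comodo_ca_bundle.crt

    After reloading Apache, an external SSL checker (or openssl s_client -connect example.com:443) should show the full chain being served rather than just the leaf certificate.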

    Read the article

  • Cloud services can't be reached from complex customer infrastructure

    - by Nock
    We have several services running in a cloud; they are all hosted on Windows Server 2012 R2, and each has a public IP address and a specific port. Some of our customers can't reach them because, for "some reason", the ports are blocked by a firewall between them and us (some customers are using a shared internet connection in a multi-tenant office and they can't change the firewall configuration). Well, you get it: we can't make every firewall allow the communication. My customers all run Windows 7 at least. What is the best counter-solution in such a case, using Microsoft (Windows Server) technologies? The best would be some kind of tunneled communication or VPN, but the customer should also be able to access his/her enterprise resources. By the way, today we are using IPSec via Windows Firewall to secure the communication; is IPSec tunneling a solution for us? Otherwise, is there a service in Windows to enable some kind of VPN between a client and a server, but only for a given set of servers?

    Read the article

  • svnrepo + trac hosting

    - by Shikhar
    Does anyone know of a good and economical svn + trac hosting site? Specific requirements: 1) trac hooks should be in place, which let commit messages update trac issues. 2) It should have emailTotracScript or MailToTracPlugin installed, with which an issue can be reported via email. If it's located in Asia Pacific that would be great, as the latency from the US is very high. I am already using sourcerepo.com and it's very good. The only shortcoming is they don't have emailtotrac and the latency is significant. Any other input would be helpful. TIA

    Read the article

  • Domain controller in cloud, how do we set up local BDC

    - by brian b
    We have a domain controller (Exchange box) hosted at our hosting provider. We need to set up a local domain controller so we can handle VPN and local authentication tasks. I can make the PDC accept all connections from our office IP. How do I get the office router to correctly allow two-way communication between the PDC (cloud) and the local DC? Is there a list of ports I need to pass through to the local DC? Thanks! ("PDC" and "BDC" used for clarity -- I know that the concept is obsolete.)
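
    As a hedged starting point, these are the ports Active Directory replication and authentication typically need between the two DCs; check Microsoft's documentation for your exact Windows Server version before building the rules:

        TCP 135                RPC endpoint mapper
        TCP/UDP 389, TCP 636   LDAP / LDAPS
        TCP 3268-3269          Global Catalog
        TCP/UDP 88             Kerberos
        TCP/UDP 53             DNS
        TCP 445                SMB (SYSVOL replication)
        TCP/UDP 464            Kerberos password change
        UDP 123                NTP
        TCP 49152-65535        dynamic RPC range (Server 2008 and later)

    Because of that wide dynamic RPC range, many people prefer a site-to-site VPN (or an IPsec tunnel) between the office and the hosted PDC instead of opening the ports individually on the office router.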

    Read the article

  • Name-based virtual hosting in Apache

    - by malvikus
    I'd like to set up name-based virtual hosting in Apache, but I don't have a DNS name (local private network). Thus I want to get something like this: http://192.168.0.1/wiki - first virtual host - wiki. http://192.168.0.1/redmine - second virtual host - redmine. As I understand it, this might be achievable by using the ServerName option in the <VirtualHost> section of both vhosts, but the Apache documentation makes no mention of whether I can use an IP address as the FQDN. Is it possible? How can I achieve this? P.S.: I want to share my sites on the same subnet only, so anyone who can ping me can enter http://my_ip/wiki and get the wiki, or http://my_ip/redmine and get redmine.
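
    Name-based virtual hosts are selected by hostname, so with a single IP and no DNS the usual approach is not two vhosts at all but one vhost with an Alias per application. A sketch with hypothetical paths (Apache 2.2 access syntax; use "Require all granted" on 2.4):

        Alias /wiki    /var/www/wiki
        Alias /redmine /var/www/redmine/public

        <Directory /var/www/wiki>
            Order allow,deny
            Allow from all
        </Directory>

        <Directory /var/www/redmine/public>
            Order allow,deny
            Allow from all
        </Directory>

    If separate vhosts are really wanted, each client on the subnet would need hosts-file entries (e.g. wiki.local and redmine.local pointing at 192.168.0.1), since the Host header is the only thing Apache can use to tell name-based vhosts apart.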

    Read the article

  • Oracle Delivers Latest Release of Oracle Enterprise Manager 12c

    - by Scott McNeil
    Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption News Summary IT organizations are adopting private clouds as a stepping-stone to business-driven, self-service IT. Successful implementations hinge on the ability to efficiently deploy and manage cloud services at enterprise scale. Having a complete cloud management solution integrated with an enterprise-class technology stack is a fundamental requirement for IT. Oracle Enterprise Manager 12c Release 4 meets that requirement by helping businesses become more agile and responsive, while reducing cost, complexity, and risk. News Facts Oracle Enterprise Manager 12c Release 4, available today, lets organizations rapidly adopt Oracle-based, enterprise-scale private clouds. New capabilities provide advanced technology stack management, secure database administration, and enterprise service governance, enabling Oracle customers and partners to maximize database and application performance and drive innovation using self-service IT platforms. The enhancements have been driven by customers and the growing Oracle Enterprise Manager Ecosystem, comprised of more than 750 Oracle PartnerNetwork (OPN) Specialized partners. Oracle and its partners and customers have built over 140 plug-ins and connectors for Oracle Enterprise Manager. Watch the video highlights. Automation for Broader Cloud Services Oracle Enterprise Manager 12c Release 4 allows for a rapid enterprise-wide adoption of database, middleware and infrastructure services in the private cloud, driven by an enhanced API-enabled service catalog. The release features “push button” style provisioning of complete environments such as SOA and Oracle Active Data Guard, and fast data cloning that enables rapid deployment and testing of enterprise applications. Out-of-the-box capabilities to detect data and configuration vulnerabilities provide enhanced cloud service governance along with greater operational control through a flexible and extensible showback mechanism. Enhanced Database Management A new performance warehouse enables predictive database diagnostics and trend analysis and helps identify database problems before they occur. New enterprise data-governance capabilities enhance security by helping systematically discover and protect sensitive data. Step-by-step orchestration of upgrades with the ability to rollback changes enables faster adoption of Oracle Database 12c. Expanded Fusion Middleware Management A new consolidated view of Oracle Fusion Middleware 12c deployments with a guided management capability lets administrators apply best management practices to diverse middleware environments and identify performance issues quickly. A Java VM Diagnostics as a Service feature allows governed access to diagnostics data for IT workers across multiple disciplines for accelerated DevOps resolutions of defects and performance optimization. New automated provisioning for SOA lets middleware administrators perform mass SOA provisioning with ease. Superior Enterprise-Grade Management Private roles and preferred credentials have been added to Oracle Enterprise Manager to provide additional fine-grained security for organizations with complex access control requirements. A new security console provides a single point of control for managing the security of Oracle Enterprise Manager environments. 
Support for the latest industry standard SNMP v3 protocol, including encryption, enables more secure heterogeneous management. “Smart monitoring” adapts to observed environmental changes and adds self-management capabilities to help Oracle Enterprise Manager run at peak performance, while demanding less IT supervision. Supporting Quotes “Lawrence Livermore National Laboratory has a strong tradition of technology breakthroughs and leadership. As a member of Oracle’s Customer Advisory Board for Oracle Enterprise Manager, we have consistently provided feedback and guidance in the areas of enterprise-scale cloud, self-diagnosability, and secure administration for the product,” said Tim Frazier, CIO, NIF and Photon Sciences, Lawrence Livermore National Laboratory. “We intend to take advantage of the Release 4 features that support enterprise-scale availability and fine-grained security capabilities for private cloud deployments.” “IDC's most recent CloudTrack survey shows that most enterprises plan to adopt hybrid cloud architectures over the next three years,” said Mary Johnston Turner, Research Vice President, Enterprise System Management Software, IDC. “These organizations plan to deploy a wide range of workloads into cloud environments including mission critical database and middleware services that require high levels of fault tolerance and disaster recovery. Such capabilities were traditionally custom configured for each application but cloud offers the possibility to incorporate such properties within the service definition, enabling organizations to adopt cloud without compromise. With the latest release of Oracle Enterprise Manager 12c, Oracle is providing customers with an out-of-the-box experience for delivering highly-resilient cloud services for databases and applications.” “Since its inception, Oracle has been leading the way in innovative, scalable and high performance solutions for the enterprise. With this release of Oracle Enterprise Manager, we are extending this leadership by providing enterprise-scale capabilities for planning, delivering, and managing private clouds. We call this ‘zero-to-cloud – accelerated.’ These enhancements help our customers to expedite their adoption of cloud computing and prepares them for the next generation of self-service IT,” said Prakash Ramamurthy, senior vice president of Systems and Cloud Management at Oracle. Supporting Resources Oracle Enterprise Manager 12c Video: Cerner Delivers High Performance Private Cloud Video: BIAS Achieves Outstanding Results with Private Cloud Press Release Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise, smart phone adoption is reaching the farthest corners of the globe; the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so. Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping The Golden Elixir for the business world. No-one would argue their importance. So what are 'advanced workforce analytics?' Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience. The net result of which is happier customers. The obvious benefits but the lesser realised impact Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is lesser known. In actuality mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source - HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no-longer sits in an office, at the same desks every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality however is mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making. Google calls it Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change is much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and ability to anticipate, staff satisfaction and retention is higher, and real time feedback constant. 
The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships. IT and HR - partners and stewards of mobility One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT from one of service provision to partnership. The reason for the dynamic shift is largely due to the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect. No turning back, and no desire to With real time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear as a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisations' evolving strategy. It also helps them ensure regulatory compliance much more easily. Once an arduous task, with mobile enabled automation and quality data, compliance is simpler. Their world has changed for the better. For the CIO, mobility also assists them to optimise performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them means employees don't need high-level technical expertise to train users. It reduces the training and engagement required from the IT team so they can focus on other things that deliver value to the bottom line; all the while lowering the cost of assets and related maintenance work by simplifying processes. Rewards of a mobile enterprise outweigh risks With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile. 
Several state government websites have advice on how to create mobile apps and more. And as recently as last week the Victorian Minister for Technology Gordon Rich-Phillips unveiled his State government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications, and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops. Aaron is the Vice President of HCM Strategy at Oracle Corporation where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. Other responsibilities include, ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever. When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire. You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure. The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism. So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal. 
    Mine looks like this:

        public interface ICacheProvider
        {
            void Add(string key, object item, int duration);
            T Get<T>(string key) where T : class;
            void Remove(string key);
        }

    Now we’ll create two implementations of this interface… one for Azure cache, one for HttpRuntime:

        public class AzureCacheProvider : ICacheProvider
        {
            public AzureCacheProvider()
            {
                _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to
            }

            private readonly DataCache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
            }

            public T Get<T>(string key) where T : class
            {
                return _cache.Get(key) as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

        public class LocalCacheProvider : ICacheProvider
        {
            public LocalCacheProvider()
            {
                _cache = HttpRuntime.Cache;
            }

            private readonly System.Web.Caching.Cache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration), System.Web.Caching.Cache.NoSlidingExpiration);
            }

            public T Get<T>(string key) where T : class
            {
                return _cache[key] as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

    Feel free to expand these to use whatever cache features you want. I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls it a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

        ObjectFactory.Initialize(x =>
            {
                x.Scan(scan =>
                        {
                            scan.AssembliesFromApplicationBaseDirectory();
                            scan.WithDefaultConventions();
                        });
                if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
                    x.For<ICacheProvider>().Use<AzureCacheProvider>();
                else
                    x.For<ICacheProvider>().Use<LocalCacheProvider>();
            });

    If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider, otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository.
    Let’s say your repository class looks like this:

        public class MyRepo : IMyRepo
        {
            public MyRepo(ICacheProvider cacheProvider)
            {
                _context = new MyDataContext();
                _cache = cacheProvider;
            }

            private readonly MyDataContext _context;
            private readonly ICacheProvider _cache;

            public SomeType Get(int someTypeID)
            {
                var key = "somename-" + someTypeID;
                var cachedObject = _cache.Get<SomeType>(key);
                if (cachedObject != null)
                {
                    _context.SomeTypes.Attach(cachedObject);
                    return cachedObject;
                }
                var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
                _cache.Add(key, someType, 60000);
                return someType;
            }

            ... // more stuff to update, delete or whatever, being sure to remove
                // from cache when you do so
        }

    When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is, Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-“ plus the ID of the object, and then remove it from cache, in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won’t find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database. So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, for example, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime. That’s something a lot of people debate about using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.

    Read the article

  • Pushing DNSSEC updates with offline keys

    - by eggyal
    In a non-professional capacity, I look after the DNS of some 18 domains: mostly personal/vanity domains for immediate family. I outsource the whole shebang to an inexpensive managed hosting provider with a web interface through which I manage the zones; since the provider also offers DNSSEC, I have successfully deployed that too. These domains are so unimportant that an attack targetted against them seems much less likely than a general compromise of my provider's systems, at which point the records of all their customers might be changed to misdirect traffic (perhaps with extremely long TTLs). DNSSEC could protect against such an attack, but only if the zone's private keys are not held by the hosting provider. So, I wonder: how can one keep DNSSEC private keys offline yet still transfer signed zones to an outsourced DNS host? The most obvious answer (to me, at least) is to run one's own shadow/hidden master (from which the provider can slave) and then copy offline-signed zonefiles to the master as required. The problem is that the only machine I (want to*) control is my personal laptop, which usually connects from a typical home ADSL (behind NAT over a dynamically-assigned IP address). Having them slave from that (e.g. with a very long Expiry time on the zone for periods when my laptop is offline/unavailable) would not only require a Dynamic DNS record from which they can slave (if indeed they can slave from a named host rather than a static IP address), but would also involve me running a DNS server on my laptop and opening both it and my home network up to the incoming zone transfer requests: not ideal. I would prefer a much more push-oriented design, whereby my laptop initiates transfer of offline-signed zonefiles/updates to the provider's servers. I looked into whether nsupdate could fit the bill: documentation is a little sketchy, but my testing (with BIND 9.7) suggests it can indeed update DNSSEC zones, but only where the server holds the keys to perform the zone signing; I have not found a way to have it take an update including the relevant RRSIG/NSEC/etc. records and have the server accept them. Is this a supported use-case? If not, I suspect the only solutions which could fit the bill will involve non-DNS-based transfer of the zone updates and would welcome recommendations that are supported by (hopefully inexpensive) hosting providers: SFTP/SCP? rsync? RDBMS replication? Proprietary API? Finally, what would be the practical implications of such a setup? Key rotation is jumping out at me as being an obvious difficulty, especially if my laptop is offline for extended periods. But the zones are extremely stable, so perhaps I could get away with long-lived ZSKs**...? * Whilst I could run a shadow/hidden master on e.g. an outsourced VPS, I dislike the overhead of having to secure / manage / monitor / maintain yet another system; not to mention the additional financial costs of so doing. ** Okay, this would enable a concerted attacker to replay outdated records—but the risk and impact of such are both tolerable in the case of these domains.
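
    For completeness, a hedged sketch of the offline-signing half using BIND's tools (the key file names are made up); the open question remains how the provider will accept the pushed file:

        # on the offline laptop: the private keys never leave this machine
        dnssec-signzone -o example.com -k Kexample.com.+008+11111.key db.example.com Kexample.com.+008+22222.key
        # this produces db.example.com.signed; push it out-of-band, e.g. over SSH
        scp db.example.com.signed dnsprovider:/path/agreed/with/provider/

    As the question notes, nsupdate-style dynamic updates generally assume the server signs the zone itself, so a provider that accepts whole signed zonefiles (SFTP, rsync, or a proprietary API) is the more realistic fit for this push-oriented design.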

    Read the article

  • Hoster not fulfilling contract: how to get money back?

    - by plua
    For several years, we have as a small webdesign company rented a dedicated server at a large hosting provider. They had several support levels. When we signed up for this, we had very limited in-house knowledge about server maintenance, and were very worried about the security of our server. We therefore took one of the more expensive support packages. An important aspect in this were these claims: [PROVIDER] verifies the availability of the latest security updates and sends you a notification to see if you are interested to have them installed [PROVIDER] verifies the availability of the latest supported software updates and sends you a notification to see if you are interested to have them installed These items were clearly stated on their website as being part of the advantage of this package. With not enough knowledge about installing and updating such software on a Linux server, we decided to go for this package. We paid a premium of $50 per month over the maintenance package that is next in line ($100 vs $50). Over the years, we have paid several thousand dollars for this service. Then came the moment that I learned more and more about server management. And I found out step by step that our server was horrendously outdated! We had an OS that was hardly updated, our anti-virus was not working because it needed certain more recent packages on the OS, and in general there were a whole bunch of security vulnerabilities and fixes that were lacking. Shocked, I wrote the provider. Turns out, they decided unilaterally that they would not send out any notifications to clients because clients would get too many e-mails. This is a quote from their explanation: [...] We have decided not to spam its clients with OS and security updates and only install them whenever asked by the client I was shocked! They had never mentioned that they would drop this service, and in fact the claims about updating their clients through e-mail were still on their website, after they apparently stopped doing this years ago! Upon finding this out, I requested they refund all that we have paid as a premium over the other package, and make it available as future credit with their own company. I thought this was a very reasonable request. However, they said they would only go back one year and provide credit for this one year. Mails went back and forth, but they were not willing to give credit for the whole period, which I felt I was entitled to. So ultimately I left the hosting company, and filed a complaint with the BBB a while ago. Now, I am not the kind of person who runs to a lawyer for any minor thing, but in this case I am really considering taking action. I have been paying for years for a service I did not receive (the premium package had a few other pluses, but we took it primarily for these two points, and I can prove that we did not use the other benefits). For our small company the hosting costs were a very large part of our budget, and I feel it is very unfair how this large provider just does not care about not fulfilling its obligations. So my question is: what action should I take? Is a lawyer the only next step, or are there other suggestions? And am I right here to claim this money, or are they right that there is some sort of statute of limitations on such claims? Any feedback is appreciated.

    Read the article

  • Cloud-aware programming and help choosing a good framework

    - by Shoaibi
    How can I write a cloud-aware application, i.e. an application that takes advantage of being deployed on a cloud? Is it the same as an application that runs on a VPS/dedicated server? If not, what are the differences? Are there any design changes? What steps do I need to take to migrate an application to being cloud-aware? Also, I am about to implement a web application idea which would need features like security, performance, caching, and, more importantly, it should be free. I have been comparing some frameworks and found that Django has the least RAM/CPU usage and works great in prefork+threaded mode, but I have also read that Django-based sites stop responding under a huge load of connections. Other frameworks that I have seen/know are Zend, CakePHP, Lithium/Cake3, CodeIgniter, Symfony, Ruby on Rails... So I would leave this to your opinion as well: suggest a good free framework based on my needs. Finally, thanks for reading the essay ;)

    Read the article

  • Designing a different kind of tag cloud.

    - by animuson
    Rather than having a bunch of links that are all different sizes, I want all of my tags to be the same size. However, my goal is to minimize the amount of space required to make the cloud, aka minimizing the number of lines used. Take this example: Looks like any normal tag cloud. However, look at all that extra space around the 'roughdiamond' tag, which could be filled in by other tags like 'stone' down near the bottom, which could effectively eliminate an entire extra line from the cloud. How would I go about getting the words to fill in whatever space possible above them before starting a new line? I'm not talking about reorganizing them to find the absolute minimum number of lines required. If I was going through the list in the image, 'pendant', 'howlite', and 'igrice' would go to line 1 filling it up, 'roughdiamond' would go to line 2 because line 1 is full, 'tourmaline' would go to line 3 because it can't fit on lines 1 or 2, same with 'emberald', but 'pearl' would go to line 2 because it can fit there since there is extra space. I figure there would probably be some way of doing this in CSS that would simply cause the links to collapse into any fillable space it can fit in to.

    Read the article

  • PHP-FPM and APC for shared hosting?

    - by Tiffany Walker
    We are looking into finding a way to get APC to only create one cache per account / site. This can be done with FastCGI (last update 2006…), but with fcgid APC will have to create multiple caches for multiple processes run by the same account. To get around this problem, we have been looking into PHP-FPM. The PHP process manager allows multiple PHP processes to share a single APC cache. But from what I have read (I hope I'm wrong), even if you create a pool per process, all sites across all pools will share the same APC cache. This brings us back to the same problem as with shared Memcached: it's not secure! On php-fpm's site I read that you can chroot php-fpm pools and define a specific UID and GID per pool… if this is the case, then shouldn't APC have to use this user and not have access to other pools' caches? An article here (in 2011) suggests that you would need to run one process per pool, creating multiple launchers on different ports and different config files with one pool per config file: http://groups.drupal.org/node/198168 Is this still necessary? If so, what would be the impact of running say 800 processes of php-fpm? Would it be mainly memory? If so, how can I work out what the memory impact would be? I guess that it would be better to run 800 instances of php-fpm than to have accounts creating multiple APC caches for a single site? If on average an account creates a 50MB cache and creates 3 caches per account, that makes 150MB per account, which makes 120GB… However, if each account uses on average only 50MB, that would make 40GB. We will have at least 128GB of RAM on our next server, so 40GB is acceptable if running 800 x PHP-FPM does not create an overhead of more than 20GB! What do you think: is PHP-FPM the best way to go to provide a secure APC cache on shared hosting with a server that has a decent amount of memory? Or should I be looking at another system? Thanks!

    Read the article

  • local msmtp and ovh hosting

    - by klez
    I have my personal email hosted on OVH (personal hosting plan) and I'm not able to send mail using msmtp. Here's a typical session:

        ignoring system configuration file /etc/msmtprc: No such file or directory
        loaded user configuration file /home/klez/.msmtprc
        using account default from /home/klez/.msmtprc
        host = ssl0.ovh.net
        port = 465
        timeout = off
        protocol = smtp
        domain = localhost
        auth = choose
        user = federicoculloca%xxxxxxx
        password = *
        ntlmdomain = (not set)
        tls = on
        tls_starttls = off
        tls_trust_file = (not set)
        tls_crl_file = (not set)
        tls_fingerprint = (not set)
        tls_key_file = (not set)
        tls_cert_file = (not set)
        tls_certcheck = off
        tls_force_sslv3 = off
        tls_min_dh_prime_bits = (not set)
        tls_priorities = (not set)
        auto_from = off
        maildomain = (not set)
        from = federicoculloca@xxxxxxxx
        dsn_notify = (not set)
        dsn_return = (not set)
        keepbcc = off
        logfile = (not set)
        syslog = (not set)
        reading recipients from the command line
        TLS certificate information:
            Owner:
                Common Name: ssl0.ovh.net
                Organizational unit: Domain Control Validated
            Issuer:
                Common Name: OVH Secure Certification Authority
                Organization: OVH SAS
                Organizational unit: Low Assurance
                Country: FR
            Validity:
                Activation time: Mon 31 Jan 2011 01:00:00 CET
                Expiration time: Wed 15 Feb 2012 00:59:59 CET
            Fingerprints:
                SHA1: F9:DC:41:F9:A2:38:51:9B:56:E4:98:E6:CD:81:31:42:E6:0E:26:6D
                MD5:  FC:EC:F3:8F:28:E4:7E:28:99:89:E6:BB:C9:DF:71:CE
        <-- 220 ns0.ovh.net ssl0.ovh.net. You connect to mail427.ha.ovh.net ESMTP
        --> EHLO localhost
        <-- 250-ssl0.ovh.net. You connect to mail427.ha.ovh.net
        <-- 250-AUTH LOGIN PLAIN
        <-- 250-AUTH=LOGIN PLAIN
        <-- 250-PIPELINING
        <-- 250-8BITMIME
        <-- 250 SIZE 109000000
        --> AUTH PLAIN xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        <-- 235 ok, go ahead (#2.0.0)
        --> MAIL FROM:<federicoculloca@xxxxx>
        --> RCPT TO:<[email protected]>
        --> DATA
        <-- 250 ok
        <-- 250 ok
        <-- 354 go ahead
        --> hello world
        --> .
        <-- 554 mail server permanently rejected message (#5.3.0)

    And my configuration:

        # ~/.msmtp
        # Mostly from Peter Garrett's examples
        # https://lists.ubuntu.com/archives/ubuntu-users/2007-September/122698.html
        # Accounts from Scott Robbins' `A Quick Guide to Mutt'
        # http://home.nyc.rr.com/computertaijutsu/mutt.html
        account xxxxx
        host ssl0.ovh.net
        from federicoculloca@xxxxxx
        auth on
        user federicoculloca%xxxxxx
        password xxxxxx
        tls on
        tls_certcheck off
        tls_starttls off

    Any idea?

    Read the article
