Search Results

Search found 6834 results on 274 pages for 'dojo require'.

Page 196/274 | < Previous Page | 192 193 194 195 196 197 198 199 200 201 202 203  | Next Page >

  • Server redirect

    - by Tyy
    I have an ASP.NET app XYZ which requires SSL. I am supposed to work with an IIS server that has only one Web Site - DefaultWebSite - containing multiple apps and virtual directories (3rd party). So my app is located at domain.com/XYZ. I have to meet these conditions:
    1. Requesting DefaultWebSite (domain.com) will run my app. It could redirect to ../XYZ, but I would rather it didn't. If it has to be done this way, requesting DefaultWebSite or my app over either HTTP or HTTPS must always end up redirected to https://domain.com/XYZ.
    2. I can't touch any other apps or virtual directories, can't create an additional Web Site, can't set DefaultWebSite to require SSL, ...
    3. (EDIT) Transmitted data (GET or POST) must be preserved.
    What I have tried:
    - Setting the Web Site root directory to my app, but this caused the other apps to crash because of my Web.config (not sure why).
    - Setting up an HTTP redirect on DefaultWebSite to https://domain.com/XYZ. This seems to work correctly, but not when a user requests my app directly (redirected to domain.com/XYZ/XYZ, or a redirect loop).
    - Setting up a Default Document, but this seems to work only if the document is located in the Web Site root directory.
    I know I could write a simple .aspx with Response.Redirect, but... is there any better solution? Am I missing something?

    Read the article

  • What steps should I take to secure Tomcat 6.x?

    - by PAS
    I am in the process of setting up a new Tomcat deployment and want it to be as secure as possible. I have created a 'jakarta' user and have jsvc running Tomcat as a daemon. Any tips on directory permissions and such to limit access to Tomcat's files? I know I will need to remove the default webapps - docs, examples, etc... are there any best practices I should be using here? What about all the config XML files? Any tips there? Is it worth enabling the Security Manager so that webapps run in a sandbox? Has anyone had experience setting this up? I have seen examples of people running two instances of Tomcat behind Apache. It seems this can be done using mod_jk or with mod_proxy... any pros/cons of either? Is it worth the trouble? In case it matters, the OS is Debian lenny. I am not using apt-get because lenny only offers Tomcat 5.5 and we require 6.x. Thanks!
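    A minimal permissions sketch along these lines (assuming CATALINA_HOME is /opt/tomcat and the daemon runs as the 'jakarta' user; adjust paths to the actual layout):

        # remove the default webapps that ship with Tomcat
        rm -rf /opt/tomcat/webapps/docs /opt/tomcat/webapps/examples \
               /opt/tomcat/webapps/host-manager /opt/tomcat/webapps/manager
        # configuration owned by root, readable (not writable) by the daemon user
        chown -R root:jakarta /opt/tomcat/conf
        chmod 750 /opt/tomcat/conf
        chmod 640 /opt/tomcat/conf/*.xml /opt/tomcat/conf/*.properties /opt/tomcat/conf/*.policy
        # only the directories Tomcat must write to belong to the daemon user
        chown -R jakarta:jakarta /opt/tomcat/logs /opt/tomcat/temp /opt/tomcat/work

    Keeping webapps owned by root means a compromised webapp cannot rewrite its own code. Enabling the Security Manager confines webapps to what conf/catalina.policy grants, but expect to spend time adding permissions the applications legitimately need.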

    Read the article

  • Creating a private wiki

    - by Hand-E-Food
    I want to create a simple, private wiki, but am really struggling to find what I need. I require the following features:
    - Private wiki. Only I will read or write it.
    - Some formatting capability: headings, bold, italic, bullets, block quotes.
    - Wiki viewer for Windows 7. If it comes with an editor, I need to be able to hide it.
    - Page editor for Windows 7.
    - Page editor for iPhone.
    - Synchronized via the cloud but available offline on Windows.
    So far, my research has led me to the Markdown language. I can easily edit this as plain text using Notepad++ on Windows and Elements on iPhone. I can sync these files through Dropbox and have them available offline. What I can't find is a suitable viewer for Windows. I'd prefer to steer away from using HTML due to its verbose formatting codes. Can anyone recommend a solution for me? If need be, I'll be happy to make a small one-off payment for software.
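    One possible workaround for the missing viewer (an assumption on our part, not something the question settles on) is to render the Markdown files to local HTML with pandoc and open the result in any browser; the folder name below is a placeholder:

        # render one page; -s produces a standalone HTML file
        pandoc -s notes/projects.md -o notes/projects.html
        # or rebuild every page in the folder
        for f in notes/*.md; do pandoc -s "$f" -o "${f%.md}.html"; done

    The HTML stays a throwaway by-product, so the Markdown files in Dropbox remain the single source of truth.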

    Read the article

  • SQL Server 2005 Disk Configuration: Single RAID 1+0 or multiple RAID 1+0s?

    - by mfredrickson
    Assuming that the workload for the SQL Server is just a normal OLTP database, and that there are a total of 20 disks available, which configuration would make more sense?
    1. A single RAID 1+0 containing all 20 disks. This physical volume would contain both the data files and the transaction log files, but two logical drives would be created from this RAID: one for the data files and one for the log files. Or...
    2. Two RAID 1+0s, each containing 10 disks. One physical volume would contain the data files, and the other would contain the log files.
    The reason for this question is a disagreement between me (SQL Developer) and a co-worker (DBA). For every configuration that I've done, or seen others do, the data files and transaction log files were separated at the physical level and placed on separate RAIDs. However, my co-worker's argument is that by placing all the disks into a single RAID 1+0, any IO done by the server is potentially shared between all 20 disks, instead of just 10 disks in my suggested configuration. Conceptually, his argument makes sense to me. Also, I've found some information from Microsoft that seems to support his position: http://technet.microsoft.com/en-us/library/cc966414.aspx In the section titled "3. RAID10 Configuration", showing a configuration in which all 20 disks are allocated to a single RAID 1+0, it states: "In this scenario, the I/O parallelism can be used to its fullest by all partitions. Therefore, distribution of I/O workload is among 20 physical spindles instead of four at the partition level." But... every other configuration I've seen suggests physically separating the data and log files onto separate RAIDs. Everything I've found here on Server Fault suggests the same. I understand that log files will be write-heavy, and that data files will be a combination of reads and writes, but does this require that the files be placed onto separate RAIDs instead of a single RAID?

    Read the article

  • Remotely port forward/launch process or a client-less remote desktop app?

    - by DC177E
    I have an XP box running LogMeIn at a remote location behind a Linksys router, which ran well for a whole four days, until we had a power failure. Our ISP gave us a new IP, the machine restarted, and LogMeIn did not autorun (or, at least, it did not automatically sign in), and our service (which may or may not be a Minecraft server with non-backed-up save files) also did not run upon startup. LogMeIn does not register the new IP (it still displays the old one). I have a DDNS updater service, so I do know the new dynamic address. I have tried using the built-in XP Remote Desktop service, but, as with almost all non-cloud-based remote desktop services, it requires a port forward. Thus, I would appreciate it if anyone has any ideas as to:
    A: Any way of accessing our router remotely to forward the remote desktop port. I've seen the Remote Management option (forwarding the setup page to port 8080), but I do not have it enabled. I've tried UPnP, but again, the setup page for our router is not forwarded.
    B: Any way of remotely launching a process that does not require port forwarding (or uses ports 255XX, 18XXX, or 9000), such as a remote console service built into XP. I realize this is a near impossibility.
    C: A way to remotely start LogMeIn and sign in, which is likely a definite impossibility.
    Sorry if this is too specific for Stack Exchange, or if I've put it into the wrong section (is Super User the correct place for this?). Ideas would, again, be much appreciated, shot-in-the-dark as this may be.

    Read the article

  • linux: per-process monitor, every 10 minutes, with history access

    - by Inbar Rose
    I really didn't know a better way to ask my question, hence you get a horribly named question. I will explain what I want to do; maybe that will help you help me. I would like to have my Linux machine continuously monitor (every 10 minutes) all the processes on my machine. The information from each process that I require is the name, CPU usage, allocated (virtual) memory, and resident (RAM) memory. If these periodic reports were to be looked at, they would look something like this:
        PROCESS   CPU   RAM   VIRTUAL
        name1     %     MB    MB
        name2     %     MB    MB
        ...etc..etc
    These reports should be stored in such a way that I can access them at a later date by giving a date/time scope (range). For instance, if I want to see the history of my processes from 12:00:00 1.12.12 till 12:00:00 2.12.12, I can - and it should give me the history of the processes for every 10 minutes between those date/time borders. The format of the return is not important; that will be handled by a script anyway and can be modified into anything I need. I have looked into a few things so far, but have not found something that clearly meets my needs. Among the things I searched: sar, free(1), top(1).. and a few other things. It should be a simple issue; I can already see all this information by simply looking at my htop, but I need a tool that will gather the desired fields for me for each process every 10 minutes, and then also let me extract slices of that data based on date/time scopes (ranges). Note: I have limited experience with Linux, so please give detailed information. Note 2: The desired output will be something like this (after receiving the desired range):
        CPU USAGE BY PROCESS:
        proc_nameA 1,2,2,2,2,2......   numbers represent % usage every 10 minutes...
        proc_nameB 4,3,3,6,1,2......
    The same idea applies to the other information.
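    A minimal sketch of the gathering side, assuming a cron-driven script (paths and field choices are illustrative; the format specifiers are from procps ps(1)):

        #!/bin/sh
        # /usr/local/bin/proc-snapshot.sh - append one timestamped snapshot per run
        LOGDIR=/var/log/proc-history
        mkdir -p "$LOGDIR"
        {
            date '+### %Y-%m-%d %H:%M:%S'
            ps -eo comm,pcpu,rss,vsz --sort=-pcpu    # name, %CPU, resident KB, virtual KB
        } >> "$LOGDIR/$(date +%Y-%m-%d).log"

    Run it every 10 minutes from /etc/crontab:

        */10 * * * * root /usr/local/bin/proc-snapshot.sh

    Extracting a date/time slice is then a matter of picking the files for those days and filtering on the ### timestamp headers, for example with awk.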

    Read the article

  • Mercurial no longer working in NetBeans?

    - by John Isaacks
    I have been using Mercurial inside NetBeans for a while now. For the past few weeks Windows Explorer has been crashing for various reasons, including every time I right-click. Finally someone suggested I try uninstalling TortoiseSVN and TortoiseHG, since they affect Windows Explorer directly. I uninstalled both last week since I don't ever use them (I either use the NetBeans interface or the command-line interface). Since then my Windows Explorer has stopped crashing. Today I noticed that NetBeans is no longer showing me any of the Mercurial features. I pulled up a command prompt and it's not working there either. It seems like uninstalling TortoiseHG also uninstalled Mercurial altogether, which was not intended. I went to http://mercurial.selenic.com/wiki/Download to download the 64-bit 1.8.4 .exe version of Mercurial for Windows. I installed it, and I can now use the command line. However, it is still not working in NetBeans. Does NetBeans require Tortoise to work? Is there something else I am missing?

    Read the article

  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system, running Windows 7 x64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications - especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require Administrator access (e.g. Seagate BlackArmor) or simply fail without it -- since these programs are sending raw commands to a device, this is to be expected. I would like to be able to add the hashes of these particular programs to a whitelist and have them run as administrator without needing any prompts. Since these are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems. What I'd like to be able to do:
    1. Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users? Even if I put a "Deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, which would lack the deny token and provide admin access.
    2. Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/) but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7.
    Non-solutions:
    - RunAs: requires the administrator password! (but everyone calls it "sudo" anyway)
    - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again"

    Read the article

  • Subversion: Secure connection truncated

    - by Nick
    Hi, I'm trying to set up a Subversion server with apache2/WebDAV access. I've created the repository and configured Apache according to the official book, and I can see the repository in a web browser. The browser shows:
        conf/ db/ hooks/ locks/
    Although clicking any of those links gives an empty XML document like:
        <D:error>
          <C:error/>
          <m:human-readable errcode="2">
            Could not open the requested SVN filesystem
          </m:human-readable>
        </D:error>
    I've never used Subversion before, so I assume this is correct? Anyway, when I try to connect via a command-line client, it asks for my password, I give it, then I get the (useless) error message:
        svn: OPTIONS of 'https://svn.mysite.com': Could not read status line: Secure connection truncated (https://svn.mysite.com)
    The command I'm using is:
        svn checkout https://svn.mysite.com/ svn.mysite.com
    Subversion was installed using Ubuntu's package manager. It's version 1.6.6 on Ubuntu 10.04. My VirtualHost configuration:
        <VirtualHost 123.123.12.12:443>
          ServerAdmin [email protected]
          ServerName svn.mysite.com
          <Location />
            DAV svn
            SVNParentPath /var/svn/repos
            SVNListParentPath On
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/subversion/passwd
            Require valid-user
          </Location>
          # Setup The SSL Certificate Paths
          SSLEngine On
          SSLCertificateFile /etc/ssl/certs/mysite.com.crt
          SSLCertificateKeyFile /etc/ssl/private/dmysite.com.key
        </VirtualHost>
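    One detail worth noting, hedged since the question doesn't show how the repository was created: with SVNParentPath, each repository must be its own subdirectory of /var/svn/repos and is addressed by name in the URL, so checking out the bare parent path can fail in exactly this way. A sketch, assuming the Apache user is www-data and a repository called 'myproject':

        sudo svnadmin create /var/svn/repos/myproject
        sudo chown -R www-data:www-data /var/svn/repos/myproject
        svn checkout https://svn.mysite.com/myproject myproject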

    Read the article

  • Identifying test machines in analytics logs

    - by RTigger
    We're just beginning to add analytics to our SaaS application, to begin (among other things) billing clients based on usage. The problem we're running into is that in a few circumstances our support team will simulate a log-in to production to try to reproduce reported issues with a client's configuration. When they log in, an entry is made in our analytics logs saying that the client's account has logged in, which we use to calculate billing. A few ideas we had to solve this:
    1. We log IP addresses as well as machine keys for each PC that logs in - we could filter out known IP addresses and/or machine keys belonging to support. The drawback is we have to maintain a list of keys/addresses manually.
    2. If support (or anyone else internal) runs our application in debug mode (as opposed to release), it will not report analytics. This is fine, as long as support / anyone else remembers to switch to debug mode.
    3. Include some sort of reg key / similar setting required to be set when configuring a production system in order to send analytics. Again, fine, as long as our infrastructure team remembers to set the reg key or setting.
    All of these approaches require some sort of human involvement, which we all know can be iffy at best. Has anyone run into a similar situation? Is there an automated approach to this problem? (PS Of course, we shouldn't be testing in production, but there are a few one-off instances with customer set-up that we can't reproduce without logging in as them in production. This is the only time we do so, and this is the case I'm talking about in this question.)

    Read the article

  • Optimize Apache performance

    - by Phliplip
    I'm looking for ways to optimize our current web server hosted in-house. I'm trying to supply as much relevant information below; please let me know if you would require additional information in order to assist. The server is running a single website, an online pizza-ordering platform built on Zend Framework (ver 1). Traffic stats from the last month show approximately 6,000 pageloads per day, concentrated mainly around dinnertime, with peaks of around 1,500 loads/hour in that period. We recently upgraded from a 2/2 Mbit ADSL line to 100/100 Mbit fiber, and we still have performance issues at dinner time. We had assumed the 2 Mbit line was the issue. The website is pretty snappy in low-load periods.
    Hardware:
        CPU: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz (3000.13-MHz K8-class CPU)
        Mem: 328M Active, 4427M Inact, 891M Wired, 244M Cache, 623M Buf, 33M Free
        Swap: 16G Total, 468K Used, 16G Free (6GB physical, 16GB swap)
        Filesystem   Type    Size   Used   Avail  Capacity  Mounted on
        /dev/ad7s1a  ufs     4.8G   768M   3.7G   17%       /
        devfs        devfs   1.0K   1.0K   0B     100%      /dev
        /dev/ad7s1g  ufs     176G   5.2G   157G   3%        /home
        /dev/ad7s1e  ufs     4.8G   2.8M   4.5G   0%        /tmp
        /dev/ad7s1f  ufs     19G    3.5G   14G    19%       /usr
        /dev/ad7s1d  ufs     4.8G   550M   3.9G   12%       /var
    Server OS: FreeBSD 8.2-RELEASE
    Software: apache-2.2.17, php5-5.3.8, mysql-server-5.5
    Apache footprint (example, taken from # top):
        31140 www  1  45  0  377M  41588K  lockf  2  0:00  0.00%  httpd
        31122 www  1  44  0  375M  35416K  lockf  2  0:00  0.00%  httpd
        31109 www  1  44  0  375M  38188K  lockf  2  0:00  0.00%  httpd
        31113 www  1  44  0  375M  35188K  lockf  2  0:00  0.00%  httpd
    Apache is using the prefork MPM with APC (Alternative PHP Cache). The SSL module is loaded but not utilized (as in, it doesn't really work, thus is not used). There is a file containing settings for the MPM modules, but as far as I can see it's not included in httpd.conf; the include line is commented out. Thus I would guess that the prefork MPM is running off default values too. Here are some other Apache conf values I found, which are included in httpd.conf:
        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        UseCanonicalName Off
        HostnameLookups Off
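    For reference, a tuned prefork section for this kind of box tends to look something like the sketch below (the numbers are illustrative assumptions, not measurements; MaxClients has to be sized against the per-child memory footprint shown above so the server never dips into swap during the dinnertime burst):

        <IfModule mpm_prefork_module>
            StartServers          10
            MinSpareServers       10
            MaxSpareServers       20
            ServerLimit           60
            MaxClients            60
            MaxRequestsPerChild 2000
        </IfModule>
        KeepAliveTimeout 2

    Lowering KeepAliveTimeout frees children faster under load; the trade-off is slightly more TCP handshakes per visitor.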

    Read the article

  • Is there such a thing as a file hosted container which deduplicates data held within?

    - by Mallow
    Background: I have backups of a website which stores all of its data in a single file. This file is several gigs large, and I have many different backups of it. Most of the data within is the same, plus whatever was added or changed. I want to keep all the concurrent backups I've made through the years in case I find a horrible surprise of data corruption along the line. However, storing a 10-gig file every month gets expensive.
    Seeking solution: I've often thought about different ways of alleviating this problem. One thought that comes up very often combines the idea of a deduplicating file system with one that doesn't require its own partitioned volume on a hard drive. Something like what TrueCrypt does with what it calls "file-hosted containers", which the TrueCrypt program lets you mount and dismount like a regular hard drive.
    Question: Is there a virtual hard drive mounter which uses a file-based container with a data-deduplication file system? (This question is a little awkward to put into words; if you have a better idea of how to ask it, please feel free to help out.)
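    One concrete way to get this combination (an assumption about what would satisfy the requirement, not the only option) is a ZFS pool built on a plain file vdev with deduplication turned on; the file then plays the role of the container. Paths and sizes below are placeholders:

        # create a sparse 50 GB container file and build a pool inside it
        truncate -s 50G /storage/backups.img
        zpool create backuppool /storage/backups.img
        zfs set dedup=on backuppool
        zfs set compression=on backuppool
        # copy the monthly dumps in; identical blocks are stored only once
        cp site-backup-2012-11.dump /backuppool/
        zpool export backuppool        # "dismount" the container

    Bear in mind that ZFS dedup keeps its block table in RAM, so this trades disk for memory; chunk-based backup tools such as zbackup are a lighter-weight route to the same goal.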

    Read the article

  • Unable to delete a file or take ownership on Win7x64

    - by Basic
    I'm a developer, and as part of the build process a Microsoft DLL is copied to a certain folder. That file copy is now failing because the target can't be overwritten. I decided to delete it by hand (using an admin account but a non-elevated Explorer), so I browsed to the folder and attempted a delete. This failed (Require permission from the Administrator). The same applies when using an elevated Explorer. So I tried Properties - Security - Advanced - Ownership. The current owner is showing as "Unable to display current owner". I can't take ownership (a simple Access Denied message with no elaboration). An elevated Command Prompt/PowerShell doesn't help either (both give an Access Denied in their own way). Process Explorer shows no open handles on the file. Eventually, I booted to Linux and deleted the file, but what I'd like to know is what caused it? Security Essentials had no issues with the file. It's digitally signed by MS and the signatures match.
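    For the record, the usual command-line route in this situation (from an elevated prompt; the path is a placeholder) is to force ownership to the Administrators group and then regrant permissions:

        takeown /F "C:\build\target\Some.dll" /A
        icacls "C:\build\target\Some.dll" /grant Administrators:F
        del "C:\build\target\Some.dll"

    If even takeown fails with Access Denied, the block usually lies below the ACLs themselves (a filter driver holding the file, or filesystem corruption worth a chkdsk), which would fit the "Unable to display current owner" symptom.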

    Read the article

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own version of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive). Adding a new customer is currently quite a bit of work:
    - Create a user that is allowed to ssh
    - Create a new MySQL database and user
    - Create a virtual host for the application
    - Log in with the new user, do a git checkout of the application (in the right location)
    - Create tables in the new database, and add some init data
    - Add some cron jobs
    - Create a first user that can log in
    - Add this new instance to Capistrano
    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally this should be usable by a salesperson (so something web-based). I could write a (bash) script that does most of these tasks, and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means you need a list of all existing customers/instances.
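    A rough sketch of what the bash route could look like (all names here - repository URL, paths, vhost template - are hypothetical placeholders, and error handling is omitted):

        #!/bin/bash
        set -e
        CUSTOMER="$1"                      # e.g. ./provision.sh acme acme.example.com
        DOMAIN="$2"
        DBPASS=$(openssl rand -hex 12)

        useradd -m -s /bin/bash "$CUSTOMER"
        mysql -e "CREATE DATABASE \`${CUSTOMER}\`;
                  GRANT ALL ON \`${CUSTOMER}\`.* TO '${CUSTOMER}'@'localhost' IDENTIFIED BY '${DBPASS}';"
        sudo -u "$CUSTOMER" git clone git://example.com/app.git "/home/${CUSTOMER}/app"
        mysql -u"$CUSTOMER" -p"$DBPASS" "$CUSTOMER" < "/home/${CUSTOMER}/app/schema.sql"
        sed "s/__DOMAIN__/${DOMAIN}/; s/__USER__/${CUSTOMER}/" /etc/apache2/vhost.template \
            > "/etc/apache2/sites-available/${CUSTOMER}"
        a2ensite "$CUSTOMER" && service apache2 reload
        echo "*/5 * * * * ${CUSTOMER} /home/${CUSTOMER}/app/bin/cron.sh" > "/etc/cron.d/${CUSTOMER}"
        echo "Provisioned ${CUSTOMER} at ${DOMAIN} (db password: ${DBPASS})"

    The matching teardown script and the Capistrano entry are easier to keep in sync if the script also appends to a central customers list as its last step; the web wrapper then only needs to collect the two arguments.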

    Read the article

  • What are the right questions to ask when deciding whether to use Chef or Puppet?

    - by John Feminella
    I am about to start a new project which will, in part, require deploying many identical nodes of approximately three different classes:
    - Data nodes, which will run sharded instances of MongoDB.
    - Application nodes, which will run instances of a Ruby on Rails application and an older ASP.NET MVC application.
    - Processing nodes, which will run jobs requested by the application nodes.
    All the nodes will run on instances of Ubuntu 10.04, though they will have different packages installed. I have some familiarity with Chef from previous projects, though I don't consider myself an expert. In an effort to do due diligence, I have been investigating alternative possibilities. We have a number of folks in-house who are long-time Puppet users, and they have encouraged me to take a look. I am having trouble evaluating both choices, though. Chef and Puppet share much of the same domain terminology -- packages, resources, attributes, and so on -- and they have a common history that stems from taking different approaches to the same problem. So in some sense they are very similar. But much of the comparison information I've found, like this article, is a little outdated. If you were starting this project today, what questions would you ask yourself to decide whether you should use Chef or Puppet for configuration management? (Note: I don't want an answer to the question "Should I use Chef or Puppet?")

    Read the article

  • IIS 7: launch unique site instance per host name

    - by OlduwanSteve
    Is it possible to configure IIS 7 so that a single site with multiple bindings (or wildcard bindings) will launch a unique instance for each unique host name? To explain why this is desirable, we have an application that retrieves its configuration from a remote system. The behaviour of the application is governed by this configuration and not by the 'web.config'. The application uses its host name as a key to retrieve the configuration. Currently it is a manual process to create an identical IIS site for each instance of the application, differing only by the bindings. My thought, if it were possible, is that it would be nice to have one IIS site that effectively works as a template for an arbitrary number of dynamic sites. Whenever it is accessed by a unique host name a new instance of the site would be launched, and all further requests to that host name would go to that instance just as though I had created the site by hand. I use IIS regularly, but only for fairly straightforward site hosting. I'd like to know if this could be configured with vanilla IIS 7, but would also welcome answers that require a plugin or 3rd party product. Programming/architectural suggestions about changes to the app wouldn't really be appropriate for serverfault.

    Read the article

  • Trying to migrate old server to new. Getting duplicate name errors

    - by SpaceCowboy74
    I have an existing server on my network that is running Windows 2000 with SQL Server 2000 on it. We are in the process of moving the server to a Windows 2008 platform, with SQL 2008 as well. A few changes are happening, though. For one, applications that were on the old server will now be on a new application server. The issue is, the developers of the original applications hard-coded the server name in the apps and/or batch files. I could change all the code, but that would require weeks of work. My original idea was to change the hosts and lmhosts files to point to the new servers with a different IP. So I implemented the following, where oldserver is the original server and server is the new one brought online:
        hosts:
        192.168.1.10   oldserver
        192.168.1.15   server
        lmhosts:
        192.168.1.10   oldserver   #pre
        192.168.1.15   server      #pre
    The problem is, when I try to do this, I get the following errors:
        \\server\c$
        Logon Failure: The target account name is incorrect.
        \\oldserver\c$
        A duplicate name exists on the network.
    I know about renaming servers in AD, but can't do so yet, as the original server is in production and I cannot rename it without breaking a lot of things at the moment. I'm wanting to do a proof of concept for management before renaming the servers. Any idea how I should resolve this?
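    The two errors quoted typically point at name mismatches: "target account name is incorrect" is Kerberos complaining that the name used doesn't match a service principal name on the machine that answered, and "a duplicate name exists" means two machines are claiming the same NetBIOS name. A workaround often applied on the machine that should answer to the extra name, offered here only as a sketch to try in a lab first (the domain suffix is a placeholder):

        rem allow the server to answer SMB requests sent to an alias name
        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters ^
            /v DisableStrictNameChecking /t REG_DWORD /d 1 /f
        rem register the alias's service principal names against the new computer account
        setspn -A HOST/oldserver NEWSERVER
        setspn -A HOST/oldserver.yourdomain.local NEWSERVER

    Note that this only helps once the old machine no longer announces the name itself; while both boxes are live under the same name, the duplicate-name conflict is expected.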

    Read the article

  • What ports, besides 80, need to be available to send (only send) email using phpmailer to gmail over SSL?

    - by Wobblefoot
    Using phpmailer I keep getting a 110 timeout and "Unable to connect to host" when sending email from my web server. The authentication details are right and they work on another server I have (login, pwd, ports etc and gmail acct set up for SSL connections on 465), but it's failing on my new server.
    FIREWALL: I allow related/established, port 80 and a port for SSH on INPUT, then this on OUTPUT:
        7906  474K  DROP    tcp -- any any anywhere              anywhere              tcp dpt:smtp
           0     0  ACCEPT  tcp -- any any localhost.localdomain yw-in-f109.1e100.net  tcp dpt:submission
           0     0  ACCEPT  tcp -- any any localhost.localdomain gx-in-f109.1e100.net  tcp dpt:ssmtp
           0     0  DROP    tcp -- any any anywhere              anywhere              tcp dpt:submission
           9   540  DROP    tcp -- any any anywhere              anywhere              tcp dpt:ssmtp
    This output chain works on my other server and disabling it doesn't get mail delivered either.
    WEB SERVER: Varnish (80), Nginx (8088), Drupal 7, PHP5-FPM, APC, MySQL. All works beautifully, except for outgoing email. What else could it be? I understand phpmailer does NOT require a local MTA or procmail (this is sort of the point - I don't want the security or admin overhead of a full blown MTA on my web server). Am I wrong? Do I need an MTA as well? What local ports and programs are used to authenticate over SSL and route mail using phpmailer? Any ideas at all greatly appreciated - wasted a day on this nonsense already!
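    One thing the quoted rules suggest (hedged, since only a fragment of the chain is shown): the ACCEPT lines match specific 1e100.net hosts, which iptables resolved to fixed addresses when the rules were loaded, and Google's SMTP servers rotate IPs, so those matches can easily stop applying. A simpler sketch that allows outbound submission regardless of which Google address answers:

        # allow outbound SMTPS (465) and submission (587) from the web server
        iptables -I OUTPUT -p tcp --dport 465 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -I OUTPUT -p tcp --dport 587 -m state --state NEW,ESTABLISHED -j ACCEPT
        # quick end-to-end test without PHP in the picture
        openssl s_client -connect smtp.gmail.com:465 -quiet

    If the openssl test connects, the firewall is out of the way and what remains is PHPMailer's SMTPSecure/Port settings; no local MTA is needed when sending through Gmail's SMTP.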

    Read the article

  • Apache directory access with virtual host [SOLVED]

    - by alexeygaidamaka
    I have a virtual host with a configuration like the one below. When I try to get into foobar.com/dir and provide a valid username/password pair, I get a 403 Forbidden page instead of that directory's contents. www.foobar.com/dir has 777 rights, and .htpasswd is chmoded 644. But I can't figure out why I am still not seeing the contents. Please, give me a hint.
        ServerAdmin webmaster@localhost
        ServerName www.foobar.com
        ServerAlias www.foobar.com
        DocumentRoot /var/www/foobar
        <Directory />
          Options FollowSymLinks
          AllowOverride All
        </Directory>
        <Directory /var/www/foobar>
          Options -Indexes FollowSymLinks
          AllowOverride All
          Order allow,deny
          allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
          AllowOverride None
          Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
          Order allow,deny
          Allow from all
        </Directory>
        <Directory /var/www/foobar/dir>
          AllowOverride AuthConfig
          AuthName "Authorize yourself, please!"
          AuthType Basic
          AuthUserFile /etc/apache2/.htpasswd
          AuthGroupFile /dev/null
          Allow from All
          Order Allow,Deny
          Options +Indexes    <- this is the line that had to be added
          Require valid-user
        </Directory>
    Solution: you have to add the line "Options +Indexes" to see the directory contents.

    Read the article

  • How to decouple development server from Internet?

    - by intoxicated.roamer
    I am working in a small set-up where there are 4 developers (which might grow to 6 or 8 in a couple of years). I want to set up an environment in which developers get internet access but cannot share any company data on the internet. I have thought of the following plan:
    1. Set up a centralized git server (Debian). The server will have internet access. A developer will only have a git account on that server, and won't have any other account on it.
    2. Do not give internet access to developers' individual machines (Windows XP/Windows 7).
    3. Run a virtual machine (any multi-user OS) on the centralized server (the same one on which git is hosted). Developers will have accounts on this virtual machine and can access the internet via it. Any data movement between this virtual machine and the underlying server, as well as any of the developers' machines, is prohibited.
    4. All developers require a USB port on their local machine, so that they can burn their code into a microcontroller. This port will be made available only to the associated software that dumps the code into the microcontroller (MPLAB in the current case). All other software will be prohibited from accessing the port.
    As more developers get added, providing internet support for them will become difficult with this plan, as it will slow down the virtual machine running on the server. Can anyone suggest an alternative? Are there any obvious flaws in the above plan? Some key details of the server:
        OS: Debian
        RAM: 8GB
        CPU: Intel Xeon E3-1220v2 4C/4T

    Read the article

  • Multiple SSL certificates on one server

    - by Kyle O'Brien
    We're hosting two websites on our fairly tiny but dedicated production server. Both websites require SSL authentication, so we have virtual hosts set up for both of them. They both reference their own domain.key, domain.crt and domain.intermediate.crt files. Each CSR and certificate file for each site was set up using its own unique information, and nothing is shared between them (other than the server itself). However, whichever site's symbolic link (set up in /etc/apache2/sites-enabled) is referenced first is the site whose certificate gets served, even if we're visiting the second site. So, for example, assume our companies are Cadbury and Nestle. We set up both sites with their own certificates, but we create Cadbury's symbolic link in Apache's sites-enabled folder first and then Nestle's. You can visit Nestle perfectly fine, but if you check the certificate installation, it references Cadbury's certificate. We're hosting these websites on a dedicated Ubuntu 12.04.3 LTS server. Both certificates are provided by Thawte.com. I came across a few potential solutions with no degree of success. I'm hoping someone else has a decent solution? Thanks. Edit: The only other solution that seems to have provided success to some people is using SNI with Apache. However, the setups described didn't seem to coincide with our setup at all.
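    For completeness, the SNI route on this stack amounts to name-based virtual hosts on port 443, along the lines of the sketch below (domain names and paths are placeholders; Apache on Ubuntu 12.04 supports SNI as long as each vhost carries a distinct ServerName):

        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName www.cadbury.example
            SSLEngine on
            SSLCertificateFile      /etc/ssl/certs/cadbury.crt
            SSLCertificateKeyFile   /etc/ssl/private/cadbury.key
            SSLCertificateChainFile /etc/ssl/certs/cadbury.intermediate.crt
        </VirtualHost>

        <VirtualHost *:443>
            ServerName www.nestle.example
            SSLEngine on
            SSLCertificateFile      /etc/ssl/certs/nestle.crt
            SSLCertificateKeyFile   /etc/ssl/private/nestle.key
            SSLCertificateChainFile /etc/ssl/certs/nestle.intermediate.crt
        </VirtualHost>

    Without a matching ServerName (or with clients that don't send SNI), Apache falls back to the first vhost loaded, which is exactly the symptom described.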

    Read the article

  • Is a modem required to be programmed when used with an internet provider?

    - by Tim
    I wonder if a modem is required to be programmed when used with an internet provider? If yes, what is the purpose of programming a modem? Do both a DSL and a cable ISP require a modem to be used in an individual home? For example, I have a Motorola SURFboard modem (Model: SB5101, Customer S/N: xxx, HFC MAC ID: xxx, USB CPE MAC ID: xxx), a coil of cable and a splitter from a Comcast High-Speed Internet Self-Installation Kit, which were bought 5 years ago when I purchased Comcast internet service from its retailer www.comcastoffers.com. With them, I was hoping to reduce the fees by avoiding having Comcast people come over to install. But I remember that at the time Comcast sent its technician here, dismissed my idea of self-installation, saying they needed to use their own modem, and charged me a hefty fee, so my equipment has never been used. I haven't been using Comcast for a long time. I wonder if my modem, cable and splitter (brand new, never used) are still good to use with an internet provider such as Comcast? If needed, we can ignore their policy and just consider the technology side. Or are they no good to use, and must I throw them away like trash? Thanks and regards!

    Read the article

  • allow public access to subfolder of protected folder on apache

    - by UnnamedMook
    I have password-protected the root folder of my website while I do maintenance, but I want to display a custom 401 error page to let people know the site is under construction. Unfortunately, my web host doesn't allow me write access to anything outside the root folder of my website, so this custom error page must be stored in the root folder or one of its subfolders. Instead of my custom error page I get the Apache default error page, and it also says "Additionally, a 401 Authorization Required error was encountered while trying to use an ErrorDocument to handle the request." I searched for ways to make a subfolder of a protected directory public, and all I could find was to use the "Satisfy any" directive, but this doesn't work for me. It doesn't work on a file-only basis either, as with the .htaccess file below.
        #Authorization Restriction
        AuthType Basic
        AuthName "Access to root"
        AuthUserFile *********************************
        Require user ***********
        Order Allow,Deny
        Satisfy any
        #Error Documents
        ErrorDocument 401 Error-401.html
        #Allow access to error documents
        <Files Error-*,html>
          Order Deny,Allow
          Allow from all
          Satisfy any
        </Files>
    I can only use .htaccess files; I don't have access to httpd.conf.

    Read the article

  • Is it possible to use different zsh menu selection behaviour for different commands?

    - by kine
    I'm using the menu select behaviour in zsh, which invokes a menu below the cursor where you can see the various possibilities. The .zshrc option I have set for this is:
        zstyle ':completion:*' menu select=2
    By default, pressing Return to select a possibility in this menu only completes the word -- it does not actually send the command. For example, I might get a menu like this:
        ~ % cd de<TAB>
        completing directory:
        [Desktop/]  Development/
    Pressing Return here will result in:
        ~ % cd Desktop/
    I then have to press Return a second time to actually send the command. I can modify this behaviour so that pressing Return both selects the completion and sends the command by doing this:
        bindkey -M menuselect '^M' .accept-line
    However, there's a problem with this: sometimes I need to complete a file or directory without sending the command. For example, I might need to do ln -s Desktop Desktop2 -- with this bindkey behaviour, trying to complete Desktop will result in ln -s Desktop/ being sent as the command, and obviously I don't want that. I'm aware that just pressing space will let me get on with the command, but it's now a habit. Given this, is there a way to make it so that only some commands let you press Return once (like cd), but all other commands require pressing it twice?

    Read the article

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented only by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path; then configuring a new system compatible with the old is a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is look at the corresponding configuration files and try to guess what you changed to achieve the desired effect. Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes, etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to roll back an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and reliant on human judgment. Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them to a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process would require a lot of effort and would be fragile. Are there better methods or tools that I'm missing?
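    On the /etc-under-git point specifically, the usual packaged answer is etckeeper, which wraps exactly this workflow. A sketch for a Debian-style system; it tracks /etc only, so it complements rather than replaces a fuller tool such as Puppet, Chef, or Ansible for the installed-packages side:

        apt-get install etckeeper        # hooks into apt so package-driven /etc changes get committed
        etckeeper init                   # puts /etc under version control with sensible ignore rules
        etckeeper commit "baseline after installation"
        # later, after a manual configuration change:
        etckeeper commit "tightened sshd_config: disabled password auth"

    etckeeper also records file ownership and permissions (which plain git loses) and runs as root, sidestepping the sudo and key-exposure issues mentioned above.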

    Read the article
