Search Results

Search found 6834 results on 274 pages for 'dojo require'.

  • Ext JS 4.2.1 loading controller - best practice

    - by Hown_
    I am currently developing an Ext JS application with many views/controllers/... and I am wondering what the best practice is for loading the JS controllers/views/and so on. Currently I have my application defined like this: // enable javascript cache for debugging, otherwise Chrome breakpoints are lost Ext.Loader.setConfig({ disableCaching: false }); Ext.require('Ext.util.History'); Ext.require('app.Sitemap'); Ext.require('app.Error'); Ext.define('app.Application', { name: 'app', extend: 'Ext.app.Application', views: [ // TODO: add views here 'app.view.Viewport', 'app.view.BaseMain', 'app.view.Main', 'app.view.ApplicationHeader', //administration 'app.view.administration.User' ... ], controllers: [ 'app.controller.Viewport', 'app.controller.Main', 'app.controller.ApplicationHeader', //administration 'app.controller.administration.User', ... ], stores: [ // stores in there.. ] }); This forces the client to load all my views and controllers at startup and, of course, calls the init methods of all controllers. I need to load data every time I change my view, so now I can't load it in my controller's init function. I would have to do something like this, I assume: init: function () { this.control({ '#administration_User': { afterrender: this.onAfterRender } }); }, Is there a better way to do this, or just another event? The main thing I am asking myself, though, is whether it is best practice to load all the JavaScript at startup. Wouldn't it be better to only load the controllers/views/... that the client actually needs right now? Or should I load all the JS at startup? If I do want to load the controllers dynamically, how could I do this? I assume I would have to remove them from my application arrays (views, controllers, stores), create an instance when I need it, and maybe set the view in the controller's init?! What's best practice?
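    A minimal sketch of the afterrender approach described in the question, assuming the user data lives in a store (the store id 'administration.Users' and the handler name are made up for illustration; they are not from the original post):

    ```javascript
    // A possible shape for app.controller.administration.User: defer data loading
    // until the view is actually rendered, instead of doing it in init().
    Ext.define('app.controller.administration.User', {
        extend: 'Ext.app.Controller',

        init: function () {
            this.control({
                '#administration_User': {
                    afterrender: this.onUserViewRender   // hypothetical handler name
                }
            });
        },

        onUserViewRender: function (view) {
            // 'administration.Users' is an assumed store id, not from the question
            var store = Ext.getStore('administration.Users');
            if (store && !store.isLoading()) {
                store.load();
            }
        }
    });
    ```

    Whether it is also worth deferring the Ext.require calls themselves depends on how big the individual controllers are; the trade-off is fewer bytes at startup versus more round trips (and more moving parts) later.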

  • Can't get gravity to work with RMagick and 'caption'

    - by mph
    I'm using RMagick 2.12.2 with ImageMagick 6.5.6-10 on Snow Leopard. I'm trying to put captions on a collection of photos, and I'm getting the caption to work (i.e. it appears on the image), but I can't get the gravity parameter to work correctly. No matter what I set it to, I end up with some variation on NorthGravity. For instance: Setting it to SouthWestGravity gives me NorthWestGravity. Setting it to SouthEastGravity gives me NorthEastGravity. Setting it to CenterGravity gives me NorthGravity. In other words, I can't get the caption to come down off the top of the image. I'd consider using "annotate," but I need "caption" so the lengthy caption text for each image will wrap. What am I doing wrong? Here's the code: #!/usr/bin/env ruby require "rubygems" require "yaml" require "RMagick" include Magick base_dir = "/Users/mike/Desktop/caption_test" photo_log = File.open("#{base_dir}/photo_log.yaml" ) YAML::load_documents(photo_log) do |doc| caption = doc["photo-caption"] filename = doc["file"] canvas = ImageList.new.from_blob(open("#{base_dir}/#{filename}") { |f| f.read } ) canvas << Magick::Image.read("caption:#{caption}") { self.gravity = SouthWestGravity self.size = "#{canvas.first.columns}" self.font = "Helvetica Neue" self.pointsize = 12 self.background_color = "#fff" }.first canvas.flatten_images.write("#{base_dir}/images/#{filename}") end

  • What are some commonly used source code check-in policies?

    - by rwmnau
    I'm curious what code review policies other development shops apply to their source code when it's checked into the source control repository. I'm setting up a TFS (Team Foundation) server, and I'd like to apply some check-in policies to start to stamp out bad practices. For example, I was thinking of starting with the following couple, so this is the kind of stuff I'm looking for: Prohibit empty "Catch" blocks. This would prevent applications from swallowing any exceptions without at least requiring a comment explaining why it's not necessary to do anything with the exception. Prohibit "Catch ex as Exception" generic exception handling. Instead, require code to catch specific types of exceptions and deal with them appropriately, instead of just building catch-all handling. Require a check-in comment. This one should be self-explanatory, though it seems that TFS (and most other source-control systems) don't require a comment by default. While these are just examples, they're where I'm thinking of starting, and while I'd like some additional examples of what's popular, I'm open to feedback on these. Also, though we're a mostly .NET shop, I imagine the popular policies are universal across languages and IDEs (we have some Java development and a few people who will use the repository develop with Eclipse).

  • Online job-searching is tedious. Help me automate it.

    - by ehsanul
    Many job sites have broken searches that don't let you narrow down jobs by experience level. Even when they do, it's usually wrong. This requires you to wade through hundreds of postings that you can't apply for before finding a relevant one, which is quite tedious. Since I'd rather focus on writing cover letters etc., I want to write a program to look through a large number of postings and save the URLs of just those jobs that don't require years of experience. I don't require help writing the scraper to get the html bodies of possibly relevant job posts. The issue is accurately detecting the level of experience required for the job. This should not be too difficult as job posts are usually very explicit about this ("must have 5 years experience in..."), but there may be some issues with overly simple solutions. In my case, I'm looking for entry-level positions. Often they don't say "entry-level", but if those words are included the job should probably be saved. Next, I can safely exclude a job that says it requires "5 years" of experience in whatever, so a regex like /\d\syears/ seems reasonable to exclude jobs. But then I realized some jobs say they'll take 0-2 years of experience, which matches the exclusion regex but is clearly a job I want to take a look at. Hmm, I can handle that with another regex. But some say "less than 2 years" or "fewer than 2 years". I can handle that too, but it makes me wonder what other patterns I'm not thinking of, and whether I'm possibly excluding many jobs. That's what brings me here, to find a better way to do this than regexes, if there is one. I'd like to minimize the false negative rate and save all the jobs that seem like they might not require many years of experience. Does excluding anything that matches /[3-9]\syears|1\d\syears/ seem reasonable? Or is there a better way? Training a bayesian filter maybe?
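    As a rough sketch of the kind of pattern-based filter described above, written in JavaScript purely for illustration (the patterns, thresholds and keep-by-default policy are assumptions, not from the original post):

    ```javascript
    // Sketch only: classify a posting as "probably worth keeping" for an
    // entry-level search. Check the inclusive patterns first so that
    // "less than 2 years" is not knocked out by the exclusion regex.
    function probablyEntryLevel(text) {
      var t = text.toLowerCase();
      if (/entry[- ]level/.test(t)) return true;                 // explicit mention
      if (/(?:less|fewer) than\s*\d+\s*years?/.test(t)) return true;
      if (/0\s*(?:-|to)\s*[12]\s*years?/.test(t)) return true;   // "0-2 years", "0 to 1 year"
      // exclude "3 years" .. "9 years" and "10+ years" style requirements
      if (/(?:^|[^-\d])(?:[3-9]|1\d)\+?\s*years?/.test(t)) return false;
      return true;  // default to keeping the posting, to minimise false negatives
    }

    console.log(probablyEntryLevel("Must have 5 years experience in Java"));      // false
    console.log(probablyEntryLevel("0-2 years of experience, recent grads OK"));  // true
    console.log(probablyEntryLevel("Fewer than 2 years of experience required")); // true
    ```

    A Bayesian classifier trained on postings you have already labelled would likely generalise better, but a small rule list like this is easy to audit and tune.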

  • Multi Player game using Nodejs and Socket IO

    - by Kishorevarma
    I am trying out a multi player racing game using Node, Socket.IO and Express. I have tried a simple example to see the latency between the node server and the clients. I have a draggable image in the client; when I move the image in one client, it has to move in all clients. So basically, when I move the image I send its position to the node server in JSON format, and from there I broadcast it to all clients. There is approximately 300ms of latency. Following are the results. Client 1 sending data to server at : 286136 (timestamp) Server received at : 286271 Client2 received data at : 286470 Client3 received data at : 286479 Client4 received data at : 286487 Client5 received data at : 286520 The latency between a move from client1 to client5 is 384ms. That is too high for a racing game. Here is my server code. var app = require('express').createServer(); var io = require('socket.io'); var http = require('http'); var http_server = http.createServer(); var server = http.createServer(app); server.listen(3000); var socket = io.listen(server,{ log: false }); socket.sockets.on('connection', function (client) { client.on('message', function (data){ console.log("data arrived to server",new Date().getTime()); // Below both statements are giving same latency between the client 1 and client 5 client.broadcast.emit('message',data); //socket.sockets.emit('message',data); }); }); 1) Is there any way to optimize the server code to reduce the latency? 2) Is this the expected latency using node and websockets? 3) Can't Socket.IO broadcast the data asynchronously (I mean at the same time)? Thanks Kishorevarma
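    One client-side mitigation worth sketching, independent of the server code above: rather than emitting a message on every drag event, sample the position at a fixed rate so each client sends a steady, smaller stream. The `draggedImage` object and the 50ms interval below are illustrative assumptions, not from the original post:

    ```javascript
    // Assumed socket.io 0.x client, matching the server's 'message' handler above.
    var socket = io.connect('http://localhost:3000');
    var lastSent = null;

    setInterval(function () {
      // draggedImage is a hypothetical object holding the image's current position
      var pos = { x: draggedImage.x, y: draggedImage.y, t: Date.now() };
      if (!lastSent || lastSent.x !== pos.x || lastSent.y !== pos.y) {
        socket.send(JSON.stringify(pos));   // arrives in the server's 'message' handler
        lastSent = pos;
      }
    }, 50);  // ~20 updates per second; clients can interpolate between updates
    ```

    This does not remove network latency, but it caps the message rate, and interpolating positions on the receiving clients usually hides a few hundred milliseconds far better than trying to deliver every single mouse move.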

  • What is a good solution to log the deletion of a row in MySQL?

    - by hobodave
    Background I am currently logging deletion of rows from my tickets table at the application level. When a user deletes a ticket the following SQL is executed: INSERT INTO alert_log (user_id, priority, priorityName, timestamp, message) VALUES (9, 4, 'WARN', NOW(), "TICKET: David A. deleted ticket #6 from Foo"); Please do not offer schema suggestions for the alert_log table. Fields: user_id - User id of the logged in user performing the deletion priority - Always 4 priorityName - Always 'WARN' timestamp - Always NOW() message - Format: "[NAMESPACE]: [FullName] deleted ticket #[TicketId] from [CompanyName]" NAMESPACE - Always TICKET FullName - Full name of user identified by user_id above TicketId - Primary key ID of the ticket being deleted CompanyName - Ticket has a Company via tickets.company_id Situation/Questions Obviously this solution does not work if a ticket is deleted manually from the mysql command line client. However, now I need to handle that case as well. The issues I'm having are as follows: Should I use a PROCEDURE, FUNCTION, or TRIGGER? -- Analysis: TRIGGER - I don't think this will work because I can't pass parameters to it, and it would trigger when my application deletes the row too. PROCEDURE or FUNCTION - Not sure. Should I return the number of deleted rows? If so, that would require a FUNCTION, right? How should I account for the absence of a logged in user? -- Possibilities: Using either a PROC or FUNC, require the invoker to pass in a valid user_id Require the user to pass in a string with the name Use the CURRENT_USER - meh Hard code the FullName to just be "Database Administrator" Could the name be an optional parameter? I'm rather green when it comes to sprocs. Assuming I went with the PROC/FUNC approach, is it possible to outright restrict regular DELETE calls to this table, yet still allow users to call this PROC/FUNC to do the deletion for them? Ideally the solution is usable by my application as well, so that my code is DRY.

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in the Windows Azure platform. It's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on a worker role. Below are some benefits of using a worker role. - WAWS leverages IIS and IISNode to host the Node.js application, which runs in x86 WOW mode. This reduces performance compared with x64 in some cases. - A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8000 concurrent requests limitation. - WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features, which can be changed while the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription. - Since with a WACS worker role we start node ourselves in a process, we can control the input, output and error streams. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started just from its executable. This means that in Windows Azure we can have a worker role with "node.exe" and the Node.js source files, then start it in the Run method of the worker role entry class. Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project. Right click on the worker role project and add an existing item. By default Node.js will be installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe". Then we need to create the entry code of Node.js. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode, and IISNode only accepts "server.js". But here, as we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the "Add new item" context menu. We have to create a text file, and then rename it with the JavaScript extension. After we added these two files we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer". Otherwise they will not be included in the package when deployed. Let's paste some very simple Node.js code into "index.js", as below. As you can see, I created a web server listening on port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start "node.exe" with this file when our worker role starts. This can be done in its Run method. I find the node.exe and entry JavaScript file names, and then create a new process to run them. Our worker role will wait for the process to exit. 
    If everything is OK, once our web server is opened the process will stay there listening for incoming requests and should not be terminated. The code in the worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the compute emulator UI the worker role started and executed Node.js, and a Node.js window appeared. Open the browser to verify the website hosted by our worker role. Next let's deploy it to azure. But we need some additional steps. First, we need to create an input endpoint. By default there's no endpoint defined in a worker role. So we will open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use. In this case I will use 80. Even though we created a web server, we should add a TCP endpoint to the worker role, since Node.js always listens on TCP rather than HTTP. Then change "index.js" so our web server listens on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in the browser we can see our Node.js website running on the WACS worker role. We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator registers 80 and maps the 80 endpoint to 81, but our Node.js cannot detect this, so when it tries to listen on 80 it fails since 80 is already in use.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install the modules we need and then just publish or upload all files to WAWS. But if we are using a WACS worker role, we have to take some extra steps to make the modules work. Assume that we plan to use "express" in our application. First of all we should download and install this module through the NPM command. But after the install finishes, the files are just on disk and not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to azure. Hence we need to add them to the project. In the solution explorer window click the "Show all files" button, select the "node_modules" folder and in the context menu select "Include In Project". But that's not enough. We also need to set all files in this module to "Copy always" or "Copy if newer", so that they are uploaded to azure with "node.exe" and "index.js". This is a painful step since there might be many files in a module. 
    So I created a small tool which can update a C# project file and set all of its items to "Copy always". The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when using this tool. I created it only for this demo, so do not use it directly in a production environment. Unload the worker role project and execute this tool with the worker role project file name as the command line argument; it will set all items to "Copy always". Then reload the worker role project. Now let's change "index.js" to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let's publish it and have a look in the browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js as well when hosting on a worker role. Since we can control the version of Node.js, here we can now use the x64 version of "node-sqlserver". This is better than hosting Node.js on WAWS, since that only supports x86. Just install the "node-sqlserver" module from NPM and copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder. Include them in the worker role project and run my tool to set them to "Copy always". Finally update "index.js" to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
    For example, we defined the endpoint in the worker role properties, but we also hard-coded the listening port in Node.js. Ideally our Node.js code would retrieve the endpoint itself, but I can tell you that won't work with this approach. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • xl2tpd[845]: parse_config: line 13: data 'ipsec sared=yes' occurs with no context

    - by mmc18
    When I execute xl2tpd I am having the following error. # xl2tpd -D xl2tpd[845]: parse_config: line 13: data 'ipsec sared=yes' occurs with no context xl2tpd[845]: init: Unable to load config file When I remove line 13, I get the same error for line 14; therefore I do not think the problem is with "ipsec sared". Here is my configuration file xl2tpd.conf. LINUX Ubuntu 12.0.4 ;Openswan IPsec 2.6.37; xl2tpd version: xl2tpd-1.3.1 ; [global] ipsec sared=yes listen-addr=47.168.137.27 ; [lns default] ip range = 192.168.1.10-192.168.1.20 local ip = 192.168.1.1 require chap = yes refuse pap = yes require authentication = yes ppp debug = yes pppoptfile = /etc/ppp/options.xl2tpd length bit = yes name=LinuxIPSECVPN ANSWER: (since I do not have enough reputation I am writing it over here.) Removing the ";" character at the beginning of [global] and [lns default] solved the issue. At first I thought that [global] and [lns default] were just comments.

  • 64-bit Windows 7 softphone to make SIP calls without registering with a SIP proxy?

    - by Dan J
    We have test tools that require us to call SIP addresses like localhost:5061. I used to use SJPhone on Windows XP, and an older version of X-lite, which both worked fine, and didn't require the SIP phone to be registered with a SIP proxy. I have just upgraded to Windows 7 and SJPhone doesn't seem to work any more (see forum here for others with the problem) - it says "No sound input device / No sound output device" at startup. I have tried a range of other softphones (X-lite 3, X-lite 4, Zoiper, 3CX), but I can't seem to find any that will install on Windows 7 and will let me call a SIP address like localhost:5061. It might be that I just don't know how to configure these phones to do it...

  • IIS 7.0: Requiring Client Certificates causes error 500 and "page cannot be displayed"

    - by user48443
    I have two Windows 2008 x86 servers running IIS 7.0, one site on each server; both sites are SSL-enabled, using DoD-issued certificates. Both sites are accessible via https over port 443, but fail the moment Client Certificates are set to Require or Accept. IIS log records error 500.0.64 but nothing else. I have several Windows 2008 IIS 7 x64 servers that require client certificates and they are working as expected; it's just the two x86 servers that are being problematic.

  • Requiring SSH-key Login Via PAM From Specific IP Ranges

    - by Sean M
    I need to be able to access my server (Ubuntu 8.04 LTS) from remote sites, but I'd like to worry a bit less about password complexity. Thus, I'd like to require that SSH keys be used for login instead of name/password. However, I still have a lot to learn about security, and having already badly broken a test box when I was trying to set this up, I'm acutely aware of the chance of screwing myself while trying to accomplish this. So I have a second goal: I'd like to require that certain IP ranges (e.g. 10.0.0.0/8) may log in with name/password, but everyone else must use an SSH key to log in. How can I satisfy both of these goals? There already exists a very similar question here, but I can't quite figure out how to get to what I want from that information. Current tactic: reading through the PAM documentation (pam_access looks promising) and looking at /etc/ssh/sshd_config.

  • How to set a key combination as a browser toolbar shortcut?

    - by Ian Campbell
    For example, I so often clear my browser history with CTRL+SHIFT+DEL. While this is easy enough, I'd like to take it a step further and have a shortcut in my browser's toolbar that when clicked executes the above-mentioned key combination, something like this: Is this directly possible, or would it require creating a plugin? If it would require creating a plugin, how might that be done? p.s. — I'm using Firefox, but it would be awesome if there was a browser-independent solution.

  • When using Bundler and Rails 2.3.5 I get uninitialized constant SubdomainFu when migrating

    - by user347480
    Hi I'm using bundler with rails 2.3.5 and I'm trying to make sure everything is working correctly but when I do a "rake db:migrate --trace" I get this ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment rake aborted! uninitialized constant SubdomainFu /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:443:inload_missing_constant' /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:80:in const_missing' /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:92:inconst_missing' /Users/node/Projects/Race-RX/config/initializers/subdomain_config.rb:1 /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in load_without_new_constant_marking' /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:inload' /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in new_constants_in' /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:inload' /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:622:in load_application_initializers' /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:621:ineach' /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:621:in load_application_initializers' /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:176:inprocess' /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in send' /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:inrun' /Users/node/Projects/Race-RX/config/environment.rb:9 /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in gem_original_require' /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:inrequire' /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in require' /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:innew_constants_in' /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in require' /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/misc.rake:4 /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:incall' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in execute' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:ineach' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in execute' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:ininvoke_with_call_chain' /opt/local/lib/ruby/1.8/monitor.rb:242:in synchronize' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:ininvoke_with_call_chain' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in invoke_prerequisites' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:ineach' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in invoke_prerequisites' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:ininvoke_with_call_chain' /opt/local/lib/ruby/1.8/monitor.rb:242:in synchronize' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:ininvoke_with_call_chain' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in invoke' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:ininvoke_task' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in top_level' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:ineach' 
/opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in top_level' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:instandard_exception_handling' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in top_level' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:inrun' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in standard_exception_handling' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:inrun' /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31 /opt/local/bin/rake:19:in load' /opt/local/bin/rake:19 I don't know what could be causing this. I did, however, put require "rubygems" require "bundler" Bundler.setup in my environment.rb file, but that doesn't seem to be the problem.

  • IIS requesting certificates even though set to ignore

    - by lupefiasco
    I have a web site in IIS 6 with directory security set to Require secure channel (SSL) and Require 128-bit encryption. Also, the Client certificates setting is set to "Ignore client certificates". When I hit https://servername/resource in Internet Explorer and Chrome, I am prompted for a certificate. I can cancel the prompt, and the resource will load, but I don't want to see this prompt at all. I looked at the virtual directories and resources within the web site, and they all have the ignore client certificates setting on. Could there be another setting, perhaps in the metabase, that is overriding the web site's directory security settings?

  • Forwarding port to a VM - How to?

    - by Peter Gadd
    I use Win 8 Ent x64 on my PC, and I also have a Win 7 VMware virtual machine set up using a bridged network adapter. The IPv4 number for the Win 7 VM is 192.168.1.115. I require access to the VM from the Internet through port 1688. How do I set up port forwarding to achieve this? My router is a Cisco Linksys WAG120N. ========= If you require any further information to help me with this, I will gladly supply it. ========= Thanks in advance.

  • puppet duplicate resources and virtual resources

    - by user45097
    Overview Hi, I've just started using Puppet and have been unable to suss something out. Problem Because of normalization, when I add 2 classes to a node with packages that have the same dependencies, it fails. In simple terms, I have duplicate resources - in this case the package libssl. Note: packages are being held to prevent the latest packages being installed. Question What's the best-practice way to get around this? class ssh { package { 'openssh-server': ensure => installed, require => libssl } package { 'libssl': ensure => installed, } } class apache { package { 'apache': ensure => installed, require => libssl, } package { 'libssl': ensure => installed, } } node server { include apache include openssl-server

  • Flushing a queue using AMQP, Rabbit, and Ruby

    - by jeffshantz
    I'm developing a system in Ruby that is going to make use of RabbitMQ to send messages to a queue as it does some work. I am using: Ruby 1.9.1 Stable RabbitMQ 1.7.2 AMQP gem v0.6.7 (http://github.com/tmm1/amqp) Most of the examples I have seen on this gem have their publish calls in an EM.add_periodic_timer block. This doesn't work for what I suspect is a vast majority of use cases, and certainly not for mine. I need to publish a message as I complete some work, so putting a publish statement in an add_periodic_timer block doesn't suffice. So, I'm trying to figure out how to publish a few messages to a queue, and then "flush" it, so that any messages I've published are then delivered to my subscribers. To give you an idea of what I mean, consider the following publisher code: #!/usr/bin/ruby require 'rubygems' require 'mq' MESSAGES = ["hello","goodbye","test"] AMQP.start do queue = MQ.queue('testq') messages_published = 0 while (messages_published < 50) if (rand() < 0.4) message = MESSAGES[rand(MESSAGES.size)] puts "#{Time.now.to_s}: Publishing: #{message}" queue.publish(message) messages_published += 1 end sleep(0.1) end AMQP.stop do EM.stop end end So, this code simply loops, publishing a message with 40% probability on each iteration of the loop, and then sleeps for 0.1 seconds. It does this until 50 messages have been published, and then stops AMQP. Of course, this is just a proof of concept. Now, my subscriber code: #!/usr/bin/ruby require 'rubygems' require 'mq' AMQP.start do queue = MQ.queue('testq') queue.subscribe do |header, msg| puts "#{Time.now.to_s}: Received #{msg}" end end So, we just subscribe to the queue, and for each message received, we print it out. Great, except that the subscriber only receives all 50 messages when the publisher calls AMQP.stop. Here's the output from my publisher. It has been truncated in the middle for brevity: $ ruby publisher.rb 2010-04-14 21:45:42 -0400: Publishing: test 2010-04-14 21:45:42 -0400: Publishing: hello 2010-04-14 21:45:42 -0400: Publishing: test 2010-04-14 21:45:43 -0400: Publishing: test 2010-04-14 21:45:44 -0400: Publishing: test 2010-04-14 21:45:44 -0400: Publishing: goodbye 2010-04-14 21:45:45 -0400: Publishing: goodbye 2010-04-14 21:45:45 -0400: Publishing: test 2010-04-14 21:45:45 -0400: Publishing: test . . . 2010-04-14 21:45:55 -0400: Publishing: test 2010-04-14 21:45:55 -0400: Publishing: test 2010-04-14 21:45:55 -0400: Publishing: test 2010-04-14 21:45:55 -0400: Publishing: goodbye Next, the output from my subscriber: $ ruby consumer.rb 2010-04-14 21:45:56 -0400: Received test 2010-04-14 21:45:56 -0400: Received hello 2010-04-14 21:45:56 -0400: Received test 2010-04-14 21:45:56 -0400: Received test 2010-04-14 21:45:56 -0400: Received test 2010-04-14 21:45:56 -0400: Received goodbye 2010-04-14 21:45:56 -0400: Received goodbye 2010-04-14 21:45:56 -0400: Received test 2010-04-14 21:45:56 -0400: Received test . . . 2010-04-14 21:45:56 -0400: Received test 2010-04-14 21:45:56 -0400: Received test 2010-04-14 21:45:56 -0400: Received test 2010-04-14 21:45:56 -0400: Received goodbye If you note the timestamps in the output, the subscriber only receives all of the messages once the publisher has stopped AMQP and exited. So, being an AMQP newb, how can I get my messages to deliver immediately? 
I tried putting AMQP.start and AMQP.stop in the body of the while loop of the publisher, but then only the first message gets delivered -- though strangely, if I turn on logging, there are no error messages reported by the server and the messages do get sent to the queue, but never get received by the subscriber. Suggestions would be much appreciated. Thanks for reading.

  • Windows 7 client can't connect to CentOS PPTP VPN

    - by Chris
    Have a Macintosh (10.8.2) that connects just fine to a CentOS 6.0 virtual private server (OpenVZ, with PPP added by the host) via PPTP. A Windows 7 Home Premium client (virtualized in Sun's Virtual Box), on the same computer, using the same Ethernet connection, cannot connect to the Linux VPN server. I have iptables disabled (for testing) on the Linux box. I have the Windows firewall turned off. /var/log/messages looks like this, for a Windows connection: Oct 12 18:44:30 production pptpd[1880]: CTRL: Client 66.104.246.168 control connection started Oct 12 18:44:30 production pptpd[1880]: CTRL: Starting call (launching pppd, opening GRE) Oct 12 18:44:30 production pppd[1881]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded. Oct 12 18:44:30 production pppd[1881]: pptpd-logwtmp: $Version$ Oct 12 18:44:30 production pppd[1881]: pppd options in effect: Oct 12 18:44:30 production pppd[1881]: debug#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: nologfd#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: dump#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: plugin /usr/lib/pptpd/pptpd-logwtmp.so#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: require-mschap-v2#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: refuse-pap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: refuse-chap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: refuse-mschap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: name pptpd#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: pptpd-original-ip 66.104.246.168#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: 115200#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: lock#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: local#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: novj#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: novjccomp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: ipparam 66.104.246.168#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: proxyarp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: 192.168.97.1:192.168.97.10#011#011# (from command line) Oct 12 18:44:30 production pppd[1881]: nobsdcomp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: require-mppe-128#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: mppe-stateful#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:44:30 production pppd[1881]: pppd 2.4.5 started by root, uid 0 Oct 12 18:44:30 production pppd[1881]: Using interface ppp0 Oct 12 18:44:30 production pppd[1881]: Connect: ppp0 <--> /dev/pts/1 (At this point the Windows machine displays a dialog, reading: "Verifying user name and password...") Oct 12 18:45:00 production pppd[1881]: LCP: timeout sending Config-Requests Oct 12 18:45:00 production pppd[1881]: Connection terminated. Oct 12 18:45:00 production pppd[1881]: Modem hangup Oct 12 18:45:00 production pppd[1881]: Exit. 
Oct 12 18:45:00 production pptpd[1880]: GRE: read(fd=6,buffer=8059660,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs Oct 12 18:45:00 production pptpd[1880]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7) Oct 12 18:45:00 production pptpd[1880]: CTRL: Client 66.104.246.168 control connection finished The Macintosh connecting looks like this in /var/log/messages: Oct 12 18:50:49 production pptpd[1920]: CTRL: Client 66.104.246.168 control connection started Oct 12 18:50:49 production pptpd[1920]: CTRL: Starting call (launching pppd, opening GRE) Oct 12 18:50:49 production pppd[1921]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded. Oct 12 18:50:49 production pppd[1921]: pptpd-logwtmp: $Version$ Oct 12 18:50:49 production pppd[1921]: pppd options in effect: Oct 12 18:50:49 production pppd[1921]: debug#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: nologfd#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: dump#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: plugin /usr/lib/pptpd/pptpd-logwtmp.so#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: require-mschap-v2#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: refuse-pap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: refuse-chap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: refuse-mschap#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: name pptpd#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: pptpd-original-ip 66.104.246.168#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: 115200#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: lock#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: local#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: novj#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: novjccomp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: ipparam 66.104.246.168#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: proxyarp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: 192.168.97.1:192.168.97.10#011#011# (from command line) Oct 12 18:50:49 production pppd[1921]: nobsdcomp#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: require-mppe-128#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: mppe-stateful#011#011# (from /etc/ppp/options.pptpd) Oct 12 18:50:49 production pppd[1921]: pppd 2.4.5 started by root, uid 0 Oct 12 18:50:49 production pppd[1921]: Using interface ppp0 Oct 12 18:50:49 production pppd[1921]: Connect: ppp0 <--> /dev/pts/1 Oct 12 18:50:52 production pppd[1921]: MPPE 128-bit stateless compression enabled Oct 12 18:50:52 production pppd[1921]: Unsupported protocol 'IPv6 Control Protocol' (0x8057) received Oct 12 18:50:52 production pppd[1921]: Unsupported protocol 'Apple Client Server Protocol Control' (0x8235) received Oct 12 18:50:52 production pppd[1921]: Cannot determine ethernet address for proxy ARP Oct 12 18:50:52 production pppd[1921]: local IP address 192.168.97.1 Oct 12 18:50:52 production pppd[1921]: remote IP address 192.168.97.10 Oct 12 18:50:52 production pppd[1921]: pptpd-logwtmp.so ip-up ppp0 chris 66.104.246.168 
I'm baffled...

  • subversion problem on mac os x

    - by user32942
    This exists in my httpd.conf file: <Location /svn> DAV svn SVNParentPath /Users/iirp/Sites/svn Allow from all #AuthType Basic #AuthName "Subversion repository" #AuthUserFile /Users/iirp/Sites/svn-auth-file #Require valid-user </Location> This configuration works. When I change it to: <Location /svn> DAV svn SVNParentPath /Users/iirp/Sites/svn #Allow from all AuthType Basic AuthName "Subversion repository" AuthUserFile /Users/iirp/Sites/svn-auth-file Require valid-user </Location> and access my repository through the URL, it gives me the authentication screen, but after that screen my svn repository does not show up correctly. The message it gives me is: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log.

  • Bootstrap a debian build environment and build source packages with no root privileges

    - by Erwan Queffélec
    On debian squeeze, I am trying to do the following : fetch sources package from the wheezy source repository bootstrap a squeeze chroot for several architectures build the packages for several architectures (i386, amd64 + all and any) I want both the fetching, bootstrapping and build operation to be scriptable, repeatable, and run as a normal user. For the environment setup, I want to make as little use of the root account as possible (install the necessary dependencies, and maybe some visudo stuff). If possible I would like to avoid using a VM (pbuilder with user mode linux) So far I have tried several things with pbuilder (require root), debootstrap (require root) with little success. Here is an example script of what I want to do (does not work): #/bin/bash set -e set -x this=`readlink -f $0` this_dir=`dirname $this` archs='i386 amd64 any all' pushd $this_dir/src # I actually want the following line to work with a repo that # is not in /etc/apt/sources.list but that is another question apt-get source cyrus-imapd-2.4 popd for arch in $archs do build_dir=$this_dir/build/$arch/ pbuilder --create --configfile $build_dir/pbuilderrc --buildresult $build_dir/ pbuilder --build --configfile $build_dir/pbuilderrc --buildresult \ $build_dir/ $this_dir/src/*.dsc # of course I want to use the .dput.cf in /home/myuser/ # and not in /root/ dput $build_dir/$arch/*.changes done

  • LFTP when used with proxies doesn't work

    - by user2949465
    I can't seem to use LFTP correctly with proxies that require authentication on my Ubuntu server. When I use it with a proxy that doesn't require a username/password, everything seems fine: lftp lftp :~> set http:proxy http://HOST:PORT lftp :~> set ftp:proxy http://HOST:PORT lftp :~> open username:[email protected] lftp [email protected]:~> get file.ext file.ext 36352 bytes transferred in 10 seconds (3.5K/s) lftp [email protected]:~> exit but when I have to provide a username/password there is a problem: lftp lftp :~> set http:proxy http://proxylogin:proxypass@HOST:port lftp :~> set ftp:proxy http://proxylogin:proxypass@HOST:port lftp :~> open ftp://ftpuser:[email protected] answer: cd: Access failed: 401 Authentication Required (~) Please, someone help!

  • Subversion problem, repo has moved

    - by Rudiger
    Hi, I've set up subversion on a fresh CentOS install. The web view works fine, gives no errors and requests a password, but when I try to access it through an svn client (Xcode) it gives error 175011 (Repository has been moved). I've tried some of the solutions out there but with no success. My subversion.conf: <Location /repos> DAV svn SVNParentPath /var/www/html/repos # Limit write permission to list of valid users. # Require SSL connection for password protection. SSLRequireSSL AuthType Basic AuthName "Authorization Realm" AuthUserFile /etc/svn-auth-conf Require valid-user </Location> My Apache DocumentRoot: /var/www/html I've only set up one svn repository so far, so there shouldn't be any conflicts there. If you need any more info let me know. Thanks

  • How do I force .htaccess authorization to occur over ssl?

    - by kenja
    I'm trying to force a particular directory to require only allowed IPs and a valid username/password through basic authorization. To ensure that the username/password are sent in encrypted form, I want the directory to also force SSL use. Here is what I have in my .htaccess file: # Force HTTPS-Connection RewriteEngine On RewriteCond %{SERVER_PORT} !^443$ RewriteRule (.*) https://www.mywebsite.com%{REQUEST_URI} [R,L] ## password begin ## AuthName "Restricted Access" AuthUserFile /var/www/admin/.htpasswd AuthType Basic Require valid-user Order deny,allow Deny from all Allow from 79.1.231.151 62.123.134.83 Satisfy All Unfortunately, when I access that directory using http protocol, it is asking for the password before it redirects the page to the secure version. This means the password is sent unencrypted. What am I doing wrong? Is there a way to do this?

  • In Puppet, how would I secure a password variable (in this case a MySQL password)?

    - by Beaming Mel-Bin
    I am using Puppet to provision MySQL with a parameterised class: class mysql::server( $password ) { package { 'mysql-server': ensure => installed } package { 'mysql': ensure => installed } service { 'mysqld': enable => true, ensure => running, require => Package['mysql-server'], } exec { 'set-mysql-password': unless => "mysqladmin -uroot -p$password status", path => ['/bin', '/usr/bin'], command => "mysqladmin -uroot password $password", require => Service['mysqld'], } } How can I protect $password? Currently, I removed the default world readable permission from the node definition file and explicitly gave puppet read permission via ACL. I'm assuming others have come across a similar situation so perhaps there's a better practice.
