Search Results

Search found 6033 results on 242 pages for 'partition magic'.

  • form_for and custom parameter in path_prefix

    - by fguillen
    Hi people, I have this route: # config/routes.rb map.namespace :backshop, :path_prefix => '/:shop_id/admin' do |backshop| backshop.resources :items end And I want to use the form_for magic to reuse the same form on both the new and edit views: <% form_for [:backshop, @item] do |f| %> This used to work, building a create URL or an update URL for the item depending on the state of the @item object. But it does not work in this case, because the routes don't exist without the :shop_id parameter, and I don't know how to tell form_for something like this: <% form_for [:backshop, @item], :shop_id => @shop do |f| %> because it tries to use @item as the :shop_id parameter. Or like this: <% form_for [:backshop, @shop, @item] do |f| %> because it tries to build the url backshop_shop_order_path. I know I could extract the form_for declaration from the partial and make different calls depending on whether the record is new or being edited: <% form_for( @item, :url => backshop_items_path( @shop ) ) do |f| %> and <% form_for( @item, :url => backshop_item_path( @shop, @item ) ) do |f| %> but I would rather not do that, because I have a bunch of models and it gets tedious :) Thanks for any suggestion. f.

    Read the article

  • What db fits me?

    - by afvasd
    Dear everyone, I am currently using MySQL and finding that my schema is getting incredibly complicated, so I am looking for a new database that will suit my needs. Let's assume I am building a news aggregator (which collects news from multiple websites). I then run an algorithm to determine whether two news items from different sites are actually referring to the same topic, and use it to cluster news together. The relationship is depicted below: cluster \--news1 \--word1 \--word2 \--news2 \--word3 \--news3 \--word1 \--word3 I then apply some magic to determine the importance of each word. Summing the importance of the words gives me the importance of a news article, and summing the importance of the news articles gives me the importance of a cluster. Note that above the cluster there are also subgroups (split by region etc.) and categories (sports, etc.) whose importance I also have to determine for a particular day. I have used views in the past to do this, but I realized that views are very slow, so I normally insert into an actual table and index it for better performance. As you can see, this leads to multiple derived tables like (cluster, importance), (news, importance), (words, importance) etc., which can get pretty messy. Also, the "importance" metric will change. It has become increasingly difficult to alter tables and update the data (I am using TRUNCATE TABLE and then re-inserting from scratch). I am currently looking into something schemaless like MongoDB. I do not need distributedness. I would very much like something that is reasonably fast (and can be indexed) and a lot more flexible than a traditional RDBMS. Also, I need something that has some kind of ORM because I personally like ORMs a lot; I am currently using sqlalchemy. Please help!
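
    As a hedged illustration of the schemaless direction the asker is considering (not code from the question; the collection and field names such as clusters and importance are made up, and a local MongoDB instance plus the pymongo package are assumed), one cluster could be stored as a single nested document and re-scored in place whenever the importance metric changes:

    from pymongo import MongoClient

    # Assumes a MongoDB server on localhost; all names below are illustrative only.
    client = MongoClient("mongodb://localhost:27017")
    db = client["news_aggregator"]

    cluster = {
        "region": "eu",
        "category": "sports",
        "news": [
            {"source": "site1", "words": [{"w": "word1", "importance": 0.4},
                                          {"w": "word2", "importance": 0.1}]},
            {"source": "site2", "words": [{"w": "word3", "importance": 0.7}]},
        ],
    }

    # Recompute the derived scores in place instead of truncating derived tables.
    for news in cluster["news"]:
        news["importance"] = sum(word["importance"] for word in news["words"])
    cluster["importance"] = sum(news["importance"] for news in cluster["news"])

    db.clusters.insert_one(cluster)
    # An index on the derived score keeps "most important clusters today" queries fast.
    db.clusters.create_index("importance")

    The point of the sketch is that the whole cluster/news/word hierarchy lives in one document, so changing the scoring only means rewriting documents rather than rebuilding a family of (x, importance) tables.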

    Read the article

  • PHP array minor problem

    - by Sennheiser
    I'm really not sure how to explain this. It's so simple I can't fathom why it's not working. I have a loop. It puts a bunch of strings into an array. If I fill a single variable with any given string, it will output it perfectly. But filling an array with the strings will make it give me the dreaded: Array Array Array Array Array Array Array Array Note: my strings are not all 'Array'. The way I loop is: while(...) { $arr[] = $resultFromLoop; } Here is my var_dump. array(1) { ["tagName"]=> string(5) "magic" } array(1) { ["tagName"]=> string(4) "nunu" } array(1) { ["tagName"]=> string(5) "books" } array(1) { ["tagName"]=> string(0) "" } array(1) { ["tagName"]=> string(3) "zzz" } array(1) { ["tagName"]=> string(4) "grey" } array(1) { ["tagName"]=> string(3) "new" } array(1) { ["tagName"]=> string(6) "flight" }
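
    A language-shifted sketch of what the var_dump suggests (written in Python rather than PHP, and not the asker's code): each iteration appends a whole one-key row mapping, so printing the collection shows the container rather than the string inside it, while appending only the wanted field gives the expected strings:

    # rows stands in for whatever the loop reads; values taken from the var_dump above.
    rows = [{"tagName": "magic"}, {"tagName": "nunu"}, {"tagName": "books"}]

    whole_rows = []
    tag_names = []
    for row in rows:
        whole_rows.append(row)             # appends the entire mapping
        tag_names.append(row["tagName"])   # appends just the string

    print(whole_rows)  # [{'tagName': 'magic'}, ...]  (container types, not strings)
    print(tag_names)   # ['magic', 'nunu', 'books']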

    Read the article

  • What is the meaning of the @ModelAttribute annotation at method argument level?

    - by beemaster
    The Spring 3 reference teaches us: "When you place it on a method parameter, @ModelAttribute maps a model attribute to the specific, annotated method parameter." I don't understand this magic spell, because I am sure the model object's alias (the key value, if a ModelMap is used as the return type) is passed to the view only after the request handler method has executed. Therefore, while the request handler method is executing, the model object's name cannot be mapped to the method parameter. To resolve this contradiction I went to stackoverflow and found this detailed example. The author of the example said: // The "personAttribute" model has been passed to the controller from the JSP It seems he is charmed by the Spring reference... To dispel the charms I deployed his sample app in my environment and cruelly cut the @ModelAttribute annotation from the method MainController.saveEdit. As a result, the application works without any changes! So I conclude: the @ModelAttribute annotation is not needed to pass the web form's field values to the argument's fields. That leaves me stuck on the question: what is the meaning of the @ModelAttribute annotation? If its only purpose is to set an alias for the model object in the view, why is that better than explicitly adding the object to the ModelMap?

    Read the article

  • Time.new does not work as I would expect

    - by Marius Pop
    I am trying to generate some seed data. seed_array.each do |seed| Task.create(date: Date.new(2012,06,seed[1]), start_t: Time.new(2012,6,2,seed[2],seed[3]), end_t: Time.new(2012,6,2,seed[2] + 2,seed[3]), title: "#{seed[0]}") end Ultimately I will put in random hours, minutes and seconds. The problem I am facing is that instead of creating a time with the 2012-06-02 date, it creates a time with a different date: 2000-01-01. I tested Time.new(2012,6,2,2,20,45) in the rails console and it works as expected. When I seed my database, however, some voodoo magic happens and I don't get the date I want. Any input is appreciated. Thank you! Update 1: (0.0ms) begin transaction SQL (0.5ms) INSERT INTO "tasks" ("created_at", "date", "description", "end_t", "group_id", "start_t", "title", "updated_at") VALUES (?, ?, ?, ?, ?, ?, ?, ?) [["created_at", Tue, 03 Jul 2012 02:15:34 UTC +00:00], ["date", Thu, 07 Jun 2012], ["description", nil], ["end_t", 2012-06-02 10:02:00 -0400], ["group_id", nil], ["start_t", 2012-06-02 08:02:00 -0400], ["title", "99"], ["updated_at", Tue, 03 Jul 2012 02:15:34 UTC +00:00]] (2.3ms) commit transaction This is a small sample of the log. Update 2: Task id: 101, date: "2012-06-26", start_t: "2000-01-01 08:45:00", end_t: "2000-01-01 10:45:00", title: "1", description: nil, group_id: nil, created_at: "2012-07-03 02:15:33", updated_at: "2012-07-03 02:15:33" This is what shows up in the rails console.

    Read the article

  • Compile Apache 2.4.3 on Centos 6.2 (64bit)

    - by RiseCakoPlusplus
    I am attempting to compile Apache 2.4.3 with apr-1.4.6 and apr-util-1.5.1 on CentOS 6.2 (64bit). ./configure --build=x86_64-unknown-linux-gnu --host=x86_64-unknown-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --cache-file=../config.cache --with-libdir=lib64 --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --disable-debug --with-pic --disable-rpath --without-pear --with-bz2 --with-exec-dir=/usr/bin --with-freetype-dir=/usr --with-png-dir=/usr --with-xpm-dir=/usr --enable-gd-native-ttf --with-t1lib=/usr --without-gdbm --with-gettext --with-gmp --with-iconv --with-jpeg-dir=/usr --with-openssl --with-zlib --with-layout=GNU --enable-exif --enable-ftp --enable-magic-quotes --enable-sockets --with-kerberos --enable-ucd-snmp-hack --enable-shmop --enable-calendar --with-libxml-dir=/usr --enable-xml --with-system-tzdata --with-mhash --with-apxs2=/usr/sbin/apxs --libdir=/usr/lib64/php --enable-pdo=shared --with-mysql=shared,/usr --with-mysqli=shared,/usr/lib64/mysql/mysql_config --with-pdo-mysql=shared,/usr/lib64/mysql/mysql_config --without-pdo-sqlite --without-gd --disable-dom --disable-dba --without-unixODBC --disable-xmlreader --disable-xmlwriter --without-sqlite3 --disable-phar --disable-fileinfo --disable-json --without-pspell --disable-wddx --without-curl --disable-posix --disable-sysvmsg --disable-sysvshm --disable-sysvsem ./configure --with-included-apr --with-included-apr-util and when I issue make, this happens: /root/httpd-2.4.3/srclib/apr/libtool: line 5989: cd: yes/lib: No such file or directory libtool: link: cannot determine absolute directory name of `yes/lib' make[3]: *** [libaprutil-1.la] Error 1 make[3]: Leaving directory `/root/httpd-2.4.3/srclib/apr-util' make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/root/httpd-2.4.3/srclib/apr-util' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/root/httpd-2.4.3/srclib' make: *** [all-recursive] Error 1 Is there anything I missed?

    Read the article

  • Dealing with large directories in a checkout

    - by Eric
    I am trying to come up with a version control process for a web app that I work on. Currently, my major stumbling blocks are two directories that are huge (both over 4GB). Only a few people need to work on things within the huge directories; most people don't even need to see what's in them. Our directory structure looks something like: / --file.aspx --anotherFile.aspx --/coolThings ----coolThing.aspx --/bigFolder ----someHugeMovie.mov ----someHugeSound.mp3 --/anotherBigFolder ----... I'm sure you get the picture. It's hard to justify a checkout that has to pull down 8GB of data that's likely useless to a developer. I know, it's only once, but even once could be really frustrating for someone (and will make it harder for me to convince everyone to use source control). (Plus, clean checkouts will be painfully slow.) These folders do have to be available in the web application. What can I do? I've thought about separate repositories for the big folders. That way, you only download if you need it; but then how do I manage checking these out onto our development server? I've also thought about not trying to version control those folders: just update them directly on the web server... but I am not enamored of this idea. Is there some magic way to simply exclude directories from a checkout that I haven't found? (Pretty sure there is not.) Of course, there's always the option to just give up, bite the bullet, and accept downloading 8 useless GB. What say you? Have you encountered this problem before? How did you solve it?

    Read the article

  • Overload the behavior of count() when called on certain objects

    - by Tom
    In PHP 5, you can use magic methods, overload some classes, etc. In C++, you can implement functions that exist in the STL as long as the argument types are different. Is there a way to do this in PHP? An example of what I'd like to do is this: class a { function a() { $this->list = array("1", "2"); } } $blah = new a(); count($blah); I would like count($blah) to return 2, i.e. count the values of a specific array in the class. In C++, the way I would do this might look like this: int count(a varName) { return count(varName->list); } Basically, I am trying to simplify data calls for a large application so I can do this: count($object); rather than count($object->list); The list is potentially going to be a list of objects, so depending on how it's used it could be a really nasty statement if someone has to do it the current way: count($object->list[0]->list[0]->list); So, can I make something similar to this: function count(a $object) { count($object->list); } I know PHP's count() accepts a mixed var, so I don't know if I can override it for an individual type.
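
    For what it's worth, PHP's SPL Countable interface exists for exactly this: a class implementing Countable::count() makes count($obj) return whatever that method returns. As a hedged illustration of the same hook-based idea, sketched in Python (to keep all added examples here in one language) rather than PHP, the built-in len() delegates to the object's __len__ magic method:

    # Container is a hypothetical stand-in for the asker's class "a".
    class Container:
        def __init__(self):
            self.list = ["1", "2"]

        def __len__(self):
            # len(container) delegates here, so callers never touch .list directly.
            return len(self.list)

    blah = Container()
    print(len(blah))  # prints 2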

    Read the article

  • Specific path to Java class file when embedded into HTML; Help urgent

    - by Jeevanism
    This is a follow-up to another of my questions, about an error with a Java applet embedded into CMS pages. The issue: Problem: I have a website using the Concrete 5 CMS, with a page into which I have embedded a Java applet class. The applet should show system information, and it works fine in simple standalone HTML pages. But whenever I access my plugin test page created in Concrete 5 (the page into which I embedded the applet), it shows a Java error: "incompatible Magic Number". Observation: After a lot of searching through various tech forums, I finally found that this happens because the browser cannot load the Java class file; the class file path is wrong. Below is the server access log from testing on my local machine. 127.0.0.1 - - [20/Dec/2012:12:59:28 +0800] "GET /linuxhouse/index.php/techlab/java/testvm.class HTTP/1.1" 200 1896 "-" "Mozilla/4.0 (Linux 2.6.37.6-0.9-desktop) Java/1.6.0_29" 127.0.0.1 - - [20/Dec/2012:12:59:29 +0800] "GET /linuxhouse/index.php/techlab/java/testvm.class HTTP/1.1" 200 1896 "-" "Mozilla/4.0 (Linux 2.6.37.6-0.9-desktop) Java/1.6.0_29" Clearly, the path to the Java class file is wrong, but I have no idea how to specify the exact path in the embed code. Is this a CMS-specific issue? I have disabled the CMS's Pretty URL feature, but I still cannot find a solution. Here is the page that shows the Java error: http://www.linux-house.net/v3/techlab/plugin Any insight would be appreciated; this is urgent.

    Read the article

  • List of values as keys for a Map

    - by thr
    I have lists of variable length where each item can be one of four unique values, and I need to use these lists as keys for another object in a map. Assume that each value can be either 0, 1, 2 or 3 (it's not an integer in my real code, but it's a lot easier to explain this way), so a few examples of key lists could be: [1, 0, 2, 3] [3, 2, 1] [1, 0, 0, 1, 1, 3] [2, 3, 1, 1, 2] [1, 2] So, to reiterate: each item in the list can be either 0, 1, 2 or 3 and there can be any number of items in a list. My first approach was to hash the contents of the array, using the built-in GetHashCode() in .NET to combine the hash of each element. But since this returns an int, I would have to deal with collisions manually (two equal int values look identical to a Dictionary). My second approach was to use a quad tree, breaking each item in the list down into a Node that has four pointers (one for each possible value) to the next four possible values (with the root node representing [], the empty list). Inserting [1, 0, 2] => Foo, [1, 3] => Bar and [1, 0] => Baz into this tree would look like this (grey nodes being unused pointers/nodes). I worry about the performance of this setup, but there is no need to deal with hash collisions and the tree won't become too deep (lists will mostly have 2-6 items, rarely over 6). Is there some other magic way to store items with lists of values as keys that I have missed?
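
    As a language-neutral illustration (sketched in Python here rather than .NET, and not taken from the question), freezing the value sequence into an immutable tuple lets it serve directly as a dictionary key; the runtime then handles hashing and collision resolution, so equal sequences always reach the same entry:

    # Use the value sequence itself, frozen into a tuple, as the map key.
    index = {}
    index[(1, 0, 2)] = "Foo"
    index[(1, 3)] = "Bar"
    index[(1, 0)] = "Baz"

    print(index[(1, 0, 2)])   # -> Foo
    print((1, 0) in index)    # -> True
    print((0, 1) in index)    # -> False; order matters

    # A mutable list just needs converting before use as a key:
    key = tuple([1, 3])
    print(index[key])         # -> Bar

    The closest .NET analogue would be a small composite key type with value-based Equals/GetHashCode (or simply joining the items into a string key such as "1,0,2"), which likewise avoids hand-rolled collision handling.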

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS Node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once it is downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them; I will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
    And if we publish our application to Azure, it works against WASD and the storage service through the configurations for the cloud.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime and how to retrieve the configuration settings and endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But since it is simple and easy to learn and deploy, and it uses a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't support SQL parameters yet, and "azure" does support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.   PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • SSL connection errors from Apache

    - by Yang
    I'm running a (self-signed) SSL cert site on Apache/2.2.14 on Ubuntu 10.04, but various browsers are giving errors on half the connection attempts. Just now saw this transient error from Chrome: "Error 126 (net::ERR_SSL_BAD_RECORD_MAC_ALERT): Unknown error." Hit refresh and the problem goes away for a while. wget too: $ wget --no-check-certificate https://dev.foo.com/deps/ --2010-09-08 19:30:26-- https://dev.foo.com/deps/ Resolving dev.foo.com... 184.72.53.220 Connecting to dev.foo.com|184.72.53.220|:443... connected. OpenSSL: error:0407006A:rsa routines:RSA_padding_check_PKCS1_type_1:block type is not 01 OpenSSL: error:04067072:rsa routines:RSA_EAY_PUBLIC_DECRYPT:padding check failed OpenSSL: error:1408D07B:SSL routines:SSL3_GET_KEY_EXCHANGE:bad signature Unable to establish SSL connection. Run it right away again and it works: $ wget --no-check-certificate https://dev.foo.com/deps/ --2010-09-08 19:30:29-- https://dev.foo.com/deps/ Resolving dev.foo.com... 184.72.53.220 Connecting to dev.foo.com|184.72.53.220|:443... connected. WARNING: cannot verify dev.foo.com's certificate, issued by `/CN=dev.foo.com': Self-signed certificate encountered. HTTP request sent, awaiting response... 200 OK Length: 3157 (3.1K) [text/html] Saving to: `index.html' 100%[======================================>] 3,157 --.-K/s in 0s 2010-09-08 19:30:29 (48.6 MB/s) - `index.html' saved [3157/3157] In my sites-enabled/default-ssl: SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key The cert: -----BEGIN CERTIFICATE----- MIIBszCCARwCCQCa0TzNwqLgsTANBgkqhkiG9w0BAQUFADAeMRwwGgYDVQQDExNk ZXYucGFydHlvbmRhdGEuY29tMB4XDTEwMDgyNzA2MzA1N1oXDTIwMDgyNDA2MzA1 N1owHjEcMBoGA1UEAxMTZGV2LnBhcnR5b25kYXRhLmNvbTCBnzANBgkqhkiG9w0B AQEFAAOBjQAwgYkCgYEAzXDEULpCUqIc9hV/ESFapkckR2uoYINA81DvG2aQZ9Ot Q30OwX2ae2CC4bSzJEIVlahU8vjVrWpmpa28NEhQbqh4ywwbl1XDrEVYI6Gkfimf snJhOKyaVrEhlwutYtBjmsz3ZIqwymMPm/6smVcSS5dJIynlSmtltxX6ivPcO8UC AwEAATANBgkqhkiG9w0BAQUFAAOBgQBGxHVkpSSOnZjzuySRepjhAlV/yhe9Fx23 fh12WrjQMEi98B7JEuNSLXDWckUN7O6XRc3RzKmazcGHJqzhn0Ov6gAmAE2XjZ/x VW21xmaLwk+KgYKFJbJJaP3jMSpU7I3aa11wqAkR2Zd4Nkm9N0YXYIzcBdfztTVI Et8mEHBFdg== -----END CERTIFICATE----- The cert is in turn generated via: $ make-ssl-cert generate-default-snakeoil --force-overwrite Apache version. $ apache2 -V Server version: Apache/2.2.14 (Ubuntu) Server built: Apr 13 2010 20:22:19 Server's Module Magic Number: 20051115:23 Server loaded: APR 1.3.8, APR-Util 1.3.9 Compiled using: APR 1.3.8, APR-Util 1.3.9 Architecture: 64-bit Server MPM: Worker threaded: yes (fixed thread count) forked: yes (variable process count) Server compiled with.... -D APACHE_MPM_DIR="server/mpm/worker" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="" -D SUEXEC_BIN="/usr/lib/apache2/suexec" -D DEFAULT_PIDLOG="/var/run/apache2.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types" -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf" I don't administer the network, hardware, etc. - this is all running on Amazon EC2. I'm not running a load-balancer or anything else in front of the server. I'm making direct TCP connections to that host (AFAIK). Any ideas? Thanks in advance for any help.

    Read the article

  • How to solve "403 Forbidden" on CentOS6 with SELinux Disabled?

    - by André
    I have a machine on Linode that is driving me crazy. Linode does not have SELinux on CentOS 6... I'm trying to configure Apache to serve my website from "/home/websites/public_html/mysite.com/public". As I don't have SELinux enabled, how can I avoid the "403 Forbidden" that I get when trying to access the webpage? Sorry for my English. Best regards, Update 1, ERROR_LOG [Mon Oct 17 14:04:16 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 14:08:07 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 14:10:25 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 14:10:41 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 14:32:35 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 14:34:45 2011] [error] [client 58.218.199.227] (13)Permission denied: access to /proxy-1.php denied [Mon Oct 17 15:32:25 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 15:37:26 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 15:37:43 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 15:38:32 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 15:42:56 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable [Mon Oct 17 15:43:12 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable [Mon Oct 17 15:45:34 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable [Mon Oct 17 15:51:25 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable Update 2, /home/websites directory drwx------ 3 websites websites 4096 Oct 17 14:52 . drwxr-xr-x. 3 root root 4096 Oct 17 13:42 .. 
-rw------- 1 websites websites 372 Oct 17 14:52 .bash_history -rw-r--r-- 1 websites websites 18 May 30 11:46 .bash_logout -rw-r--r-- 1 websites websites 176 May 30 11:46 .bash_profile -rw-r--r-- 1 websites websites 124 May 30 11:46 .bashrc drwxrwxr-x 3 websites apache 4096 Oct 17 13:45 public_html Update3, httpd.conf ### Section 1: Global Environment ServerTokens OS ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 60 KeepAlive Off MaxKeepAliveRequests 100 KeepAliveTimeout 15 <IfModule prefork.c> StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 </IfModule> <IfModule worker.c> StartServers 4 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> #Listen 12.34.56.78:80 Listen 80 LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule substitute_module modules/mod_substitute.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module modules/mod_version.so Include conf.d/*.conf #ExtendedStatus On User apache Group apache 
ServerAdmin root@localhost #ServerName www.example.com:80 UseCanonicalName Off DocumentRoot "/var/www/html" # # Each directory to which Apache has access can be configured with respect # to which services and features are allowed and/or disabled in that # directory (and its subdirectories). # # First, we configure the "default" to be a very restrictive set of # features. # <Directory /> Options FollowSymLinks AllowOverride None </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # This should be changed to whatever you set DocumentRoot to. # <Directory "/home/websites/public_html"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/2.2/mod/core.html#options # for more information. # Options Indexes FollowSymLinks # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # AllowOverride None # # Controls who can get stuff from this server. # Order allow,deny Allow from all </Directory> # # UserDir: The name of the directory that is appended onto a user's home # directory if a ~user request is received. # # The path to the end user account 'public_html' directory must be # accessible to the webserver userid. This usually means that ~userid # must have permissions of 711, ~userid/public_html must have permissions # of 755, and documents contained therein must be world-readable. # Otherwise, the client will only receive a "403 Forbidden" message. # # See also: http://httpd.apache.org/docs/misc/FAQ.html#forbidden # <IfModule mod_userdir.c> # # UserDir is disabled by default since it can confirm the presence # of a username on the system (depending on home directory # permissions). # UserDir disabled # # To enable requests to /~user/ to serve the user's public_html # directory, remove the "UserDir disabled" line above, and uncomment # the following line instead: # #UserDir public_html </IfModule> # # Control access to UserDir directories. The following is an example # for a site where these directories are restricted to read-only. # #<Directory /home/*/public_html> # AllowOverride FileInfo AuthConfig Limit # Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec # <Limit GET POST OPTIONS> # Order allow,deny # Allow from all # </Limit> # <LimitExcept GET POST OPTIONS> # Order deny,allow # Deny from all # </LimitExcept> #</Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # # The index.html.var file (a type-map) is used to deliver content- # negotiated documents. The MultiViews Option can be used for the # same purpose, but it is much slower. # DirectoryIndex index.html index.html.var # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. 
# <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy All </Files> # # TypesConfig describes where the mime.types file (or equivalent) is # to be found. # TypesConfig /etc/mime.types # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # <IfModule mod_mime_magic.c> # MIMEMagicFile /usr/share/magic.mime MIMEMagicFile conf/magic </IfModule> # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off #EnableMMAP off #EnableSendfile off # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog logs/error_log LogLevel warn # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # "combinedio" includes actual counts of actual bytes received (%I) and sent (%O); this # requires the mod_logio module to be loaded. #LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # #CustomLog logs/access_log common # # If you would like to have separate agent and referer logfiles, uncomment # the following directives. # #CustomLog logs/referer_log referer #CustomLog logs/agent_log agent # # For a single logfile with access, agent, and referer information # (Combined Logfile Format), use the following directive: # CustomLog logs/access_log combined ServerSignature On Alias /icons/ "/var/www/icons/" <Directory "/var/www/icons"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> # # WebDAV module configuration section. # <IfModule mod_dav_fs.c> # Location of the WebDAV lock database. DAVLockDB /var/lib/dav/lockdb </IfModule> # # ScriptAlias: This controls which directories contain server scripts. 
# ScriptAliases are essentially the same as Aliases, except that # documents in the realname directory are treated as applications and # run by the server when requested rather than as documents sent to the client. # The same rules about trailing "/" apply to ScriptAlias directives as to # Alias. # ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" # # "/var/www/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/var/www/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8 AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ # # DefaultIcon is which icon to show for files which do not have an icon # explicitly set. # DefaultIcon /icons/unknown.gif # # AddDescription allows you to place a short description after a file in # server-generated indexes. These are only displayed for FancyIndexed # directories. # Format: AddDescription "description" filename # #AddDescription "GZIP compressed document" .gz #AddDescription "tar archive" .tar #AddDescription "GZIP compressed tar archive" .tgz # # ReadmeName is the name of the README file the server will look for by # default, and append to directory listings. # # HeaderName is the name of a file which should be prepended to # directory indexes. ReadmeName README.html HeaderName HEADER.html # # IndexIgnore is a set of filenames which directory indexing should ignore # and not include in the listing. Shell-style wildcarding is permitted. # IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t # # DefaultLanguage and AddLanguage allows you to specify the language of # a document. You can then use content negotiation to give a browser a # file in a language the user can understand. # # Specify a default language. This means that all data # going out without a specific language tag (see below) will # be marked with this one. You probably do NOT want to set # this unless you are sure it is correct for all cases. # # * It is generally better to not mark a page as # * being a certain language than marking it with the wrong # * language! # # DefaultLanguage nl # # Note 1: The suffix does not have to be the same as the language # keyword --- those with documents in Polish (whose net-standard # language code is pl) may wish to use "AddLanguage pl .po" to # avoid the ambiguity with the common suffix for perl scripts. # # Note 2: The example entries below illustrate that in some cases # the two character 'Language' abbreviation is not identical to # the two character 'Country' code for its country, # E.g. 
'Danmark/dk' versus 'Danish/da'. # # Note 3: In the case of 'ltz' we violate the RFC by using a three char # specifier. There is 'work in progress' to fix this and get # the reference data for rfc1766 cleaned up. # # Catalan (ca) - Croatian (hr) - Czech (cs) - Danish (da) - Dutch (nl) # English (en) - Esperanto (eo) - Estonian (et) - French (fr) - German (de) # Greek-Modern (el) - Hebrew (he) - Italian (it) - Japanese (ja) # Korean (ko) - Luxembourgeois* (ltz) - Norwegian Nynorsk (nn) # Norwegian (no) - Polish (pl) - Portugese (pt) # Brazilian Portuguese (pt-BR) - Russian (ru) - Swedish (sv) # Simplified Chinese (zh-CN) - Spanish (es) - Traditional Chinese (zh-TW) # AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw # # LanguagePriority allows you to give precedence to some languages # in case of a tie during content negotiation. # # Just list the languages in decreasing order of preference. We have # more or less alphabetized them here. You probably want to change this. # LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW # # ForceLanguagePriority allows you to serve a result page rather than # MULTIPLE CHOICES (Prefer) [in case of a tie] or NOT ACCEPTABLE (Fallback) # [in case no accepted languages matched the available variants] # ForceLanguagePriority Prefer Fallback # # Specify a default charset for all content served; this enables # interpretation of all content as UTF-8 by default. To use the # default browser choice (ISO-8859-1), or to allow the META tags # in HTML content to override this choice, comment out this # directive: # AddDefaultCharset UTF-8 # # AddType allows you to add to or override the MIME configuration # file mime.types for specific file types. # #AddType application/x-tar .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # Despite the name similarity, the following Add* directives have nothing # to do with the FancyIndexing customization directives above. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # MIME-types for downloading Certificates and CRLs # AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # # For files that include their own HTTP headers: # #AddHandler send-as-is asis # # For type maps (negotiated resources): # (This is enabled by default to allow the Apache "It Worked" page # to be distributed in multiple languages.) 
# AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # AddType text/html .shtml AddOutputFilter INCLUDES .shtml # # Action lets you define media types that will execute a script whenever # a matching file is called. This eliminates the need for repeated URL # pathnames for oft-used CGI file processors. # Format: Action media/type /cgi-script/location # Format: Action handler-name /cgi-script/location # # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # Putting this all together, we can internationalize error responses. # # We use Alias to redirect any /error/HTTP_<error>.html.var response to # our collection of by-error message multi-language collections. We use # includes to substitute the appropriate text. # # You can modify the messages' appearance without changing any of the # default HTTP_<error>.html.var files by adding the line: # # Alias /error/include/ "/your/include/path/" # # which allows you to create your own set of files by starting with the # /var/www/error/include/ files and # copying them to /your/include/path/, even on a per-VirtualHost basis. # Alias /error/ "/var/www/error/" <IfModule mod_negotiation.c> <IfModule mod_include.c> <Directory "/var/www/error"> AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en es de fr ForceLanguagePriority Prefer Fallback </Directory> # ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var # ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var # ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var # ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var # ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var # ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var # ErrorDocument 410 /error/HTTP_GONE.html.var # ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var # ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var # ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var # ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var # ErrorDocument 415 /error/HTTP_UNSUPPORTED_MEDIA_TYPE.html.var # ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var # ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var # ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var # ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var # ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var </IfModule> </IfModule> # # The following directives modify normal HTTP response behavior to # handle known problems with browser implementations. # BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4\.0" force-response-1.0 BrowserMatch "Java/1\.0" force-response-1.0 BrowserMatch "JDK/1\.0" force-response-1.0 # # The following directive disables redirects on non-GET requests for # a directory that does not include the trailing slash. This fixes a # problem with Microsoft WebFolders which does not appropriately handle # redirects for folders with DAV methods. 
# Same deal with Apple's DAV filesystem and Gnome VFS support for DAV. # BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully # # Allow server status reports generated by mod_status, # with the URL of http://servername/server-status # Change the ".example.com" to match your domain to enable. # #<Location /server-status> # SetHandler server-status # Order deny,allow # Deny from all # Allow from .example.com #</Location> # # Allow remote server configuration reports, with the URL of # http://servername/server-info (requires that mod_info.c be loaded). # Change the ".example.com" to match your domain to enable. # #<Location /server-info> # SetHandler server-info # Order deny,allow # Deny from all # Allow from .example.com #</Location> # # Proxy Server directives. Uncomment the following lines to # enable the proxy server: # #<IfModule mod_proxy.c> #ProxyRequests On # #<Proxy *> # Order deny,allow # Deny from all # Allow from .example.com #</Proxy> # # Enable/disable the handling of HTTP/1.1 "Via:" headers. # ("Full" adds the server version; "Block" removes all outgoing Via: headers) # Set to one of: Off | On | Full | Block # #ProxyVia On # # To enable a cache of proxied content, uncomment the following lines. # See http://httpd.apache.org/docs/2.2/mod/mod_cache.html for more details. # #<IfModule mod_disk_cache.c> # CacheEnable disk / # CacheRoot "/var/cache/mod_proxy" #</IfModule> # #</IfModule> # End of proxy directives. ### Section 3: Virtual Hosts # # VirtualHost: If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.2/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # Use name-based virtual hosting. # NameVirtualHost *:80 # # NOTE: NameVirtualHost cannot be used without a port specifier # (e.g. :80) if mod_ssl is being used, due to the nature of the # SSL protocol. # # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for requests without a known # server name. # #<VirtualHost *:80> # ServerAdmin [email protected] # DocumentRoot /www/docs/dummy-host.example.com # ServerName dummy-host.example.com # ErrorLog logs/dummy-host.example.com-error_log # CustomLog logs/dummy-host.example.com-access_log common #</VirtualHost> # domain: mysite.com # public: /home/websites/public_html/mysite.com/ <VirtualHost *:80> # Admin email, Server Name (domain name) and any aliases ServerAdmin [email protected] ServerName mysite.com ServerAlias www.mysite.com # Index file and Document Root (where the public files are located) DirectoryIndex index.html DocumentRoot /home/websites/public_html/mysite.com/public # Custom log file locations LogLevel warn ErrorLog /home/websites/public_html/mysite.com/log/error.log CustomLog /home/websites/public_html/mysite.com/log/access.log combined </VirtualHost>
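A quick way to sanity-check a configuration like the one above before restarting the service is to let Apache parse it and dump the virtual host layout itself (a minimal sketch, assuming the stock apachectl/apache2ctl wrapper is on the PATH):
  apachectl configtest
  apachectl -S
configtest reports syntax errors with file and line numbers, and -S prints the parsed NameVirtualHost/VirtualHost mapping, which makes it easy to spot a vhost that will never match a request.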

    Read the article

  • How to allow local LAN access while connected to Cisco VPN?

    - by Ian Boyd
    How can I maintain local LAN access while connected to Cisco VPN? When connecting using Cisco VPN, the server has the ability to instruct the client to prevent local LAN access. Assuming this server-side option cannot be turned off, how can I allow local LAN access while connected with a Cisco VPN client? I used to think it was simply a matter of routes being added that capture LAN traffic with a higher metric, for example: Network Destination Netmask Gateway Interface Metric 10.0.0.0 255.255.0.0 10.0.0.3 10.0.0.3 20 <--Local LAN 10.0.0.0 255.255.0.0 192.168.199.1 192.168.199.12 1 <--VPN Link And trying to delete the 10.0.x.x -> 192.168.199.12 route doesn't have any effect: >route delete 10.0.0.0 >route delete 10.0.0.0 mask 255.255.0.0 >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1 >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1 if 192.168.199.12 >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1 if 0x3 And while it still might simply be a routing issue, attempts to add or delete routes fail. At what level in the networking stack is the Cisco VPN client driver operating, such that it overrides a local administrator's ability to administer their machine? The Cisco VPN client cannot be employing magic. It's still software running on my computer. What mechanism is it using to interfere with my machine's network? What happens when an IP/ICMP packet arrives on the network? Where in the networking stack is the packet getting eaten? See also No internet connection with Cisco VPN Cisco VPN Client interrupts connectivity to my LDAP server Cisco VPN stops Windows 7 Browsing How can I prohibit the creation of a route in Windows XP upon connection to Cisco VPN? Rerouting local LAN and Internet traffic when in VPN VPN Client "Allow local LAN Access" Allow Local LAN Access for VPN Clients on the VPN 3000 Concentrator Configuration Example LAN access gone when I connect to VPN Windows XP Documentation: Route Edit: Things I've not yet tried: >route delete 10.0.* Update: Since Cisco has abandoned their old client in favor of AnyConnect (HTTP SSL based VPN), this question, unsolved, can be left as a relic of history. Going forward, we can try to solve the same problem with their new client.
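    One low-level check that sometimes clarifies the "if 0x3" part of those failing commands is dumping the interface list and the active routes together (a sketch; nothing here is specific to the Cisco client):
      >route print
      >route print 10.*
    The interface list at the top of route print maps each index (0x2, 0x3, ...) to an adapter, and the wildcard form shows only the 10.x routes, so you can confirm which adapter a given route is actually bound to before trying to delete or out-metric it.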

    Read the article

  • Linux Software RAID recovery

    - by Zoredache
    I am seeing a discrepancy between the output of mdadm --detail and mdadm --examine, and I don't understand why. This output mdadm --detail /dev/md2 /dev/md2: Version : 0.90 Creation Time : Wed Mar 14 18:20:52 2012 Raid Level : raid10 Array Size : 3662760640 (3493.08 GiB 3750.67 GB) Used Dev Size : 1465104256 (1397.23 GiB 1500.27 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 2 Persistence : Superblock is persistent Seems to contradict this. (the same for every disk in the array) mdadm --examine /dev/sdc2 /dev/sdc2: Magic : a92b4efc Version : 0.90.00 UUID : 1f54d708:60227dd6:163c2a05:89fa2e07 (local to host) Creation Time : Wed Mar 14 18:20:52 2012 Raid Level : raid10 Used Dev Size : 1465104320 (1397.23 GiB 1500.27 GB) Array Size : 2930208640 (2794.46 GiB 3000.53 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 2 The array was created like this. mdadm -v --create /dev/md2 \ --level=raid10 --layout=o2 --raid-devices=5 \ --chunk=64 --metadata=0.90 \ /dev/sdg2 /dev/sdf2 /dev/sde2 /dev/sdd2 /dev/sdc2 Each of the 5 individual drives has partitions like this. Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00057754 Device Boot Start End Blocks Id System /dev/sdc1 2048 34815 16384 83 Linux /dev/sdc2 34816 2930243583 1465104384 fd Linux raid autodetect Backstory So the SATA controller failed in a box I provide some support for. The failure was an ugly one, and so individual drives fell out of the array a little at a time. While there are backups, they are not really done as frequently as we really need. There is some data that I am trying to recover if I can. I got additional hardware and I was able to access the drives again. The drives appear to be fine, and I can get the array and filesystem active and mounted (using read-only mode). I am able to access some data on the filesystem and have been copying that off, but I am seeing lots of errors when I try to copy the most recent data. When I am trying to access that most recent data I am getting errors like below which makes me think that the array size discrepancy may be the problem. Mar 14 18:26:04 server kernel: [351588.196299] dm-7: rw=0, want=6619839616, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.196309] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.196313] dm-7: rw=0, want=6619839616, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.199260] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.199264] dm-7: rw=0, want=20647626304, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.202446] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.202450] dm-7: rw=0, want=19973212288, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.205516] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.205520] dm-7: rw=0, want=8009695096, limit=6442450944
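    One read-only way to narrow this down is to compare the superblock view of every member against the assembled array (a sketch, assuming the members really are sdc2 through sdg2 as in the create command, run as root):
      for d in /dev/sd[c-g]2; do
        echo "== $d"
        mdadm --examine "$d" | grep -E 'Array Size|Used Dev Size|Raid Devices|Events'
      done
    If every member reports the same (smaller) Array Size while --detail reports the larger one, the superblocks are at least internally consistent and the discrepancy lies in how the array is being assembled rather than in a single bad disk.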

    Read the article

  • Ubuntu web server 11.10 ftp/server issue

    - by Nate
    I was wondering if I could get some help with FTP, at least I'm pretty sure it has to do with FTP. Although it could have to do with something else, I'm not 100% sure. Now, for fair warning, I'm no ubuntu dominator, I'm pretty much a newb. Anyway, I've attempted to build a webserver to test php and what not for a site I'm building. Now everything works, the php, the sql etc. By the way, I built this in VMware, so it's virtual, over a network, so I can access stuff from anywhere. I'm in a college right now so yeah. The one problem I have is this. I go into the terminal, and do ifconfig to find my IP. I get it and go to a browser on a different machine and type that IP in. I get the "index of/" page, where I can browse the website I'm making. I can click through folders and what not. I can click on things and they open up. Now let's say I'm working on my desktop and open up an FTP client and drag and drop something into there, go to the IP in the browser again and try to open it. I either get "Server error The website encountered an error while retrieving http://my_server_ip/phpinfo.php. It may be down for maintenance or configured incorrectly. Here are some suggestions: Reload this webpage later." or "Forbidden You don't have permission to access /html.html on this server." But, let's say I make it on the server itself, and try, bam, magic, it works. I'm sure I set the permissions to let everyone open and view the files, but maybe I didn't? I'm not sure, and this is where I was hoping I could get some help. By the way, I followed a tutorial on changing the www folder (apache) from /var/www to /home/"user"/www. I can't recall how I did that, but it's there and my ftp goes to the /home/"user"/www folder. But yeah, any and all help is appreciated. Like I said, I'm really new to this, but I do enjoy attempting to make these servers and learning how they work, so it's not like making this webserver is a project for a class; it's just assisting me in testing stuff for another class and possibly other websites later on down the road. Anyway, anyone who decides to help, thanks so much, I'd really appreciate it. Nate. P.S. I'm using Ubuntu 11.10 desktop edition with a LAMP server
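    Since files created on the server open fine but files uploaded over FTP do not, the usual suspect is ownership and permissions on the uploaded files: the FTP daemon writes them as your user with a restrictive umask, and Apache (running as www-data) cannot read them. A hedged sketch of loosening that, assuming the web root really is under the user's home directory (substitute your actual username and path):
      sudo chown -R youruser:www-data /home/youruser/www
      find /home/youruser/www -type d -exec chmod 755 {} \;
      find /home/youruser/www -type f -exec chmod 644 {} \;
      chmod o+x /home/youruser
    The last line matters because Apache also needs execute (search) permission on the home directory itself to reach the www folder inside it.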

    Read the article

  • OpenVPN, Great on Windows, VERY slow on Mac...

    - by Phsion
    Hello, I'm not really an IT Pro, but this seemed like the best place to ask this question... I have set up VPN networks in the past, for fun, and everything was great, but now I've set one up for my boss, and while my computers all work great, his Mac machines are almost too slow to work with. It's pretty much vanilla configs all around; anyone have any ideas? It's a TUN routing setup over UDP. Back Story: My boss travels a lot, and wants to be able to access all his files from the road, and is also pretty paranoid about security (even though he knows almost nothing about computers). So I figured a VPN would be the answer. I went with OpenVPN, but there are some other issues. The only ISP we can get in our area besides Dial-UP is a crappy Satellite provider that doesn't offer public IPs unless you're willing to pay, so while the computers and VPN setup are pretty vanilla, the routing and structure are strange to get around this limitation. Specs: It's OpenVPN2, and there are six machines using it (only three actually use it, the rest are my test machines), one Windows 7 laptop, two XP Desktops, one OS X 10.5 Desktop, one 10.6 Desktop, and one 10.6 Laptop. One XP Desktop sits at my house and acts as the server (6Mbs/2Mbs FIOS connection). One XP desktop sits at the office and hosts a webpage that will wake up the Main Mac Desktop from sleep, and also ping all the machines on the VPN and show their status. The main office mac (10.6) stays in sleep mode until it gets the Wake-On-Lan packet from the Office XP, and then it auto connects to the VPN and opens itself up. The reason for all this is the Satellite private IP crap means I can't directly access the office machines outside of the LAN, so everyone connects to my house first, then they talk to each other from there. The Wake On Lan weirdness is because my boss doesn't want to leave the main Mac on all the time, and making a quick and dirty webpage was the easiest way to send a Magic Packet from inside the LAN without confusing my boss. The VPN uses Client Config files to make static IPs for the client. The only thing I found on Google was some changes to the VPN MTU settings (down to 1400) but no real help. Oh, and I forgot... all the Windows machines just have OpenVPN start as a service. The Mac laptop uses tunnelblick (an OpenVPN GUI) and the Mac Desktops use OpenVPN in normal command line mode. Server Config: tun-mtu 1500 fragment 1450 mssfix 1450 management localhost #### port #### proto udp dev tun ca ####### cert ####### key ###### dh ###### server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt client-config-dir ccd route 10.8.0.0 255.255.255.252 client-to-client keepalive 10 120 comp-lzo persist-key persist-tun status openvpn-status log Client Configs (all are simple variations on this) tun-mtu 1500 fragment 1450 mssfix 1450 client dev tun proto udp remote ######## #### resolv-retry infinite nobind persist-key persist-tun ca ##### cert ##### key ##### ns-cert-type server comp-lzo verb 3
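    Given the satellite link, one hedged thing to try from a slow Mac is measuring what packet size actually passes unfragmented, then lowering the tunnel's fragment/mssfix values to match; the 1300-byte figure below is only an example, not a recommendation:
      ping -D -s 1272 your.server.address    # -D sets don't-fragment on OS X; 1272 + 28 bytes of headers = a 1300-byte packet
    If large pings fail but smaller ones succeed, dropping fragment/mssfix on both server and client configs to just under the largest working size (or running OpenVPN's --mtu-test once to measure it empirically) is worth a try before digging deeper.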

    Read the article

  • TSQL Conditionally Select Specific Value

    - by Dzejms
    This is a follow-up to #1644748 where I successfully answered my own question, but Quassnoi helped me to realize that it was the wrong question. He gave me a solution that worked for my sample data, but I couldn't plug it back into the parent stored procedure because I fail at SQL 2005 syntax. So here is an attempt to paint the broader picture and ask what I actually need. This is part of a stored procedure that returns a list of items in a bug tracking application I've inherited. There are over 100 fields and 26 joins, so I'm pulling out only the most relevant bits. SELECT tickets.ticketid, tickets.tickettype, tickets_tickettype_lu.tickettypedesc, tickets.stage, tickets.position, tickets.sponsor, tickets.dev, tickets.qa, DATEDIFF(DAY, ticket_history_assignment.savedate, GETDATE()) as 'daysinqueue' FROM dbo.tickets WITH (NOLOCK) LEFT OUTER JOIN dbo.tickets_tickettype_lu WITH (NOLOCK) ON tickets.tickettype = tickets_tickettype_lu.tickettypeid LEFT OUTER JOIN dbo.tickets_history_assignment WITH (NOLOCK) ON tickets_history_assignment.ticketid = tickets.ticketid AND tickets_history_assignment.historyid = ( SELECT MAX(historyid) FROM dbo.tickets_history_assignment WITH (NOLOCK) WHERE tickets_history_assignment.ticketid = tickets.ticketid GROUP BY tickets_history_assignment.ticketid ) WHERE tickets.sponsor = @sponsor The area of interest is the daysinqueue subquery mess. The tickets_history_assignment table looks roughly as follows declare @tickets_history_assignment table ( historyid int, ticketid int, sponsor int, dev int, qa int, savedate datetime ) insert into @tickets_history_assignment values (1521402, 92774,20,14, 20, '2009-10-27 09:17:59.527') insert into @tickets_history_assignment values (1521399, 92774,20,14, 42, '2009-08-31 12:07:52.917') insert into @tickets_history_assignment values (1521311, 92774,100,14, 42, '2008-12-08 16:15:49.887') insert into @tickets_history_assignment values (1521336, 92774,100,14, 42, '2009-01-16 14:27:43.577') Whenever a ticket is saved, the current values for sponsor, dev and qa are stored in the tickets_history_assignment table with the ticketid and a timestamp. So it is possible for someone to change the value for qa, but leave sponsor alone. What I want to know, based on all of these conditions, is the historyid of the record in the tickets_history_assignment table where the sponsor value was last changed so that I can calculate the value for daysinqueue. If a record is inserted into the history table, and only the qa value has changed, I don't want that record. So simply relying on MAX(historyid) won't work for me. Quassnoi came up with the following which seemed to work with my sample data, but I can't plug it into the larger query; SQL Manager bitches about the WITH statement. ;WITH rows AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY ticketid ORDER BY savedate DESC) AS rn FROM @Table ) SELECT rl.sponsor, ro.savedate FROM rows rl CROSS APPLY ( SELECT TOP 1 rc.savedate FROM rows rc JOIN rows rn ON rn.ticketid = rc.ticketid AND rn.rn = rc.rn + 1 AND rn.sponsor <> rc.sponsor WHERE rc.ticketid = rl.ticketid ORDER BY rc.rn ) ro WHERE rl.rn = 1 I played with it yesterday afternoon and got nowhere because I don't fundamentally understand what is going on here and how it should fit into the larger context. So, any takers? UPDATE Ok, here's the whole thing. I've been switching some of the table and column names in an attempt to simplify things so here's the full unedited mess.
snip - old bad code Here are the errors: Msg 102, Level 15, State 1, Procedure usp_GetProjectRecordsByAssignment, Line 159 Incorrect syntax near ';'. Msg 102, Level 15, State 1, Procedure usp_GetProjectRecordsByAssignment, Line 179 Incorrect syntax near ')'. Line numbers are of course not correct but refer to ;WITH rows AS And the ')' char after the WHERE rl.rn = 1 ) Respectively Is there a tag for extra super long question? UPDATE #2 Here is the finished query for anyone who may need this: CREATE PROCEDURE [dbo].[usp_GetProjectRecordsByAssignment] ( @assigned numeric(18,0), @assignedtype numeric(18,0) ) AS SET NOCOUNT ON WITH rows AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY recordid ORDER BY savedate DESC) AS rn FROM projects_history_assignment ) SELECT projects_records.recordid, projects_records.recordtype, projects_recordtype_lu.recordtypedesc, projects_records.stage, projects_stage_lu.stagedesc, projects_records.position, projects_position_lu.positiondesc, CASE projects_records.clientrequested WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS clientrequested, projects_records.reportingmethod, projects_reportingmethod_lu.reportingmethoddesc, projects_records.clientaccess, projects_clientaccess_lu.clientaccessdesc, projects_records.clientnumber, projects_records.project, projects_lu.projectdesc, projects_records.version, projects_version_lu.versiondesc, projects_records.projectedversion, projects_version_lu_projected.versiondesc AS projectedversiondesc, projects_records.sitetype, projects_sitetype_lu.sitetypedesc, projects_records.title, projects_records.module, projects_module_lu.moduledesc, projects_records.component, projects_component_lu.componentdesc, projects_records.loginusername, projects_records.loginpassword, projects_records.assistedusername, projects_records.browsername, projects_browsername_lu.browsernamedesc, projects_records.browserversion, projects_records.osname, projects_osname_lu.osnamedesc, projects_records.osversion, projects_records.errortype, projects_errortype_lu.errortypedesc, projects_records.gsipriority, projects_gsipriority_lu.gsiprioritydesc, projects_records.clientpriority, projects_clientpriority_lu.clientprioritydesc, projects_records.scheduledstartdate, projects_records.scheduledcompletiondate, projects_records.projectedhours, projects_records.actualstartdate, projects_records.actualcompletiondate, projects_records.actualhours, CASE projects_records.billclient WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS billclient, projects_records.billamount, projects_records.status, projects_status_lu.statusdesc, CASE CAST(projects_records.assigned AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' WHEN '20000' THEN 'Client' WHEN '30000' THEN 'Tech Support' WHEN '40000' THEN 'LMI Tech Support' WHEN '50000' THEN 'Upload' WHEN '60000' THEN 'Spider' WHEN '70000' THEN 'DB Admin' ELSE rtrim(users_assigned.nickname) + ' ' + rtrim(users_assigned.lastname) END AS assigned, CASE CAST(projects_records.assigneddev AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assigneddev.nickname) + ' ' + rtrim(users_assigneddev.lastname) END AS assigneddev, CASE CAST(projects_records.assignedqa AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assignedqa.nickname) + ' ' + rtrim(users_assignedqa.lastname) END AS assignedqa, CASE CAST(projects_records.assignedsponsor AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assignedsponsor.nickname) + ' ' + 
rtrim(users_assignedsponsor.lastname) END AS assignedsponsor, projects_records.clientcreated, CASE projects_records.clientcreated WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS clientcreateddesc, CASE projects_records.clientcreated WHEN '1' THEN rtrim(clientusers_createuser.firstname) + ' ' + rtrim(clientusers_createuser.lastname) + ' (Client)' ELSE rtrim(users_createuser.nickname) + ' ' + rtrim(users_createuser.lastname) END AS createuser, projects_records.createdate, projects_records.savedate, projects_resolution.sitesaffected, projects_sitesaffected_lu.sitesaffecteddesc, DATEDIFF(DAY, projects_history_assignment.savedate, GETDATE()) as 'daysinqueue', projects_records.iOnHitList, projects_records.changetype FROM dbo.projects_records WITH (NOLOCK) LEFT OUTER JOIN dbo.projects_recordtype_lu WITH (NOLOCK) ON projects_records.recordtype = projects_recordtype_lu.recordtypeid LEFT OUTER JOIN dbo.projects_stage_lu WITH (NOLOCK) ON projects_records.stage = projects_stage_lu.stageid LEFT OUTER JOIN dbo.projects_position_lu WITH (NOLOCK) ON projects_records.position = projects_position_lu.positionid LEFT OUTER JOIN dbo.projects_reportingmethod_lu WITH (NOLOCK) ON projects_records.reportingmethod = projects_reportingmethod_lu.reportingmethodid LEFT OUTER JOIN dbo.projects_lu WITH (NOLOCK) ON projects_records.project = projects_lu.projectid LEFT OUTER JOIN dbo.projects_version_lu WITH (NOLOCK) ON projects_records.version = projects_version_lu.versionid LEFT OUTER JOIN dbo.projects_version_lu projects_version_lu_projected WITH (NOLOCK) ON projects_records.projectedversion = projects_version_lu_projected.versionid LEFT OUTER JOIN dbo.projects_sitetype_lu WITH (NOLOCK) ON projects_records.sitetype = projects_sitetype_lu.sitetypeid LEFT OUTER JOIN dbo.projects_module_lu WITH (NOLOCK) ON projects_records.module = projects_module_lu.moduleid LEFT OUTER JOIN dbo.projects_component_lu WITH (NOLOCK) ON projects_records.component = projects_component_lu.componentid LEFT OUTER JOIN dbo.projects_browsername_lu WITH (NOLOCK) ON projects_records.browsername = projects_browsername_lu.browsernameid LEFT OUTER JOIN dbo.projects_osname_lu WITH (NOLOCK) ON projects_records.osname = projects_osname_lu.osnameid LEFT OUTER JOIN dbo.projects_errortype_lu WITH (NOLOCK) ON projects_records.errortype = projects_errortype_lu.errortypeid LEFT OUTER JOIN dbo.projects_resolution WITH (NOLOCK) ON projects_records.recordid = projects_resolution.recordid LEFT OUTER JOIN dbo.projects_sitesaffected_lu WITH (NOLOCK) ON projects_resolution.sitesaffected = projects_sitesaffected_lu.sitesaffectedid LEFT OUTER JOIN dbo.projects_gsipriority_lu WITH (NOLOCK) ON projects_records.gsipriority = projects_gsipriority_lu.gsipriorityid LEFT OUTER JOIN dbo.projects_clientpriority_lu WITH (NOLOCK) ON projects_records.clientpriority = projects_clientpriority_lu.clientpriorityid LEFT OUTER JOIN dbo.projects_status_lu WITH (NOLOCK) ON projects_records.status = projects_status_lu.statusid LEFT OUTER JOIN dbo.projects_clientaccess_lu WITH (NOLOCK) ON projects_records.clientaccess = projects_clientaccess_lu.clientaccessid LEFT OUTER JOIN dbo.users users_assigned WITH (NOLOCK) ON projects_records.assigned = users_assigned.userid LEFT OUTER JOIN dbo.users users_assigneddev WITH (NOLOCK) ON projects_records.assigneddev = users_assigneddev.userid LEFT OUTER JOIN dbo.users users_assignedqa WITH (NOLOCK) ON projects_records.assignedqa = users_assignedqa.userid LEFT OUTER JOIN dbo.users users_assignedsponsor WITH (NOLOCK) ON projects_records.assignedsponsor = 
users_assignedsponsor.userid LEFT OUTER JOIN dbo.users users_createuser WITH (NOLOCK) ON projects_records.createuser = users_createuser.userid LEFT OUTER JOIN dbo.clientusers clientusers_createuser WITH (NOLOCK) ON projects_records.createuser = clientusers_createuser.userid LEFT OUTER JOIN dbo.projects_history_assignment WITH (NOLOCK) ON projects_history_assignment.recordid = projects_records.recordid AND projects_history_assignment.historyid = ( SELECT ro.historyid FROM rows rl CROSS APPLY ( SELECT TOP 1 rc.historyid FROM rows rc JOIN rows rn ON rn.recordid = rc.recordid AND rn.rn = rc.rn + 1 AND rn.assigned <> rc.assigned WHERE rc.recordid = rl.recordid ORDER BY rc.rn ) ro WHERE rl.rn = 1 AND rl.recordid = projects_records.recordid ) WHERE (@assignedtype='0' and projects_records.assigned = @assigned) OR (@assignedtype='1' and projects_records.assigneddev = @assigned) OR (@assignedtype='2' and projects_records.assignedqa = @assigned) OR (@assignedtype='3' and projects_records.assignedsponsor = @assigned) OR (@assignedtype='4' and projects_records.createuser = @assigned)

    Read the article

  • Convert mkv/h264 video so it can be played on a "mid-range" Sony Ericsson phone. (using Ubuntu).

    - by Johan
    Hi. As a little experiment I am thinking of converting some video/movies/tv-series into a format that could be playable on my K850, but to be a little bit more generic in this question let's say "mid range Sony Ericsson" phone since they all more or less behave the same and have the same screen resolution (240 x 320). I am looking for command line based tools (for Ubuntu), since I am thinking about writing a "convert and move" script later if it is successful. A lot of the video I have is encoded in mkv/h264, but since that is not supported by the phone I guess that I need to convert it into some mp4/mpeg4 low quality video. After some googling it seems like a good candidate for the job is ffmpeg, but that seems to be a very versatile tool with a lot of magic tricks. Am I on the right track? And if so how do I use ffmpeg to do this? Thanks Johan Update: After playing a little bit with ffmpeg I noticed that it only uses 1 of my 4 cores, so the transcoding takes forever. I found an arg called -threads but that did not change much, maybe I got it wrong. I also found that something like this plays in the phone. ffmpeg -i Mythbusters\ S1D1_1.mkv -threads 4 -t 180 -vcodec mpeg4 -r 15 -s 320x240 Mythbusters\ S1D1_1_mini.mp4 It was possible to use 3gp/h263, but the quality was really useless. ffmpeg -i Mythbusters\ S1D1_1.mkv -t 180 -vcodec h263 -acodec libfaac -s cif Mythbusters\ S1D1_1_cif.3gp And it seems like mp4/h264 is also possible and the result is ok, thanks to this question; this one seems to use more than one core as well so it was a little bit faster for me. ffmpeg -i Mythbusters_S1D1_1.mkv -t 180 -acodec libfaac -ab 60k -s 320x240 -vcodec libx264 -b 500k -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -flags2 +mixed_refs -me_method umh -subq 6 -trellis 1 -refs 5 -coder 0 -me_range 16 -g 250 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 500k -maxrate 768k -bufsize 2M -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 13 -threads 0 -f mp4 Mythbusters_S1D1_1_qvga.mp4 Update: I have tried to use HandBrakeCLI and it is no problem creating a new file that seems to be the same as the one created with ffmpeg with something like this. HandBrakeCLI -i Mythbusters_S1D1_1.mkv --size 100 -E faac -B 60 --maxHeight 240 -r 15 -e x264 -o Mythbusters_S1D1_1_hand.mp4 But that one did not play in the phone... I found this in the official manual: If you transfer video clips using another program than Media Go™, we recommend that you select H.264 Baseline profile video, up to QVGA at 30 fps, VBR 384 kbps (max 768 kps) with AAC+ audio at 128 kbps (max 255 kbps), 48 kHz and stereo audio in mp4 file format. So the idea to use H264 seems to be correct.
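    One quick way to compare the ffmpeg file that plays with the HandBrakeCLI file that does not is to dump the stream details of both and check them against the manual's limits (H.264 Baseline, up to QVGA at 30 fps); a sketch, assuming ffprobe ships alongside your ffmpeg build:
      ffprobe Mythbusters_S1D1_1_qvga.mp4
      ffprobe Mythbusters_S1D1_1_hand.mp4
    The codec, resolution, frame rate and (on newer builds) profile lines usually make it obvious which constraint the non-playing file violates.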

    Read the article

  • Cannot exit X server, restart, shutdown or drop to tty when VGA monitor active

    - by terdon
    I have a strange problem. If I connect an external VGA monitor to my laptop, exiting the X environment in any way crashes the computer. For example, say I am working with my two monitors (the laptop's and one connected to my VGA port) active. Hitting Ctrl+Alt+F Key should take me down to a tty. What actually happens is that the VGA screen goes blank, as you would expect, but the laptop screen, although still on, shows nothing. I know the screen is on because it is slightly more illuminated than when it is off. When in this state, I can do nothing to regain access to the machine. I have tried: Ctrl+F Key (and even Ctrl+Alt+F Key, just in case) combinations and none seem to have any effect. Ctrl+Alt+Del : Nothing Magic SysRq key: Nothing Blindly typing my username and password and trying to reboot/shutdown or restart GDM or MDM: Nothing The only thing that works is a hard reset. The exact same behavior occurs when killing the X server through Ctrl+Alt+Backspace, rebooting or shutting down. There is no difference if I reboot/shutdown/log out using the WM's graphical menu or if I use the shutdown or reboot commands. It is also not WM-dependent. I have the same problem using Cinnamon, Gnome 3, MATE and xfce4. It is, however, VGA dependent. I have tried connecting another VGA monitor and have the same problem. I do not, however, have this problem if a screen is connected to the DisplayPort. It is, therefore, a VGA specific issue. To make things even stranger, this only occurs when both screens are active. If either the laptop screen or the VGA monitor is inactive the problem goes away. Finally, this problem arose when I installed the latest Linux Mint Debian (LMDE). It did not occur with the previous release of LMDE. I am not sure what has changed since I used the same kernel version in both releases (I had upgraded the kernel while on the previous release) and, I think, the same nvidia drivers. Oh, and yes, I have updated the nvidia driver. Hardware: Dell M4500 laptop CPU: Intel Core i7 RAM: 8GB Graphics: nVidia GT216 [Quadro FX 880M] Software: LMDE, kernel 3.2.0-2-amd64 Xorg: 1.11.4 nVidia kernel: 295.20-1+3.2.9-1 Possibly relevant files: /var/log/Xorg.0.log ~/.xsession-errors Does anyone have any ideas how to fix this? Thanks in advance for any help.
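    Since the Magic SysRq key does nothing, it may simply be disabled or restricted by default on this install; checking and enabling it beforehand (a sketch) at least rules that out and gives you a cleaner way out than a hard reset:
      cat /proc/sys/kernel/sysrq          # 1 means fully enabled
      echo 1 | sudo tee /proc/sys/kernel/sysrq
    With it enabled, Alt+SysRq+r, e, i, s, u, b (pressed slowly, one key after another) will sync and reboot even when the console is unreachable, and sshing in from another machine while the VGA monitor is attached would let you grab Xorg.0.log right after the hang.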

    Read the article

  • Tracking down rogue disk usage

    - by Amadan
    I found several other questions regarding the theory behind my problem (e.g. this, this), but I don't know how to apply the answers to my machine. # du -hsx / 11000283 / # df -kT / Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/mapper/csisv13-root ext4 516032952 361387456 128432532 74% / There is a big difference between 11G (du) and 345G (df). Where are the remaining 334G? It's not in deleted files. There was only one, it was short, and I truncated it just in case. This is what remains: # lsof -a +L1 / COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME zabbix_ag 4902 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4902 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4906 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4906 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4907 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4907 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4908 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4908 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4909 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4909 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4910 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4910 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) I rebooted to see if fsck does anything. 
But, from /var/log/boot.log, it seems there are no issues: /dev/mapper/server-root: clean, 3936097/32768000 files, 125368568/131064832 blocks Thinking maybe someone overzealously reserved root space, I checked the master record: # tune2fs -l /dev/mapper/server-root tune2fs 1.42 (29-Nov-2011) Filesystem volume name: <none> Last mounted on: / Filesystem UUID: 86430ade-cea7-46ce-979c-41769a41ecbe Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 32768000 Block count: 131064832 Reserved block count: 6553241 Free blocks: 5696264 Free inodes: 28831903 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 992 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 Flex block group size: 16 Filesystem created: Fri Feb 1 13:44:04 2013 Last mount time: Tue Aug 19 16:56:13 2014 Last write time: Fri Feb 1 13:51:28 2013 Mount count: 9 Maximum mount count: -1 Last checked: Fri Feb 1 13:44:04 2013 Check interval: 0 (<none>) Lifetime writes: 1215 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 First orphan inode: 28836028 Default directory hash: half_md4 Directory Hash Seed: bca55ff5-f530-48d1-8347-25c004f66d43 Journal backup: inode blocks The system is: # uname -a Linux server 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux # cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS" Does anyone have any tips on what exactly to do to find and hopefully reclaim the missing space?
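One classic cause of a du/df gap this large is data hiding underneath an active mount point (du cannot see it, but df still counts it). A quick, read-only way to check (a sketch):
  mkdir /mnt/rootfs
  mount --bind / /mnt/rootfs
  du -hsx /mnt/rootfs
  umount /mnt/rootfs
The bind mount exposes the root filesystem with nothing mounted on top of it, so if /mnt/rootfs suddenly shows something close to 345G, the missing space is sitting in directories that are normally shadowed by other mounts.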

    Read the article

  • Installing Phusion Passenger 4.0.20 on Ubuntu 13.10

    - by tempestfire2002
    So I'm trying to install Passenger on the newest version of KUbuntu (13.10). I installed Apache2 using the apache2-mpm-worker package using the Muon Package Manager. And these are the commands I ran. rvmsudo gem install passenger rvmsudo passenger-install-apache2-module But I keep getting the following errors: [Fri Oct 18 15:52:13.227790 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_LOCK_DIR} is not defined [Fri Oct 18 15:52:13.227933 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_PID_FILE} is not defined [Fri Oct 18 15:52:13.227969 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_RUN_USER} is not defined [Fri Oct 18 15:52:13.227991 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_RUN_GROUP} is not defined [Fri Oct 18 15:52:13.228026 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_LOG_DIR} is not defined [Fri Oct 18 15:52:13.231737 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_RUN_DIR} is not defined [Fri Oct 18 15:52:13.232760 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined [Fri Oct 18 15:52:13.233043 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined [Fri Oct 18 15:52:13.233078 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined AH00526: Syntax error on line 74 of /etc/apache2/apache2.conf: Invalid Mutex directory in argument file:${APACHE_LOCK_DIR} -------------------------------------------- WARNING: Apache doesn't seem to be compiled with the 'prefork', 'worker' or 'event' MPM Phusion Passenger has only been tested on Apache with the 'prefork', the 'worker' and the 'event' MPM. Your Apache installation is compiled with the '' MPM. We recommend you to abort this installer and to recompile Apache with either the 'prefork', the 'worker' or the 'event' MPM. Press Ctrl-C to abort this installer (recommended). Press Enter if you want to continue with installation anyway. The result of my running apache2ctl -V is: Server version: Apache/2.4.6 (Ubuntu) Server built: Aug 9 2013 14:31:04 Server's Module Magic Number: 20120211:23 Server loaded: APR 1.4.8, APR-UTIL 1.5.2 Compiled using: APR 1.4.8, APR-UTIL 1.5.2 Architecture: 32-bit Server MPM: worker threaded: yes (fixed thread count) forked: yes (variable process count) Server compiled with.... -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=256 -D HTTPD_ROOT="/etc/apache2" -D SUEXEC_BIN="/usr/lib/apache2/suexec" -D DEFAULT_PIDLOG="/var/run/apache2.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="mime.types" -D SERVER_CONFIG_FILE="apache2.conf" As can be seen, the server is compiled with the worker MPM, so why is passenger complaining? And how do I solve the above errors (warnings, really, but to be safe, I'd like to not have any warnings)? Thanks.
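    The ${APACHE_LOCK_DIR}-style warnings usually just mean the apache2 binary was invoked without the environment file Debian/Ubuntu normally sources for it (/etc/apache2/envvars), and the empty MPM string likely follows from Apache 2.4 loading its MPM as a module rather than compiling it in. A hedged workaround is to hand those variables to the installer explicitly; the values below are the stock ones from envvars and the interaction with rvmsudo's environment handling is an assumption, so adjust to your system:
      rvmsudo env APACHE_RUN_USER=www-data APACHE_RUN_GROUP=www-data \
        APACHE_PID_FILE=/var/run/apache2/apache2.pid APACHE_RUN_DIR=/var/run/apache2 \
        APACHE_LOCK_DIR=/var/lock/apache2 APACHE_LOG_DIR=/var/log/apache2 \
        passenger-install-apache2-module
    If the MPM warning is the only thing left after that, continuing past it is probably safe, given that apache2ctl -V already shows the worker MPM loaded.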

    Read the article

  • Wake On Lan only works on first boot, not subsequent ones

    - by sp3ctum
    I have converted my old Dell Latitude D410 laptop to a server for tinkering. It is running an updated Debian Squeeze (6) with a Xen enabled kernel (I want to toy with virtual machines later on). I am running it 'headless' via an ethernet connection. I am struggling to enable Wake On Lan for the box. I have enabled the setting in the BIOS, and it works nicely, but only for the first time after the power cord is plugged in. Here is my test: (1) Plug in the power cord, don't boot yet. (2) Send a magic Wake On Lan packet from a test machine (Ubuntu) using the wakeonlan program. (3) Server expected to start (does every time). (4) Once the server has booted, log in via ssh and shut it down via the operating system. (5) After shutdown, wake the server up via WOL again (fails every time). Some observations: Right after step 1 I can see the integrated NIC has a light on. I deduce this means the NIC gets adequate power and that the ethernet cable is connected to my switch. This light is not on after step 4 (the shutdown stage). The light comes back on after I disconnect and reconnect the power cord, after which WOL works as well. After step 4 I can verify that wake on lan is enabled via the ethtool program (repeatable each time). This blog post suggested the problem may lie in the fact that the motherboard might not be giving adequate power to the NIC after shutdown, so I copied an acpitool script that supposedly should signal the system to give the needed power to the card when shut down. Obviously it did not fix my issue. I have included the relevant power settings in the paste below. I have tried different combinations of options to the shutdown program, as well as the poweroff program. I even tried "telinit 0", which I figured would do the most direct shutdown via software. If I keep the laptop's power button pressed down and do a hard boot this way, the light on the ethernet port stays lit and a WOL is possible. I copied a bunch of hopefully useful information in this paste. I have tried this with the laptop battery connected and without it. I get the same result. Promptly pressing the power button causes the system to shut down with the message "The system is going down for system halt NOW!", and WOL is still unsuccessful.
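    Two hedged things worth checking, both resting on the assumption that something late in Debian's shutdown sequence is powering the NIC back down: first, confirm and re-assert the WOL mode on the interface (assuming it is eth0), and second, see whether the halt script is configured to take interfaces down (the NETDOWN setting in /etc/default/halt, if this release honours it):
      # ethtool eth0 | grep -i wake-on     # expect "Wake-on: g"
      # ethtool -s eth0 wol g
    Re-running the ethtool -s line from a hook that executes as late as possible in the shutdown sequence (or setting NETDOWN=no) would show whether the setting is being cleared between your check and the actual power-off.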

    Read the article

  • PC monitors shut off and system hangs while playing 3D games, but sound continues - Diagnosis?

    - by Jon Schneider
    Two days ago, I started running into a problem with my Windows PC: The PC's two connected monitors simultaneously lose signal and go black (as though the PC had been powered off). The keyboard's Numlock, Capslock, and Scroll Lights will become "stuck" in their current positions, as though the PC is hung. (For example, the Numlock light on the keyboard remains lit regardless of me pressing the Numlock key repeatedly.) No keyboard input does anything. (Ctrl+Alt+Del, Ctrl+Shift+Esc, Ctrl+C, etc.) However -- Whatever sound/music the PC was playing continues to play, and the PC's fans continue running, so the PC hasn't powered itself off or rebooted itself. Opening up the case, the graphics card is pretty hot to the touch. I had this happen 3 times in one evening. In all cases, I was playing a game with 3D graphics when the problem occurred (Torchlight, Minecraft, Magic: The Gathering 2012, Avadon: The Black Fortress demo). I have yet to have the problem happen when I'm not playing a game. This system has been running stable for about 2.5 years prior to this. I didn't make any changes to the system prior to the problem starting to occur. System specs: OS: Windows 7 64-bit Processor: Intel Core 2 Duo E7200 Wolfdale 2.53GHz Video Card: XFX GeForce 9800 GT 512 MB Motherboard: Foxconn P45A-S LGA 775 Intel ATX RAM: Corsair 4 GB (2x 2GB) DDR2-800 (PC2 6400) Full specs: New PC 2008 Troubleshooting tried so far (the problem occurred again after taking each of these steps, one at a time): Updated the video drivers with the latest drivers from NVidia's site. Opened up the case and cleaned out the video card and processor fans (both were pretty dirty). Installed and ran temperature monitor software. The processor idles at about 50 degrees C, and goes up to about 63 degrees C while playing a game (seems on the warm side, but not excessively so?). The software wasn't able to report the temperature of the GPU -- not sure this particular GPU supports software temperature readout? My initial diagnosis is that maybe the GPU is on its last legs (given that it seems to be running pretty hot, and the problem only occurs while playing 3D games). Does this seem likely? Or is it likely that this problem is caused by the processor, RAM, or motherboard? Or could this be a software issue of some kind? Thanks for any advice!
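    For the GPU temperature specifically, NVIDIA's Windows driver packages of that era sometimes include a command-line tool that can read it even when third-party monitors cannot; this is a hedged suggestion, since both the install path below and temperature support on an older GeForce 9800 GT are assumptions:
      "C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -q -d TEMPERATURE
    If it reports N/A (not unusual on consumer cards of that generation), a utility such as GPU-Z is the fallback; either way, a GPU that idles very hot or spikes hard under 3D load would support the failing-card diagnosis.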

    Read the article
