Search Results

Search found 97840 results on 3914 pages for 'custom data type'.

  • Skynet Big Data Demo Using Hexbug Spider Robot, Raspberry Pi, and Java SE Embedded (Part 3)

    - by hinkmond
    In Part 2, I described what connections you need to make for this demo using a Hexbug Spider Robot, a Raspberry Pi, and Java SE Embedded for programming. Here are some photos of me doing the soldering. Software engineers should not be afraid of a little soldering work. It's all good. See: Skynet Big Data Demo (Part 2) One thing to watch out for when you open the remote is that there may be some glue covering the contact points. Make sure to use an X-Acto knife or small screwdriver to scrape away any glue or non-conductive material covering each place where you need to solder. And after you are done with your soldering and you've given the solder enough time to cool, make sure all your connections are marked so that you know which wire goes where. Give each wire a very light tug to make sure it is soldered correctly and is making good contact. There are lots of videos on the Web to help you if this is your first time soldering. Check out Lady Ada's (from adafruit.com) links on how to solder if you need some additional help: http://www.ladyada.net/learn/soldering/thm.html If everything looks good, zip everything back up and meet back here for how to connect these wires to your Raspberry Pi. That will be it for the hardware part of this project. See, that wasn't so bad. Hinkmond

  • Count function on tree structure (non-binary)

    - by Spevy
    I am implementing a tree data structure in C#, based largely on Dan Vanderboom's generic implementation. I am now considering my approach to handling a Count property, which Dan does not implement. The obvious and easy way would be to use a recursive call which traverses the tree, happily adding up nodes (or iteratively traversing the tree with a Queue and counting nodes, if you prefer). It just seems expensive. (I also may want to lazy load some of my nodes down the road.) I could maintain a count at the root node. All children would traverse up to and/or hold a reference to the root, and update an internally settable count property on changes. This would push the iteration problem to whenever I want to break off a branch or clear all children below a given node. Generally less expensive, and it puts the heavy lifting in what I think will be less frequently called functions. It seems a little brute force, and that usually means exception cases I haven't thought of yet, or bugs if you prefer. Does anyone have an example of an implementation which keeps a count for an unbalanced and/or non-binary tree structure, rather than counting on the fly? Don't worry about the lazy load, or language. I am sure I can adjust the example to fit my specific needs. EDIT: I am curious about an example, rather than instructions or discussion. I know this is not technically difficult...
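
    For illustration, here is a minimal C# sketch of the cached-count approach described above; the class and member names are my own invention, not Dan Vanderboom's:

    using System.Collections.Generic;

    public class TreeNode<T>
    {
        private readonly List<TreeNode<T>> children = new List<TreeNode<T>>();
        public TreeNode<T> Parent { get; private set; }
        public T Value { get; set; }

        // Size of the subtree rooted here (this node plus all descendants).
        public int Count { get; private set; }

        public TreeNode(T value) { Value = value; Count = 1; }

        public void Add(TreeNode<T> child)
        {
            child.Parent = this;
            children.Add(child);
            // Walk the ancestor chain once, so Count costs O(depth) per insert
            // instead of O(n) per query.
            for (var node = this; node != null; node = node.Parent)
                node.Count += child.Count;
        }

        public void Remove(TreeNode<T> child)
        {
            if (!children.Remove(child)) return;
            for (var node = this; node != null; node = node.Parent)
                node.Count -= child.Count;
            child.Parent = null;
        }
    }

    Breaking off a branch is then a single Remove call, with the subtraction handled by the same ancestor walk.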

  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
    Hi, I am in the process of rebuilding a site from static pages to a CMS which will be using mod_rewrite to generate new page URLs. In this process our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g.: a page whose URL is currently "website-name/about_us.html" and has a title of "website-name - something not quite page specific" will change to "website-name/about-us/" and title: "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the meta data is to improve our page rankings and try to keep in line with some best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranging heading tags, a new menu on all pages, new content in the footer, extra pieces of dynamic content relating to other pages). In this new site process I plan to use 301 redirects for all the old URLs, pointing to the new URLs. My question is: what can I expect to happen to the page rankings in Google, in the short term and long term? Will this be like kicking off a new site which will have to build up trust over time, or will the original page rankings have an effect?

  • Storing data offline with javascript

    - by Walker
    My question is about storing data offline, and potentially whether I will need to bring in an outside programmer or whether this could be learned within a few weeks. The website I am working on will have an interface where users will log in and go through a series of quizzes in the form of checkboxes, drop down menus, and others. Each page/quiz area could have 20-100 total checkboxes in a series of 3-5 rows because of the comprehensive nature of the course. This I can do - I know how to code the quiz and return a correct or incorrect answer based on each individual checkbox and present a cumulative score (i.e.: you got 57% correct). The issue lies in the fact that I would like to save the users' results and keep them informed of their progress. When they complete all of the quizzes, I would like to have a visual output of their performance in each area. Storing the output from their results offline is where I think I may run into a problem with my lack of coding experience. I would also like to have a sidebar with their progress in each section (10-15), with a green percentage completion bar or a % correct which would draw from this. I have never had to code something that stores information like this offline - so back to my question - would it be better to learn the language needed, or bring in a coder/developer for the back end stuff?

  • How should I track approval workflow when users at every security level can create a request?

    - by Eric Belair
    I am writing a new application that allows users to enter requests. Once a request is entered, it must follow an approval workflow to be finally approved by a user at the highest security level. So, let's say a user at Security Level 1 enters a request. This request must be approved by his superior - a user at Security Level 2. Once the Security Level 2 user approves it, it must be approved by a user at Security Level 3. Once the Security Level 3 user approves it, it is considered fully approved. However, users at any of the three Security Levels can enter requests. So, if a Security Level 3 user enters a request, it is automatically considered "fully approved". And, if a Security Level 2 user enters a request, it must only be approved by a Security Level 3 user. I'm currently storing each approval status in a Database Log Table, like so:

    STATUS_ID (PK) REQUEST_ID    STATUS           STATUS_DATE
    -------------- ------------- ---------------- -----------------------
    1              1             USER_SUBMIT      2012-09-01 00:00:00.000
    2              1             APPROVED_LEVEL2  2012-09-01 01:00:00.000
    3              1             APPROVED_LEVEL3  2012-09-01 02:00:00.000
    4              2             USER_SUBMIT      2012-09-01 02:30:00.000
    5              2             APPROVED_LEVEL2  2012-09-01 02:45:00.000

    My question is, which is a better design:

    1. Record all three statuses for every request, or
    2. Record only the statuses needed according to the Security Level of the user submitting the request

    In Case 2, the data might look like this for two requests - one submitted by a Security Level 2 user and another submitted by a Security Level 3 user:

    STATUS_ID (PK) REQUEST_ID    STATUS           STATUS_DATE
    -------------- ------------- ---------------- -----------------------
    1              3             APPROVED_LEVEL2  2012-09-01 01:00:00.000
    2              3             APPROVED_LEVEL3  2012-09-01 02:00:00.000
    3              4             APPROVED_LEVEL3  2012-09-01 02:00:00.000
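
    To make Case 2's bookkeeping concrete, here is a small hedged C# sketch (my own naming, and the rules are inferred from the sample data above, not stated in the question) that derives the statuses a request should accumulate from its submitter's level:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class ApprovalChain
    {
        // Inferred from the sample data: a Level 1 submission starts at
        // USER_SUBMIT; Level 2 and 3 submitters implicitly "approve" at
        // their own level the moment they submit.
        public static IEnumerable<string> RequiredStatuses(int submitterLevel)
        {
            if (submitterLevel == 1)
                yield return "USER_SUBMIT";
            for (int level = Math.Max(2, submitterLevel); level <= 3; level++)
                yield return "APPROVED_LEVEL" + level;
        }

        // A request is fully approved once every required status is logged.
        public static bool IsFullyApproved(int submitterLevel, ISet<string> logged) =>
            RequiredStatuses(submitterLevel).All(logged.Contains);
    }

    Under this reading, Case 2 stores exactly what RequiredStatuses returns, while Case 1 would pad the log with rows the workflow never actually performed.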

  • ASP.NET MVVM Handling multiple Data Transfer Objects on a single page

    - by meffect
    I have an asp.net mvc "edit" page which allows the user to make edits to the parent entity, and then also "create" child entities on the same page. Note: I'm making these data transfer objects up.

    public class CustomerViewModel
    {
        public int Id { get; set; }
        public Byte[] Timestamp { get; set; }
        public string CustomerName { get; set; }
        public etc..
        public CustomerOrderCreateViewModel CustomerOrderCreateViewModel { get; set; }
    }

    In my view I have two html forms: one for Customer "edit" HTTP posts, and the other for CustomerOrder "create" HTTP posts. In the view page, I load the CustomerOrder "create" form using:

    <div id="CustomerOrderCreate">
        @Html.Partial("Vendor/_CustomerOrderCreatePartial", Model.CustomerOrderCreateViewModel)
    </div>

    The CustomerOrder html form action posts to a different controller HttpPost ActionResult than the Customer "edit" ActionResult. My concern is this: on the CustomerOrder controller, in the HttpPost ActionResult

    [HttpPost]
    public ActionResult Create(CustomerOrderCreateViewModel vm)
    {
        if (!ModelState.IsValid)
        {
            return [What Do I Return Here]
        }
        ...[Persist to database code]...
    }

    I don't know what to return if the model state isn't valid. Right now it's not a problem, because jQuery unobtrusive validation handles validation on the client. But what if I need more complex validation (i.e.: the server needs to handle the validation)?
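
    One common answer, as a hedged sketch (not from the original post; BuildCustomerViewModel and CustomerId are hypothetical names), is to redisplay the parent "edit" view with the invalid child model merged back in, so the validation messages render inside the nested form:

    [HttpPost]
    public ActionResult Create(CustomerOrderCreateViewModel vm)
    {
        if (!ModelState.IsValid)
        {
            // Rebuild the parent page model and re-attach the invalid child
            // so @Html.Partial re-renders it with its error messages.
            // BuildCustomerViewModel is a hypothetical helper; CustomerId is
            // a hypothetical FK carried on the child view model.
            CustomerViewModel parent = BuildCustomerViewModel(vm.CustomerId);
            parent.CustomerOrderCreateViewModel = vm;
            return View("Edit", parent);
        }
        // ...persist to database...
        return RedirectToAction("Edit", new { id = vm.CustomerId });
    }

    For an AJAX post, returning only the partial (return PartialView("Vendor/_CustomerOrderCreatePartial", vm)) and swapping it into the div is another option.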

  • How to represent a graph with multiple edges allowed between nodes and edges that can selectively disappear

    - by Pops
    I'm trying to figure out what sort of data structure to use for modeling some hypothetical, idealized network usage. In my scenario, a number of users who are hostile to each other are all trying to form networks of computers where all potential connections are known. The computers that one user needs to connect may not be the same as the ones another user needs to connect, though; user 1 might need to connect computers A, B and D while user 2 might need to connect computers B, C and E. [Image generated with the help of NCTM Graph Creator] I think the core of this is going to be an undirected cyclic graph, with nodes representing computers and edges representing Ethernet cables. However, due to the nature of the scenario, there are a few uncommon features that rule out adjacency lists and adjacency matrices (at least, without non-trivial modifications):

    - Edges can become restricted-use; that is, if one user acquires a given network connection, no other user may use that connection. In the example, the green user cannot possibly connect to computer A, but the red user has connected B to E despite not having a direct link between them.
    - In some cases, a given pair of nodes will be connected by more than one edge. In the example, there are two independent cables running from D to E, so the green and blue users were both able to connect those machines directly; however, red can no longer make such a connection.
    - If two computers are connected by more than one cable, each user may own no more than one of those cables.

    I'll need to do several operations on this graph, such as:

    - determining whether any particular pair of computers is connected for a given user
    - identifying the optimal path for a given user to connect target computers
    - identifying the highest-latency computer connection for a given user (i.e. longest path without branching)

    My first thought was to simply create a collection of all of the edges, but that's terrible for searching. The best thing I can think to do now is to modify an adjacency list so that each item in the list contains not only the edge length but also its cost and current owner. Is this a sensible approach? Assuming space is not a concern, would it be reasonable to create multiple copies of the graph (one for each user) rather than a single graph?
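
    As a hedged sketch of that last idea (all names mine, C# for illustration): an adjacency list whose entries are edge objects carrying length, cost, and owner, so parallel edges and per-user restrictions fall out naturally:

    using System.Collections.Generic;
    using System.Linq;

    public class Edge
    {
        public char NodeA, NodeB;   // e.g. 'D' and 'E'
        public int Length, Cost;
        public string Owner;        // null while the cable is unclaimed

        public bool UsableBy(string user) => Owner == null || Owner == user;
        public char Other(char node) => node == NodeA ? NodeB : NodeA;
    }

    public class Network
    {
        // One list of incident edges per node; a second cable between the
        // same pair of nodes is simply a second Edge instance in both lists.
        private readonly Dictionary<char, List<Edge>> adjacency =
            new Dictionary<char, List<Edge>>();

        public void AddEdge(Edge e)
        {
            GetList(e.NodeA).Add(e);
            GetList(e.NodeB).Add(e);
        }

        // Edges a given user could still traverse from a node - the input
        // to any per-user connectivity or shortest-path search.
        public IEnumerable<Edge> UsableEdges(char node, string user) =>
            GetList(node).Where(e => e.UsableBy(user));

        private List<Edge> GetList(char node)
        {
            if (!adjacency.TryGetValue(node, out List<Edge> list))
                adjacency[node] = list = new List<Edge>();
            return list;
        }
    }

    Per-user searches then run over UsableEdges, so a single shared graph can serve every user without copying it per user.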

  • Generic Adjacency List Graph implementation

    - by DmainEvent
    I am trying to come up with a decent adjacency list graph implementation so I can start tooling around with all kinds of graph problems and algorithms, like traveling salesman and other problems... But I can't seem to come up with a decent implementation. This is probably because I am trying to dust the cobwebs off my data structures class. But what I have so far... and this is implemented in Java... is basically an edgeNode class that has a generic type and a weight - in the event the graph is indeed weighted.

    public class edgeNode<E> {
        private E y;
        private int weight;
        //... getters and setters as well as constructors...
    }

    I have a graph class that has a list of edges, a value for the number of vertices, an int value for edges, as well as a boolean value for whether or not it is directed. That brings up my first question: if the graph is indeed directed, shouldn't I have a value in my edgeNode class? Or would I just need to add another vertex to my LinkedList? That would imply that a directed graph is 2X as big as an undirected graph, wouldn't it?

    public class graph {
        private List<edgeNode<?>> edges;
        private int nVertices;
        private int nEdges;
        private boolean directed;
        //... getters and setters as well as constructors...
    }

    Finally, does anybody have a standard way of initializing their graph? I was thinking of reading in a pipe-delimited file, but that is so 1997.

    public graph GenereateGraph(boolean directed, String file) {
        List<edgeNode<?>> edges;
        graph g;
        try {
            int count = 0;
            String line;
            FileReader input = new FileReader("C:\\Users\\derekww\\Documents\\JavaEE Projects\\graphFile");
            BufferedReader bufRead = new BufferedReader(input);
            line = bufRead.readLine();
            count++;
            edges = new ArrayList<edgeNode<?>>();
            while (line != null) {
                line = bufRead.readLine();
                // Note: split takes a regex, so the pipe must be escaped.
                Object edgeInfo = line.split("\\|")[0];
                int weight = Integer.parseInt(line.split("\\|")[1]);
                edgeNode<String> e = new edgeNode<String>((String) edgeInfo, weight);
                edges.add(e);
            }
            return g;
        } catch (Exception e) {
            return null;
        }
    }

    I guess when I am adding edges, if the boolean is true I would be adding a second edge. So far, this all depends on the file I write. So if I wrote a file with the following vertices and weights...

    Buffalo | 18
    Pittsburgh | 20
    New York | 15
    D.C | 45

    I would obviously load them into my list of edges, but how can I represent one vertex connected to the other... and so on... I would need the opposite vertex? Say I was representing highways connected to each city, weighted and un-directed (each edge is bi-directional with weights in some fictional distance unit)... Would my implementation be the best way to do that? I found this tutorial online (Graph Tutorial) that has a connector object. This appears to me to be a collection of vertices pointing to each other. So you would have A and B each with their weights and so on, and you would add this to a list, and this list of connectors to your graph... That strikes me as somewhat cumbersome and a little dismissive of the adjacency list concept? Am I wrong, and is that a novel solution? This is all inspired by Steve Skiena's Algorithm Design Manual. Which I have to say is pretty good so far. Thanks for any help you can provide.

  • SQL to XML open data made simple

    - by drrwebber
    The perennial question for people is how to easily generate XML from SQL table content. The latest CAM Editor release really tackles this head on by providing a powerful and simple toolset. Firstly, you can visually browse your SQL tables and then drag and drop from columns and tables into the XML structure editor. This gives you a code-free method of describing the transformation you require, so you do not need to know about the vagaries of XML and XSD schema syntax. Secondly, you can map directly into existing industry domain XML exchange structures in the XML visual editor; again, no need to wrestle with XSD schema - you have WYSIWYG visual control over what your output will look like. If you do not have a target XML structure and need to build one from scratch, then the CAM Editor makes this simple. Switch the SQL viewer into designer mode, then take your existing SQL table and drag and drop it into the XML structure editor. Automatically, the XML wizard tool will take your SQL column names and definitions and create equivalent XML for you and insert the mappings. Simply save the structure template and run the Open Data generator menu option, and your XML is built for you. Completely code-free, template-driven development. To see this in action, see our video demonstration links, then download the tools and samples and try it yourself.

  • Design practice for securing data inside Azure SQL

    - by Sid
    Update: I'm looking for a specific design practice as we try to build our own database encryption. Azure SQL doesn't support many of the encryption features found in SQL Server (table and column encryption). We need to store some sensitive information that needs to be encrypted, and we've rolled our own using AesCryptoServiceProvider to encrypt/decrypt data to/from the database. This solves the immediate issue (no cleartext in the db) but poses other problems, like:

    - Key rotation: we have to roll our own code for this, walking through the db converting old cipher text into new cipher text.
    - Metadata mapping of which tables and which columns are encrypted. This is simple when it's just a couple of columns (send an email to all devs/document it), but that quickly gets out of hand...

    So, what is the best practice for doing application-level encryption into a database that doesn't support encryption? In particular, what is a good design to solve the above two bullet points? If you have specific schema additions, I would love it if you could give details ("Have a NVARCHAR(max) column to store the cipher metadata as JSON", or a SQL script/commands). If someone would like to recommend a library, I'd be happy to stay away from "DIY" too. Before going too deep - I assume there isn't any way I can add encryption support to Azure by creating a stored procedure, right?
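
    For the key-rotation bullet specifically, one hedged sketch (my own layout, not an established library): prefix each ciphertext with a key-version byte and a random IV, so a background job can find rows encrypted under old key versions and re-encrypt them.

    using System;
    using System.Security.Cryptography;

    public static class ColumnCrypto
    {
        public static byte[] Encrypt(byte[] plaintext, byte[] key, byte keyVersion)
        {
            using (var aes = new AesCryptoServiceProvider { Key = key })
            {
                aes.GenerateIV();
                using (ICryptoTransform enc = aes.CreateEncryptor())
                {
                    byte[] cipher = enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
                    // Layout (assumed, not standard): [version][16-byte IV][ciphertext]
                    var output = new byte[1 + aes.IV.Length + cipher.Length];
                    output[0] = keyVersion;
                    Buffer.BlockCopy(aes.IV, 0, output, 1, aes.IV.Length);
                    Buffer.BlockCopy(cipher, 0, output, 1 + aes.IV.Length, cipher.Length);
                    return output;
                }
            }
        }

        public static byte[] Decrypt(byte[] blob, Func<byte, byte[]> keyForVersion)
        {
            byte version = blob[0];
            using (var aes = new AesCryptoServiceProvider { Key = keyForVersion(version) })
            {
                var iv = new byte[16];
                Buffer.BlockCopy(blob, 1, iv, 0, 16);
                aes.IV = iv;
                using (ICryptoTransform dec = aes.CreateDecryptor())
                    return dec.TransformFinalBlock(blob, 17, blob.Length - 17);
            }
        }
    }

    Key rotation then becomes: select rows whose first byte is an old version, decrypt with the old key, and re-encrypt with the current one.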

  • A tip: Updating Data in SharePoint 2010 using REST API

    - by Sahil Malik
    SharePoint 2010 Training: more information Here is a little tip that will save you hours of head scratching. See, there are two ways to update data in SharePoint using the REST-based API. A PUT request is used to update an entire entity. If no values are specified for fields in the entity, the fields will be set to default values. A MERGE request is used to update only those field values that have changed. Any fields that are not specified by the operation will remain set to their current value. Now, sit back and think about it. You are going to update the entire entity! Hmm. Which means, you need to a) specify every column value, and b) ensure that the read-only values match what was supplied to you. What a pain in the donkey! So 99/100 times, a PUT request will give you "An HTTP 500 internal server error occurred", which is just so helpful.
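
    As a hedged illustration (the site URL, list name, item id, and field are hypothetical placeholders; SharePoint 2010's ListData.svc is assumed as the REST endpoint), issuing a MERGE from C# looks roughly like this:

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class MergeExample
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create(
                "http://server/_vti_bin/ListData.svc/Tasks(1)");
            request.Method = "MERGE";                    // update only the fields in the body
            request.ContentType = "application/json";
            request.Headers["If-Match"] = "*";           // or the item's ETag for concurrency checks
            request.Credentials = CredentialCache.DefaultCredentials;

            byte[] body = Encoding.UTF8.GetBytes("{ \"Title\": \"Updated title\" }");
            request.ContentLength = body.Length;
            using (Stream s = request.GetRequestStream())
                s.Write(body, 0, body.Length);

            using (var response = (HttpWebResponse)request.GetResponse())
                Console.WriteLine(response.StatusCode);  // expect 204 No Content on success
        }
    }

    Only Title changes; every other column, read-only ones included, keeps its current value, which is exactly why MERGE avoids the HTTP 500s described above.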

  • Data Access Objects old fashioned? [on hold]

    - by Bono
    A couple of weeks ago I delivered some work for a university project. After a code review with some teachers I got some snarky remarks about the fact that I was (still) using Data Access Objects. The teacher in question mentions the use of DAOs in his classes and always says something along the lines of "Back then we always used DAOs". He's a big fan of Object Relational Mapping, which I also think is a great tool. When I was talking about this with some of my fellow students, they also mentioned that they prefer the use of ORM, which I can understand. It did make me wonder though: is using DAOs really so old-fashioned? I know that at my work DAOs are still being used, but this is due to the fact that some of the code is rather old and therefore can't be coupled with ORM. We also use ORM at my work. Trying to find some more information on Google or Stack Exchange sites didn't really enlighten me. Should I step away from the use of DAOs and only start implementing ORM? I just feel that ORM can be a bit overkill for some simple projects. I'd love to hear your opinions (or facts) about this.

  • Simplifying data search using .NET

    - by Peter
    An example on the asp.net site shows how to use LINQ to create a search feature on a music album site using MVC. The code looks like this:

    public ActionResult Index(string movieGenre, string searchString)
    {
        var GenreLst = new List<string>();
        var GenreQry = from d in db.Movies
                       orderby d.Genre
                       select d.Genre;
        GenreLst.AddRange(GenreQry.Distinct());
        ViewBag.movieGenre = new SelectList(GenreLst);

        var movies = from m in db.Movies select m;
        if (!String.IsNullOrEmpty(searchString))
        {
            movies = movies.Where(s => s.Title.Contains(searchString));
        }
        if (!string.IsNullOrEmpty(movieGenre))
        {
            movies = movies.Where(x => x.Genre == movieGenre);
        }
        return View(movies);
    }

    I have seen similar examples in other tutorials, and I have tried them in a real-world business app that I develop/maintain. In practice this pattern doesn't seem to scale well, because as the search criteria expand I keep adding more and more conditions, which looks and feels unpleasant/repetitive. How can I refactor this pattern? One idea I have is to create a column in every table that is "searchable", which could be a computed column that concatenates all the data from the different columns (SQL Server 2008). So instead of having movie genre and title it would be something like:

    if (!String.IsNullOrEmpty(searchString))
    {
        movies = movies.Where(s => s.SearchColumn.Contains(searchString));
    }

    What are the performance/design/architecture implications of doing this? I have also tried using procedures that use dynamic queries, but then I have just moved the ugliness to the database. E.g.:

    CREATE PROCEDURE [dbo].[search_music]
        @title as varchar(50),
        @genre as varchar(50)
    AS
        -- set the variables to null if they are empty
        IF @title = '' SET @title = null
        IF @genre = '' SET @genre = null
        SELECT m.*
        FROM view_Music as m
        WHERE (title = @title OR @title IS NULL)
          AND (genre LIKE '%' + @genre + '%' OR @genre IS NULL)
        ORDER BY Id desc
        OPTION (RECOMPILE)

    Any suggestions? Tips?
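
    One way to tame the repetition, as a hedged sketch (WhereIf is my own helper name, not a framework method), is a small extension method so each optional criterion collapses to one line:

    using System;
    using System.Linq;
    using System.Linq.Expressions;

    public static class QueryableExtensions
    {
        // Applies the predicate only when the caller supplied a value,
        // so optional filters compose without if-blocks at the call site.
        public static IQueryable<T> WhereIf<T>(
            this IQueryable<T> source,
            bool condition,
            Expression<Func<T, bool>> predicate)
        {
            return condition ? source.Where(predicate) : source;
        }
    }

    The action body then becomes a single chain: db.Movies.WhereIf(!String.IsNullOrEmpty(searchString), s => s.Title.Contains(searchString)).WhereIf(!String.IsNullOrEmpty(movieGenre), x => x.Genre == movieGenre).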

  • 3D Box Collision Data Import

    - by cboe
    I'm trying to implement a collision system using oriented bounding boxes, using a center for the box, its extents as a 3D vector, and a rotation matrix, which is all stuff I picked up online and seems to be somewhat the standard. Detecting the center is no problem, so I'm going to leave that out here. My problem however is importing the data from a 3D file. Say I've placed a box with 2 units length on each side, aligned to the world axes. The logical result here is extents of 1,1,1, and I use an identity matrix for rotation - easy. However I'm stuck when I rotate the box in the 3D program, say 30 degrees on each axis. How would I parse the box? I only have the 8 vertices as information, and I guess what I would need to do is to find out the rotation of said box, apply it to the vertices so they are aligned to the world axes, and then calculate the extents out of that. How do I get the rotation of the box when I only have the vertex information of the box available?
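
    A hedged sketch of one route, under an assumption the exporter must guarantee: that the vertex ordering is known, so the three edges sharing vertex 0 give the box's local axes (XNA-style Vector3/Matrix types assumed; any math library works):

    using Microsoft.Xna.Framework; // assumed vector/matrix types

    public static class ObbImport
    {
        // Assumes verts[1], verts[2], and verts[4] are the corners adjacent
        // to verts[0] - a common, but exporter-specific, vertex ordering.
        public static void FromVertices(Vector3[] verts,
            out Vector3 extents, out Matrix rotation)
        {
            Vector3 ex = verts[1] - verts[0]; // local X edge
            Vector3 ey = verts[2] - verts[0]; // local Y edge
            Vector3 ez = verts[4] - verts[0]; // local Z edge

            // Half the edge lengths are the extents.
            extents = new Vector3(ex.Length() / 2f, ey.Length() / 2f, ez.Length() / 2f);

            // The normalized edges, as the matrix rows, form the rotation.
            ex.Normalize(); ey.Normalize(); ez.Normalize();
            rotation = new Matrix(
                ex.X, ex.Y, ex.Z, 0,
                ey.X, ey.Y, ey.Z, 0,
                ez.X, ez.Y, ez.Z, 0,
                0,    0,    0,    1);
        }
    }

    If the ordering is unknown, picking the three edges from any corner whose pairwise dot products are near zero works too, since box edges meeting at a corner are mutually perpendicular.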

  • Handling timeout in network application

    - by user2175831
    How can I handle timeouts in a network application? I'm implementing a provisioning system on a Linux server. The code is huge, so I'm going to describe the algorithm; it works like this:

    1. Read provisioning commands from a file.
    2. Send each one to another server using TCP.
    3. Save the request in a hash.
    4. Receive the response: if a successful response is received, remove the request from the hash; if a failed response is received, retry the message.

    The problem I'm in now is that when the program doesn't receive a response (for timeout reasons), the request will wait for a response forever and won't be retried. Please note that I'll be sending hundreds of commands, and I have to monitor the timeouts for all of them. I tried to use a timer, but that didn't help because I'd end up with so many waiting timers, and I'm not sure that is a good way of doing this. The question is: how can I save the message in some data structure and check it later, so I can remove or retry it when there is no response from the other end? Please note that I'm willing to change the algorithm to anything you suggest that could deal with the timeouts.
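
    As a hedged sketch (C# for illustration; all names are mine): keep each outstanding request in a dictionary keyed by id with its send time, and let one periodic sweep retry anything older than the timeout, instead of one timer per request:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    public class PendingRequests
    {
        private class Entry { public string Command; public DateTime SentAt; public int Attempts; }

        private readonly ConcurrentDictionary<string, Entry> pending =
            new ConcurrentDictionary<string, Entry>();
        private readonly TimeSpan timeout = TimeSpan.FromSeconds(30);
        private readonly Timer sweeper; // kept as a field so it isn't collected

        public PendingRequests(Action<string> resend)
        {
            // A single timer sweeps all outstanding requests every 5 seconds.
            sweeper = new Timer(_ =>
            {
                foreach (var kv in pending)
                {
                    if (DateTime.UtcNow - kv.Value.SentAt > timeout)
                    {
                        kv.Value.SentAt = DateTime.UtcNow;
                        kv.Value.Attempts++;
                        resend(kv.Value.Command);   // retry the timed-out message
                    }
                }
            }, null, TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
        }

        public void Track(string id, string command) =>
            pending[id] = new Entry { Command = command, SentAt = DateTime.UtcNow };

        public void Complete(string id) => pending.TryRemove(id, out _);
    }

    One timer serves any number of outstanding requests, and Complete is O(1) when the matching response arrives.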

  • Custom fail2ban Filter

    - by Michael Robinson
    In my quest to block excessive failed phpMyAdmin login attempts with fail2ban, I've created a script that logs said failed attempts to a file: /var/log/phpmyadmin_auth.log

    Custom log

    The format of the /var/log/phpmyadmin_auth.log file is:

    phpMyadmin login failed with username: root; ip: 192.168.1.50; url: http://somedomain.com/phpmyadmin/index.php
    phpMyadmin login failed with username: ; ip: 192.168.1.50; url: http://192.168.1.48/phpmyadmin/index.php

    Custom filter

    [Definition]
    # Count all bans in the logfile
    failregex = phpMyadmin login failed with username: .*; ip: <HOST>;

    phpMyAdmin jail

    [phpmyadmin]
    enabled  = true
    port     = http,https
    filter   = phpmyadmin
    action   = sendmail-whois[name=HTTP]
    logpath  = /var/log/phpmyadmin_auth.log
    maxretry = 6

    The fail2ban log contains:

    2012-10-04 10:52:22,756 fail2ban.server : INFO Stopping all jails
    2012-10-04 10:52:23,091 fail2ban.jail : INFO Jail 'ssh-iptables' stopped
    2012-10-04 10:52:23,866 fail2ban.jail : INFO Jail 'fail2ban' stopped
    2012-10-04 10:52:23,994 fail2ban.jail : INFO Jail 'ssh' stopped
    2012-10-04 10:52:23,994 fail2ban.server : INFO Exiting Fail2ban
    2012-10-04 10:52:24,253 fail2ban.server : INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.8.6
    2012-10-04 10:52:24,253 fail2ban.jail : INFO Creating new jail 'ssh'
    2012-10-04 10:52:24,253 fail2ban.jail : INFO Jail 'ssh' uses poller
    2012-10-04 10:52:24,260 fail2ban.filter : INFO Added logfile = /var/log/auth.log
    2012-10-04 10:52:24,260 fail2ban.filter : INFO Set maxRetry = 6
    2012-10-04 10:52:24,261 fail2ban.filter : INFO Set findtime = 600
    2012-10-04 10:52:24,261 fail2ban.actions: INFO Set banTime = 600
    2012-10-04 10:52:24,279 fail2ban.jail : INFO Creating new jail 'ssh-iptables'
    2012-10-04 10:52:24,279 fail2ban.jail : INFO Jail 'ssh-iptables' uses poller
    2012-10-04 10:52:24,279 fail2ban.filter : INFO Added logfile = /var/log/auth.log
    2012-10-04 10:52:24,280 fail2ban.filter : INFO Set maxRetry = 5
    2012-10-04 10:52:24,280 fail2ban.filter : INFO Set findtime = 600
    2012-10-04 10:52:24,280 fail2ban.actions: INFO Set banTime = 600
    2012-10-04 10:52:24,287 fail2ban.jail : INFO Creating new jail 'fail2ban'
    2012-10-04 10:52:24,287 fail2ban.jail : INFO Jail 'fail2ban' uses poller
    2012-10-04 10:52:24,287 fail2ban.filter : INFO Added logfile = /var/log/fail2ban.log
    2012-10-04 10:52:24,287 fail2ban.filter : INFO Set maxRetry = 3
    2012-10-04 10:52:24,288 fail2ban.filter : INFO Set findtime = 604800
    2012-10-04 10:52:24,288 fail2ban.actions: INFO Set banTime = 604800
    2012-10-04 10:52:24,292 fail2ban.jail : INFO Jail 'ssh' started
    2012-10-04 10:52:24,293 fail2ban.jail : INFO Jail 'ssh-iptables' started
    2012-10-04 10:52:24,297 fail2ban.jail : INFO Jail 'fail2ban' started

    When I issue:

    sudo service fail2ban restart

    fail2ban emails me to say ssh has restarted, but I receive no such email about my phpmyadmin jail. Repeated failed logins to phpMyAdmin do not cause an email to be sent. Have I missed some critical setup? Is my filter's regular expression wrong?

    Update: added changes from default installation

    Starting with a clean fail2ban installation:

    cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

    Change the email address to my own, and the action to:

    action = %(action_mwl)s

    Append the following to jail.local:

    [phpmyadmin]
    enabled  = true
    port     = http,https
    filter   = phpmyadmin
    action   = sendmail-whois[name=HTTP]
    logpath  = /var/log/phpmyadmin_auth.log
    maxretry = 4

    Add the following to /etc/fail2ban/filter.d/phpmyadmin.conf:

    # phpmyadmin configuration file
    #
    # Author: Michael Robinson
    #
    [Definition]
    # Option: failregex
    # Notes.: regex to match the password failures messages in the logfile. The
    #         host must be matched by a group named "host". The tag "<HOST>" can
    #         be used for standard IP/hostname matching and is only an alias for
    #         (?:::f{4,6}:)?(?P<host>\S+)
    # Values: TEXT
    #
    # Count all bans in the logfile
    failregex = phpMyadmin login failed with username: .*; ip: <HOST>;

    # Option: ignoreregex
    # Notes.: regex to ignore. If this regex matches, the line is ignored.
    # Values: TEXT
    #
    # Ignore our own bans, to keep our counts exact.
    # In your config, name your jail 'fail2ban', or change this line!
    ignoreregex =

    Restart fail2ban:

    sudo service fail2ban restart

    PS: I like eggs

  • fail2ban custom action to permanent ban IPs from China

    - by John Magnolia
    When an IP address gets banned, how can I check whether the banned IP address is from China? If yes, then add it to the permanent ban list. I have found this nice guide which writes the banned IP to a file. Reason: I am getting a lot of brute force attacks from China daily; thankfully fail2ban is helping restrict this, although they appear to be getting worse and they are just changing their IP address. Even better would be if there were a maintained database of known hacker IP addresses.

    Example 1

    Hi, The IP 60.169.78.77 has just been banned by Fail2Ban after 4 attempts against vsftpd. Here are more information about 60.169.78.77:

    % [whois.apnic.net node-7]
    % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html
    inetnum:    60.166.0.0 - 60.175.255.255
    netname:    CHINANET-AH
    descr:      CHINANET anhui province network
    descr:      China Telecom
    descr:      A12,Xin-Jie-Kou-Wai Street
    descr:      Beijing 100088
    country:    CN
    admin-c:    CH93-AP
    tech-c:     JW89-AP
    mnt-by:     APNIC-HM
    mnt-routes: MAINT-CHINANET-AH
    mnt-lower:  MAINT-CHINANET-AH
    status:     ALLOCATED PORTABLE
    changed:    [email protected] 20040721
    source:     APNIC
    person:     Chinanet Hostmaster
    nic-hdl:    CH93-AP
    e-mail:     [email protected]
    address:    No.31 ,jingrong street,beijing
    address:    100032
    phone:      +86-10-58501724
    fax-no:     +86-10-58501724
    country:    CN
    changed:    [email protected] 20070416
    mnt-by:     MAINT-CHINANET
    source:     APNIC
    person:     Jinneng Wang
    address:    17/F, Postal Building No.120 Changjiang
    address:    Middle Road, Hefei, Anhui, China
    country:    CN
    phone:      +86-551-2659073
    fax-no:     +86-551-2659287
    e-mail:     [email protected]
    nic-hdl:    JW89-AP
    mnt-by:     MAINT-NEW
    changed:    [email protected] 19990818
    source:     APNIC

    Regards, Fail2Ban

    Example 2

    Hi, The IP 60.169.78.81 has just been banned by Fail2Ban after 4 attempts against vsftpd. Here are more information about 60.169.78.81:

    % [whois.apnic.net node-6]
    % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html
    inetnum:    60.166.0.0 - 60.175.255.255
    netname:    CHINANET-AH
    descr:      CHINANET anhui province network
    descr:      China Telecom
    descr:      A12,Xin-Jie-Kou-Wai Street
    descr:      Beijing 100088
    country:    CN
    admin-c:    CH93-AP
    tech-c:     JW89-AP
    mnt-by:     APNIC-HM
    mnt-routes: MAINT-CHINANET-AH
    mnt-lower:  MAINT-CHINANET-AH
    status:     ALLOCATED PORTABLE
    changed:    [email protected] 20040721
    source:     APNIC
    person:     Chinanet Hostmaster
    nic-hdl:    CH93-AP
    e-mail:     [email protected]
    address:    No.31 ,jingrong street,beijing
    address:    100032
    phone:      +86-10-58501724
    fax-no:     +86-10-58501724
    country:    CN
    changed:    [email protected] 20070416
    mnt-by:     MAINT-CHINANET
    source:     APNIC
    person:     Jinneng Wang
    address:    17/F, Postal Building No.120 Changjiang
    address:    Middle Road, Hefei, Anhui, China
    country:    CN
    phone:      +86-551-2659073
    fax-no:     +86-551-2659287
    e-mail:     [email protected]
    nic-hdl:    JW89-AP
    mnt-by:     MAINT-NEW
    changed:    [email protected] 19990818
    source:     APNIC

    Regards, Fail2Ban

    Example 3

    Hi, The IP 222.133.244.99 has just been banned by Fail2Ban after 4 attempts against vsftpd. Here are more information about 222.133.244.99:

    % [whois.apnic.net node-6]
    % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html
    inetnum:    222.133.244.96 - 222.133.244.127
    netname:    LCZFFHQ
    country:    CN
    descr:      liaochenggovermentfanghuoqiang
    admin-c:    DS95-AP
    tech-c:     DS95-AP
    status:     ASSIGNED NON-PORTABLE
    changed:    [email protected] 20060122
    mnt-by:     MAINT-CNCGROUP-SD
    source:     APNIC
    route:      222.132.0.0/14
    descr:      CNC Group CHINA169 Shandong Province Network
    country:    CN
    origin:     AS4837
    mnt-by:     MAINT-CNCGROUP-RR
    changed:    [email protected] 20060118
    source:     APNIC
    person:     Data Communication Bureau Shandong
    nic-hdl:    DS95-AP
    e-mail:     [email protected]
    address:    No.77 Jingsan Road,Jinan,Shandong,P.R.China
    phone:      +86-531-6052611
    fax-no:     +86-531-6052414
    country:    CN
    changed:    [email protected] 20050330
    mnt-by:     MAINT-CNCGROUP-SD
    source:     APNIC

    Regards, Fail2Ban

  • IIS 7.5 Manager crashes when adding a custom error page

    - by dig412
    I'm running a local IIS 7.5 server in Win 7 Pro, and I'm trying to add a custom error page for 403 responses. When I click OK to add a custom error page for my site, IIS Manager just vanishes. The server is still running, and I can re-start IIS Manager, but the new page has not been saved. I've also tried adding it directly to web.config, but that just gives me "The page cannot be displayed because an internal server error has occurred." Does anyone know why this might be happening? Edit: The event log implies that an invalid character in the path caused the crash, but it occurred even when I copied and pasted a path from a valid entry.

    Application error log:

    IISMANAGER_CRASH
    IIS Manager terminated unexpectedly.
    Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentException: Illegal characters in path.
       at System.IO.Path.CheckInvalidPathChars(String path)
       at System.IO.Path.IsPathRooted(String path)
       at Microsoft.Web.Management.Iis.CustomErrors.CustomErrorsForm.OnAccept()
       at Microsoft.Web.Management.Client.Win32.TaskForm.OnOKButtonClick(Object sender, EventArgs e)
       at System.Windows.Forms.Control.OnClick(EventArgs e)
       at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
       at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
       at System.Windows.Forms.Control.WndProc(Message& m)
       at System.Windows.Forms.ButtonBase.WndProc(Message& m)
       at System.Windows.Forms.Button.WndProc(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
       at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
       at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
       at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
       at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
       at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
       at System.Windows.Forms.Form.ShowDialog(IWin32Window owner)
       at Microsoft.Web.Management.Host.UserInterface.ManagementUIService.ShowDialogInternal(Form form, IWin32Window parent)
       at Microsoft.Web.Management.Host.UserInterface.ManagementUIService.Microsoft.Web.Management.Client.Win32.IManagementUIService.ShowDialog(DialogForm form)
       at Microsoft.Web.Management.Client.Win32.ModulePage.ShowDialog(DialogForm form)
       at Microsoft.Web.Management.Iis.CustomErrors.CustomErrorsPage.AddCustomError()
       --- End of inner exception stack trace ---
       at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
       at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
       at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
       at Microsoft.Web.Management.Client.TaskList.InvokeMethod(String methodName, Object userData)
       at Microsoft.Web.Management.Host.UserInterface.Tasks.MethodTaskItemLine.InvokeMethod()
       at System.Windows.Forms.LinkLabel.OnMouseUp(MouseEventArgs e)
       at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
       at System.Windows.Forms.Control.WndProc(Message& m)
       at System.Windows.Forms.Label.WndProc(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
       at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
       at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
       at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
       at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
       at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
       at Microsoft.Web.Management.Host.Shell.ShellApplication.Execute(Boolean localDevelopmentMode, Boolean resetPreferences, Boolean resetPreferencesNoLaunch)
    Process: InetMgr

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a 'global' application plan.

    Recently I was asked to write a blog post about the wait statistics in SQL Server, and since I had been thinking about writing it for quite some time now, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular data stream).

    As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let's dive into an example: let's say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10 Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: "My data is coming very slow."

    Now, let's move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let's say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) – and will travel over the network.
    On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled.

    Let's say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted into packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up.

    Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.

    Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by the 'GO' command, then there will be three different roundtrips.

    TDS packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. This is the number of packets sent from the client; in case the request is large, it may need more buffers, and eventually might even need more server roundtrips.

    TDS packets received from server – the TDS packets sent by the server to the client during the query execution.

    Bytes sent from client – the volume of the data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a procedure name + parameters, and this will minimize the network pressure.

    Bytes received from server – the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.

    Client processing time – the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client.

    Wait time on server replies – the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
    Total execution time – the sum of client processing time and wait time on server replies (the SQL Server internal processing time).

    Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large 'wait time on server replies' means the server took a long time to produce the very first row. This is usual for queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short 'wait time on server replies' means that the query was able to return the first row fast. A long 'client processing time', on the other hand, does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result, and this is how long it took until the very last row was returned.

    The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party's world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client.

    Here is another example: think of a similar setup as above, but add another server to the game. Let's say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each, our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don't mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of 'the big picture'. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it:

    - Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: 'why?'.
    - Monitor your network counters in Perfmon: Network Interface: Output queue length, Redirector: Network errors/sec, TCPv4: Segments retransmitted/sec and so on.
    - Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they 'teamed', what are the settings – full duplex and so on.
    - Find some time to read a bit about networking.

    In this short blog post I hope I have turned your attention to 'the big picture' and the fact that there are other factors affecting our SQL Server, aside from its internal workings.
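
    (Not part of Feodor's article, but as a hedged C# illustration of the Client Statistics categories above: ADO.NET exposes very similar per-connection counters programmatically, which is handy when Management Studio's Client Statistics pane isn't available. The connection string and query are placeholders.)

    using System;
    using System.Data.SqlClient;

    class ClientStatsDemo
    {
        static void Main()
        {
            using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
            {
                conn.StatisticsEnabled = true;   // start collecting per-connection counters
                conn.Open();

                using (var cmd = new SqlCommand("SELECT TOP 100 * FROM sys.objects", conn))
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read()) { /* consume the rows */ }

                var stats = conn.RetrieveStatistics();
                // Counters analogous to the Client Statistics pane:
                Console.WriteLine("ServerRoundtrips:  {0}", stats["ServerRoundtrips"]);
                Console.WriteLine("BytesSent:         {0}", stats["BytesSent"]);
                Console.WriteLine("BytesReceived:     {0}", stats["BytesReceived"]);
                Console.WriteLine("NetworkServerTime: {0} ms", stats["NetworkServerTime"]);
                Console.WriteLine("ExecutionTime:     {0} ms", stats["ExecutionTime"]);
            }
        }
    }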
    As further reading I would still highly recommend the Wait Stats series on this blog; also, I would recommend you have the coffee break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

  • encfs error while decoding the data

    - by migrator
    I have installed encfs and started using it to secure all my personal and office data, and it was working absolutely fine until 2 hours back. The setup is like this:

    1. I have a folder in my Copy folder called OfficeData, which gets synchronized with my Copy folder.
    2. When I log in to the system I use the command encfs ~/Copy/OfficeData ~/Documents/OfficeData
    3. Once my work is over I unmount with the command fusermount -u ~/Documents/OfficeData
    4. All this data gets synchronized with my desktop and with my mobile phone (as a backup).

    Today when I mounted, the folder got mounted but no directories or files were present in it. I was worried and read man encfs, which led me to run the command encfs -v -f ~/Copy/OfficeData ~/Documents/OfficeData 2> encfs-OfficeData-report.txt. Below is the output of the file encfs-OfficeData-report.txt:

    The directory "/home/sri/Documents/OfficeData" does not exist. Should it be created? (y,n) 13:16:26 (main.cpp:523) Root directory: /home/sri/Copy/OfficeData/ 13:16:26 (main.cpp:524) Fuse arguments: (fg) (threaded) (keyCheck) encfs /home/sri/Documents/OfficeData -f -s -o use_ino -o default_permissions 13:16:26 (FileUtils.cpp:177) version = 20 13:16:26 (FileUtils.cpp:181) found new serialization format 13:16:26 (FileUtils.cpp:199) subVersion = 20100713 13:16:26 (Interface.cpp:165) checking if ssl/aes(3:0:2) implements ssl/aes(3:0:0) 13:16:26 (SSL_Cipher.cpp:370) allocated cipher ssl/aes, keySize 32, ivlength 16 13:16:26 (Interface.cpp:165) checking if ssl/aes(3:0:2) implements ssl/aes(3:0:0) 13:16:26 (SSL_Cipher.cpp:370) allocated cipher ssl/aes, keySize 32, ivlength 16 13:16:26 (FileUtils.cpp:1620) useStdin: 0 13:16:46 (Interface.cpp:165) checking if ssl/aes(3:0:2) implements ssl/aes(3:0:0) 13:16:46 (SSL_Cipher.cpp:370) allocated cipher ssl/aes, keySize 32, ivlength 16 13:16:49 (FileUtils.cpp:1628) cipher key size = 52 13:16:49 (Interface.cpp:165) checking if nameio/block(3:0:1) implements nameio/block(3:0:0) 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/UWbT-M-UKk1JpvNfN5uvOhGn: No such file or directory 13:16:49 (CipherFileIO.cpp:105) in setIV, current IV = 0, new IV = 4188221457101129840, fileIV = 0 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/UWbT-M-UKk1JpvNfN5uvOhGn 13:16:49 (encfs.cpp:134) getattr
/home/sri/Copy/OfficeData/UWbT-M-UKk1JpvNfN5uvOhGn 13:16:49 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/UWbT-M-UKk1JpvNfN5uvOhGn: No such file or directory 13:16:49 (encfs.cpp:138) getattr error: No such file or directory 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/o94olxB3orqarqyFviHKZ,ZF: No such file or directory 13:16:49 (CipherFileIO.cpp:105) in setIV, current IV = 0, new IV = 16725694203599486310, fileIV = 0 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/o94olxB3orqarqyFviHKZ,ZF 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/o94olxB3orqarqyFviHKZ,ZF 13:16:49 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/o94olxB3orqarqyFviHKZ,ZF: No such file or directory 13:16:49 (encfs.cpp:138) getattr error: No such file or directory 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/tVglci2rgp9o8qE-m9AvX6JNj1lQs-ER0OvnxfOb30Z,3,: No such file or directory 13:16:49 (CipherFileIO.cpp:105) in setIV, current IV = 0, new IV = 1354483141023495884, fileIV = 0 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/tVglci2rgp9o8qE-m9AvX6JNj1lQs-ER0OvnxfOb30Z,3, 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/tVglci2rgp9o8qE-m9AvX6JNj1lQs-ER0OvnxfOb30Z,3, 13:16:49 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/tVglci2rgp9o8qE-m9AvX6JNj1lQs-ER0OvnxfOb30Z,3,: No such file or directory 13:16:49 (encfs.cpp:138) getattr error: No such file or directory 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/r1KIEqVkz-,7-6CobavHCSNn: No such file or directory 13:16:49 (CipherFileIO.cpp:105) in setIV, current IV = 0, new IV = 16720606331386655431, fileIV = 0 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/r1KIEqVkz-,7-6CobavHCSNn 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/r1KIEqVkz-,7-6CobavHCSNn 13:16:49 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/r1KIEqVkz-,7-6CobavHCSNn: No such file or directory 13:16:49 (encfs.cpp:138) getattr error: No such file or directory 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 
(DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, 
macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created 
FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 
(encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:16:49 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:16:49 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:16:49 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:16:49 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:16:49 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:16:49 (FileNode.cpp:127) calling setIV on (null) 13:16:49 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/r1KIEqVkz-,7-6CobavHCSNn: No such file or directory 13:16:49 (CipherFileIO.cpp:105) in setIV, current IV = 0, new IV = 16720606331386655431, fileIV = 0 13:16:49 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/r1KIEqVkz-,7-6CobavHCSNn 13:16:49 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/r1KIEqVkz-,7-6CobavHCSNn 13:16:49 
(RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/r1KIEqVkz-,7-6CobavHCSNn: No such file or directory 13:16:49 (encfs.cpp:138) getattr error: No such file or directory 13:19:31 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:19:31 (FileNode.cpp:127) calling setIV on (null) 13:19:31 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/ 13:19:31 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/ 13:19:31 (encfs.cpp:685) doing statfs of /home/sri/Copy/OfficeData 13:19:32 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:19:32 (FileNode.cpp:127) calling setIV on (null) 13:19:32 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/LuT8R,DlpRnNH9b,fjWiKHKc: No such file or directory 13:19:32 (CipherFileIO.cpp:105) in setIV, current IV = 0, new IV = 13735228085838055696, fileIV = 0 13:19:32 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/LuT8R,DlpRnNH9b,fjWiKHKc 13:19:32 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/LuT8R,DlpRnNH9b,fjWiKHKc 13:19:32 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/LuT8R,DlpRnNH9b,fjWiKHKc: No such file or directory 13:19:32 (encfs.cpp:138) getattr error: No such file or directory 13:19:32 (encfs.cpp:685) doing statfs of /home/sri/Copy/OfficeData 13:19:32 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:19:32 (FileNode.cpp:127) calling setIV on (null) 13:19:32 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/UWbT-M-UKk1JpvNfN5uvOhGn: No such file or directory 13:19:32 (CipherFileIO.cpp:105) in setIV, current IV = 0, new IV = 4188221457101129840, fileIV = 0 13:19:32 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/UWbT-M-UKk1JpvNfN5uvOhGn 13:19:32 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/UWbT-M-UKk1JpvNfN5uvOhGn 13:19:32 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/UWbT-M-UKk1JpvNfN5uvOhGn: No such file or directory 13:19:32 (encfs.cpp:138) getattr error: No such file or directory 13:19:32 (MACFileIO.cpp:75) fs block size = 1024, macBytes = 8, randBytes = 0 13:19:32 (FileNode.cpp:127) calling setIV on (null) 13:19:32 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/o94olxB3orqarqyFviHKZ,ZF: No such file or directory 13:19:32 (CipherFileIO.cpp:105) in setIV, current IV = 0, new IV = 16725694203599486310, fileIV = 0 13:19:32 (DirNode.cpp:770) created FileNode for /home/sri/Copy/OfficeData/o94olxB3orqarqyFviHKZ,ZF 13:19:32 (encfs.cpp:134) getattr /home/sri/Copy/OfficeData/o94olxB3orqarqyFviHKZ,ZF 13:19:32 (RawFileIO.cpp:191) getAttr error on /home/sri/Copy/OfficeData/o94olxB3orqarqyFviHKZ,ZF: No such file or directory 13:19:32 (encfs.cpp:138) getattr error: No such file or directory 13:19:32 (encfs.cpp:213) getdir on /home/sri/Copy/OfficeData/ 13:19:32 (BlockNameIO.cpp:185) padding, _bx, finalSize = 208, 16, -192 13:19:32 (DirNode.cpp:132) error decoding filename: eWJrLh2dRFAY-7Brbsc,mTqf 13:19:32 (DirNode.cpp:132) error decoding filename: .encfs6.xml 13:19:32 (BlockNameIO.cpp:185) padding, _bx, finalSize = 218, 16, -202 13:19:32 (DirNode.cpp:132) error decoding filename: pvph9DkZ0BMPg2vN4UcfwuNU 13:24:10 (openssl.cpp:48) Allocating 41 locks for OpenSSL Please help me Thanks in advance.

    Read the article

  • Code Structure / Level Design: Plants vs Zombies game level dissection

    - by lalan
    Hi Friends, I am interested in learning the class structure of Plants vs Zombies, particularly level design; for those who haven't played it, this video contains a nice play-through: http://www.youtube.com/watch?v=89DfdOIJ4xw. How would I go about designing the code (mostly the structure and classes) to allow for maximum flexibility and clean development? I am familiar with data-driven design concepts, and would use events to handle most of the dynamic behavior.

    Dissection at the macro level:

    (Once every level) Load tilemap, props, etc. -- basically build the map
    (Once every level) Camera movement -- might be considered a short cut-scene
    (Once every level) Show the enemies you'll face during the present level
    (Once every level) Unit selection window/panel -- selection of defensive plants
    (Once every level) Camera movement -- might be considered a short cut-scene
    (Once every level) HUD creation -- based on unit selection
    (Level loop) Enemy creation -- based on the types of zombies allowed
    (Level loop) Sun/resource generation
    (Level loop) Show messages like 'huge wave of zombies coming', 'final wave'
    (Level loop) Other unique events -- spawn gifts, money, tombstones, etc.
    (Once every level) Unlock a new plant

    Potential game scripts:

    a) Level definitions: Level_1_1.xml, Level_1_2.xml, etc. Sample Level_1_1.xml:

        <map>
            <tilemap>tilemapFrontLawn</tilemap>
            <spawnPoints>tiles where a particular type of zombie (land vs. water) may spawn</spawnPoints>
            <props>position, entity array -- lawnmower</props>
        </map>
        <zombies> ... list of zombies which are going to attack, by id ... </zombies>
        <plants> ... list of plants which are available for defense, by id ... </plants>
        <progression>
            <ZombieWave name='first wave' spawnScript='zombieLightWave.lua' unlock='null'>
                <startMessages time='1.5'>Ready</startMessages>
                <endMessages time='1.5'>Huge wave of zombies incoming</endMessages>
            </ZombieWave>
        </progression>

    b) Entity definitions: .xml files containing descriptions of zombies, plants, sun, lawnmower, coins, etc.

    Potential classes:

        // LevelManager - based on the level under play, it loads the level script.
        // A few of the functions it may have:
        class LevelManager
        {
        public:
            bool load(string levelFileName);
            bool enter();
            bool update(float deltatime);
            bool exit();
        private:
            LevelData* mLevelData;
        };

        // LevelData - contains the details of the level loaded by LevelManager.
        class LevelData
        {
        private:
            string file;
            // arrays of cameras, dialogs, attack waves, etc. in the active level
            LevelCutSceneCamera** mArrayCutSceneCamera;
            LevelCutSceneDialog** mArrayCutSceneDialog;
            LevelAttackWave** mArrayAttackWave;
            ....
            // which camera, dialog, attack wave is active in the level
            uint mCursorCutSceneCamera;
            uint mCursorCutSceneDialog;
            uint mCursorAttackWave;
        public:
            // based on the cursor, get the next camera, dialog, attack wave, etc.
            // in the active level; returns true/false for success/failure
            bool nextCutSceneCamera(LevelCutSceneCamera**);
            bool nextCutSceneDialog(LevelCutSceneDialog**);
        };

        // LevelUnderPlay - driven by LevelManager
        class LevelUnderPlay
        {
        private:
            LevelCutSceneCamera* mCutSceneCamera;
            LevelCutSceneDialog* mCutSceneDialog;
            LevelAttackWave* mAttackWave;
            Entities** mSelectedPlants;
            Entities** mAllowedZombies;
            bool isCutSceneCameraActive;
        public:
            bool enter();
            bool update(float deltatime);
            bool exit();
        };

    I am totally confused.. :( Does it make sense to use class composition (a flat class hierarchy) for managing levels? Is it a good idea to just add/remove/update sprites (or any drawable stuff) in the current scene from LevelManager or LevelUnderPlay? If I want to make the level design non-linear, how should I go about it? Perhaps I would need a LevelProgression class, which would decide what to do based on a decision tree. Any suggestions would be appreciated very much. Thanks for your time, lalan

    Read the article

  • SQLAuthority News – New Book Released – SQL Server Interview Questions And Answers

    - by pinaldave
    Two days ago, on the birthday of my blog, I asked a simple question – Guess! What is in this box? I have received lots of interesting comments on the blog about what is in it. Many of you got it absolutely incorrect and many got close to the right answer, but no one got it 100% correct. Well, no issue at all; I am going to give away the prize to whoever had the closest answer first, in a personal email. Here is the answer to the question about what is in the box? Here it is – the box has my new book. In fact, I should say our new book, as I co-authored this book with my very good friend Vinod Kumar. We had a real blast writing this book together and had lots of interesting conversations while we were writing it. This book has one simple goal – "master the basics." This book is not only for people who are preparing for interviews. This book is for everyone who wants to revisit the basics and prepare themselves for the technology. One always needs practical knowledge to do their duty efficiently. This book talks about more than the basics. There are multiple ways to present learning – either we can create a simple book or we can make it interesting. We decided the learning should be interactive and have opted for an Interview Questions and Answers format. Here is a quick interview which we have done together. Details of the book are here. The core concept of this book will continue to evolve over time. I am sure many of you will come along with us on this journey and submit your suggestions to us to make this book a key reference for anybody who wants to start with SQL Server. Today we want to acknowledge the fact that you will help us keep this book alive forever with the latest updates. We want to thank everyone who participates in this journey with us. You can get the book from [Amazon] | [Flipkart]. Read Vinod's blog post. Do not forget to wish him happy birthday, as today is his birthday and also the book release day – two reasons to congratulate him. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Data Warehousing, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Windows Azure: Major Updates for Mobile Backend Development

    - by ScottGu
    This week we released some great updates to Windows Azure that make it significantly easier to develop mobile applications that use the cloud. These new capabilities include:

    - Mobile Services: Custom API support
    - Mobile Services: Git Source Control support
    - Mobile Services: Node.js NPM Module support
    - Mobile Services: A .NET API via NuGet
    - Mobile Services and Web Sites: Free 20MB SQL Database option
    - Notification Hubs: Android Broadcast Push Notification support

    All of these improvements are now available to use immediately (note: some are still in preview). Below are more details about them.

    Mobile Services: Custom APIs, Git Source Control, and NuGet

    Windows Azure Mobile Services provides the ability to easily stand up a mobile backend that can be used to support your Windows 8, Windows Phone, iOS, Android and HTML5 client applications. Starting with the first preview we supported the ability to easily extend your backend data logic with server-side scripting that executes as part of client-side CRUD operations against your cloud backend data tables.

    With today's update we are extending this support even further and introducing the ability for you to also create and expose Custom APIs from your Mobile Service backend, and easily publish them to your mobile clients without having to associate them with a data table. This capability enables a whole set of new scenarios, including the ability to work with data sources other than SQL Databases (for example: Table Services or MongoDB), broker calls to 3rd party APIs, integrate with Windows Azure Queues or Service Bus, work with custom non-JSON payloads (e.g. Windows Periodic Notifications), route client requests to services back on-premises (e.g. with the new Windows Azure BizTalk Services), or simply implement functionality that doesn't correspond to a database operation. The Custom APIs can be written in server-side JavaScript (using Node.js) and can use Node's NPM packages. We will also be adding support for Custom APIs written using .NET in the future.

    Creating a Custom API

    Adding a Custom API to an existing Mobile Service is super easy. Using the Windows Azure Management Portal you can now simply click the new "API" tab within your Mobile Service, and then click the "Create a Custom API" button to create a new Custom API within it. Give the API whatever name you want to expose, and then choose the security permissions you'd like to apply to the HTTP methods you expose within it. You can easily lock down the HTTP verbs of your Custom API to be available to anyone, only those who have a valid application key, only authenticated users, or administrators. Mobile Services will then enforce these permissions without you having to write any code.

    When you click the OK button you'll see the new API show up in the API list. Selecting it will enable you to edit the default script, which contains some placeholder functionality. Today's release enables Custom APIs to be written using Node.js (we will support writing Custom APIs in .NET as well in a future release), and the Custom API programming model follows the Node.js convention for modules, which is to export functions to handle HTTP requests. The default script exposes functionality for an HTTP POST request. To support a GET, simply change the export statement accordingly.
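    For example, below is a minimal sketch of the kind of custom API script you could write for reading and returning data from Windows Azure Table Storage using the Azure Node API (the storage account name, key, and the 'todoitem' table name here are placeholder assumptions, not values from a real service):

        // api/todos.js - custom API script whose exports.get handles HTTP GET requests
        // and reads rows from a Table Storage table. Credentials and table name are placeholders.
        var azure = require('azure');

        exports.get = function (request, response) {
            var tableService = azure.createTableService('<storage account name>', '<storage account key>');
            var query = azure.TableQuery.select().from('todoitem');
            tableService.queryEntities(query, function (error, todoItems) {
                if (error) {
                    // Log the failure and return an HTTP 500 to the client
                    console.error("Error querying table: " + error);
                    response.send(500);
                } else {
                    // Return the rows as the JSON body of a 200 response
                    response.send(200, todoItems);
                }
            });
        };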
    After saving the changes, you can now call this API from any Mobile Service client application (including Windows 8, Windows Phone, iOS, Android or HTML5 with CORS). Below is the code for how you could invoke the API asynchronously from a Windows Store application using .NET and the new InvokeApiAsync method, and data-bind the results to a control within your XAML:

        private async void RefreshTodoItems()
        {
            var results = await App.MobileService.InvokeApiAsync<List<TodoItem>>("todos", HttpMethod.Get, parameters: null);
            ListItems.ItemsSource = new ObservableCollection<TodoItem>(results);
        }

    Integrating authentication and authorization with Custom APIs is really easy with Mobile Services. Just like with data requests, Custom API requests enjoy the same built-in authentication and authorization support of Mobile Services (including integration with Microsoft ID, Google, Facebook and Twitter authentication providers), and you can also easily integrate your Custom API code with other Mobile Service capabilities like push notifications, logging, SQL, etc. Check out our new tutorials to learn more about how to use the new Custom API support, and start adding Custom APIs to your app today.

    Mobile Services: Git Source Control Support

    Today's Mobile Services update also enables source control integration with Git. The new source control support provides a Git repository as part of your Mobile Service, and it includes all of your existing Mobile Service scripts and permissions. You can clone that Git repository on your local machine, make changes to any of your scripts, and then easily deploy the Mobile Service to production using Git. This enables a really great developer workflow that works on any developer machine (Windows, Mac and Linux).

    To use the new support, navigate to the dashboard for your Mobile Service and select the "Set up source control" link. If this is your first time enabling Git within Windows Azure, you will be prompted to enter the credentials you want to use to access the repository. Once you configure this, you can switch to the Configure tab of your Mobile Service and you will see a Git URL you can use for your repository. You can use this URL to clone the repository locally from your favorite command line:

        > git clone https://scottgutodo.scm.azure-mobile.net/ScottGuToDo.git

    The repository contains a service folder with several subfolders. Custom API scripts and associated permissions appear under the api folder as .js and .json files respectively (the .json files persist a JSON representation of the security settings for your endpoints). Similarly, table scripts and table permissions appear as .js and .json files, but since table scripts are separate per CRUD operation, they follow the naming convention of <tablename>.<operationname>.js. Finally, scheduled job scripts appear in the scheduler folder, and the shared folder is provided as a convenient location for you to store code shared by multiple scripts, plus a few miscellaneous things such as the APNS feedback script.
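    A rough sketch of that layout (the file names are illustrative, and the table-scripts folder name is inferred from the naming convention above):

        service/
            api/
                todos.js            <-- custom API script
                todos.json          <-- permissions for the custom API
            table/
                todoitem.read.js    <-- one script per CRUD operation
                todoitem.insert.js
                todoitem.json       <-- table permissions
            scheduler/
                cleanup.js          <-- a scheduled job script (hypothetical name)
            shared/
                ...                 <-- shared code, the APNS feedback script, etc.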
    Let's modify the table script todos.js file so that we have slightly better error handling when an exception occurs while querying our Table service:

    todos.js

        tableService.queryEntities(query, function (error, todoItems) {
            if (error) {
                console.error("Error querying table: " + error);
                response.send(500);
            } else {
                response.send(200, todoItems);
            }
        });

    Save these changes, and now back at the command line prompt commit the changes and push them to the Mobile Service:

        > git add .
        > git commit -m "better error handling in todos.js"
        > git push

    Once deployment of the changes is complete, they will take effect immediately, and you will also see the changes reflected in the portal. With the new Source Control feature, we're making it really easy for you to edit your Mobile Service locally and push changes in an atomic fashion without sacrificing ease of use in the Windows Azure Portal.

    Mobile Services: NPM Module Support

    The new Mobile Services source control support also allows you to add any Node.js module you need in your scripts beyond the fixed set provided by Mobile Services. For example, you can easily switch to use Mongo instead of Windows Azure Tables in our example above. Set up Mongo DB by either purchasing a MongoLab subscription (which provides MongoDB as a Service) via the Windows Azure Store or setting it up yourself on a Virtual Machine (either Windows or Linux). Then go to the service folder of your local Git repository and run the following command:

        > npm install mongoose

    This will add the Mongoose module to your Mobile Service scripts. After that you can use and reference the Mongoose module in your custom API scripts to access your Mongo database:

        var mongoose = require('mongoose');
        var schema = mongoose.Schema({ text: String, completed: Boolean });

        exports.get = function (request, response) {
            mongoose.connect('<your Mongo connection string>');
            var TodoItemModel = mongoose.model('todoitem', schema);
            TodoItemModel.find(function (err, items) {
                if (err) {
                    console.log('error:' + err);
                    return response.send(500);
                }
                response.send(200, items);
            });
        };

    Don't forget to push your changes to your Mobile Service once you are done:

        > git add .
        > git commit -m "Switched to use Mongo Labs"
        > git push

    Now our Mobile Service app is using Mongo DB! Note: with today's update, usage of custom Node.js modules is limited to Custom API scripts only. We will enable it in all scripts (including data and custom CRON tasks) shortly.

    New Mobile Services NuGet package, including .NET 4.5 support

    A few months ago we announced a new pre-release version of the Mobile Services client SDK based on portable class libraries (PCL). Today, we are excited to announce that this new library is now a stable .NET client SDK for Mobile Services and is no longer a pre-release package. Today's update includes full support for Windows Store, Windows Phone 7.x, and .NET 4.5, which allows developers to use Mobile Services from ASP.NET or WPF applications. You can install and use this package today via NuGet.

    Mobile Services and Web Sites: Free 20MB Database for Mobile Services and Web Sites

    Starting today, every customer of Windows Azure gets one free 20MB database to use for 12 months (for both dev/test and production) with Web Sites and Mobile Services.
    When creating a Mobile Service or a Web Site, simply choose the new "Create a new Free 20MB database" option to take advantage of it. You can use this free SQL Database together with the 10 free Web Sites and 10 free Mobile Services you get with your Windows Azure subscription, or from any other Windows Azure VM or Cloud Service.

    Notification Hubs: Android Broadcast Push Notification Support

    Earlier this year, we introduced a new capability in Windows Azure for sending broadcast push notifications at high scale: Notification Hubs. In the initial preview of Notification Hubs you could use this support with both iOS and Windows devices. Today we're excited to announce new Notification Hubs support for sending push notifications to Android devices as well.

    Push notifications are a vital component of mobile applications. They are critical not only in consumer apps, where they are used to increase app engagement and usage, but also in enterprise apps, where up-to-date information increases employee responsiveness to business events. You can use Notification Hubs to send push notifications to devices from any type of app (a Mobile Service, Web Site, Cloud Service or Virtual Machine).

    Notification Hubs provide you with the following capabilities:

    - Cross-platform Push Notifications Support. Notification Hubs provide a common API to send push notifications to iOS, Android, or Windows Store at once. Your app can send notifications in platform-specific formats or in a platform-independent way.
    - Efficient Multicast. Notification Hubs are optimized to enable push notification broadcast to thousands or millions of devices with low latency. Your server back-end can fire one message into a Notification Hub, and millions of push notifications can automatically be delivered to your users. Devices and apps can specify a number of per-user tags when registering with a Notification Hub. These tags do not need to be pre-provisioned or disposed of, and provide a very easy way to send filtered notifications to an unlimited number of users/devices with a single API call.
    - Extreme Scale. Notification Hubs enable you to reach millions of devices without having to re-architect or shard your application. The pub/sub routing mechanism allows you to broadcast notifications in a super-efficient way. This makes it incredibly easy to route and deliver notification messages to millions of users without having to build your own routing infrastructure.
    - Usable from any Backend App. Notification Hubs can be easily integrated into any back-end server app, whether it is a Mobile Service, a Web Site, a Cloud Service or an IaaS VM.

    It is easy to configure Notification Hubs to send push notifications to Android.
    Create a new Notification Hub within the Windows Azure Management Portal (New -> App Services -> Service Bus -> Notification Hub). Then register for Google Cloud Messaging using https://code.google.com/apis/console and obtain your API key, and simply paste that key on the Configure tab of your Notification Hub management page under the Google Cloud Messaging Settings.

    Then just add code to the OnCreate method of your Android app's MainActivity class to register the device with Notification Hubs:

        gcm = GoogleCloudMessaging.getInstance(this);
        String connectionString = "<your listen access connection string>";
        hub = new NotificationHub("<your notification hub name>", connectionString, this);
        String regid = gcm.register(SENDER_ID);
        hub.register(regid, "myTag");

    Now you can broadcast notifications from your .NET backend (or Node, Java, or PHP) to any Windows Store, Android, or iOS device registered with the "myTag" tag via a single API call (you can literally broadcast messages to millions of clients you have registered with just one API call):

        var hubClient = NotificationHubClient.CreateClientFromConnectionString(
                          "<your connection string with full access>",
                          "<your notification hub name>");

        hubClient.SendGcmNativeNotification("{ 'data' : { 'msg' : 'Hello from Windows Azure!' } }", "myTag");

    Notification Hubs provide an extremely scalable, cross-platform push notification infrastructure that enables you to efficiently route push notification messages to millions of mobile users and devices. It will make your push notification logic significantly simpler and more scalable, and allow you to build even better apps with it. Learn more about Notification Hubs here on MSDN.

    Summary

    The above features are now live and available to start using immediately (note: some of the services are still in preview). If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • SQL – Contest to Get The Date – Win USD 50 Amazon Gift Cards and Cool Gift

    - by Pinal Dave
    If you are a regular reader of this blog, you will find no issue at all in resolving this puzzle. This contest is based on my experience with NuoDB. If you are not familiar with NuoDB, here are a few pointers for you:

    Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB
    Quick Start with Admin Sections of NuoDB – Manage NuoDB Database
    Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    In today's contest you have to answer the following questions:

    Q 1: Precision of NOW()

    What is the precision of NuoDB's NOW() function, which returns the current date and time?

    Hint: Run the following script in the NuoDB Console Explorer section:

        SELECT NOW() AS CurrentTime FROM dual;

    Here is the image. I have masked the area where the time precision is displayed.

    Q 2: Executing Date and Time Script

    When I execute the following script -

        SELECT 'today' AS Today, 'tomorrow' AS Tomorrow, 'yesterday' AS Yesterday FROM dual;

    I will get the following result:

    NOW – what will be the answer when we execute the following script, and WHY?

        SELECT CAST('today' AS DATE) AS Today, CAST('tomorrow' AS DATE) AS Tomorrow, CAST('yesterday' AS DATE) AS Yesterday FROM dual;

    HINT: Install NuoDB (it takes 90 seconds).

    Prizes:

    2 Amazon Gift Cards (USD 25 each)
    2 Limited Edition Hoodies (US residents only)

    Rules:

    Please leave an answer in the comments section below. You must answer both questions together in a single comment. US residents who want to qualify to win NuoDB apparel, please mention your country in the comment. You can resubmit your answer multiple times; the latest entry will be considered valid. The last day to participate in the puzzle is June 24, 2013. All valid answers will be kept hidden till June 24, 2013. The winners will be announced on June 25, 2013. Two winners will each get a USD 25 Amazon Gift Card (total value = 25 x 2 = 50 USD). The winners will be selected using a random algorithm from all the valid answers. Anybody with a valid email address can take part in the contest.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

< Previous Page | 234 235 236 237 238 239 240 241 242 243 244 245  | Next Page >