Search Results

Search found 4585 results on 184 pages for 'master subdocument'.


  • Too many Bind query (cache) denied, DNS attack?

    - by Jake
    After BIND crashed I ran tail -f /var/log/messages and saw a massive number of log entries every second. Is this a DNS attack, or is something wrong? Sometimes a domain appears in the logs with mixed case, like dOmAin.com. As you can see, there is only one single domain in the logs, queried from many different IPs:

      Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#38921: query (cache) 'ns1.domain2.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 192.221.144.171#38833: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 74.125.189.17#42428: query (cache) 'ns2.domain2.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 192.221.146.27#37899: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#39263: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 8.0.16.170#59723: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#32903: query (cache) 'dOmAin.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 134.58.60.1#47558: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 192.221.146.34#47387: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 8.0.16.8#59392: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 74.125.189.19#64395: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 217.72.163.3#42190: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 83.146.21.252#22020: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 192.221.146.116#57342: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#52020: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 8.0.16.72#64317: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#31989: query (cache) 'dOmAin.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#47436: query (cache) 'ns2.domain2.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 74.125.189.16#44005: query (cache) 'ns1.domain2.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#50379: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 94.241.128.3#60106: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#59118: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 212.95.135.78#27811: query (cache) 'domain.com/A/IN' denied

    /etc/resolv.conf:

      ; generated by /sbin/dhclient-script
      nameserver 4.2.2.4
      nameserver 8.8.4.4

    BIND config:

      // generated by named-bootconf.pl
      options {
          directory "/var/named";
          /*
           * If there is a firewall between you and nameservers you want
           * to talk to, you might need to uncomment the query-source
           * directive below. Previous versions of BIND always asked
           * questions using port 53, but BIND 8.1 uses an unprivileged
           * port by default.
           */
          // query-source address * port 53;
          allow-transfer { none; };
          allow-recursion { localnets; };
          //listen-on-v6 { any; };
          notify no;
      };
      //
      // a caching only nameserver config
      //
      controls {
          inet 127.0.0.1 allow { localhost; } keys { rndckey; };
      };
      zone "." IN {
          type hint;
          file "named.ca";
      };
      zone "localhost" IN {
          type master;
          file "localhost.zone";
          allow-update { none; };
      };
      zone "0.0.127.in-addr.arpa" IN {
          type master;
          file "named.local";
          allow-update { none; };
      };
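    For context on the allow-recursion { localnets; }; line above: "query (cache) ... denied" is what BIND logs when a client outside the allowed set asks this server to resolve a name it is not authoritative for. As a hedged sketch (not part of the original question), an explicit ACL makes that policy visible; the ACL name and address ranges below are illustrative only:

      // named.conf sketch -- assumes the server should only recurse for local clients
      acl "trusted" {
          127.0.0.1;
          192.168.0.0/16;
      };

      options {
          directory "/var/named";
          allow-transfer { none; };
          // Only "trusted" clients may use this server as a resolver; all other
          // clients get "query (cache) ... denied" for non-authoritative names.
          allow-recursion { trusted; };
          allow-query-cache { trusted; };
      };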

    Read the article

  • Setting up home DNS with Ubuntu Server

    - by Zeophlite
    I have a webserver (with static IP 192.168.1.5), and I want the machines on my local network to be able to access it without modifying /etc/hosts (or the equivalent for Windows/OSX). My router has Primary DNS server 192.168.1.5 and Secondary DNS server 8.8.8.8 (Google's public DNS). Nginx is set up to serve websites externally as *.example.com. Internally, I want *.example.local to point to the server. My webserver has BIND9 installed, but I'm unsure of the settings. I've been through various contradicting tutorials, and so most of my settings have been clobbered. I've stripped out the lines which I'm confused about. The tutorials I looked at are http://tech.surveypoint.com/blog/installing-a-local-dns-server-behind-a-hardware-router/ and http://ubuntuforums.org/showthread.php?t=236093 . They mostly differ on what should be put in /etc/bind/zones/db.example.local and /etc/bind/zones/db.192, so I've left the conflicting lines out below. Can someone suggest what the correct lines are to give the behaviour above (namely *.example.local pointing to 192.168.1.5)?

    /etc/network/interfaces

      auto lo
      iface lo inet loopback
      auto eth0
      iface eth0 inet static
          address 192.168.1.5
          netmask 255.255.255.0
          broadcast 192.168.1.255
          gateway 192.168.1.254

    /etc/hostname

      avalon

    /etc/resolv.conf

      # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
      # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN

    /etc/bind/named.conf.options

      options {
          directory "/var/cache/bind";
          forwarders { 8.8.8.8; 8.8.4.4; };
          dnssec-validation auto;
          auth-nxdomain no;    # conform to RFC1035
          listen-on-v6 { any; };
      };

    /etc/bind/named.conf.local

      zone "example.local" {
          type master;
          file "/etc/bind/zones/db.example.local";
      };
      zone "1.168.192.in-addr.arpa" {
          type master;
          file "/etc/bind/zones/db.192";
      };

    /etc/bind/zones/db.example.local

      $TTL 604800
      @ IN SOA avalon.example.local. webadmin.example.local. (
            5         ; Serial, increment each edit
            604800    ; Refresh
            86400     ; Retry
            2419200   ; Expire
            604800 )  ; Negative Cache TTL

    /etc/bind/zones/db.192

      $TTL 604800
      @ IN SOA avalon.example.local. webadmin.example.local. (
            4         ; Serial, increment each edit
            604800    ; Refresh
            86400     ; Retry
            2419200   ; Expire
            604800 )  ; Negative Cache TTL
      ;

    What do I need to add to the above files so that on a laptop on the internal network, I can type in webapp.example.local and be served by my webserver?

    EDIT: I made several changes to the above files on the webserver.

    /etc/network/interfaces (end of file)

      dns-nameservers 127.0.0.1
      dns-search example.local

    /etc/bind/zones/db.example.local (end of file)

      @      IN NS    avalon.example.local.
      @      IN A     192.168.1.5
      avalon IN A     192.168.1.5
      webapp IN A     192.168.1.5
      www    IN CNAME 192.168.1.5

    /etc/bind/zones/db.192 (end of file)

         IN NS  avalon.example.local.
      73 IN PTR avalon.example.local.

    As a side note, my spare Win7 machine was able to connect directly to webapp.example.local, but for an Ubuntu 13.10 machine I had to make the following changes as well (not on the webserver, but on the separate client machine):

    /etc/nsswitch.conf

      before: hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
      after:  hosts: files dns

    /etc/NetworkManager/NetworkManager.conf

      before: dns=dnsmasq
      after:  #dns=dnsmasq

    The issue remains that it's not wildcard DNS, and so I have to add entries to /etc/bind/zones/db.example.local for webapp1, webapp2, ...
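    For the wildcard behaviour asked about above, a minimal, hedged sketch (not from the original question) is a wildcard A record in the forward zone, which BIND supports natively; bump the serial and reload after editing:

      ; /etc/bind/zones/db.example.local (sketch, appended after the records above)
      *   IN A 192.168.1.5   ; any otherwise-unmatched name under example.local resolves to the webserver

      # then reload the zone
      sudo rndc reload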

    Read the article

  • Simplify your Ajax code by using jQuery Global Ajax Handlers and ajaxSetup low-level interface

    - by hajan
    Creating web applications with a consistent layout and user interface is very important for your users. In several ASP.NET projects I’ve completed lately, I’ve been using jQuery and jQuery Ajax a lot to achieve a rich user experience and seamless interaction between the client and the server. In almost all of them, I took advantage of the nice jQuery global Ajax handlers and jQuery Ajax functions.

    Let’s say you build a web application which mainly interacts using Ajax GET and POST requests to accomplish various operations. As you may already know, you can easily perform Ajax operations using the jQuery Ajax low-level method or jQuery $.get, $.post, etc. A simple get example:

      $.get("/Home/GetData", function (d) {
          alert(d);
      });

    As you can see, this is the simplest possible way to make an Ajax call. What it does behind the scenes is construct a low-level Ajax call by specifying all necessary information for the request, filling in default values for the required properties such as data type, content type, etc. If you want to have some more control over what is happening with your Ajax request, you can easily take advantage of the global Ajax handlers. In order to register global Ajax handlers, the jQuery API provides a set of global Ajax event handler methods. You can find all the methods in the following link http://api.jquery.com/category/ajax/global-ajax-event-handlers/, and these are:

    ajaxComplete, ajaxError, ajaxSend, ajaxStart, ajaxStop, ajaxSuccess

    And the low-level Ajax interfaces http://api.jquery.com/category/ajax/low-level-interface/:

    ajax, ajaxPrefilter, ajaxSetup

    For global settings, I usually use ajaxSetup combined with the Ajax event handlers. $.ajaxSetup is very good for setting default values that you will use in all of your future Ajax requests, so that you won’t need to repeat the same properties all the time unless you want to override the default settings. Mainly, I use the global ajaxSetup function similarly to the following:

      $.ajaxSetup({
          cache: false,
          error: function (x, e) {
              if (x.status == 550)
                  alert("550 Error Message");
              else if (x.status == "403")
                  alert("403. Not Authorized");
              else if (x.status == "500")
                  alert("500. Internal Server Error");
              else
                  alert("Error...");
          },
          success: function (x) {
              // do something global on success...
          }
      });

    Now you can make Ajax calls using the low-level $.ajax interface and you don’t need to worry about specifying any of the properties we’ve set in the $.ajaxSetup function. So you can create your own ways to handle various situations when your Ajax requests are occurring. Sometimes some of your Ajax requests may take much longer than expected, so in order to build a user-friendly UI that shows a progress bar or animated image while something is happening in the background, you can combine the ajaxStart and ajaxStop methods.
    First of all, add one

      <div id="loading" style="display:none;">
          <img src="@Url.Content("~/Content/images/ajax-loader.gif")" alt="Ajax Loader" />
      </div>

    anywhere on your Master Layout / Master page (you can download nice Ajax loading images from http://ajaxload.info/). Then, add the following two handlers:

      $(document).ajaxStart(function () {
          $("#loading").attr("style", "position:absolute; z-index: 1000; top: 0px; " +
              "left:0px; text-align: center; display:none; background-color: #ddd; " +
              "height: 100%; width: 100%; /* These three lines are for transparency " +
              "in all browsers. */ -ms-filter:\"progid:DXImageTransform.Microsoft.Alpha(Opacity=50)\";" +
              " filter: alpha(opacity=50); opacity:.5;");
          $("#loading img").attr("style", "position:relative; top:40%; z-index:5;");
          $("#loading").show();
      });

      $(document).ajaxStop(function () {
          $("#loading").removeAttr("style");
          $("#loading img").removeAttr("style");
          $("#loading").hide();
      });

    Note: While you could reorganize the style in a more reusable way, since these are global Ajax Start/Stop handlers it is quite possible that you won’t use the same style anywhere else. With this in place, for any Ajax request in your web site or application the loading image will appear, providing a better user experience.

    What I’ve shown are several useful examples of how to simplify your Ajax code by using the global Ajax handlers and the low-level ajaxSetup function. Of course, you can do a lot more with the other methods as well. Hope this was helpful. Regards, Hajan
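    As a hedged illustration of the point above about not repeating properties: once $.ajaxSetup has registered the global defaults, an individual request only needs its request-specific details. The URL below is illustrative, not taken from the original post:

      $.ajax({
          url: "/Home/GetData",   // hypothetical MVC action
          type: "GET",
          dataType: "json"
      }).done(function (data) {
          // request-specific handling; the cache, error and success defaults
          // registered via $.ajaxSetup above still apply to this call
          console.log(data);
      });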

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C-API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX, written by AT&T's Ken Thompson in 1979. DBM was a simple key/value, hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage.

    At that time databases almost always lived on a single node; even the most sophisticated databases only had simple two-node fail-over solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage.

    Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law; processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated in both the business and personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage; new solutions were needed.

    A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. They were designed to address massive amounts of data, millions of requests per second, and to scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record; it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few queries that required more complex joins.
    They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions including Cassandra, Riak and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions and one, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage, a similar type of storage to Berkeley DB except that it is based on a hash table which must reside entirely in memory rather than a btree which can live in memory or on disk.

    Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - and then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group and some allow writes as well as reads at any node; Berkeley DB HA does not.

    So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases, when you boil them down to a single node you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage layers that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really.

    Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to come to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely.
    Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...
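    For readers who have never used the DBM-style interface the post keeps returning to, here is a minimal, hedged sketch of the Berkeley DB C API storing and fetching one key/value pair; the file name and key are illustrative and error handling is trimmed:

      #include <string.h>
      #include <stdio.h>
      #include <db.h>

      int main(void) {
          DB *dbp;
          DBT key, data;

          /* create a handle and open (or create) a btree-format database file */
          db_create(&dbp, NULL, 0);
          dbp->open(dbp, NULL, "demo.db", NULL, DB_BTREE, DB_CREATE, 0664);

          /* store key "fruit" -> value "apple" */
          memset(&key, 0, sizeof(DBT));
          memset(&data, 0, sizeof(DBT));
          key.data = "fruit";  key.size = sizeof("fruit");
          data.data = "apple"; data.size = sizeof("apple");
          dbp->put(dbp, NULL, &key, &data, 0);

          /* read it back */
          memset(&data, 0, sizeof(DBT));
          if (dbp->get(dbp, NULL, &key, &data, 0) == 0)
              printf("%s -> %s\n", (char *)key.data, (char *)data.data);

          dbp->close(dbp, 0);
          return 0;
      }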

    Read the article

  • Too Many Kittens To Juggle At Once

    - by Bil Simser
    Ahh, the Internet. That crazy, mixed up place where one tweet turns into a conversation between dozens of people and spawns a blog post. This is the direct result of such an event this morning. It started innocently enough, with this: Then followed up by a blog post by Joel here. In the post, Joel introduces us to the term Business Solutions Architect with mad skillz like InfoPath, Access Services, Excel Services, building Workflows, and SSRS report creation, all while meeting the business needs of users in a SharePoint environment. I somewhat disagreed with Joel that this really wasn't a new role (at least IMHO) and that a good Architect or BA should really be doing this job. As Joel pointed out, when you're building a SharePoint team this kind of role is often overlooked. Engineers might be able to build workflows, but is it the right workflow for the right problem?

    Michael Pisarek wrote about a SharePoint Business Architect a few months ago and it's a pretty solid assessment. Again, I argue you really shouldn't be looking for roles that don't exist and I don't suggest anyone create roles just to hire people to fill them. That's basically creating a solution looking for problems. Michael's article does have some great points if you're lost in the quagmire of SharePoint duties though (and I especially like John Ross' quote "The coolest shit is worthless if it doesn't meet business needs"). SharePoinTony summed it up nicely with "SharePoint Solutions knowledge is both lacking and underrated in most environments. Roles help".

    Having someone on the team who can dance between a business user and a coder can be difficult to find. Remember the idea of telling something to someone and them passing it on to the next person. By the time the story comes round the circle it's a shadow of its former self with little resemblance to the original tale. This is very much how business requirements work: they're told by the user to a business analyst, written down on paper, read by an architect, turned into a solution plan, and implemented by a developer. Transformations between what was said, what was heard, what was written down, and what was developed can be distant cousins. Not everyone has the skill of communication and even fewer have negotiation skills to suit the SharePoint platform. Negotiation is important because not everything can be (or should be) done in SharePoint. Sometimes it's just not appropriate to build it on the SharePoint platform, but someone needs to know enough about the platform and what limitations it might have, then communicate that (and/or negotiate) with a customer or user so the conversation moves from "You can't have this" to "Let's try it this way". Visualize the possible instead of denying the impossible.

    So what is the right SharePoint team? My cromag brain came up with a fairly simpleton answer (and I'm sure people will just say this is a cop-out). The perfect SharePoint team is just enough people to do the job who know the technology and the business problem they're solving. Bridge the gap between business need and technology platform and you have an architect. Communicate the needs of the business effectively so the entire team understands them and you have a business analyst. Can you get this with full-time workers? Maybe, but don't expect miracles out of the gate. Also don't take a consultant's word as gospel. Some consultants just don't have enough breadth across the SharePoint platform to be worth their value, so be careful.
    You really need someone who knows enough about SharePoint to be able to validate a consultant's knowledge level. This is basically true for any consultant, not just a SharePoint one. Specialization is good and needed. A good, well-balanced SharePoint team is made up of people who can solve problems by working with the technology, not against it. Having a top developer is great, but don't rely on them to solve world hunger if they can't communicate very well with users. An expert business analyst might be great at gathering requirements so the entire team can understand them, but if that means building 100% custom solutions because the requirements don't fit inside the SharePoint boundaries, it isn't of much value. Just repeat: There is no silver bullet. There is no silver bullet. There is no silver bullet.

    A few people pointed out Nick Inglis' article Excluding The Information Professional In SharePoint. It's a good read too and hits home that maybe some developers and IT pros need some extra help in the information space. If you're in an organization that needs labels on people, come up with something everyone understands and go with it. If that's Business Solutions Architect, SharePoint Advisor, or Guy Who Knows A Lot About Portals, make it work for you. We all wish that one person could master all that is SharePoint, but we also know that doesn't scale very well and you quickly get into the hit-by-a-bus syndrome (with the organization slowing to a crawl when the guy or girl goes on vacation, gets sick, or pops out a baby). There are too many gaps in SharePoint knowledge for any one person to know it all, and too many kittens to juggle all at once. We like to consider ourselves experts in our field, but try to tackle too many roles at once and we end up being a mediocre jack of all trades, master of none. Don't fall into this pit. It's a deep, dark hole you don't want to try to claw your way out of. Trust me. Been there. Done that. Got the t-shirt.

    In the end I don't disagree with Joel. SharePoint is a beast and not something that should be taken on by newbies. If you just read "Teach Yourself SharePoint in 24 Hours" and want to go build your corporate intranet or the next killer business solution with all your new-found knowledge, plan to pony up consultant dollars a few months later when everything goes to Hell in a handbasket and falls over. I'm not saying don't build solutions in SharePoint. I'm just saying that building effective ones takes skill, like any craft, and is not something you can just cobble together with a little bit of cursory knowledge. Thanks to *everyone* who participated in this tweet rush. It was fun and educational.

    Read the article

  • CDN on Hosted Service in Windows Azure

    - by Shaun
    Yesterday I told Wang Tao, an annoying colleague sitting beside me, how to enable the CDN for the static content in his website, which had just been published on Windows Azure. The approach would be:

    1. Move the static content (the images, CSS files, etc.) into blob storage.
    2. Enable the CDN on his storage account.
    3. Change the URLs of those static files to the CDN URL.

    I think these are the very common steps when using a CDN. But this morning I found that the new Windows Azure SDK 1.4 and the new Windows Azure Developer Portal had just been announced at the Windows Azure Blog. One of the new features in this release is about the CDN, which means we can enable the CDN not only for a storage account, but for a hosted service as well. With this new feature the steps I mentioned above become much simpler.

    Enable CDN for Hosted Service

    To enable the CDN for a hosted service we just need to log on to the Windows Azure Developer Portal. Under the "Hosted Services, Storage Accounts & CDN" item we will find a new menu on the left-hand side called "CDN", where we can manage the CDN for storage accounts and hosted services. As we can see, the hosted services and storage accounts are all listed in my subscriptions. Enabling a CDN for a hosted service is very simple: just select a hosted service and click the New Endpoint button at the top. In this dialog we can select the subscription and the storage account, or the hosted service, we want the CDN to be enabled for. If we selected the hosted service, like I did in the image above, the "Source URL for the CDN endpoint" will be shown automatically. This means the Windows Azure platform will treat all content under the "/cdn" folder as CDN enabled. We cannot change this value at the moment. The 3 checkboxes next to the URL are:

    Enable CDN: Enable or disable the CDN.
    HTTPS: If we need to use HTTPS connections, check it.
    Query String: If we are caching content from a hosted service and we are using query strings to specify the content to be retrieved, check it.

    Just click the "Create" button to let Windows Azure create the CDN for our hosted service. The CDN would be available within 60 minutes, as Microsoft mentions. My experience is that after about 15 minutes the CDN could be used, and we can find the CDN URL in the portal as well.

    Put the Content in CDN in Hosted Service

    Let's create a simple Windows Azure project in Visual Studio with an MVC 2 Web Role. When we created the CDN mentioned above, the source URL of the CDN endpoint was put under the "/cdn" folder. So in Visual Studio we create a folder under the website named "cdn" and put some static files there. Then all these files will be cached by the CDN if we use the CDN endpoint. The CDN of the hosted service can also cache a kind of "dynamic" result with the Query String feature enabled. We create a controller named CdnController and a GetNumber action in it. The routed URL of this controller would be /Cdn/GetNumber, which can be CDN-ed as well since the URL places it under the "/cdn" folder. In the GetNumber action we just put a number value, which is specified by a parameter, into the view model, so the URL could be like /Cdn/GetNumber?number=2. The controller code looks like this:
      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Web;
      using System.Web.Mvc;

      namespace MvcWebRole1.Controllers
      {
          public class CdnController : Controller
          {
              //
              // GET: /Cdn/

              public ActionResult GetNumber(int number)
              {
                  return View(number);
              }
          }
      }

    And we add a view to display the number, which is super simple.

      <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<int>" %>

      <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
          GetNumber
      </asp:Content>

      <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
          <h2>The number is: <%: Model.ToString() %></h2>
      </asp:Content>

    Since this action is under the CdnController, the URL is under the "/cdn" folder, which means it can be CDN-ed. And since we checked "Query String", the content of this dynamic page will be cached by its query string. So if I use the CDN URL, http://az25311.vo.msecnd.net/GetNumber?number=2, the CDN will first check if there's any content cached with the key "GetNumber?number=2". If yes, the CDN will return the content directly; otherwise it will connect to the hosted service, http://aurora-sys.cloudapp.net/Cdn/GetNumber?number=2, then send the result back to the browser and cache it in the CDN. But note that the query string is treated as a plain string when used as the key of the CDN cache element. This means the URLs below would be cached as 2 separate elements in the CDN:

    http://az25311.vo.msecnd.net/GetNumber?number=2&page=1
    http://az25311.vo.msecnd.net/GetNumber?page=1&number=2

    The final step is to upload the project onto Azure.

    Test the Hosted Service CDN

    After publishing the project on Azure, we can use the CDN in the website. The CDN endpoint we had created is az25311.vo.msecnd.net, so all files under the "/cdn" folder can be requested with it. Let's have a try on the sample.htm and c_great_wall.jpg static files. Also we can request the dynamic page GetNumber with the query string through the CDN endpoint. And if we refresh this page it will be shown very quickly, since the content comes from the CDN without server-side MVC processing. The style of this page is missing, though. This is because the CSS file was not included in the "/cdn" folder, so the page cannot retrieve the CSS file from the CDN URL.

    Summary

    In this post I introduced the new CDN feature in Windows Azure that arrives with the Windows Azure SDK 1.4 and the new Developer Portal. With the CDN for a Hosted Service we can just put the static resources under a "/cdn" folder so that the CDN can cache them automatically, with no need to put them into blob storage. It also supports caching dynamic content with the Query String feature, so we can cache some parts of a web page by combining user controls and the CDN. For example, we can cache the log on user control in the master page so that the log on part will load super fast. There are some other new features within this release you can find here. And for more detailed information about the Windows Azure CDN please have a look here as well.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
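    Because the CDN treats the query string as a plain string, parameter order matters, as shown above. A hedged sketch (not from the original post) of one way to keep logically identical requests on a single cache entry is to build CDN URLs with the parameters in a fixed order; the helper name and host are illustrative:

      // Hypothetical helper: build CDN URLs with query parameters in a fixed order so that
      // logically identical requests map to a single CDN cache element.
      using System;
      using System.Collections.Generic;
      using System.Linq;

      public static class CdnUrlHelper
      {
          // cdnHost is illustrative, e.g. "az25311.vo.msecnd.net"
          public static string Build(string cdnHost, string path, IDictionary<string, string> query)
          {
              var ordered = query
                  .OrderBy(kv => kv.Key, StringComparer.Ordinal)
                  .Select(kv => Uri.EscapeDataString(kv.Key) + "=" + Uri.EscapeDataString(kv.Value));
              return "http://" + cdnHost + "/" + path.TrimStart('/') + "?" + string.Join("&", ordered);
          }
      }

      // Usage: both dictionaries below produce ".../GetNumber?number=2&page=1",
      // so the CDN caches a single element for them.
      // CdnUrlHelper.Build("az25311.vo.msecnd.net", "GetNumber",
      //     new Dictionary<string, string> { { "page", "1" }, { "number", "2" } });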

    Read the article

  • SQL SERVER – 3 Online SQL Courses at Pluralsight and Free Learning Resources

    - by pinaldave
    Usain Bolt is an inspiration for all. He broke his own record multiple times because he wanted to do better! Read more about him on Wikipedia. He is great and indeed the fastest man on the planet. Usain Bolt – World's Fastest Man.

    "Can you teach me SQL Server Performance Tuning?" This is one of the most popular questions I receive all the time. The answer is YES. I would love to do performance tuning training for anyone, anywhere. It is my favorite thing to do, and it is my favorite thing to train others in. If possible, I would love to do training 24 hours a day, 7 days a week, 365 days a year. To me, it doesn't feel like a job.

    Of course, as much as I would love to do performance tuning 24/7/365, obviously I am just one human being and can only be in one place at one time. It is also very difficult to train more than one person at a time, especially when the people are at different levels. I am also limited by geography. I live in India, and adjust to my own time zone. Trying to teach a live course from India to someone whose time zone is 12 or more hours off of mine is very difficult. If I am trying to teach at 2 am, I am sure I am not at my best!

    There was only one solution to scale – online training. I have built 3 different courses on SQL Server Performance Tuning with Pluralsight. Now I have no problem – I am 100% scalable and available 24/7 and 365. You can make me say the same things again and again until you get it right. I am on your mobile and PC, as well as on Xbox. This is why I am such a big fan of online courses. I have recorded many performance tuning classes and you can easily access them online, at your own time. And don't think that just because these aren't live classes you won't be able to get any feedback from me. I encourage all my viewers to go ahead and ask me questions by e-mail, Twitter, Facebook, or whatever way you can get a hold of me.

    Here are details of three of my courses with Pluralsight. I suggest you go over the description of each course. As the author of the courses, I have a few FREE codes for watching them. Please leave a comment with your valid email address; I will send a few of them to random winners.

    SQL Server Performance: Introduction to Query Tuning

    SQL Server performance tuning is an art to master – for developers and DBAs alike. This course takes a systematic approach to planning, analyzing, debugging and troubleshooting common query-related performance problems. This includes an introduction to understanding execution plans inside SQL Server. In this almost four hour course we cover the following important concepts (the duration of each module is mentioned beside its name):

    Introduction 10:22
    Execution Plan Basics 45:59
    Essential Indexing Techniques 20:19
    Query Design for Performance 50:16
    Performance Tuning Tools 01:15:14
    Tips and Tricks 25:53
    Checklist: Performance Tuning 07:13

    SQL Server Performance: Indexing Basics

    This course teaches you how to master the art of performance tuning SQL Server by better understanding indexes. In this almost two hour course we cover the following important concepts (the duration of each module is mentioned beside its name):

    Introduction 02:03
    Fundamentals of Indexing 22:21
    Practical Indexing Implementation Techniques 37:25
    Index Maintenance 16:33
    Introduction to ColumnstoreIndex 08:06
    Indexing Practical Performance Tips and Tricks 24:56
    Checklist: Index and Performance 07:29
    SQL Server Questions and Answers

    This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. In this almost 2 hours and 15 minutes course we cover the following important concepts (the duration of each module is mentioned beside its name):

    Introduction 00:54
    Retrieving IDENTITY value using @@IDENTITY 08:38
    Concepts Related to Identity Values 04:15
    Difference between WHERE and HAVING 05:52
    Order in WHERE clause 07:29
    Concepts Around Temporary Tables and Table Variables 09:03
    Are stored procedures pre-compiled? 05:09
    UNIQUE INDEX and NULLs problem 06:40
    DELETE VS TRUNCATE 06:07
    Locks and Duration of Transactions 15:11
    Nested Transaction and Rollback 09:16
    Understanding Date/Time Datatypes 07:40
    Differences between VARCHAR and NVARCHAR datatypes 06:38
    Precedence of DENY and GRANT security permissions 05:29
    Identify Blocking Process 06:37
    NULLS usage with Dynamic SQL 08:03
    Appendix Tips and Tricks with Tools 20:44

    You will have to log in and subscribe to the courses to view them.

    SQL in Sixty Seconds

    Here are my free video learning resources, SQL in Sixty Seconds. These are 60-second videos which I have built on various subjects related to SQL Server. Do let me know what you think about them. Here are three of my latest videos:

    Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028
    Copy Column Headers from Resultset – SQL in Sixty Seconds #027
    Effect of Collation on Resultset – SQL in Sixty Seconds #026

    You can watch and learn at your own pace. Then you can easily ask me any questions you have. E-mail is easiest, but for really tough questions I'm willing to talk on Skype, Gtalk, or even Facebook chat. Please do watch and then talk with me, I am always available on the internet!

    Here is the video of the world's fastest man. Usain St. Leo Bolt inspires us all to do better than our best. We can all go to the next level beyond our own record. We can all improve if we have the will and dedication. Watch the video from the 5:00 mark.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology, Video
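    Related to the "Identify Most Resource Intensive Queries" topic listed above, here is a hedged sketch (not taken from the courses or videos) of a common plan-cache query that lists the statements that have consumed the most CPU:

      -- Top 10 statements by total CPU time recorded in the plan cache
      SELECT TOP (10)
             qs.total_worker_time,
             qs.execution_count,
             SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_worker_time DESC;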

    Read the article

  • CodePlex Daily Summary for Monday, May 17, 2010

    New Projects

    .NET Essentials Course: .NET Essentials course @ Telerik Academy Training project for the students
    AU/NZ Office 2010 Launch Demos: The AU/NZ Office 2010 Launch Demos are a collection of code samples that were used as part of the Office/SharePoint 2010 launch parties in Australi...
    CybennyCMS: Very simple CMS system for building sites with ASP.NET with templates for lay-out, content pages with only html content and a xml file for the site...
    essionPIM: essionPIM
    GIStance: A library for finding "nearest neighbor" among an in-memory set of positions, in C# and F#. A radius must be specified for making a meaningful s...
    IP Informer: IP Informer is IP Informer.
    Kurumsal Ofis Paketi: Kurumsal Ofis Paketi (KOP) is add-in software developed for Microsoft Office 2010 products. KOP extends the functions found in Word and Excel...
    Mockup to XAML: Convert Balsamiq Mockups to XAML. This project supports BMML mockup control conversion using plugins. A standard set of controls are included wit...
    Open XML Validator: This WPF app give you a brief resume about errors in your Open XML documents.
    Paint.NET Bulk Image Processor: PDNBulkUpdater is a plug-in for Paint.NET that allows you to efficiently perform operations such as resizing and converting multiple images at the ...
    PiPiBugNet: PiPiBugNet is a brand-new open-source bug management system
    Roleplay character generator: The roleplay character generator allows the creation of characters for different roleplaying games
    SharePoint User Search WebParts: This project contains SharePoint webparts which provide advanced search configuration and experience for SharePoint 2007. It will be upgrade in few...
    Spodi: Spodi is created on 22-04-2010
    TfsPolicyPack: This project will provide a few checkin policies for VS 2010.
    vccodesandobx: vccodesandobx
    WhiteNile: test project using codeplex

    New Releases

    AnimeStore.Net: 1.0.3.0: Build 1.0.3.0 Changes Move some functionality to features (MEF) Filter / Search functionality. Anime hard-copy records storage (e.g Disk Storage ...
    AU/NZ Office 2010 Launch Demos: Twitter map web part: This is the main twitter map web part download, see the Twitter Map web part page for all the information.
    Blueset Studio Opensource Projects: 推来: Stable version
    BUtil: BUtil 5.0 Alpha2: The initial implementation of multitasking (except ghost)
    CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 and 4.0.1 beta: Beta 2 is released here: http://cassinidev.codeplex.com/releases/view/45456 New in CassiniDev v3.5.1.0/v4.0.1.0 Added .Net 4 / VS10 build. ...
    CBM-Command: 2010-05-16: Release Notes - 2010-05-16 New Features New navigation options: Page Up, Page Down, Top of Directory, Bottom of Directory. See documentation (http:...
    CCNet Conditional Plugin: CCNet Conditional for CCNet 1.5: A (quick) build of the plugin for CCNet 1.5 to fix the 17365 bug reported by Beakster. This also adds a new condition "timeCondition"
    CybennyCMS: Cybenny CMS beta 1: The first beta. Includes a small demo site.
    Data Extracting SDK: Data Extracting SDK v.1.1 RTM: RTM version of Data Extracting SDK.
    Duckworth Lewis Professional Edition Calculator: DLcalc 2.0: This software can perform all D/L calculations 100% accurately. From version 2.0 onwards, tables for par scores can also be produced.
    EPiServer CMS Page Type Builder: Page Type Builder 1.2: Release notes can be found in this blog post.
    Floe IRC Client: Floe IRC Client 2010-05 R5: - Many new context menu options for @s - Ability to select multiple users in the nick list for some operations (kick, ban) - Bunch of minor bug fix...
    Graffiti CMS Events Plugin: Version 1.0.1: Minor update to previous version to fix bug where deleted posts were still showing in the calendar.
    Microsoft Research Boogie: 2010-05-16: Binary release of Boogie and Dafny. (Note, Chalice is not pre-built as part of this binary release. To obtain it, you need to build it yourself f...
    MSBuild Launch Pad (mPad): 1.0 Beta 2: Basic support for sln, csproj, vbproj, vcxproj, shfbproj, ccproj, oxygene and proj files are added. Basic settings (Show Prompt, and Auto Hide) are...
    Multi-Language Words Memorizer: Memorizer 1.1: Issues fix, XML db update with new words.
    NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.1: New release of NShader! New : - a Visual Studio 2010 port can be installed through the new extension manager : you just have to download NShaderV...
    PHPExcel: PHPExcel 1.7.3 Production: Want to contribute? Please refer the Contribute page. Donations Donate via PayPal. If you want to, we can also add your name / company on our Donati...
    Rollback - A social backup tool.: Rollback Setup 0.5.1.2 Build 48360: Bug fixes for backing up files which are hidden/system. Changes to make builds on 64 bit Windows 7 using VS 2010 Express edition.
    Rollback - A social backup tool.: Rollback Setup 0.5.1.3: Updated version number.
    Shake - C# Make: Shake v0.1.20: New: Simple console logger Changes: Command line params helper writes out syntax and samples (like msbuild) Fixes: Assembly info, file task and r...
    SharePoint User Search WebParts: v0.1 Friendly MOSS 2007 Search WebPart: Very first version of this webpart. A more stabilized version will follow in few days.
    Team Deploy: Team Deploy 2010 Beta 1: This is the initial release for Team Deploy 2010 for TFS Team Build 2010. All features from Team Build 2.x are functional in this version. Comp...
    Team Foundation Server Administration Tool: 2.0: TFS Administration Tool 2.0 is built on top of the Team Foundation Server 2008 object model and in order to connect to...
    The Ping Master: v0.9.0.0: Installer for The Ping Master binaries
    Useful Office Macros: All Macro Downloads: Please find above the downloads related to this project. Each Excel Workbook below works independently of the others, so you only need to download...
    VCC: Latest build, v2.1.30516.0: Automatic drop of latest build
    Visual Studio DSite: Advanced Digital Board Game (Visual C++ 2008): An advanced digital board game made in visual c 2008.
    YUI Compressor Custom Tool for Visual Studio: YUI Compressor Custom Tool Full Version: Version 1.0 The following changes have been made: Merged classes to automatically sense if the target file is Javascript or CSS. Cleaned up setu...

    Most Popular Projects

    Rawr, WBFS Manager, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), patterns & practices – Enterprise Library, Microsoft SQL Server Community & Samples, PHPExcel, ASP.NET

    Most Active Projects

    patterns & practices – Enterprise Library, PHPExcel, BlogEngine.NET, Rawr, Microsoft Biology Foundation, Customer Portal Accelerator for Microsoft Dynamics CRM, Windows Azure Command-line Tools for PHP Developers, DotNetZip Library, Caliburn: An Application Framework for WPF and Silverlight, SQL Server PowerShell Extensions

    Read the article

  • Service Broker, not ETL

    - by jamiet
    I have been very quiet on this blog of late, and one reason for that is that I have been very busy on a client project that I would like to talk about a little here. The client I have been working for has a website that runs on a distributed architecture utilising a messaging infrastructure for communication between different endpoints. My brief was to build a system that could consume these messages and produce analytical information in near-real-time. More specifically, I basically had to deliver a data warehouse; however, it was the real-time aspect of the project that really intrigued me.

    This real-time requirement meant that using an Extract, Transform, Load (ETL) tool was out of the question, and so I had no choice but to write T-SQL code (i.e. stored procedures) to process the incoming messages and load the data into the data warehouse. This concerned me though – I had no way to control the rate at which data would arrive into the system, yet we were going to have end-users querying the system at the same time those messages were arriving; the potential for contention in such a scenario was pretty high and was something I wanted to minimise as much as possible. Moreover, I did not want the processing of data inside the data warehouse to have any impact on the customer-facing website.

    As you have probably guessed from the title of this blog post, this is where Service Broker stepped in! For those that have not heard of it, Service Broker is a queuing technology that has been built into SQL Server since SQL Server 2005. It provides a number of features; however, the one that was of interest to me was the fact that it facilitates asynchronous data processing which, in layman's terms, means the ability to process some data without requiring the system that supplied the data to wait for the response. That was a crucial feature because on this project the customer-facing website (in effect an OLTP system) would be calling one of our stored procedures with each message – we did not want to cause the OLTP system to wait on us every time we processed one of those messages. This asynchronous nature also helps to alleviate the contention problem because the asynchronous processing activity is handled just like any other task in the database engine and hence can wait on another task (such as an end-user query).

    Service Broker it was then! The stored procedure called by the OLTP system would simply put the message onto a queue and we would use a feature called activation to pick each message off the queue in turn and process it into the warehouse. At the time of writing the system is not yet up to full capacity, but so far everything seems to be working OK (touch wood) and, crucially, our users are seeing data in near-real-time. By near-real-time I am talking about latencies of a few minutes at most, and to someone like me who is used to building systems that have overnight latencies that is a huge step forward!

    So then, am I advocating that you all go out and dump your ETL tools? Of course not, no! What this project has taught me though is that in certain scenarios there may be better ways to implement a data warehouse system than the traditional "load data in overnight" approach that we are all used to. Moreover, I have really enjoyed getting to grips with a new technology, and even if you don't want to use Service Broker you might want to consider asynchronous messaging architectures for your BI/data warehousing solutions in the future.
    This has been a very high level overview of my use of Service Broker and I have deliberately left out much of the minutiae of what has been a very challenging implementation. Nonetheless I hope I have caused you to reflect upon your own approaches to BI and question whether other approaches may be more tenable. All comments and questions gratefully received!

    Lastly, if you have never used Service Broker before and want to kick the tyres, I have provided below a very simple "Service Broker Hello World" script that will create all of the objects required to facilitate Service Broker communications and then send the message "Hello World" from one place to another! This doesn't represent a "proper" implementation per se because it doesn't close down conversation objects (which you should always do in a real-world scenario) but it is enough to demonstrate the capabilities!

    @Jamiet

    -----------------------------------------------------------------------------------------------

      /* This is a basic Service Broker Hello World app. Have fun!
         -Jamie */
      USE MASTER
      GO
      CREATE DATABASE SBTest
      GO
      --Turn Service Broker on!
      ALTER DATABASE SBTest SET ENABLE_BROKER
      GO
      USE SBTest
      GO
      -- 1) we need to create a message type. Note that our message type is
      -- very simple and allows any type of content
      CREATE MESSAGE TYPE HelloMessage VALIDATION = NONE
      GO
      -- 2) Once the message type has been created, we need to create a contract
      -- that specifies who can send what types of messages
      CREATE CONTRACT HelloContract (HelloMessage SENT BY INITIATOR)
      GO
      --We can query the metadata of the objects we just created
      SELECT * FROM sys.service_message_types WHERE name = 'HelloMessage';
      SELECT * FROM sys.service_contracts WHERE name = 'HelloContract';
      SELECT * FROM sys.service_contract_message_usages
      WHERE  service_contract_id IN (SELECT service_contract_id FROM sys.service_contracts WHERE name = 'HelloContract')
      AND    message_type_id IN (SELECT message_type_id FROM sys.service_message_types WHERE name = 'HelloMessage');
      -- 3) The communication is between two endpoints. Thus, we need two queues to
      -- hold messages
      CREATE QUEUE SenderQueue
      CREATE QUEUE ReceiverQueue
      GO
      --more querying metadata
      SELECT * FROM sys.service_queues WHERE name IN ('SenderQueue','ReceiverQueue');
      --we can also select from the queues as if they were tables
      SELECT * FROM SenderQueue
      SELECT * FROM ReceiverQueue
      -- 4) Create the required services and bind them to the above created queues
      CREATE SERVICE Sender ON QUEUE SenderQueue
      CREATE SERVICE Receiver ON QUEUE ReceiverQueue (HelloContract)
      GO
      --more querying metadata
      SELECT * FROM sys.services WHERE name IN ('Receiver','Sender');
      -- 5) At this point, we can begin the conversation between the two services by
      -- sending messages
      DECLARE @conversationHandle UNIQUEIDENTIFIER
      DECLARE @message NVARCHAR(100)
      BEGIN
        BEGIN TRANSACTION;
        BEGIN DIALOG @conversationHandle
              FROM SERVICE Sender
              TO SERVICE 'Receiver'
              ON CONTRACT HelloContract WITH ENCRYPTION=OFF
        -- Send a message on the conversation
        SET @message = N'Hello, World';
        SEND ON CONVERSATION @conversationHandle
             MESSAGE TYPE HelloMessage (@message)
        COMMIT TRANSACTION
      END
      GO
      --check contents of queues
      SELECT * FROM SenderQueue
      SELECT * FROM ReceiverQueue
      GO
      -- Receive a message from the queue
      RECEIVE CONVERT(NVARCHAR(MAX), message_body) AS MESSAGE FROM ReceiverQueue
      GO
      --If no messages were received and/or you can't see anything on the queues you may wish to check the following for clues:
      SELECT * FROM sys.transmission_queue
      -- Cleanup
      DROP SERVICE Sender
      DROP SERVICE Receiver
      DROP QUEUE SenderQueue
      DROP QUEUE ReceiverQueue
      DROP CONTRACT HelloContract
      DROP MESSAGE TYPE HelloMessage
      GO
      USE MASTER
      GO
      DROP DATABASE SBTest
      GO
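    The post mentions activation for picking messages off the queue, but the Hello World script above does not show it. Below is a minimal, hedged sketch (not from the original article) of an activated stored procedure and the queue option that wires it up, written against the ReceiverQueue/HelloMessage objects created above (before the cleanup section); the procedure name is illustrative:

      -- Hypothetical activation procedure: drains ReceiverQueue one message at a time
      CREATE PROCEDURE dbo.ProcessReceiverQueue
      AS
      BEGIN
        DECLARE @handle UNIQUEIDENTIFIER, @body NVARCHAR(MAX), @type SYSNAME;
        WHILE (1 = 1)
        BEGIN
          BEGIN TRANSACTION;
          -- Wait briefly for the next message; exit when the queue is drained
          WAITFOR (
            RECEIVE TOP (1)
              @handle = conversation_handle,
              @body   = CONVERT(NVARCHAR(MAX), message_body),
              @type   = message_type_name
            FROM ReceiverQueue
          ), TIMEOUT 1000;
          IF (@@ROWCOUNT = 0)
          BEGIN
            ROLLBACK TRANSACTION;
            BREAK;
          END
          IF (@type = 'HelloMessage')
            PRINT @body; -- in a real system this is where the warehouse load logic would go
          COMMIT TRANSACTION;
        END
      END
      GO
      -- Attach the procedure to the queue so Service Broker activates it as messages arrive
      ALTER QUEUE ReceiverQueue
        WITH ACTIVATION (
          STATUS = ON,
          PROCEDURE_NAME = dbo.ProcessReceiverQueue,
          MAX_QUEUE_READERS = 1,
          EXECUTE AS SELF
        );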

    Read the article

  • Configuration "diff" across Oracle WebCenter Sites instances

    - by Mark Fincham-Oracle
    Problem Statement

    With many Oracle WebCenter Sites environments, how do you know if the various configuration assets and settings are in sync across all of those environments?

    Background

    At Oracle we typically have a "W" shaped set of environments. For the "Production" environments we typically have a disaster recovery clone as well, and sometimes additional QA environments alongside the production management environment. In the case of www.java.com we have 10 different environments. All configuration assets/settings (CSElements, Templates, Start Menus etc.) start life on the Development Management environment and are then published downstream to other environments as part of the software development lifecycle. Ensuring that each of these 10 environments has the same set of Templates, CSElements, StartMenus, TreeTabs etc. is impossible to do efficiently without automation.

    Solution Summary

    The solution comprises two components:
    1. A JSON data feed from each environment.
    2. A simple HTML page that consumes these JSON data feeds.

    Data Feed: Create a JSON WebService on each environment. The WebService is no more than a SiteEntry + CSElement. The CSElement queries various DB tables to obtain details of the assets/settings, returning this data in a JSON feed.

    Report: Create a simple HTML page that uses jQuery to fetch the JSON feed from each environment and display the results in a table. Since all assets (CSElements, Templates etc.) are published between environments they will have the same last modified date. If the last modified date of an asset is different in the JSON feed, or is missing from an environment entirely, then highlight that in the report table.

    Example Solution Details

    Step 1: Create a Site Entry + CSElement that outputs JSON

    Site Entry & CSElement Setup: The SiteEntry should be uncached so that the most recent configuration information is returned at all times. In the CSElement set the contenttype accordingly.

    Step 2: Write the CSElement Logic

    The basic logic, which we repeat for each asset or setting that we are interested in, is to query the DB using <ics:sql> and then loop over the resultset with <ics:listloop>. For example:

      <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" />
      "templates": [
      <ics:listloop listname="TemplateList">
          {"name":"<ics:listget listname="TemplateList" fieldname="name"/>",
           "modified":"<ics:listget listname="TemplateList" fieldname="updateddate"/>"},
      </ics:listloop>
      ],

    A comprehensive list of SQL queries to fetch each configuration asset/setting can be seen in the appendix at the end of this article. For the generation of the JSON data structure you could use Jettison (the library ships with the 11.1.1.8 version of the product), native Java 7 capabilities or (as the above example demonstrates) you could roll your own JSON output, but that is not advised.

    Step 3: Create an HTML Report

    The JavaScript logic looks something like this:
1) Create a list of JSON feeds to fetch:

ENVS['dev-mgmngt'] = 'http://dev-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json';
ENVS['dev-dlvry'] = 'http://dev-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';
ENVS['test-mngmnt'] = 'http://test-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json';
ENVS['test-dlvry'] = 'http://test-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';

2) Create a function to get the JSON feeds:

function getDataForEnvironment(url){
  return $.ajax({
    type: 'GET',
    url: url,
    dataType: 'jsonp',
    beforeSend: function (jqXHR, settings){
      jqXHR.originalEnv = env;
      jqXHR.originalUrl = url;
    },
    success: function(json, status, jqXHR) {
      console.log('....success fetching: ' + jqXHR.originalUrl);
      // store the returned data in ALLDATA
      ALLDATA[jqXHR.originalEnv] = json;
    },
    error: function(jqXHR, status, e) {
      console.log('....ERROR: Failed to get data from [' + url + '] ' + status + ' ' + e);
    }
  });
}

3) Fetch each JSON feed:

for (var env in ENVS) {
  console.log('Fetching data for env [' + env +'].');
  var promisedData = getDataForEnvironment(ENVS[env]);
  promisedData.success(function (data) {});
}

4) For each configuration asset or setting, create a table in the report. For example, CSElements:
  1) Get a list of unique CSElement names from all of the returned JSON data.
  2) For each unique CSElement name, create a row in the table.
  3) Select one environment to represent the master or ideal state (e.g. "Everything should be like Production Delivery").
  4) For each environment, compare the last modified date of this environment's CSElement to the master. Highlight any differences in last modified date or missing CSElements (see the comparison sketch after the appendix below).
5) Repeat...

Appendix
This section contains various SQL statements that can be used to retrieve configuration settings from the DB.
Templates
<ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" />
CSElements
<ics:sql sql="SELECT name,updateddate FROM CSElement WHERE status != 'VO'" listname="CSEList" table="CSElement" />
Start Menus
<ics:sql sql="select sm.id, sm.cs_name, sm.cs_description, sm.cs_assettype, sm.cs_assetsubtype, sm.cs_itemtype, smr.cs_rolename, p.name from StartMenu sm, StartMenu_Sites sms, StartMenu_Roles smr, Publication p where sm.id=sms.ownerid and sm.id=smr.cs_ownerid and sms.pubid=p.id order by sm.id" listname="startList" table="Publication,StartMenu,StartMenu_Roles,StartMenu_Sites"/>
Publishing Configurations
<ics:sql sql="select id, name, description, type, dest, factors from PubTarget" listname="pubTargetList" table="PubTarget" />
Tree Tabs
<ics:sql sql="select tt.id, tt.title, tt.tooltip, p.name as pubname, ttr.cs_rolename, ttsect.name as sectname from TreeTabs tt, TreeTabs_Roles ttr, TreeTabs_Sect ttsect,TreeTabs_Sites ttsites LEFT JOIN Publication p on p.id=ttsites.pubid where p.id is not null and tt.id=ttsites.ownerid and ttsites.pubid=p.id and tt.id=ttr.cs_ownerid and tt.id=ttsect.ownerid order by tt.id" listname="treeTabList" table="TreeTabs,TreeTabs_Roles,TreeTabs_Sect,TreeTabs_Sites,Publication" />
Filters
<ics:sql sql="select name,description,classname from Filters" listname="filtersList" table="Filters" />
Attribute Types
<ics:sql sql="select id,valuetype,name,updateddate from AttrTypes where status != 'VO'" listname="AttrList" table="AttrTypes" />
WebReference Patterns
<ics:sql sql="select id,webroot,pattern,assettype,name,params,publication from WebReferencesPatterns" listname="WebRefList" table="WebReferencesPatterns" />
Device Groups
<ics:sql sql="select id,devicegroupsuffix,updateddate,name from DeviceGroup" listname="DeviceList" table="DeviceGroup" />
Site Entries
<ics:sql sql="select se.id,se.name,se.pagename,se.cselement_id,se.updateddate,cse.rootelement from SiteEntry se LEFT JOIN CSElement cse on cse.id = se.cselement_id where se.status != 'VO'" listname="SiteList" table="SiteEntry,CSElement" />
Webroots
<ics:sql sql="select id,name,rooturl,updatedby,updateddate from WebRoot" listname="webrootList" table="WebRoot" />
Page Definitions
<ics:sql sql="select pd.id, pd.name, pd.updatedby, pd.updateddate, pd.description, pdt.attributeid, pa.name as nameattr, pdt.requiredflag, pdt.ordinal from PageDefinition pd, PageDefinition_TAttr pdt, PageAttribute pa where pd.status != 'VO' and pa.id=pdt.attributeid and pdt.ownerid=pd.id order by pd.id,pdt.ordinal" listname="pageDefList" table="PageDefinition,PageAttribute,PageDefinition_TAttr" />
FW_Application
<ics:sql sql="select id,name,updateddate from FW_Application where status != 'VO'" listname="FWList" table="FW_Application" />
Custom Elements
<ics:sql sql="select elementname from ElementCatalog where elementname like 'CustomElements%'" listname="elementList" table="ElementCatalog" />
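Returning to step 4 of the report logic above: the comparison itself is just a walk over ALLDATA. The sketch below is a hypothetical illustration (the function name compareToMaster and the 'ok'/'DIFFERENT'/'MISSING' labels are mine, not from the original post); it assumes each feed exposes arrays of {name, modified} objects, as in the CSElement example.

// Hypothetical sketch: flag assets whose last modified date differs from the chosen master environment
function compareToMaster(assetType, masterEnv) {
  var masterAssets = ALLDATA[masterEnv][assetType] || [];
  var report = {};
  masterAssets.forEach(function (asset) {
    report[asset.name] = {};
    for (var env in ALLDATA) {
      var match = (ALLDATA[env][assetType] || []).filter(function (a) {
        return a.name === asset.name;
      })[0];
      // 'MISSING' when the asset is absent, 'DIFFERENT' when the modified dates disagree
      report[asset.name][env] = !match ? 'MISSING'
                              : (match.modified === asset.modified ? 'ok' : 'DIFFERENT');
    }
  });
  return report;
}

// Example: compare every environment's templates against test delivery
// console.log(compareToMaster('templates', 'test-dlvry'));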

    Read the article

  • MVC Portable Area Modules *Without* MasterPages

    - by Steve Michelotti
    Portable Areas from MvcContrib provide a great way to build modular and composite applications on top of MVC. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. I’ve blogged about Portable Areas in the past including this post here which talks about embedding resources and you can read more of an intro to Portable Areas here. As great as Portable Areas are, the question that seems to come up the most is: what about MasterPages? MasterPages seems to be the one thing that doesn’t work elegantly with portable areas because you specify the MasterPage in the @Page directive and it won’t use the same mechanism of the view engine so you can’t just embed them as resources. This means that you end up referencing a MasterPage that exists in the host application but not in your portable area. If you name the ContentPlaceHolderId’s correctly, it will work – but it all seems a little fragile. Ultimately, what I want is to be able to build a portable area as a module which has no knowledge of the host application. I want to be able to invoke the module by a full route on the user’s browser and it gets invoked and “automatically appears” inside the application’s visual chrome just like a MasterPage. So how could we accomplish this with portable areas? With this question in mind, I looked around at what other people are doing to address similar problems. Specifically, I immediately looked at how the Orchard team is handling this and I found it very compelling. Basically Orchard has its own custom layout/theme framework (utilizing a custom view engine) that allows you to build your module without any regard to the host. You simply decorate your controller with the [Themed] attribute and it will render with the outer chrome around it: 1: [Themed] 2: public class HomeController : Controller Here is the slide from the Orchard talk at this year MIX conference which shows how it conceptually works:   It’s pretty cool stuff.  So I figure, it must not be too difficult to incorporate this into the portable areas view engine as an optional piece of functionality. In fact, I’ll even simplify it a little – rather than have 1) Document.aspx, 2) Layout.ascx, and 3) <view>.ascx (as shown in the picture above); I’ll just have the outer page be “Chrome.aspx” and then the specific view in question. The Chrome.aspx not only takes the place of the MasterPage, but now since we’re no longer constrained by the MasterPage infrastructure, we have the choice of the Chrome.aspx living in the host or inside the portable areas as another embedded resource! Disclaimer: credit where credit is due – much of the code from this post is me re-purposing the Orchard code to suit my needs. To avoid confusion with Orchard, I’m going to refer to my implementation (which will be based on theirs) as a Chrome rather than a Theme. 
The first step I’ll take is to create a ChromedAttribute which adds a flag to the current HttpContext to indicate that the controller designated Chromed like this: 1: [Chromed] 2: public class HomeController : Controller The attribute itself is an MVC ActionFilter attribute: 1: public class ChromedAttribute : ActionFilterAttribute 2: { 3: public override void OnActionExecuting(ActionExecutingContext filterContext) 4: { 5: var chromedAttribute = GetChromedAttribute(filterContext.ActionDescriptor); 6: if (chromedAttribute != null) 7: { 8: filterContext.HttpContext.Items[typeof(ChromedAttribute)] = null; 9: } 10: } 11:   12: public static bool IsApplied(RequestContext context) 13: { 14: return context.HttpContext.Items.Contains(typeof(ChromedAttribute)); 15: } 16:   17: private static ChromedAttribute GetChromedAttribute(ActionDescriptor descriptor) 18: { 19: return descriptor.GetCustomAttributes(typeof(ChromedAttribute), true) 20: .Concat(descriptor.ControllerDescriptor.GetCustomAttributes(typeof(ChromedAttribute), true)) 21: .OfType<ChromedAttribute>() 22: .FirstOrDefault(); 23: } 24: } With that in place, we only have to override the FindView() method of the custom view engine with these 6 lines of code: 1: public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache) 2: { 3: if (ChromedAttribute.IsApplied(controllerContext.RequestContext)) 4: { 5: var bodyView = ViewEngines.Engines.FindPartialView(controllerContext, viewName); 6: var documentView = ViewEngines.Engines.FindPartialView(controllerContext, "Chrome"); 7: var chromeView = new ChromeView(bodyView, documentView); 8: return new ViewEngineResult(chromeView, this); 9: } 10:   11: // Just execute normally without applying Chromed View Engine 12: return base.FindView(controllerContext, viewName, masterName, useCache); 13: } If the view engine finds the [Chromed] attribute, it will invoke it’s own process – otherwise, it’ll just defer to the normal web forms view engine (with masterpages). The ChromeView’s primary job is to independently set the BodyContent on the view context so that it can be rendered at the appropriate place: 1: public class ChromeView : IView 2: { 3: private ViewEngineResult bodyView; 4: private ViewEngineResult documentView; 5:   6: public ChromeView(ViewEngineResult bodyView, ViewEngineResult documentView) 7: { 8: this.bodyView = bodyView; 9: this.documentView = documentView; 10: } 11:   12: public void Render(ViewContext viewContext, System.IO.TextWriter writer) 13: { 14: ChromeViewContext chromeViewContext = ChromeViewContext.From(viewContext); 15:   16: // First render the Body view to the BodyContent 17: using (var bodyViewWriter = new StringWriter()) 18: { 19: var bodyViewContext = new ViewContext(viewContext, bodyView.View, viewContext.ViewData, viewContext.TempData, bodyViewWriter); 20: this.bodyView.View.Render(bodyViewContext, bodyViewWriter); 21: chromeViewContext.BodyContent = bodyViewWriter.ToString(); 22: } 23: // Now render the Document view 24: this.documentView.View.Render(viewContext, writer); 25: } 26: } The ChromeViewContext (code excluded here) mainly just has a string property for the “BodyContent” – but it also makes sure to put itself in the HttpContext so it’s available. 
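For completeness, here is one possible shape for the ChromeViewContext described above; the author excluded his actual code, so treat this as a hypothetical sketch that simply matches the description (a BodyContent string property plus a From() helper that stashes the instance in HttpContext.Items):

// Hypothetical sketch only; the author's actual ChromeViewContext is not shown in the post.
using System.Web.Mvc;

public class ChromeViewContext
{
    public string BodyContent { get; set; }

    public static ChromeViewContext From(ViewContext viewContext)
    {
        // Keep a single instance per request, stored in HttpContext.Items
        var items = viewContext.HttpContext.Items;
        var current = items[typeof(ChromeViewContext)] as ChromeViewContext;
        if (current == null)
        {
            current = new ChromeViewContext();
            items[typeof(ChromeViewContext)] = current;
        }
        return current;
    }
}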
Finally, we created a little extension method so the module’s view can be rendered in the appropriate place: 1: public static void RenderBody(this HtmlHelper htmlHelper) 2: { 3: ChromeViewContext chromeViewContext = ChromeViewContext.From(htmlHelper.ViewContext); 4: htmlHelper.ViewContext.Writer.Write(chromeViewContext.BodyContent); 5: } At this point, the other thing left is to decide how we want to implement the Chrome.aspx page. One approach is to copy/paste the HTML from the typical Site.Master and change the main content placeholder to use the HTML helper above – this way, there are no MasterPages anywhere. Alternatively, we could even have Chrome.aspx utilize the MasterPage if we wanted (e.g., in the case where some pages are Chromed and some pages want to use a traditional MasterPage): 1: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %> 2: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> 3: <% Html.RenderBody(); %> 4: </asp:Content> At this point, it’s all academic. I can create a controller like this: 1: [Chromed] 2: public class WidgetController : Controller 3: { 4: public ActionResult Index() 5: { 6: return View(); 7: } 8: } Then I’ll just create Index.ascx (a partial view) and put in the text “Inside my widget”. Now when I run the app, I can request the full route (notice the controller name of “widget” in the address bar below) and the HTML from my Index.ascx will just appear where it is supposed to. This means no more warnings for missing MasterPages and no more need for your module to have knowledge of the host’s MasterPage placeholders. You have the option of using the Chrome.aspx in the host or providing your own by embedding it as an embedded resource itself. I’m curious to know what people think of this approach. The code above was done with my own local copy of MvcContrib, so it’s not currently something you can download. At this point, these are just my initial thoughts – just incorporating some ideas from Orchard into non-Orchard apps to enable building modular/composite apps more easily. Additionally, on the flip side, I still believe that Portable Areas have potential as the module packaging story for Orchard itself. What do you think?

    Read the article

  • Agile Like Jazz

    - by Jeff Certain
    (I’ve been sitting on this for a week or so now, thinking that it needed to be tightened up a bit to make it less rambling. Since that’s clearly not going to happen, reader beware!) I had the privilege of spending around 90 minutes last night sitting and listening to Sonny Rollins play a concert at the Disney Center in LA. If you don’t know who Sonny Rollins is, I don’t know how to explain the experience; if you know who he is, I don’t need to. Suffice it to say that he has been recording professionally for over 50 years, and helped create an entire genre of music. A true master by any definition. One of the most intriguing aspects of a concert like this, however, is watching the master step aside and let the rest of the musicians play. Not just play their parts, but really play… letting them take over the spotlight, to strut their stuff, to soak up enthusiastic applause from the crowd. Maybe a lot of it has to do with the fact that Sonny Rollins has been doing this for more than a half-century. Maybe it has something to do with a kind of patience you learn when you’re on the far side of 80 – and the man can still blow a mean sax for 90 minutes without stopping! Maybe it has to do with the fact that he was out there for the love of the music and the love of the show, not because he had anything to prove to anyone and, I like to think, not for the money. Perhaps it had more to do with the fact that, when you’re at that level of mastery, the other musicians are going to be good. Really good. Whatever the reasons, there was an incredible freedom on that stage – the ability to improvise, for each musician to showcase their own specialization and skills, and then come back to the common theme, back to being on the same page, as it were. All this took place in the same venue that is home to the L.A. Phil. Somehow, I can’t ever see the same kind of free-wheeling improvisation happening in that context. And, since I’m a geek, I started thinking about agility. Rollins has put together a quintet that reflects his own particular style and past. No upright bass or piano for Rollins – drums, bongos, electric guitar and bass guitar along with his sax. It’s not about the mix of instruments. Other trios, quartets, and sextets use different mixes of instruments. New Orleans jazz tends towards trombones instead of sax; some prefer cornet or trumpet. But no matter what the choice of instruments, size matters. Team sizes are something I’ve been thinking about for a while. We’re on a quest to rethink how our teams are organized. They just feel too big, too unwieldy. In fact, they really don’t feel like teams at all. Most of the time, they feel more like collections of people who happen to report to the same manager. I attribute this to a couple of factors. One is over-specialization; we have a tendency to have people work in silos. Although the teams are product-focused, within them our developers are both generalists and specialists. On the one hand, we expect them to be able to build an entire vertical slice of the application; on the other hand, each developer tends to be responsible for the vertical slice. As a result, developers often work on their own piece of the puzzle, in isolation. This sort of feels like working on a jigsaw in a group – each person taking a set of colors and piecing them together to reveal a portion of the overall picture. But what happens when you go to meld all those pieces together? Inevitably, you have some sections that are too big to move easily.
These sections end up falling apart under their own weight as you try to move them. Not only that, but there are other challenges – figuring out where that section fits, and how to tie it into the rest of the puzzle. Often, this is when you find a few pieces need to be added – these pieces are “glue,” if you will. The other issue that arises is due to the overhead of maintaining communications in a team. My mother, who worked in IT for around 30 years, once told me that 20% per team member is a good rule of thumb for maintaining communication. While this is a rule of thumb, it seems to imply that any team over about 6 people is going to become less agile simply because of the communications burden. Teams of ten or twelve seem like they fall into the philharmonic organizational model. Complicated pieces of music requiring dozens of players to all be on the same page require a much different model than the jazz quintet. There’s much less room for improvisation, originality or freedom. (There are probably orchestral musicians who will take exception to this characterization; I’m calling it like I see it from the cheap seats.) And, there’s one guy up front who is running the show, whose job is to keep all of those dozens of players on the same page, to facilitate communications. Somehow, the orchestral model doesn’t feel much like a self-organizing team, either. The first violin may be the best violinist in the orchestra, but they don’t get to perform free-wheeling solos. I’ve never heard of an orchestra getting together for a jam session. But I have heard of teams that organize their work based on the developers available, rather than organizing the developers based on the work required. I have heard of teams where desired functionality is deferred – or worse yet, schedules are missed – because one critical person doesn’t have any bandwidth available. I’ve heard of teams where people simply don’t have the big picture, because there is too much communication overhead for everyone to be aware of everything that is happening on a project. I once heard Paul Rayner say something to the effect of “you have a process that is perfectly designed to give you exactly the results you have.” Given a choice, I want a process that’s much more like jazz than orchestral music. I want a process that doesn’t burden me with lots of forms and checkboxes and stuff. Give me the simplest, most lightweight process that will work – and a smaller team of the best developers I can find. This seems like the kind of process that will get the kind of result I want to be part of.

    Read the article

  • The Faces in the Crowdsourcing

    - by Applications User Experience
    By Jeff Sauro, Principal Usability Engineer, Oracle Imagine having access to a global workforce of hundreds of thousands of people who can perform tasks or provide feedback on a design quickly and almost immediately. Distributing simple tasks not easily done by computers to the masses is called "crowdsourcing" and until recently was an interesting concept, but due to practical constraints wasn't used often. Enter Amazon.com. For five years, Amazon has hosted a service called Mechanical Turk, which provides an easy interface to the crowds. The service has almost half a million registered, global users performing a quarter of a million human intelligence tasks (HITs). HITs are submitted by individuals and companies in the U.S. and pay from $.01 for simple tasks (such as determining if a picture is offensive) to several dollars (for tasks like transcribing audio). What do we know about the people who toil away in this digital crowd? Can we rely on the work done in this anonymous marketplace? A rendering of the actual Mechanical Turk (from Wikipedia) Knowing who is behind Amazon's Mechanical Turk is fitting, considering the history of the actual Mechanical Turk. In the late 1800's, a mechanical chess-playing machine awed crowds as it beat master chess players in what was thought to be a mechanical miracle. It turned out that the creator, Wolfgang von Kempelen, had a small person (also a chess master) hiding inside the machine operating the arms to provide the illusion of automation. The field of human computer interaction (HCI) is quite familiar with gathering user input and incorporating it into all stages of the design process. It makes sense then that Mechanical Turk was a popular discussion topic at the recent Computer Human Interaction usability conference sponsored by the Association for Computing Machinery in Atlanta. It is already being used as a source for input on Web sites (for example, Feedbackarmy.com) and behavioral research studies. Two papers shed some light on the faces in this crowd. One paper tells us about the shifting demographics from mostly stay-at-home moms to young men in India. The second paper discusses the reliability and quality of work from the workers. Just who exactly would spend time doing tasks for pennies? In "Who are the crowdworkers?" University of California researchers Ross, Silberman, Zaldivar and Tomlinson conducted a survey of Mechanical Turk worker demographics and compared it to a similar survey done two years before. The initial survey reported workers consisting largely of young, well-educated women living in the U.S. with annual household incomes above $40,000. The more recent survey reveals a shift in demographics largely driven by an influx of workers from India. Indian workers went from 5% to over 30% of the crowd, and this block is largely male (two-thirds) with a higher average education than U.S. workers, and 64% report an annual income of less than $10,000 (keeping in mind $1 has a lot more purchasing power in India). This shifting demographic certainly has implications as language and culture can play critical roles in the outcome of HITs. Of course, the demographic data came from paying Turkers $.10 to fill out a survey, so there is some question about both a self-selection bias (characteristics which cause Turks to take this survey may be unrepresentative of the larger population), not to mention whether we can really trust the data we get from the crowd. 
Crowds can perform tasks or provide feedback on a design quickly and almost immediately for usability testing. (Photo attributed to victoriapeckham Flikr While having immediate access to a global workforce is nice, one major problem with Mechanical Turk is the incentive structure. Individuals and companies that deploy HITs want quality responses for a low price. Workers, on the other hand, want to complete the task and get paid as quickly as possible, so that they can get on to the next task. Since many HITs on Mechanical Turk are surveys, how valid and reliable are these results? How do we know whether workers are just rushing through the multiple-choice responses haphazardly answering? In "Are your participants gaming the system?" researchers at Carnegie Mellon (Downs, Holbrook, Sheng and Cranor) set up an experiment to find out what percentage of their workers were just in it for the money. The authors set up a 30-minute HIT (one of the more lengthy ones for Mechanical Turk) and offered a very high $4 to those who qualified and $.20 to those who did not. As part of the HIT, workers were asked to read an email and respond to two questions that determined whether workers were likely rushing through the HIT and not answering conscientiously. One question was simple and took little effort, while the second question required a bit more work to find the answer. Workers were led to believe other factors than these two questions were the qualifying aspect of the HIT. Of the 2000 participants, roughly 1200 (or 61%) answered both questions correctly. Eighty-eight percent answered the easy question correctly, and 64% answered the difficult question correctly. In other words, about 12% of the crowd were gaming the system, not paying enough attention to the question or making careless errors. Up to about 40% won't put in more than a modest effort to get paid for a HIT. Young men and those that considered themselves in the financial industry tended to be the most likely to try to game the system. There wasn't a breakdown by country, but given the demographic information from the first article, we could infer that many of these young men come from India, which makes language and other cultural differences a factor. These articles raise questions about the role of crowdsourcing as a means for getting quick user input at low cost. While compensating users for their time is nothing new, the incentive structure and anonymity of Mechanical Turk raises some interesting questions. How complex of a task can we ask of the crowd, and how much should these workers be paid? Can we rely on the information we get from these professional users, and if so, how can we best incorporate it into designing more usable products? Traditional usability testing will still play a central role in enterprise software. Crowdsourcing doesn't replace testing; instead, it makes certain parts of gathering user feedback easier. One can turn to the crowd for simple tasks that don't require specialized skills and get a lot of data fast. As more studies are conducted on Mechanical Turk, I suspect we will see crowdsourcing playing an increasing role in human computer interaction and enterprise computing. References: Downs, J. S., Holbrook, M. B., Sheng, S., and Cranor, L. F. 2010. Are your participants gaming the system?: screening mechanical turk workers. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 - 15, 2010). CHI '10. ACM, New York, NY, 2399-2402. 
Link: http://doi.acm.org/10.1145/1753326.1753688 Ross, J., Irani, L., Silberman, M. S., Zaldivar, A., and Tomlinson, B. 2010. Who are the crowdworkers?: shifting demographics in mechanical turk. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 - 15, 2010). CHI EA '10. ACM, New York, NY, 2863-2872. Link: http://doi.acm.org/10.1145/1753846.1753873

    Read the article

  • Maintenance plans love story

    - by Maria Zakourdaev
    There are about 200 QA and DEV SQL Servers out there. There is a maintenance plan on many of them that performs a backup of all databases and removes the backup history files. First of all, I must admit that I’m no big fan of maintenance plans in particular or the SSIS packages in general. In this specific case, if I ever need to change anything in the way backup is performed, such as the compression feature or some other change, I have to open each plan one by one. This is quite a pain. Therefore, I have decided to replace the maintenance plans with a stored procedure that will perform exactly the same thing. Having such a procedure will allow me to open multiple server connections and just execute an ALTER PROCEDURE whenever I need to change anything in it. There is nothing like good ole T-SQL.
The first challenge was to remove the unneeded maintenance plans. Of course, I didn’t want to do it server by server. I found the procedure msdb.dbo.sp_maintplan_delete_plan, but it only has a parameter for the maintenance plan id and no other parameters, like plan name, which would have been much more useful. Now I needed to find the table that holds all maintenance plans on the server. You would think that it would be msdb.dbo.sysdbmaintplans but, unfortunately, regardless of the number of maintenance plans on the instance, it contains just one row. After a while I found another table: msdb.dbo.sysmaintplan_subplans. It contains the plan id that I was looking for, in the plan_id column, as well as the agent's job id which executes the plan's package. That was all I needed and the rest turned out to be quite easy. Here is a script that can be executed against hundreds of servers from a multi-server query window to drop the specific maintenance plans.

DECLARE @PlanID uniqueidentifier

SELECT @PlanID = plan_id
FROM msdb.dbo.sysmaintplan_subplans
WHERE name LIKE 'BackupPlan%'

EXECUTE msdb.dbo.sp_maintplan_delete_plan @plan_id=@PlanID

The second step was to create a procedure that will perform all of the old maintenance plan tasks: create a folder for each database, back up all databases on the server and clean up the old files. The script is below. Enjoy.

ALTER PROCEDURE BackupAllDatabases
       @PrintMode BIT = 1
AS
BEGIN
       DECLARE @BackupLocation VARCHAR(500)
       DECLARE @PurgeAferDays INT
       DECLARE @PurgingDate VARCHAR(30)
       DECLARE @SQLCmd VARCHAR(MAX)
       DECLARE @FileName VARCHAR(100)

       SET @PurgeAferDays = -14
       SET @BackupLocation = '\\central_storage_servername\BACKUPS\' + @@servername
       SET @PurgingDate = CONVERT(VARCHAR(19), DATEADD(dd, @PurgeAferDays, GETDATE()), 126)
       SET @FileName = '?_full_' + REPLACE(CONVERT(VARCHAR(19), GETDATE(), 126), ':', '-') + '.bak';

       SET @SQLCmd = '
              IF ''?'' <> ''tempdb'' BEGIN
                     EXECUTE master.dbo.xp_create_subdir N''' + @BackupLocation + '\?\'' ;

                     BACKUP DATABASE ? TO DISK = N''' + @BackupLocation + '\?\' + @FileName + '''
                     WITH NOFORMAT, NOINIT, SKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 10 ;

                     EXECUTE master.dbo.xp_delete_file 0, N''' + @BackupLocation + '\?\'', N''bak'', N''' + @PurgingDate + ''', 1;
              END'

       IF @PrintMode = 1 BEGIN
              PRINT @SQLCmd
       END

       EXEC sp_MSforeachdb @SQLCmd
END
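A hedged usage note: as written, the procedure prints the generated command template when @PrintMode = 1 and then runs it in either case, so a quick sanity check from a multi-server query window might look like this:

EXEC dbo.BackupAllDatabases @PrintMode = 1;  -- prints the command template (with the ? placeholder), then backs up every database
EXEC dbo.BackupAllDatabases @PrintMode = 0;  -- same backups, without printing the template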

    Read the article

  • Could not start ZK at requested port of 2181, while export HBASE_MANAGES_ZK=false

    - by utrecht
    Problem
The first aim was to run HBase standalone. Navigating to ip:60010/master-status is successful once HBase has been started. The second aim is to run a distinct ZooKeeper quorum. ZooKeeper has been downloaded and has been started:

netstat -nato | grep 2181
tcp 0 0 :::2181 :::* LISTEN off (0.00/0/0)

The conf/hbase-env.sh was changed as follows, so that HBase does not start its own ZooKeeper when it starts up:

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=false

However, the following error occurs once HBase has been started:

Could not start ZK at requested port of 2181. ZK was started at port: 2182. Aborting as clients (e.g. shell) will not be able to find this ZK quorum.

Question
How to disable the startup of ZooKeeper by HBase and run ZooKeeper separately?
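Not an authoritative answer, but in my experience HBase only honours an external quorum when it is told it is running in distributed mode; in fully standalone mode it still spins up its own mini ZooKeeper, which would explain the fallback to port 2182. A hedged sketch of the relevant conf/hbase-site.xml properties (values are placeholders for a single-node setup):

<configuration>
  <!-- Distributed/pseudo-distributed mode: HBase stops embedding its own ZooKeeper -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- Point HBase at the externally managed quorum -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>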

    Read the article

  • OpenNebula: [HostPoolInfo] User couldn't be authenticated, aborting call

    - by ulf
    I installed OpenNebula 3.2.1 following the guide found under http://opennebula.org/documentation:rel3.2:ignc on a Debian 6.0.4 machine. Everything seemed fine until I tried to execute the command onevm list. Then I always get this:

oneadmin@opennebula-master:~$ onevm list
[VirtualMachinePoolInfo] User couldn't be authenticated, aborting call.

The file one_auth exists. I even gave the oneadmin user a password, although it doesn't seem to be required according to the guide. I copied the password hash from /etc/shadow to the one_auth file. Still no success. Any ideas are appreciated.
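One hedged thing worth checking (an assumption on my part, not confirmed by the poster's setup): one_auth is normally expected to hold the plain-text credentials in the form username:password, not a hash copied from /etc/shadow, and the ONE_AUTH variable has to point at it for the CLI tools. A sketch:

# as the oneadmin user; the password here is a placeholder
mkdir -p ~/.one
echo 'oneadmin:your_plaintext_password' > ~/.one/one_auth
chmod 600 ~/.one/one_auth
export ONE_AUTH=~/.one/one_auth
onevm list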

    Read the article

  • setting up bind to work with nsupdate (SERVFAIL)

    - by funny_ha_ha
    I'm trying to update my DNS-Server dynamically using nsupdate. Prerequisite I'm using Debian 6 on my DNS-Server and Debian 4 on my client. I created a public/private key pair using: dnssec-keygen -C -a HMAC-MD5 -b 512 -n USER sub.example.com. I then edited my named.conf.local to contain my public key and the new zone i wish to update. It now looks like this (note: I also tried allow-update { any; }; without success): zone "example.com" { type master; file "/etc/bind/primary/example.com"; notify yes; allow-update { none; }; allow-query { any; }; }; zone "sub.example.com" { type master; file "/etc/bind/primary/sub.example.com"; notify yes; allow-update { key "sub.example.com."; }; allow-query { any; }; }; key sub.example.com. { algorithm HMAC-MD5; secret "xxxx xxxx"; }; Next, I copied the private key file (key.private) to another server I want to update the zone from. I also created a textfile (update) on this server which contained the update information (note: I tried toying around with this stuff too. no success): server example.com zone sub.example.com update add sub.example.com. 86400 A 10.10.10.1 show send Now I'm trying to update the zone using: nsupdate -k key.private -v update The Problem Said command gives me the following output: Outgoing update query: ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 0 ;; flags: ; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0 ;; ZONE SECTION: ;sub.example.com. IN SOA ;; UPDATE SECTION: sub.example.com. 86400 IN A 10.10.10.1 update failed: SERVFAIL named debug Level 3 gives me the following information when I issue the nsupdate command on the remote server (note: I obfuscated the client IP): 06-Aug-2012 14:51:33.977 client X.X.X.X#33182: new TCP connection 06-Aug-2012 14:51:33.977 client X.X.X.X#33182: replace 06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: createclients 06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: recycle 06-Aug-2012 14:51:33.978 client @0x2ada475f1120: accept 06-Aug-2012 14:51:33.978 client X.X.X.X#33182: read 06-Aug-2012 14:51:33.978 client X.X.X.X#33182: TCP request 06-Aug-2012 14:51:33.978 client X.X.X.X#33182: request has valid signature 06-Aug-2012 14:51:33.978 client X.X.X.X#33182: recursion not available 06-Aug-2012 14:51:33.978 client X.X.X.X#33182: update 06-Aug-2012 14:51:33.978 client X.X.X.X#33182: send 06-Aug-2012 14:51:33.978 client X.X.X.X#33182: sendto 06-Aug-2012 14:51:33.979 client X.X.X.X#33182: senddone 06-Aug-2012 14:51:33.979 client X.X.X.X#33182: next 06-Aug-2012 14:51:33.979 client X.X.X.X#33182: endrequest 06-Aug-2012 14:51:33.979 client X.X.X.X#33182: read 06-Aug-2012 14:51:33.986 client X.X.X.X#33182: next 06-Aug-2012 14:51:33.986 client X.X.X.X#33182: request failed: end of file 06-Aug-2012 14:51:33.986 client X.X.X.X#33182: endrequest 06-Aug-2012 14:51:33.986 client X.X.X.X#33182: closetcp But it doesn't do anything. The zone isn't updated, nor does my nsupdate change anything. I'm not sure if the file /etc/bind/primary/sub.example.com should exist prior to the first update or not. I tried it without the file, with an empty file and with a pre-configured zone file. Without success. The sparse information I found on the net pointed me towards file and folder permissions regarding the bind working directory, so I changed the permissions of both /etc/bind and /var/cache/bind (which is the home dir of my "bind" user). I'm not a 100% sure if the permissions are correct.. but it looks good to me: ls -lah /var/cache/bind/ total 224K drwxrwxr-x 2 bind bind 4.0K Aug 6 03:13 . 
drwxr-xr-x 12 root root 4.0K Jul 21 11:27 .. -rw-r--r-- 1 bind bind 211K Aug 6 03:21 named.run ls -lah /etc/bind/ total 72K drwxr-sr-x 3 bind bind 4.0K Aug 6 14:41 . drwxr-xr-x 87 root root 4.0K Jul 30 01:24 .. -rw------- 1 bind bind 125 Aug 6 02:54 key.public -rw------- 1 bind bind 156 Aug 6 02:54 key.private -rw-r--r-- 1 bind bind 2.5K Aug 6 03:07 bind.keys -rw-r--r-- 1 bind bind 237 Aug 6 03:07 db.0 -rw-r--r-- 1 bind bind 271 Aug 6 03:07 db.127 -rw-r--r-- 1 bind bind 237 Aug 6 03:07 db.255 -rw-r--r-- 1 bind bind 353 Aug 6 03:07 db.empty -rw-r--r-- 1 bind bind 270 Aug 6 03:07 db.local -rw-r--r-- 1 bind bind 3.0K Aug 6 03:07 db.root -rw-r--r-- 1 bind bind 493 Aug 6 03:32 named.conf -rw-r--r-- 1 bind bind 490 Aug 6 03:07 named.conf.default-zones -rw-r--r-- 1 bind bind 1.2K Aug 6 14:18 named.conf.local -rw-r--r-- 1 bind bind 666 Jul 29 22:51 named.conf.options drwxr-sr-x 2 bind bind 4.0K Aug 6 03:57 primary/ -rw-r----- 1 root bind 77 Mar 19 02:57 rndc.key -rw-r--r-- 1 bind bind 1.3K Aug 6 03:07 zones.rfc1918 ls -lah /etc/bind/primary/ total 20K drwxr-sr-x 2 bind bind 4.0K Aug 6 03:57 . drwxr-sr-x 3 bind bind 4.0K Aug 6 14:41 .. -rw-r--r-- 1 bind bind 356 Jul 30 00:45 example.com
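One hedged observation (an assumption, since the log shows the TSIG signature being accepted yet the update still failing): dynamic updates require named to write a journal (.jnl) file next to the zone file, and on Debian the daemon typically may not write inside /etc/bind (AppArmor profile and ownership), while /var/lib/bind and /var/cache/bind are writable. Also, the zone's master file must already exist with at least an SOA and NS record before the first update. A sketch of what I would try:

# move the dynamic zone somewhere named can write its journal
mv /etc/bind/primary/sub.example.com /var/lib/bind/sub.example.com
chown bind:bind /var/lib/bind/sub.example.com

# in named.conf.local, point the zone at the new location:
#   zone "sub.example.com" { type master; file "/var/lib/bind/sub.example.com"; allow-update { key "sub.example.com."; }; };

named-checkconf
named-checkzone sub.example.com /var/lib/bind/sub.example.com
rndc reload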

    Read the article

  • Multi-level navigation controller on left-hand side of UISplitView with a small twist.

    - by user141146
    Hi. I'm trying make something similar to (but not exactly like) the email app found on the iPad. Specifically, I'd like to create a tab-based app, but each tab would present the user with a different UISplitView. Each UISplitView contains a Master and a Detail view (obviously). In each UISplitView I would like the Master to be a multi-level navigational controller where new UIViewControllers are pushed onto (or popped off of) the stack. This type of navigation within the UISplitView is where the application is similar to the native email app. To the best of my knowledge, the only place that has described a decent "splitviewcontroller inside of a uitabbarcontroller" is here: http://stackoverflow.com/questions/2475139/uisplitviewcontroller-in-a-tabbar-uitabbarcontroller and I've tried to follow the accepted answer. The accepted solution seems to work for me (i.e., I get a tab-bar controller that allows me to switch between different UISplitViews). The problem is that I don't know how to make the left-hand side of the UISplitView to be a multi-level navigation controller. Here is the code I used within my app delegate to create the initial "split view 'inside' of a tab bar controller" (it's pretty much as suggested in the aforementioned link). - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { NSMutableArray *tabArray = [NSMutableArray array]; NSMutableArray *array = [NSMutableArray array]; UISplitViewController *splitViewController = [[UISplitViewController alloc] init]; MainViewController *viewCont = [[MainViewController alloc] initWithNibName:@"MainViewController" bundle:nil]; [array addObject:viewCont]; [viewCont release]; viewCont = [[DetailViewController alloc] initWithNibName:@"DetailViewController" bundle:nil]; [array addObject:viewCont]; [viewCont release]; [splitViewController setViewControllers:array]; [tabArray addObject:splitViewController]; [splitViewController release]; array = [NSMutableArray array]; splitViewController = [[UISplitViewController alloc] init]; viewCont = [[Master2 alloc] initWithNibName:@"Master2" bundle:nil]; [array addObject:viewCont]; [viewCont release]; viewCont = [[Slave2 alloc] initWithNibName:@"Slave2" bundle:nil]; [array addObject:viewCont]; [viewCont release]; [splitViewController setViewControllers:array]; [tabArray addObject:splitViewController]; [splitViewController release]; // Add the tab bar controller's current view as a subview of the window [tabBarController setViewControllers:tabArray]; [window addSubview:tabBarController.view]; [window makeKeyAndVisible]; return YES; } the class MainViewController is a UIViewController that contains the following method: - (IBAction)push_me:(id)sender { M2 *m2 = [[[M2 alloc] initWithNibName:@"M2" bundle:nil] autorelease]; [self.navigationController pushViewController:m2 animated:YES]; } this method is attached (via interface builder) to a UIButton found within MainViewController.xib Obviously, the method above (push_me) is supposed to create a second UIViewController (called m2) and push m2 into view on the left-side of the split-view when the UIButton is pressed. And yet it does nothing when the button is pressed (even though I can tell that the method is called). Thoughts on where I'm going wrong? TIA!
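Not a definitive answer, but one common cause of this symptom: self.navigationController is nil unless the master side of the split view is itself a UINavigationController, so pushViewController:animated: silently does nothing. A hedged sketch of the app-delegate wiring (same manual-retain style as the question), wrapping MainViewController in a navigation controller before handing it to the split view:

// Master side: wrap the root view controller in a UINavigationController
MainViewController *masterVC = [[MainViewController alloc] initWithNibName:@"MainViewController" bundle:nil];
UINavigationController *masterNav = [[UINavigationController alloc] initWithRootViewController:masterVC];
[masterVC release];

DetailViewController *detailVC = [[DetailViewController alloc] initWithNibName:@"DetailViewController" bundle:nil];

UISplitViewController *splitViewController = [[UISplitViewController alloc] init];
[splitViewController setViewControllers:[NSArray arrayWithObjects:masterNav, detailVC, nil]];
[masterNav release];
[detailVC release];

// push_me: can then use self.navigationController exactly as written in the question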

    Read the article

  • /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID

    - by user1495181
    using ubuntu with net-snmp snmp work but in sys.log i see a lot of errors about snmpd.conf snmpd.conf: rwcommunity community 10.0.0.1 rwcommunity community 10.0.0.2 agentAddress udp:10.0.0.1:161 view systemonly included .1.3.6.1.2.1.1 view systemonly included .1.3.6.1.2.1.25.1 # Default access to basic system info rocommunity public default -V systemonly rouser authOnlyUser sysLocation Sitting on the Dock of the Bay sysContact Me <[email protected]> sysServices 72 proc mountd proc ntalkd 4 proc sendmail 10 1 disk / 10000 disk /var 5% includeAllDisks 10% load 12 10 5 trapsink localhost public iquerySecName internalUser rouser internalUser defaultMonitors yes linkUpDownNotifications yes master agentx errors: Sep 12 16:35:00 test snmpd[5485]: payload OID: prNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: prNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: payload OID: prErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: prErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: prErrorFlag Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: memErrorName Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: memErrorName Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: payload OID: memSwapErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: memSwapErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: memSwapError Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: extNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: extNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: payload OID: extOutput Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: extOutput Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: extResult Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: dskPath Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: dskPath Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: 
payload OID: dskErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: dskErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: dskErrorFlag Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: laNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: laNames Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: payload OID: laErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: laErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: laErrorFlag Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: fileName Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: fileName Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: payload OID: fileErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: fileErrorMsg Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: fileErrorFlag Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: payload OID: snmperrErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: snmperrErrMessage Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID Sep 12 16:35:00 test snmpd[5485]: trigger OID: snmperrErrorFlag Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID Sep 12 16:35:00 test snmpd[5485]: Turning on AgentX master support. Sep 12 16:35:00 test snmpd[5485]: net-snmp: 33 error(s) in config file(s)

    Read the article

  • C# XmlSchemaSet - Resolving included schemas to enumerate attribute groups

    - by satixx
    Hi, I currently have a compiled XmlSchemaSet from which I get possible elements/attributes for each specific "parent" element defined in the schema. I have a single "master" xsd schema which includes another schema and uses attributeGroup references for some "common" elements. Here is a sample: (MasterSchema.xsd) <?xml version="1.0" encoding="utf-8"?> <xs:schema id="MasterSchema" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="CommonNamespace.xsd" xmlns="CommonNamespace.xsd" xmlns:mstns="CommonNamespace.xsd" elementFormDefault="qualified" > <xs:include schemaLocation="SourceAttributeGroups.xsd"/> <xs:element name="Binding" minOccurs="0" maxOccurs="2"> <xs:complexType> <xs:attribute name="Name" type="xs:string"/> <xs:attributeGroup ref="BindingSourceAttributeGroup"/> </xs:complexType> </xs:element> </xs:schema> (SourceAttributeGroups.xsd) <?xml version="1.0" encoding="utf-8"?> <xs:schema id="SourceAttributeGroups" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="CommonNamespace.xsd" xmlns="CommonNamespace.xsd" xmlns:mstns="CommonNamespace.xsd" elementFormDefault="qualified" > <xs:attributeGroup id="BindingSourceAttributeGroup" name="BindingSourceAttributeGroup"> <xs:attribute name="Source"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:enumeration value="Data"/> </xs:restriction> </xs:simpleType> </xs:attribute> <!-- When Source is None --> <xs:attribute name="Value" type="xs:string"/> <!-- Label --> <xs:attribute name="Label" type="xs:string"/> </attributeGroup> </xs:schema> I would like to create an XmlSchemaSet in C# which would resolve, compile and "merge" every references of the MasterSchema so it would finaly look like this: (MasterSchema.xsd) <?xml version="1.0" encoding="utf-8"?> <xs:schema id="MasterSchema" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="CommonNamespace.xsd" xmlns="CommonNamespace.xsd" xmlns:mstns="CommonNamespace.xsd" elementFormDefault="qualified" > <xs:include schemaLocation="SourceAttributeGroups.xsd"/> <xs:element name="Binding" minOccurs="0" maxOccurs="2"> <xs:complexType> <xs:attribute name="Name" type="xs:string"/> <xs:attribute name="Source"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:enumeration value="Data"/> </xs:restriction> </xs:simpleType> </xs:attribute> <!-- When Source is None --> <xs:attribute name="Value" type="xs:string"/> <!-- Label --> <xs:attribute name="Label" type="xs:string"/> </xs:complexType> </xs:element> </xs:schema> This way, I could traverse each XmlSchemaParticle of the compiled schema to get every single attribute for a specific element, even when its attributes are defined in an external schema. At the moment, when I get the possible attributes for a "Binding" element, I only get the "Name" attribute since it is originally defined in the "master" schema. What would be the possible solutions to this problem? Thanks! Satixx
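A hedged sketch of reading the flattened attributes: once the XmlSchemaSet is compiled, attribute group references are resolved, so the attributes contributed by BindingSourceAttributeGroup should appear in the complex type's AttributeUses table rather than in its raw Attributes collection (file and namespace names below are taken from the post; the rest is an illustration, not the poster's code). Note that minOccurs/maxOccurs are not allowed on a global element declaration, so the posted MasterSchema.xsd may need those removed before Compile() succeeds.

using System;
using System.Xml.Schema;

class SchemaAttributeDump
{
    static void Main()
    {
        var schemaSet = new XmlSchemaSet();
        // Includes (SourceAttributeGroups.xsd) are resolved relative to the master schema's location
        schemaSet.Add("CommonNamespace.xsd", "MasterSchema.xsd");
        schemaSet.Compile();

        foreach (XmlSchemaElement element in schemaSet.GlobalElements.Values)
        {
            Console.WriteLine(element.Name);
            var complexType = element.ElementSchemaType as XmlSchemaComplexType;
            if (complexType == null) continue;

            // AttributeUses holds the post-compilation, flattened set of attributes
            foreach (XmlSchemaAttribute attribute in complexType.AttributeUses.Values)
            {
                Console.WriteLine("  " + attribute.Name);
            }
        }
    }
}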

    Read the article

  • Postfix Submission port issue

    - by RevSpot
    I have setup postfix+mailman on my debian server and i have an issue with postfix submission port. My ISP blocks SMTP on port 25 to prevent *spams and i must to use submission port (587). I have uncomment the following line from master.cf (/etc/postfix/) but nothing happens. submission inet n - - - - smtpd This is my mail logs file when i try to invite a user to mailman list Nov 6 00:35:34 myhostname postfix/qmgr[1763]: C90BF1060D: from=<[email protected]>, size=1743, nrcpt=1 (queue active) Nov 6 00:35:34 myhostname postfix/qmgr[1763]: DF54B10608: from=<[email protected]>, size=488, nrcpt=1 (queue active) Nov 6 00:35:34 myhostname postfix/qmgr[1763]: 80F0D10609: from=<[email protected]>, size=483, nrcpt=1 (queue active) Nov 6 00:35:55 myhostname postfix/smtp[2269]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out Nov 6 00:35:55 myhostname postfix/smtp[2270]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out Nov 6 00:35:55 myhostname postfix/smtp[2271]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out Nov 6 00:36:16 myhostname postfix/smtp[2269]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out Nov 6 00:36:16 myhostname postfix/smtp[2270]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out Nov 6 00:36:16 myhostname postfix/smtp[2271]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out Nov 6 00:36:37 myhostname postfix/smtp[2269]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out Nov 6 00:36:37 myhostname postfix/smtp[2270]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out Nov 6 00:36:37 myhostname4 postfix/smtp[2271]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out Nov 6 00:36:58 myhostname postfix/smtp[2269]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out Nov 6 00:36:58 myhostname postfix/smtp[2270]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out Nov 6 00:36:58 myhostname postfix/smtp[2271]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2269]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2270]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2269]: C90BF1060D: to=<[email protected]>, relay=none, delay=23711, delays=23606/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out) Nov 6 00:37:19 myhostname postfix/smtp[2271]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2270]: DF54B10608: to=<[email protected]>, relay=none, delay=23882, delays=23777/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out) Nov 6 00:37:19 myhostname postfix/smtp[2271]: 80F0D10609: to=<[email protected]>, relay=none, delay=23875, delays=23770/0.04/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out) main.cf smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no append_dot_mydomain = no readme_directory = no 
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache myhostname = mail.mydomain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = mail.mydomain.com, localhost.mydomain.com,localhost relayhost = relay_domains = $mydestination, mail.mydomain.com relay_recipient_maps = hash:/var/lib/mailman/data/virtual-mailman transport_maps = hash:/etc/postfix/transport mailman_destination_recipient_limit = 1 mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all local_recipient_maps = master.cf smtp inet n - - - - smtpd submission inet n - - - - smtpd # -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
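A hedged note on the direction of the problem: the submission entry in master.cf only opens port 587 for connections coming in to this server; it has no effect on the port Postfix uses for outbound delivery, which is why the log still shows timeouts to port 25 on Google's MX hosts. With outbound 25 blocked by the ISP, mail usually has to be routed through the ISP's (or another provider's) smarthost. A sketch for main.cf, with placeholder host and credentials:

relayhost = [smtp.yourisp.example]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = may

# /etc/postfix/sasl_passwd would then contain a line like:
#   [smtp.yourisp.example]:587  username:password
# followed by: postmap /etc/postfix/sasl_passwd && postfix reload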

    Read the article

  • MySQL replication Slave_IO_Running: No

    - by Christy
    Hi all, I have two servers between which I am trying to set up replication of one database. I followed a setup guide from SourceForge and have tried various other settings since then, but no matter what I do, when I start the slave, 'Slave_IO_Running' is always No. I have no idea why or what to look at, so any suggestions are appreciated. The slave setup was:

    CHANGE MASTER TO MASTER_HOST='myserver.mydomain.net', MASTER_USER='slave_user', MASTER_PASSWORD='mypassword', MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=1368363;

    (That is the latest data from today; I am trying to do the setup again. I deleted and recreated the database on the slave from a new dump and redid the setup.) I have slave_user set up for %, localhost, and the specific IP of the slave computer, but nothing seems to be working... Thanks in advance for any advice or suggestions
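
    A hedged diagnostic sketch (the host and user names are taken from the question; everything else is generic): when Slave_IO_Running stays at No, the I/O thread's own error in SHOW SLAVE STATUS is normally the quickest lead, and the usual culprits are the slave not being able to reach the master's port, the replication account lacking the REPLICATION SLAVE privilege, or the CHANGE MASTER coordinates pointing at a binlog file the master no longer has.

    # On the slave: read the I/O thread's error directly.
    mysql -u root -p -e "SHOW SLAVE STATUS\G" | egrep 'Slave_IO_Running|Last_IO_Errno|Last_IO_Error'

    # Can the slave actually log in to the master over the network as the replication user?
    mysql -h myserver.mydomain.net -u slave_user -p -e "SELECT 1;"

    # On the master: confirm the grant and that mysql-bin.000011 still exists.
    mysql -u root -p -e "SHOW GRANTS FOR 'slave_user'@'%'; SHOW BINARY LOGS; SHOW MASTER STATUS;"

    # After fixing whatever Last_IO_Error points at, restart replication.
    mysql -u root -p -e "STOP SLAVE; START SLAVE;"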

    Read the article

  • Can't connect to MySQL server on '127.0.0.1' + Postfix

    - by Andrew Dakin
    I just installed Postfix and configured it to use MySQL. It wasn't sending any emails out after I did that so I checked /var/log/mail.log and it came back with this:

    postfix/trivial-rewrite[5283]: fatal: proxy:mysql:/etc/postfix/mysql-domains.cf(0,lock|fold_fix): table lookup problem
    postfix/cleanup[5258]: warning: AFCDC30437: virtual_alias_maps map lookup problem for [email protected]
    postfix/master[4761]: warning: process /usr/lib/postfix/trivial-rewrite pid 5282 exit status 1
    postfix/proxymap[4126]: warning: connect to mysql server 127.0.0.1: Can't connect to MySQL server on '127.0.0.1' (110)

    In mysql-domains.cf I'm using:

    hosts = 127.0.0.1

    I can connect to MySQL with this:

    mysql -u postfixuser -p

    But I can't connect this way:

    mysql -u postfixuser -h 127.0.0.1 -p maildbname

    Also when I run netstat -l it comes back with:

    tcp 0 0 localhost:mysql *:* LISTEN

    I've tried changing my hosts to:

    hosts = localhost

    But then I just get a socket error:

    postfix/cleanup[4870]: warning: connect to mysql server localhost: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

    I also have this set up in the MySQL config file:

    bind-address = 127.0.0.1

    I'm sure I'm missing something obvious, but I am pretty new to all this. Thanks! Andy
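
    A short diagnostic sketch, assuming stock Debian paths (error 110 is a timeout rather than "connection refused", so it is worth ruling out a firewall or a disabled TCP listener before touching the Postfix side):

    # Is mysqld listening on TCP 3306, and on which address?
    sudo netstat -tlnp | grep 3306

    # Does the MySQL config disable networking or bind somewhere unexpected?
    grep -RE 'skip-networking|bind-address' /etc/mysql/

    # Refused vs. timeout from the loopback interface is the key clue.
    nc -vz 127.0.0.1 3306

    # A local firewall rule dropping loopback or port 3306 traffic would produce exactly this timeout.
    sudo iptables -L -n -v | grep -E '3306|lo'

    The socket error when pointing the map at localhost is expected: on Debian most Postfix daemons run chrooted under /var/spool/postfix and cannot see /var/run/mysqld/mysqld.sock, which is why the TCP form (127.0.0.1) or a proxy: map is the usual recommendation for MySQL lookups.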

    Read the article

  • Cucumber can't find installed gems

    - by artemave
    environment/cucumber.rb:

    ...
    # gem dependencies
    config.gem 'cucumber-rails', :lib => false, :version => '>=0.3.0' unless File.directory?(File.join(Rails.root, 'vend
    config.gem 'database_cleaner', :lib => false, :version => '>=0.5.0' unless File.directory?(File.join(Rails.root, 'vend
    config.gem 'webrat', :lib => false, :version => '>=0.7.0' unless File.directory?(File.join(Rails.root, 'vend
    config.gem 'spork', :lib => false, :version => '>=0.7.5' unless File.directory?(File.join(Rails.root, 'vend
    config.gem 'factory_girl', :source => 'http://gemcutter.org'
    config.gem 'selenium-client', :lib => false
    config.gem 'Selenium', :lib => false
    config.gem 'rspec', :lib => 'spec'
    config.gem 'rspec-rails', :lib => 'spec/rails'
    config.gem 'test-unit', :lib => false

    Running cucumber gives a missing gems error:

    artem:~/projects/food4feed (master)$ cucumber
    ...
    no such file to load -- test-unit
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/rails/gem_dependency.rb:208:in `load'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `block in load_gems'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `each'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `load_gems'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:169:in `process'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
    /home/artem/projects/food4feed/config/environment.rb:9:in `<top (required)>'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
    /home/artem/projects/food4feed/features/support/env.rb:12:in `block in <top (required)>'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/spork-0.8.1/lib/spork.rb:23:in `prefork'
    /home/artem/projects/food4feed/features/support/env.rb:10:in `<top (required)>'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:85:in `load_code_file'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:77:in `block in load_code_files'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:76:in `each'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:76:in `load_code_files'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/cli/main.rb:48:in `execute!'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/cli/main.rb:20:in `execute'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/bin/cucumber:8:in `<top (required)>'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/bin/cucumber:19:in `load'
    /home/artem/.rvm/gems/ruby-1.9.1-p378/bin/cucumber:19:in `<main>'

    Missing these required gems:
    selenium-client
    Selenium
    rspec-rails
    test-unit

    You're running:
    ruby 1.9.1.378 at /home/artem/.rvm/rubies/ruby-1.9.1-p378/bin/ruby
    rubygems 1.3.5 at /home/artem/.rvm/gems/ruby-1.9.1-p378, /home/artem/.rvm/gems/ruby-1.9.1-p378%global

    All gems are obviously there:

    artem:~/projects/food4feed (master)$ gem list | egrep "elenium|rspec|test-unit"
    rspec (1.3.0)
    rspec-rails (1.3.2)
    Selenium (1.1.14)
    selenium-client (1.2.18)
    test-unit (2.0.7)

    The even more confusing part is that it only complains about certain gems. factory_girl and rspec don't cause problems. Any idea what is going on? My environment: Rails 2.3.5, cucumber (0.6.3), cucumber-rails (0.3.0)
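
    A hedged sketch of the usual checks for this situation (the %global in the gem path output suggests more than one RVM gemset is in play, and Rails 2.3's config.gem can only see gems visible to the interpreter that launched cucumber):

    # Which ruby and which gem paths does the cucumber binary actually use?
    which cucumber
    ruby -e 'puts RUBY_DESCRIPTION; puts Gem.path'

    # Compare against the gemset the gems were installed into.
    rvm current
    rvm gemset list
    gem list | egrep 'elenium|rspec|test-unit|cucumber'

    # If they differ, install the reported gems into the gemset the project runs under
    # (gem names here simply mirror the question's gem list).
    gem install test-unit rspec-rails selenium-client Selenium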

    Read the article

  • MySql Replication with a star topology

    - by Riotopsys
    My company currently operates in 3 separate locations connected by slow vpn links. Each site hosts a dedicated MySql server. I need to aggregate the data from all three of them onto a single server for corporate reporting. The powers that be have stated I cannot use circular replication or federated tables. Is there a third party tool for MySql that can replicate from multiple masters? Basically the diagram would be a daisy with the reporting server slave at center with multiple replication connections coming in from the master sites on the petals.
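
    One widely used workaround under these constraints is to make the reporting host run one mysqld instance per remote site, each a plain slave of a single master, and run the reports across those local instances; mysqld_multi manages the extra instances from one configuration file. A sketch of the my.cnf groups on the reporting host, with ports, paths and server-ids as placeholders:

    [mysqld_multi]
    mysqld     = /usr/bin/mysqld_safe
    mysqladmin = /usr/bin/mysqladmin

    # one group per remote master; each instance gets its own CHANGE MASTER TO
    [mysqld1]
    port      = 3307
    datadir   = /var/lib/mysql-site1
    socket    = /var/run/mysqld/mysqld-site1.sock
    server-id = 101

    [mysqld2]
    port      = 3308
    datadir   = /var/lib/mysql-site2
    socket    = /var/run/mysqld/mysqld-site2.sock
    server-id = 102

    [mysqld3]
    port      = 3309
    datadir   = /var/lib/mysql-site3
    socket    = /var/run/mysqld/mysqld-site3.sock
    server-id = 103

    Start the instances with "mysqld_multi start 1-3" and point each at its own master with a normal CHANGE MASTER TO statement. Later releases (MySQL 5.7 replication channels, MariaDB 10.0 multi-source replication) can pull from several masters into a single instance natively, if upgrading is an option.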

    Read the article

< Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >