Search Results


  • Failure to troubleshoot a juju charm deployment

    - by Bruno Pereira
    My environments.yaml looks like this:

        environments:
          test:
            type: local
            control-bucket: juju-a14dfae3830142d9ac23c499395c2785999
            admin-secret: 6608267bbd6b447b8c90934167b2a294999
            default-series: oneiric
            juju-origin: distro
            data-dir: /home/bruno/projects/juju

    juju bootstrap runs perfectly:

        2011-11-22 19:19:31,999 INFO Bootstrapping environment 'test' (type: local)...
        2011-11-22 19:19:32,004 INFO Checking for required packages...
        2011-11-22 19:19:33,584 INFO Starting networking...
        2011-11-22 19:19:34,058 INFO Starting zookeeper...
        2011-11-22 19:19:34,283 INFO Starting storage server...
        2011-11-22 19:19:40,051 INFO Initializing zookeeper hierarchy
        2011-11-22 19:19:40,247 INFO Starting machine agent (origin: distro)...
        [sudo] password for bruno:
        2011-11-22 19:23:16,054 INFO Environment bootstrapped
        2011-11-22 19:23:16,079 INFO 'bootstrap' command finished successfully

    Deploying a known good charm is accepted (tried it with one that I am trying to create):

        juju deploy --repository=/home/bruno/projects/charms_repo/ local:teamspeak
        2011-11-22 19:28:49,929 INFO Charm deployed as service: 'teamspeak'
        2011-11-22 19:28:49,962 INFO 'deploy' command finished successfully

    After this I can see that juju debug-log shows activity, the network indicator going on and off, and activity on my hard disk. Wait... Looking at juju status I get:

        services:
          teamspeak:
            charm: local:oneiric/teamspeak-1
            relations: {}
            units:
              teamspeak/0:
                machine: 0
                public-address: 192.168.122.226
                relations: {}
                state: start_error

    juju debug-log does not help and I have no files under /var/log/juju or /var/lib/juju. The last juju debug-log only shows this:

        2011-11-22 19:45:20,790 Machine:0: juju.agents.machine DEBUG: Units changed old:set(['wordpress/0']) new:set(['wordpress/0', 'teamspeak/0'])
        2011-11-22 19:45:20,823 Machine:0: juju.agents.machine DEBUG: Starting service unit: teamspeak/0 ...
        2011-11-22 19:45:21,137 Machine:0: juju.agents.machine DEBUG: Downloading charm local:oneiric/teamspeak-1 to /home/bruno/projects/juju/bruno-test/charms
        2011-11-22 19:45:22,115 Machine:0: juju.agents.machine DEBUG: Starting service unit teamspeak/0
        2011-11-22 19:45:22,133 Machine:0: unit.deploy INFO: Creating container teamspeak-0...
        2011-11-22 19:47:04,586 Machine:0: unit.deploy INFO: Container created for teamspeak/0
        2011-11-22 19:47:04,781 Machine:0: unit.deploy DEBUG: Charm extracted into container
        2011-11-22 19:47:04,801 Machine:0: unit.deploy DEBUG: Starting container...
        2011-11-22 19:47:07,086 Machine:0: unit.deploy INFO: Started container for teamspeak/0
        2011-11-22 19:47:07,107 Machine:0: juju.agents.machine INFO: Started service unit teamspeak/0

    How can I troubleshoot what is happening here?
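
    A minimal sketch of where to start digging, assuming the local provider created an LXC container for the unit and that hook/unit logs land somewhere under the configured data-dir (both of these are assumptions, not confirmed by the output above):

        # List the containers the local provider created (names are environment specific)
        sudo lxc-ls

        # Search the data-dir from environments.yaml for unit and hook logs
        sudo find /home/bruno/projects/juju -name '*.log'

        # Keep an eye on the unit state while retrying
        juju status
        juju debug-log

    A start_error state typically means an install or start hook exited with a non-zero status, so the hook's own output, wherever it lands, is the thing to track down.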

    Read the article

  • Dovecot throws obsolete warnings, even though dovecot.conf updated on Ubuntu 11

    - by John Bowlinger
    In trying to set up SASL for dovecot on Ubuntu 11, I keep getting obsolete warnings in my log:

        Sep 10 15:33:53 server1 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:24: passdb {} has been replaced by passdb { driver= }
        Sep 10 15:33:53 server1 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:27: userdb {} has been replaced by userdb { driver= }

    Even though my dovecot.conf file looks like this:

        protocols = none
        auth default {
          mechanisms = plain login
          passdb {
            driver=pam
          }
          userdb {
            driver=passwd
          }
          socket listen {
            client {
              path = /var/spool/postfix/private/auth
              mode = 0660
              user = postfix
              group = postfix
            }
          }
        }

    Even when I try:

        driver=etc/pam.d/dovecot
        driver=etc/passwd

    I still get the same error. Looking at the example config file (cat /usr/share/doc/dovecot-common/dovecot/example-config/dovecot.conf) was of no help. Dovecot is running:

        ps -A | grep 'dovecot'
        9663 ?        00:00:00 dovecot

    But I can't seem to get that elusive "dovecot-auth" process. Anyone know what's going on?
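
    For what it's worth, warnings like these usually mean the config is still being read through Dovecot 2.x's v1-compatibility path because of the auth default { ... } wrapper, which is the 1.x layout. A minimal sketch of the same settings in native 2.x syntax is below; that this Ubuntu release ships Dovecot 2.x is an assumption, so check dovecot --version first:

        # Dovecot 2.x style: passdb/userdb at top level, the Postfix auth socket becomes a service
        passdb {
          driver = pam
        }
        userdb {
          driver = passwd
        }
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            mode = 0660
            user = postfix
            group = postfix
          }
        }

    Running dovecot -n prints the configuration exactly as Dovecot parses it, which makes it easier to see which file and line each warning refers to; also note that under 2.x there is no separate dovecot-auth binary, so you won't see that process name regardless.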

    Read the article

  • The way I think about Diagnostic tools

    - by Daniel Moth
    Every software has issues, or as we like to call them "bugs". That is not a discussion point, just a mere fact. It follows that an important skill for developers is to be able to diagnose issues in their code. Of course we need to advance our tools and techniques so we can prevent bugs getting into the code (e.g. unit testing), but beyond designing great software, diagnosing bugs is an equally important skill. To diagnose issues, the most important assets are good techniques, skill, experience, and maybe talent. What also helps is having good diagnostic tools and what helps further is knowing all the features that they offer and how to use them. The following classification is how I like to think of diagnostics. Note that like with any attempt to bucketize anything, you run into overlapping areas and blurry lines. Nevertheless, I will continue sharing my generalizations ;-) It is important to identify at the outset if you are dealing with a performance or a correctness issue. If you have a performance issue, use a profiler. I hear people saying "I am using the debugger to debug a performance issue", and that is fine, but do know that a dedicated profiler is the tool for that job. Just because you don't need them all the time and typically they cost more plus you are not as familiar with them as you are with the debugger, doesn't mean you shouldn't invest in one and instead try to exclusively use the wrong tool for the job. Visual Studio has a profiler and a concurrency visualizer (for profiling multi-threaded apps). If you have a correctness issue, then you have several options - that's next :-) This is how I think of identifying a correctness issue Do you want a tool to find the issue for you at design time? The compiler is such a tool - it gives you an exact list of errors. Compilers now also offer warnings, which is their way of saying "this may be an error, but I am not smart enough to know for sure". There are also static analysis tools, which go a step further than the compiler in identifying issues in your code, sometimes with the aid of code annotations and other times just by pointing them at your raw source. An example is FxCop and much more in Visual Studio 11 Code Analysis. Do you want a tool to find the issue for you with code execution? Just like static tools, there are also dynamic analysis tools that instead of statically analyzing your code, they analyze what your code does dynamically at runtime. Whether you have to setup some unit tests to invoke your code at runtime, or have to manually run your app (and interact with it) under the tool, or have to use a script to execute your binary under the tool… that varies. The result is still a list of issues for you to address after the analysis is complete or a pause of the execution when the first issue is encountered. If a code path was not taken, no analysis for it will exist, obviously. An example is the GPU Race detection tool that I'll be talking about on the C++ AMP team blog. Another example is the MSR concurrency CHESS tool. Do you want you to find the issue at design time using a tool? Perform a code walkthrough on your own or with colleagues. There are code review tools that go beyond just diffing sources, and they help you with that aspect too. For example, there is a new one in Visual Studio 11 and searching with my favorite search engine yielded this article based on the Developer Preview. Do you want you to find the issue with code execution? Use a debugger - let’s break this down further next. 
This is how I think of debugging: There is post mortem debugging. That means your code has executed and you did something in order to examine what happened during its execution. This can vary from manual printf and other tracing statements to trace events (e.g. ETW) to taking dumps. In all cases, you are left with some artifact that you examine after the fact (after code execution) to discern what took place hoping it will help you find the bug. Learn how to debug dump files in Visual Studio. There is live debugging. I will elaborate on this in a separate post, but this is where you inspect the state of your program during its execution, and try to find what the problem is. More from me in a separate post on live debugging. There is a hybrid of live plus post-mortem debugging. This is for example what tools like IntelliTrace offer. If you are a tools vendor interested in the diagnostics space, it helps to understand where in the above classification your tool excels, where its primary strength is, so you can market it as such. Then it helps to see which of the other areas above your tool touches on, and how you can make it even better there. Finally, see what areas your tool doesn't help at all with, and evaluate whether it should or continue to stay clear. Even though the classification helps us think about this space, the reality is that the best tools are either extremely excellent in only one of this areas, or more often very good across a number of them. Another approach is to offer a toolset covering all areas, with appropriate integration and hand off points from one to the other. Anyway, with that brain dump out of the way, in follow-up posts I will dive into live debugging, and specifically live debugging in Visual Studio - stay tuned if that interests you. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Routing to a Controller with no View in Angular

    - by Rick Strahl
    I've finally had some time to put Angular to use this week in a small project I'm working on for fun. Angular's routing is great and makes it real easy to map URL routes to controllers and model data into views. But what if you don't actually need a view, if you effectively need a headless controller that just runs code but doesn't render a view?

    Preserve the View

    When Angular navigates a route and presents a new view, it loads the controller and then renders the view from scratch. Views are not cached or stored, but displayed and then removed. So if you have routes configured like this:

        'use strict';

        // Declare app level module which depends on filters, and services
        window.myApp = angular.module('myApp',
                ['myApp.filters', 'myApp.services', 'myApp.directives', 'myApp.controllers']).
            config(['$routeProvider', function($routeProvider) {
                $routeProvider.when('/map', {
                    templateUrl: "partials/map.html",
                    controller: 'mapController',
                    reloadOnSearch: false,
                    animation: 'slide'
                });
                …
                $routeProvider.otherwise({redirectTo: '/map'});
            }]);

    Angular routes to the mapController and then re-renders the map.html template with the new data from the $scope filled in.

    But, but… I don't want a new View!

    Now in most cases this works just fine. If I'm rendering plain DOM content, or textboxes in a form interface, that is all fine and dandy - it's perfectly fine to completely re-render the UI. But in some cases the UI that's being managed has state and shouldn't be redrawn. In this case the main page in question has a Google Map on it. The map is going to be manipulated throughout the lifetime of the application and the rest of the pages. In my application I have a toolbar on the bottom and the rest of the content is replaced/switched out by the Angular views. The problem is that the map shouldn't be redrawn each time the Location view is activated. It should maintain its state, such as the current position selected (which can move), and shouldn't redraw due to the overhead of re-rendering the initial map. Originally I set up the map exactly like all my other views - as a partial that is rendered from a separate file - but that didn't work.

    The Workaround - Controller Only Routes

    The workaround for this goes decidedly against Angular's way of doing things:

    - Setting up a template-less route
    - In-lining the map view directly into the main page
    - Hiding and showing the map view manually

    Let's see how this works.

    Controller Only Route

    The template-less route is basically a route that doesn't have any template to render. This is not directly supported by Angular, but thankfully easy to fake. The end goal here is that I want to simply have the controller fire and then have the controller manage the display of the already active view by hiding and showing the map and any other view content, in effect bypassing Angular's view display management. In short - I want a controller action, but no view rendering. The controller-only or template-less route looks like this:

        $routeProvider.when('/map', {
            template: " ",   // just fire controller
            controller: 'mapController',
            animation: 'slide'
        });

    Notice I'm using the template property rather than templateUrl (used in the first example above), which allows specifying a string template, and leaving it blank. The template property basically allows you to provide a templated string using Angular's Handlebars-like binding syntax, which can be useful at times. You can use plain strings or strings with template code in the template, or, as I'm doing here, a blank string to essentially fake 'just clear the view'.
    In-lined View

    So if there's no view, where does the HTML go? Because I don't want Angular to manage the view, the map markup is in-lined directly into the page. So instead of rendering the map into the Angular view container, the content is simply set up as inline HTML to display as a sibling to the view container.

        <div id="MapContent" data-icon="LocationIcon" ng-controller="mapController" style="display:none">
            <div class="headerbar">
                <div class="right-header" style="float:right">
                    <a id="btnShowSaveLocationDialog" class="iconbutton btn btn-sm" href="#/saveLocation" style="margin-right: 2px;">
                        <i class="icon-ok icon-2x" style="color: lightgreen; "></i>
                        Save Location
                    </a>
                </div>
                <div class="left-header">GeoCrumbs</div>
            </div>
            <div class="clearfix"></div>
            <div id="Message">
                <i id="MessageIcon"></i>
                <span id="MessageText"></span>
            </div>
            <div id="Map" class="content-area"> </div>
        </div>
        <div id="ViewPlaceholder" ng-view></div>

    Note that there's the #MapContent element and the #ViewPlaceholder. The #MapContent is my static map view that is always 'live' and is initially hidden. It doesn't get made visible until the mapController activates it, which does the initial rendering of the map. After that the element is persisted with the map data already loaded, and any future access only updates the map with new locations/pins etc. Note that the default route is assigned to the mapController, which means that the mapController is fired right as the page loads. That's actually a good thing in this case, as the map is the cornerstone of this app and is manipulated by some of the other controllers/views.

    The Controller handles some UI

    Since there's effectively no view activation with the template-less route, the controller unfortunately has to take over some UI interaction directly. Specifically, it has to swap the hidden state between the map and any of the other views. Here's what the controller looks like:

        myApp.controller('mapController', ["$scope", "$routeParams", "locationData",
            function($scope, $routeParams, locationData) {
                $scope.locationData = locationData.location;
                $scope.locationHistory = locationData.locationHistory;

                if ($routeParams.mode == "currentLocation") {
                    bc.getCurrentLocation(false);
                }

                bc.showMap(false, "#LocationIcon");
        }]);

    bc.showMap is responsible for a couple of display tasks that hide/show the views/map and for activating/deactivating icons. The code looks like this:

        this.showMap = function (hide, selActiveIcon) {
            if (!hide)
                $("#MapContent").show();
            else {
                $("#MapContent").hide();
            }
            self.fitContent();

            if (selActiveIcon) {
                $(".iconbutton").removeClass("active");
                $(selActiveIcon).addClass("active");
            }
        };

    Each of the other controllers in the app also calls this function when it is activated, to basically hide the map and make the view content area visible. The map controller makes the map visible. This is UI code, and calling this sort of thing from controllers is generally not recommended, but I couldn't figure out a way using directives to make this work any more easily than this. It'd be easy to hide and show the map and view container using a flag and ng-show, but it gets tricky because of scoping of the $scope. I would have to resort to storing this setting on the $rootScope, which I try to avoid. The same issue exists with the icons. It sure would be nice if Angular had a way to explicitly specify that a view shouldn't be destroyed when another view is activated, so currently this workaround is required.
    Searching around, I saw a number of wacky hacks to get around this, but the solution I'm using here seems much easier than anything I could dig up, even if it doesn't quite fit the 'Angular way'.

    Angular nice, until it's not

    Overall I really like Angular and the way it works, although it took me a bit of time to get my head around how all the pieces fit together. Once I got the idea of how the app/routes, the controllers and the views snap together, putting together Angular pages becomes fairly straightforward. You can get quite a bit done never going beyond those basics. For most common things Angular's default routing and view presentation works very well. But when you do something a bit more complex, where there are multiple dependencies or, as in this case, where Angular doesn't appear to support a feature that's absolutely necessary, you're on your own. Finding information on more advanced topics is not trivial, especially since versions are changing so rapidly and the low-level behaviors are changing frequently, so finding something that works is often an exercise in trial and error. Not that this is surprising. Angular is a complex piece of kit, as are all the frameworks that try to hack JavaScript into submission to do something that it was really never designed to do. After all, everything about a framework like Angular is an elaborate hack. A lot of shit has to happen to make this all work together, and at that Angular (and Ember, Durandal etc.) are pretty amazing pieces of JavaScript code. So no harm, no foul, but I just can't help feeling like I'm working in a toy sandbox at times :-)

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in Angular, JavaScript

    Read the article

  • Permission issues cause Unity segmentation fault

    - by Dj Gilcrease
    I upgraded from 13.04 to 13.10 and I can boot to the unity-greeter and log in just fine, but after login I just get a black screen with a cursor. I have tried following http://help.ubuntu.com/community/BinaryDriverHowto/ATI and http://help.ubuntu.com/community/RadeonDriver and the manual install of the downloaded AMD drivers. All have the same effect. Also I have read:

        Black screen after login with cursor
        I get a black screen after logging in
        ubuntu 13.04 black screen after login
        Ubuntu 13.10 - Black screen after login session
        Ubuntu 13.04 - Black screen with unresponsive cursor
        Upgrade to Ubuntu 13.04 Problem - Boots into Blank Black Screen

    All of which were no further help than the two support articles about ATI drivers. So I switched back to the default drivers and went a little further into debugging: when I do ctrl+alt+F1, log in and try unity --debug > unity_start.log then ctrl+alt+F8, the screen stays black with a cursor, and when I switch back with ctrl+alt+F1 the contents of the log output are http://pastebin.com/rdQG4Hb0. However, when I try sudo unity --debug > unity_start_root.log then ctrl+alt+F8, unity starts, and the output of the log is http://pastebin.com/Yv4RD2j7. The fact that it starts as root tells me it is either a permissions issue on some required file or there is some setting specific to my user that is causing the SIGSEGV. So to narrow this down I activated the guest account and tried to log in, and got the same black screen with only a mouse cursor, so this tells me that it is not a configuration issue, but a permissions issue. How do I narrow down which file has the wrong permissions? Also, is there anything further that may help debug this issue? Ok, after a few more hours of googling I found that if I add myself to the video group I can log in and see the desktop, but there are lots of other permission related issues, so I am thinking something went wonky with PolicyKit during the upgrade. Is there a way to reset PolicyKit settings for a user?
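
    Since adding the user to the video group fixed the login, one way to narrow it down is to compare group memberships and look at who owns the GPU device nodes. A small sketch, assuming a radeon/fglrx setup where the device nodes live under /dev/dri (the usual location, but an assumption here):

        # What groups does my user (and the guest user) actually have?
        groups $USER
        getent group video

        # Who owns the GPU/render nodes? They are normally root:video
        ls -l /dev/dri/

        # Add the user to the video group (takes effect at the next login)
        sudo usermod -aG video $USER

    If the device nodes are owned by an unexpected group, that points at the driver/packaging side rather than PolicyKit.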

    Read the article

  • Bit by bit comparison of using Java or Python for unit testing frameworks and Selenium

    - by Anirudh
    Currently we are in the process of finalizing which language out of Java and Python should be used for automation using Selenium WebDriver and a suitable unit testing framework. I have made use of JUnit, TestNG and WebDriver with Java and have designed frameworks without much fuss before. I am new to Python, though I came across Python's unit testing frameworks like unittest, pyunit, nose etc., but I have doubts about whether they would be as successful as TestNG or Java. I would like to analyze point by point when used with Selenium WebDriver, as below:

    1) I have read that as Python is an interpreted language its execution is slower, so say I have to run 1000 test cases which take about 6 hours to run in Java, would Python take considerably longer for the same test cases, like 8 hours?

    2) Can the Python unit testing frameworks be as flexible as a Java unit testing framework like TestNG in terms of grouping tests, parallel execution, skipping tests, etc.?

    3) Also, one point that I think of is that Python with Selenium WebDriver doesn't have as big or as learned a community as we have for Java with WebDriver; say I run into trouble with something, am I more likely to find an answer for Java as compared to Python?

    4) Somewhat related to point 3, is it safe to rely on tools, plugins or even WebDriver's Python bindings being continuously well maintained?

    5) One major drawback I see while using Python's unit testing frameworks is the lack of boilerplate code or libraries for nicely illustrative HTML reports, preferably historical reports with pie charts, bar graphs and timelines, as we have in the case of Java with Allure, TestNG's default reports, ReportNG or JUnit reports with the help of ANT (Allure reports, JUnit historical reports).

    Also I would like to ask whether there is a way for us to write the framework in Java, with libraries or utilities for our application built on WebDriver, which can easily be called from or integrated with Python code or modules? That would actually solve the problem for us, as the client would be able to use the code we write in Java and make use of it or call it from their Python modules.
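
    On point 2, a minimal sketch of what grouping and skipping look like with Python's stock unittest plus the Selenium Python bindings (the class name and URL below are illustrative placeholders, not from the question):

        import unittest
        from selenium import webdriver

        class LoginTests(unittest.TestCase):
            """A small 'smoke' group; grouping is usually done per class/module or via the runner."""

            @classmethod
            def setUpClass(cls):
                # One browser per class keeps the run reasonably fast
                cls.driver = webdriver.Firefox()

            @classmethod
            def tearDownClass(cls):
                cls.driver.quit()

            def test_homepage_title(self):
                self.driver.get("http://example.com")      # placeholder URL
                self.assertIn("Example", self.driver.title)

            @unittest.skip("rough equivalent of TestNG's enabled=false")
            def test_not_ready_yet(self):
                pass

        if __name__ == "__main__":
            # Load only this class, roughly like running a single TestNG group
            suite = unittest.TestLoader().loadTestsFromTestCase(LoginTests)
            unittest.TextTestRunner(verbosity=2).run(suite)

    Parallel execution and richer grouping/reporting typically come from third-party runners and plugins (nose, pytest and friends) rather than from unittest itself, which is worth factoring into the comparison with TestNG.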

    Read the article

  • Book Review - Programming Windows Azure by Siriram Krishnan

    - by BuckWoody
    As part of my professional development, I’ve created a list of books to read throughout the year, starting in June of 2011. This a review of the first one, called Programming Windows Azure by Siriram Krishnan. You can find my entire list of books I’m reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx  Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I’ve found that at least 3 books are necessary to get the right amount of information to me. This is a “technical” work, meaning that it deals with technology and not business, writing or other facets of my career. I’ll have a mix of all of those as I read along. I chose this work in addition to others I’ve read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it’s dated, many examples normally translate. I also saw that it had pretty good reviews. What I learned: I learned a great deal about storage, and many useful code snippets. I do think that there could have been more of a focus on the application fabric - but of course that wasn’t as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric I would rather see sensitive data left in a hybrid fashion on premise, and connect to for the Azure application. Even so, the examples were very useful. If you’re looking for a good “starter” Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It’s not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being “stale”. Even so, I found this a useful book, which I believe will help me work with Azure better. Raw Notes: I take notes as I read, calling that process “reading with a pencil”. I find that when I do that I pay attention better, and record some things that I need to know later. I’ll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since the Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my though process. Programming Windows Azure by Siriram Krishnan: Learning about how to select applications suitable for Distributed Technology. Application Fabric gets the least attention; probably because it was newer at the time. Very clear (Chapter One) Good foundation Background and history, but not too much I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts. 
Taught me some new aspects of a Hypervisor – very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two) Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3) Place Storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separated configs for staging and testing. Easy-switch in and switch out.  (Chapter 4) There are two Runtime API’s, one of external and one for internal. Realizing how powerful this paradigm really is. Some places seem light, and to drop off but perhaps that’s best. Managing API is not charged, which is nice. I don’t often think about the price, until it comes to an actual deployment (Chapter 5) Csmanage is something I want to dig into deeper. API requires package moves to Blob storage first, so it needs a URL. Csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file  Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well-thought out. (Chapter 6) Learned how to run full CMD commands in a web window – not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason – things just change too quickly. Windows Azure storage is not eventually consistent – it is instantly consistent with multi-phase commit. Plumbing for this is internal, not required to code that. (Chapter 7) REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature. There’s a decent quick description of REST in this chapter. Learned about CloudDrive code – PowerShell sample that mounts Blob storage as a local provider. Works against Dev fabric by default, can be switched to Account. Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated. No, blobs are not of any size or number. Not a good statement (Chapter 8) Blob storage is probably Azure’s closest play to Infrastructure as a Service (Iaas). Blob change operations must be authenticated, even when public. Chapters on storage are pretty in-depth. Queue Messages are base-64 encoded (Chapter 9) The visibility timeout ensures processing of message in a disconnected system. Order is not guaranteed for a message, so if you need that set an increasing number in the queue mechanism. While Queues are accessible via REST, they are not public and are secured by default. Interesting – the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system. Each Entity (row) in the Azure Table service is atomic – all or nothing. (Chapter 10) An entity can have up to 255 Properties  Use “ID” for the class to indicate the key value, or use the [DataServiceKey] Attribute.  LINQ makes working with the Azure Table Service much easier, although Interop is certainly possible. Good description on the process of selecting the Partition and Row Key.  
When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page.  On deleting a storage object, it is instantly unavailable, however a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code. Interesting approach to deleting an index entity without having to read it first – create a local entity with the same keys and apply it to the Azure system regardless of change-state.  Although the “Indexes” description is a little vague, it’s interesting to see a Folding and Stemming discussion a-la the Porter Stemming Algorithm. (Chapter 11)  Presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBA’s in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although they have conditions, makes a transactional system more possible. Concurrency also becomes an issue, but is handled well if you’re using Data Services in .NET. It watches the Etag and allows you to take action appropriately. I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in (Chapter 12) go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. Chapter seems out of place, and should be combined with the Blob chapter.  (Chapter 13) on SQL Azure is dated, although the base concepts are OK.  Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server Really) database.  

    Read the article

  • emacs - Widget Custom Theme

    - by Abdillah
    I'm starting to use emacs. I know that almost everything in emacs is customizable, but one thing I can't even find on the internet is how to modify the default look and feel of emacs' widgets. Is there some way to customize their background, shape, etc., or to just use a UTF-8 character (for example, drawing a checkbox with a ? character)? Well, as a new user, I was quite bored with the customize interface. I'd like to change it to a flat-styled UI, or something better.
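
    The widgets used by the customize interface are drawn with regular Emacs faces, so tweaking those faces is the usual lever. A minimal sketch for init.el; the faces widget-field and custom-button exist in recent Emacs versions, but whether they cover every widget you care about is an assumption:

        ;; Flatten text-entry fields and buttons in Customize buffers
        (custom-set-faces
         '(widget-field ((t (:background "gray25" :box nil))))                       ; edit boxes
         '(custom-button ((t (:background "gray30" :foreground "white" :box nil))))) ; buttons

        ;; Interactively: M-x customize-face RET widget-field RET

    Removing the :box attribute is what gives the flat look; colors here are arbitrary examples.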

    Read the article

  • Wireless clients have no route to ethernet clients in OpenWrt router

    - by superjoe30
    I'm using OpenWrt Kamikaze 8.09 on a Linksys WRT54g v1.1 router. I just flashed it with default settings and got everything working, except my wireless laptop cannot ping my desktop which is wired to the router. What can I do to fix this? (My desktop can ping other desktops wired to the router) My routing table: config 'defaults' option 'syn_flood' '1' option 'input' 'ACCEPT' option 'output' 'ACCEPT' option 'forward' 'REJECT' config 'zone' option 'name' 'lan' option 'input' 'ACCEPT' option 'output' 'ACCEPT' option 'forward' 'REJECT' config 'zone' option 'name' 'wan' option 'input' 'REJECT' option 'output' 'ACCEPT' option 'forward' 'REJECT' option 'masq' '1' config 'forwarding' option 'src' 'lan' option 'dest' 'wan' option 'mtu_fix' '1' config 'redirect' option 'src' 'wan' option '_name' 'ssh' option 'proto' 'tcp' option 'src_dport' '22' option 'dest_ip' '192.168.1.100' option 'dest_port' '22' config 'redirect' option 'src' 'wan' option '_name' 'http' option 'proto' 'tcp' option 'src_dport' '8888' option 'dest_ip' '192.168.1.100' option 'dest_port' '8888'

    Read the article

  • How to get nginx to serve up on an elastic IP

    - by geekbri
    I have an EC2 instance which is serving up PHP pages with nginx and php-fpm. This works perfectly fine when accessed through the public DNS for the instance. However, if I try to access the site with the Elastic IP which is bound to it, it serves up a generic "Welcome to nginx" page, even though in my server block I have listen 80 (which I thought listened on all incoming IPs on port 80). Here is my nginx config:

        server {
            listen 80;
            access_log /var/log/nginx/access.log;
            root "/var/www/clipperz/";
            index index.html index.php;

            # Default location
            location / {
                try_files $uri $uri/ index.html;
            }

            # Parse all .php file in the $document_root directory
            location ~ .php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }
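
    One plausible explanation (an assumption, since the rest of /etc/nginx isn't shown) is that the distro's stock "Welcome to nginx" server block is also listening on port 80 and wins the match when the request arrives by bare IP with no matching Host name. A sketch of making this block the catch-all:

        server {
            # default_server makes this block answer any request that matches no server_name,
            # including requests addressed to the Elastic IP. Very old nginx spells the flag "default".
            listen 80 default_server;
            server_name _;
            # ... rest of the existing block unchanged ...
        }

    Then look for the competing default site (for example grep -r "Welcome to nginx" /etc/nginx/ - the exact file layout varies by distro) and remove or adjust it, followed by an nginx reload (nginx -s reload).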

    Read the article

  • How can I remove some installed Python modules in CentOS?

    - by user1513613
    I am getting this error:

        Python 2.7.5 (default, Jul 2 2013, 13:33:13)
        [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import MySQLdb
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "MySQLdb/__init__.py", line 23, in <module>
            (version_info, _mysql.version_info))
        ImportError: this is MySQLdb version (1, 2, 4, 'final', 1), but _mysql is version (1, 2, 3, 'final', 0)
        >>>

    Now I don't know how I have installed that; I tried so many things like yum, pip, easy_install, etc. How can I remove all versions of MySQLdb from there?
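
    The error means two different builds are on the import path: the MySQLdb package is 1.2.4 but the compiled _mysql extension being picked up is 1.2.3. A small sketch for finding where each copy lives, so you know whether to remove the RPM or the pip/easy_install copy (the package names below are the usual ones, not confirmed from this system):

        # Where does the compiled extension actually come from?
        python -c "import _mysql; print(_mysql.__file__)"

        # Where is the (newer) MySQLdb package it conflicts with? (find_module doesn't run __init__)
        python -c "import imp; print(imp.find_module('MySQLdb')[1])"

        # Does either path belong to an RPM, or was it installed by pip/easy_install?
        rpm -qf /path/printed/above        # "not owned by any package" means pip/easy_install
        pip freeze | grep -i mysql

        # Then remove the copy you don't want, e.g.:
        #   sudo pip uninstall MySQL-python
        #   sudo yum remove MySQL-python

    Once only one copy remains (or the stale one is reinstalled to match), the version check in MySQLdb/__init__.py stops firing.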

    Read the article

  • Dynamic DNS registration for VPN clients

    - by Eric Falsken
    I've got a VPN server set up in my Active Directory on a remote network. (The VPN server is a separate box from DNS/AD.) When I dial into the network (the client machine is not a member of the AD), the machine does not register its IP or hostname in the DNS. I've played with all possible combinations of DHCP and RRAS-allocated IP pools, and none of them seem to cause my client to register. Is it because my client has to be a member of the domain? Are there some security settings I can tweak so that it can register its hostname/IP? I've looked in the event logs (System and Security) for the AD, DNS, DHCP, RRAS, and the client machine, and don't see anything relating to DNS registration. Here's the ipconfig on the client machine (once connected):

        PPP adapter My VPN Name:

           Connection-specific DNS Suffix  . : mydomain.local
           Description . . . . . . . . . . . : My VPN Name
           Physical Address. . . . . . . . . :
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 192.168.1.22(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.255
           Default Gateway . . . . . . . . . :
           DNS Servers . . . . . . . . . . . : 192.168.1.52 <- DC1
                                               192.168.1.53 <- DC2
           NetBIOS over Tcpip. . . . . . . . : Enabled

    Edit: It looks like my clients are not receiving the DHCP scope options. I found this great article in Microsoft's KB. So the problem here is that the VPN server "pre-reserves" the DHCP addresses, but then you have to add the DHCP Relay Agent to relay the secondary request for scope options. My problem is that the DHCP Relay Agent isn't relaying to the local DHCP server (same box as the VPN/RRAS). I've configured the DHCP Relay Agent according to this KB, but it doesn't work for a local DHCP server (I see the request count increasing, but no responses). I was able to get everything working by specifying the DNS server and domain name in the VPN connection properties on the client, but I am still unable to assign them (or the default gateway) dynamically via DHCP. The client also has to be a member of the remote domain.
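
    On the registration side specifically, a couple of standard things to sketch out from the client once the tunnel is up (whether the DNS server will accept the update from a non-domain machine is exactly the open question here):

        :: Ask Windows to (re)register this connection's records with its configured DNS servers
        ipconfig /registerdns

        :: Afterwards, check whether the record appeared on one of the DCs
        nslookup my-laptop 192.168.1.52

    The hostname above is a placeholder. By default an AD-integrated zone accepts only secure dynamic updates, which require the updating machine to be a domain member, so a non-domain VPN client's registration is normally refused unless the zone is switched to "Nonsecure and secure" updates or the DHCP server is configured to register records on clients' behalf.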

    Read the article

  • Where is the Amazon Linux AMI test page on EC2?

    - by fuzzybee
    I have set up my websites as directories directly under /var/www/html/ and they are working just fine (the websites are mapped to virtual hosts). So this is mainly out of curiosity for the moment. Furthermore, being able to customise this might bring some benefits in the future, e.g. branding the elastic IPs my machines use temporarily. Notes: I can always create an index.html page under /var/www/html/ and modify it, but that's not my goal here. I can also map the elastic IP address to a directory /var/www/html/default/ and do my stuff there, but that is not my goal here either. My goal is to find the Amazon Linux AMI test page. I've tried running a Linux command to find it, but it obviously takes too long.
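
    A sketch of where that page usually comes from on Apache-based Amazon Linux / RHEL-family systems; the exact paths are assumptions, so grep first rather than trusting them:

        # The test page is typically wired up by a conf snippet, not a file in the docroot
        grep -ril "test page" /etc/httpd/ 2>/dev/null

        # On RHEL-family Apache builds this is usually /etc/httpd/conf.d/welcome.conf,
        # which serves an error document such as /var/www/error/noindex.html
        cat /etc/httpd/conf.d/welcome.conf 2>/dev/null

    Editing or removing that welcome/noindex pair (rather than anything under /var/www/html/) is what changes or disables the test page.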

    Read the article

  • spawn-fcgi / FastCGI PHP crashes without traces in logs on Gentoo

    - by user39046
    Hello, I recently moved from Apache to an Nginx/FastCGI solution. I had it running on a Fedora system with no problems, but since I moved everything to Gentoo, the spawn-fcgi / FastCGI PHP daemon dies, and I can't find any error reports in /var/log/messages, so I don't know why this happens. I've seen that the FastCGI setup on Gentoo is somewhat different from the Fedora distro, as it has different conf files and init.d startup scripts. Can someone help me make it more stable? The number of requests I get isn't any different from what I had on Fedora, and I use the default conf that comes with the distro... and after a few hours it simply dies... Thank you very much.
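
    One common cause worth ruling out (an assumption about this setup, not something visible in the question): php-cgi exits on purpose after serving PHP_FCGI_MAX_REQUESTS requests (500 by default), and if nothing respawns it the site quietly goes down after a few hours with nothing in the logs. A sketch of the relevant environment variables; the Gentoo conf file path is a guess, so adjust to wherever your spawn-fcgi init script reads its settings:

        # e.g. in /etc/conf.d/spawn-fcgi (path assumed) or exported before starting spawn-fcgi
        export PHP_FCGI_CHILDREN=4          # let php-cgi manage a small pool of workers
        export PHP_FCGI_MAX_REQUESTS=10000  # recycle workers after N requests instead of dying early

        # Verify the processes and watch whether they survive over time
        ps aux | grep [p]hp-cgi

    Switching to php-fpm, which supervises and recycles its own workers, is the other common way to make this setup stable.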

    Read the article

  • open_basedir vs sessions

    - by liquorvicar
    On a virtual hosting server I have open_basedir set to .:/path/to/vhost/web:/tmp:/usr/share/pear for each virtual host. I have a client who's running WordPress and he's complaining about open_basedir errors like this:

        PHP WARNING: file_exists() [function.file-exists]: open_basedir restriction in effect. File(/var/lib/php/session/sess_42k7jn3vjenj43g3njorrnrmf2) is not within the allowed path(s): (.:/path/to/vhost/web:/tmp:/usr/share/pear)

    So the PHP session save_path isn't included in open_basedir, but sessions across all sites on the server seem to be working fine apart from this intermittent instance. I thought that perhaps the default session handler ignored open_basedir and this warning was caused by WP accessing the session file directly. However, from what I can see, PHP 5.2.4 introduced open_basedir checking for the session.save_path config: http://www.php.net/ChangeLog-5.php#5.2.4 (I am on PHP 5.2.13). Any ideas?
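
    Two sketches of the usual ways to make the warning go away: either admit the session directory into open_basedir, or give the vhost its own session path inside the already-allowed tree. The directives are standard per-vhost mod_php settings; the exact vhost layout here is an assumption:

        # Option 1: allow the shared session directory for this vhost
        php_admin_value open_basedir ".:/path/to/vhost/web:/tmp:/usr/share/pear:/var/lib/php/session"

        # Option 2: keep open_basedir as-is and move sessions under the allowed path
        php_admin_value session.save_path "/path/to/vhost/tmp/sessions"
        # (create the directory and make it writable by the PHP/Apache user)

    Option 2 has the side benefit of keeping each vhost's session files separate, which is generally preferable on shared hosting anyway.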

    Read the article

  • How can Bonjour be set up to function over a VPN connection using Mac OS X — Mountain Lion Server?

    - by Ben Coppock
    I purchased Mountain Lion Server for our office thinking that Bonjour would automatically enable any computers connected via VPN to see all computers and applications (such as Bento) running on the office network. The hope was that those of us working at home would feel just like we were in the office, with all network services working transparently over the VPN connection. However, I see that Bonjour (aka mDNS) is not enabled to work over the VPN by default. Can I configure Mountain Lion Server to automatically pass Bonjour traffic over the VPN? Is there any reason not to do this?

    Read the article

  • CentOS 5.8 server, installed Web Server vs yum install httpd

    - by Shiro
    I would like to know what the difference is between the built-in CentOS "Web Server" option and installing manually with yum install httpd. I am new to CentOS. I just installed my CentOS 5.8 server with "Web Server" checked. Some Google searches about LAMP installation list yum install httpd. I checked inside /etc/init.d/ and it already has httpd. When I try to run yum install httpd http-devel, it shows it is not yet installed. What will happen if I install the yum version? What should I do? What is the best practice? Should I remove the Web Server installed by default with CentOS? My goal is to install LAMP (PHP v5.2.17).
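
    Both routes should end up with the same Apache package from the CentOS repositories; the installer's "Web Server" group simply pre-selects it. A short sketch for confirming what is already there before reinstalling anything (plain stock commands):

        # Is httpd already installed, and from which package/version?
        rpm -q httpd
        yum list installed httpd

        # What does the installer's "Web Server" group contain?
        yum groupinfo "Web Server"

        # If httpd is already present there is nothing to reinstall; just manage it:
        service httpd status
        chkconfig --list httpd

    If rpm -q reports httpd as installed, running yum install httpd again is harmless; yum will simply say it's already installed or offer an update.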

    Read the article

  • Find out what fields are available to IIS 7 Advanced Logging from Modules

    - by Grummle
    You can install the Advanced Logging module for IIS 7. Once installed, you have the option to define new fields from several different sources. One of those sources is other modules. What I am unable to figure out is how to get a list of the fields that the other modules 'publish'. There are a boatload of modules installed by default, and I have to imagine they are publishing some data I would care to know about (hopefully UrlRoutingModule publishes what I'm specifically looking for). Also, as an aside, if you know how to write .NET HttpModules that publish custom fields, or know where to find good documentation on doing so, I'd love to see/hear about it.

    Read the article

  • How to create a Linux user without a password but being able to set it?

    - by Leonid Shevtsov
    I have a username and an SSH key for a (hypothetical) guy and I need to give him admin access to a Linux (Ubuntu) server. I want him to be able to log in via SSH and then set his password by himself over a secure connection, instead of passing the password around. I know how to make the password expire and force him to reset it on first login. But this doesn't work unless he has some password already, which I then have to tell him. I thought about making the password blank - SSH wouldn't allow login, but then anyone can su into the user. My question is, is there some best practice to creating accounts in such a way? Or setting a default password is unavoidable?
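
    A minimal sketch of the usual pattern: create the account with its password locked, install the provided SSH key, and grant admin rights through the sudo group, so the user logs in with the key only. The username, key path and group name are placeholders/assumptions (older Ubuntu releases use the admin group, other distros use wheel):

        # Create the account; with --disabled-password the password starts out locked ("!" in /etc/shadow)
        sudo adduser --disabled-password --gecos "" newguy

        # Install his public key
        sudo mkdir -p /home/newguy/.ssh
        sudo cp newguy_id_rsa.pub /home/newguy/.ssh/authorized_keys
        sudo chown -R newguy:newguy /home/newguy/.ssh
        sudo chmod 700 /home/newguy/.ssh && sudo chmod 600 /home/newguy/.ssh/authorized_keys

        # Give admin rights
        sudo usermod -aG sudo newguy

    Note this doesn't fully remove the chicken-and-egg part of the question: with the password locked, passwd and sudo will still want a password, so in practice an existing admin sets the initial one over the secure session (sudo passwd newguy) and, if desired, expires it so it must be changed immediately.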

    Read the article

  • Fresh Red Hat Enterprise Linux fails to install httpd using yum

    - by Julian
    I'm trying to install a LAMP stack on a fresh Red Hat server, but yum is misbehaving. Being Linux illiterate, I'm at a loss.

        $ yum install httpd
        Loaded plugins: security
        Setting up Install Process
        No package httpd available.
        Nothing to do

    My yum config:

        $ cat /etc/yum.conf
        [main]
        cachedir=/var/cache/yum
        keepcache=0
        debuglevel=2
        logfile=/var/log/yum.log
        distroverpkg=redhat-release
        tolerant=1
        exactarch=1
        obsoletes=1
        gpgcheck=1
        plugins=1
        # Note: yum-RHN-plugin doesn't honor this.
        metadata_expire=1h

        # Default.
        # installonly_limit = 3

        # PUT YOUR REPOS HERE OR IN separate files named file.repo
        # in /etc/yum.repos.d

    Other stuff in the yum.repos.d dir:

        $ ls -lah /etc/yum.repos.d/
        total 12K
        drwxr-xr-x  2 root root 4.0K Feb  4 01:15 .
        drwxr-xr-x 59 root root 4.0K Feb  4 01:28 ..
        -rw-r--r--  1 root root  561 Mar 10  2010 rhel-debuginfo.repo

    What could be going on? I thought "out of the box" RHEL 5.5 would be friendlier :)
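
    "No package httpd available" on RHEL usually means no enabled repository carries it, and the listing above shows only the debuginfo repo. A sketch of how to confirm that and the two usual fixes; the registration step assumes the box has an RHN subscription, which the question doesn't say:

        # What repos does yum actually see?
        yum repolist all

        # Option 1: register the system so the RHN plugin provides the base channel
        rhn_register

        # Option 2: use the install DVD/ISO as a repo by dropping a file in /etc/yum.repos.d/,
        # e.g. /etc/yum.repos.d/rhel-media.repo (file name and mount point chosen here, not prescribed):
        #   [rhel-media]
        #   name=RHEL 5.5 install media
        #   baseurl=file:///media/cdrom/Server
        #   enabled=1
        #   gpgcheck=1

    Either way, once a repo that contains httpd is enabled, the original yum install httpd should succeed.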

    Read the article

  • Add IP Address without Plesk

    - by CrackerJack9
    I have a dedicated unmanaged server and added a few IP addresses to it (allocated), and the only information my hosting company has provided is instructions on how to use Plesk to add IP addresses. However, one of the first things I did was uninstall Plesk (for numerous reasons). Does anyone know what exactly Plesk does when you "Add IP Address"? Does it just create an alias on the default interface (I currently only have one, plus the loopback)? I can manage that myself without Plesk, but I was hoping someone might know if there is anything else Plesk does. I also have the DHCP client running (eth0 is static); I'm not sure why my hosting company put that there either, and not sure if they're related.
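
    If it does turn out to be just an interface alias, a sketch of adding one by hand is below; the address, netmask and the CentOS-style ifcfg path are placeholders and depend on what the host actually runs:

        # Add the allocated address as an alias on eth0 (takes effect immediately, not persistent)
        ip addr add 203.0.113.10/24 dev eth0 label eth0:1

        # Check the result
        ip addr show eth0

        # For persistence on a RHEL/CentOS-style box, create e.g. /etc/sysconfig/network-scripts/ifcfg-eth0:1
        #   DEVICE=eth0:1
        #   IPADDR=203.0.113.10
        #   NETMASK=255.255.255.0
        #   ONBOOT=yes

    Run ip addr show both before and after uninstalling/adding to see whether Plesk had left any aliases of its own behind.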

    Read the article

  • Is it safe to disable nmi_watchdog?

    - by Rayne
    I'm using RHEL 6 (kernel 2.6.32-131) and I'm trying to start oprofile, but I keep getting an error. My oprofile version is 0.9.6-12. I was able to successfully do opcontrol --init, but when I did

        opcontrol --start --no-vmlinux

    I got the error:

        Using default event: CPU_CLK_UNHALTED:100000:0:1:1
        Error: counter 0 not available
        nmi_watchdog using this resource ? Try:
        opcontrol --deinit
        echo 0 > /proc/sys/kernel/nmi_watchdog

    I don't know if it's safe for me to do this. From what I googled, it seems that nmi_watchdog is used to reboot the system if it hangs, and that sounds like something I want. Also, grep NMI /proc/interrupts shows a bunch of numbers, which I guess indicates that nmi_watchdog is already running? How can I run both oprofile and nmi_watchdog?
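
    The two genuinely contend for the same hardware performance counter, so the usual compromise is to turn the watchdog off only for the duration of the profiling run and switch it back on afterwards. A sketch, using the same sysctl knob the error message names:

        # Free the counter, then start profiling
        sudo opcontrol --deinit
        echo 0 | sudo tee /proc/sys/kernel/nmi_watchdog
        sudo opcontrol --init
        sudo opcontrol --start --no-vmlinux

        # ... profiling session ...

        # When done, shut oprofile down and give the counter back to the watchdog
        sudo opcontrol --shutdown
        echo 1 | sudo tee /proc/sys/kernel/nmi_watchdog

    While the watchdog is off, a hard lockup won't trigger the NMI-based detection, so it's worth keeping that window short.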

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
    Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun! I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers.  I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it! One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?) My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server. So here's a standard template for a logon trigger to address those pesky sa users: CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER WITH ENCRYPTION, EXECUTE AS N'sa' AFTER LOGON AS IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN; -- interesting stuff goes here GO   What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired).  That's a good use for logon triggers, but really not tricky enough for this blog.  Some of my suggestions are below: WAITFOR DELAY '23:59:59';   Or: EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'   Or: EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3; EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME; EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;'; EXEC msdb.dbo.sp_start_job @job_name=N'`';   Really, I don't want to spoil your own exploration, try it yourself!  The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!".  Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up) Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW! 
Here's how RG can do that: USE master; GO CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname WITH SCHEMABINDING, ENCRYPTION AS BEGIN RETURN CASE WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%' THEN N'SA_LOGIN_PRIORITY' ELSE N'default' END END GO CREATE RESOURCE POOL SA_LOGIN_PRIORITY WITH ( MIN_CPU_PERCENT = 0 ,MAX_CPU_PERCENT = 1 ,CAP_CPU_PERCENT = 1 ,AFFINITY SCHEDULER = (0) ,MIN_MEMORY_PERCENT = 0 ,MAX_MEMORY_PERCENT = 1 -- ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014 ); CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY WITH ( IMPORTANCE = LOW ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1 ,REQUEST_MAX_CPU_TIME_SEC = 1 ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1 ,MAX_DOP = 1 ,GROUP_MAX_REQUESTS = 1 ) USING SA_LOGIN_PRIORITY; ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY); ALTER RESOURCE GOVERNOR RECONFIGURE;   From top to bottom: Create a classifier function to determine which pool the session should go to. More info on classifier functions. Create the pool and provide a generous helping of resources for the sa login. Create the workload group and further prioritize those resources for the sa login. Apply the classifier function and reconfigure RG to use it. I have to say this one is a bit sneakier than the logon trigger, least of all you don't get any error messages.  I heartily recommend testing it in Management Studio, and click around the UI a lot, there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included!  You'll notice I made allowances for SQL Agent jobs owned by sa, they'll go into the default workload group.  You can add your own overrides to the classifier function if needed. Some interesting ideas I didn't have time for but expect you to get to before me: Set up different pools/workgroups with different settings and randomize which one the classifier chooses Do the same but base it on time of day (Books Online example covers this)... Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you. And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection: ALTER RESOURCE GOVERNOR DISABLE;   That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit! Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here.  Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server.  These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly.  SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only.  And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications! 
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized.  They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things. So here's the setup: USE msdb; GO CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act WITH ENCRYPTION AS DECLARE @x XML, @message nvarchar(max); RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q; IF @x.value('(//LoginName)[1]','sysname')=N'sa' AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%' BEGIN -- interesting activation procedure stuff goes here END GO CREATE QUEUE SA_LOGIN_PRIORITY_q WITH STATUS=ON, RETENTION=OFF, ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER); CREATE SERVICE SA_LOGIN_PRIORITY_s ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]); CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER WITH FAN_IN FOR AUDIT_LOGIN TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database' GO   From top to bottom: Create activation procedure for event notification queue. Create queue to accept messages from event notification, and activate the procedure to process those messages when received. Create service to send messages to that queue. Create event notification on AUDIT_LOGIN events that fire the service. I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped. So what to put in place for "interesting activation procedure code"?  Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message: SET @[email protected]('(//HostName)[1]','sysname') + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname') + N' as SA at ' + @x.value('(//StartTime)[1]','sysname') + N' using the ' + @x.value('(//ApplicationName)[1]','sysname') + N' program. That''s why you''re getting this message and the attached pornography which' + N' is bloating your inbox and violating company policy, among other things. If you know' + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL ' + @x.value('(//SPID)[1]','sysname') + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.' EXEC msdb.dbo.sp_send_dbmail @recipients=N'[email protected]', @subject=N'SA Login Alert', @query_result_width=32767, @body=@message, @query=N'EXEC sp_readerrorlog;', @attach_query_result_as_file=1, @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg' I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it.  The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned, I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly.  I'm still working on those.

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4) the EM development team has added new functionality to assist the EM Administrator to monitor the health of the EM infrastructure.   Taking feedback delivered from customers directly and through customer advisory boards some nice enhancements have been made to the “Manage Cloud Control” sections of the UI, commonly known in the EM community as “the MTM pages” (MTM stands for Monitor the Monitor).  This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we’ll highlight some of the new information that’s on display in these redesigned pages and explain how the information they present can help EM administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we’ll take a look at is the newly designed Repository information page.  You can get to this from the main Setup menu, through Manage Cloud Control, then Repository.  Once this page loads you’ll see the new layout that includes 3 tabs containing more drill-down information. The Repository Tab The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information that the EM Administrator needs to review from time to time to ensure that their infrastructure is in good health. Rather than go through every panel let’s call out a few and let you explore the others later yourself on your own EM site.  Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and more critically, three important elements of information relating to availability and reliability :- Is the database in Archive Log mode ? Is the database using Flashback ? When was the last database backup taken ? In this test environment above the answers are not too worrying, however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback) and all Production sites should have a backup.  In this case the backup information in the Control file indicates there’s been no recorded backups taken. The next region of interest to note on this page shows key information around the Repository configuration, specifically, the initialisation parameters (from the spfile). If you’re storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you’ll note there is now a check performed on the active configuration to ensure that you’re using, at the very least, Oracle minimum recommended values.  Should the values in your EM Repository not meet these requirements it will be flagged in this table with a red X for non-compliance.  You can of-course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally, the run-time values if the parameter allows dynamic changes). The last region to call out on this page before moving on is the new look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control but there’s some important new functionality that’s been added that customers have requested. First-up - Restarting Repository Jobs.  
As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation; you can now take care of it directly from within the UI.

Next, you’ll see that a feature has been added to allow the EM Administrator to customise the run-time for some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don’t clash with Production backups, etc. is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid clashes like this.

Moving on to the next tab, let’s select the Metrics tab.

The Metrics Tab

There are some big changes here: this page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact of adding new targets, or large numbers of new hosts or new target types, into EM and the effect this has on the Repository. This page helps the EM Administrator get to grips with this. Let’s take a quick look at two regions on this page.

First up, there’s a bubble chart showing a comprehensive view of the top resource consumers of metric data over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates relative volume. You can see from this example above that, at a quick glance, Host metrics are the largest inbound flow into the repository when measured by number of rows. Closely following behind, though, are a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections come to around 0.7MB of data, while the total collected for WebLogic Server and Application Deployments is 0.38MB and 0.37MB respectively. If you want this breakdown of the volume of data collected, simply hover over a bubble in the chart and you’ll get a floating tooltip showing the information.

Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost, in terms of Number of Rows, Number of Collections and Storage cost of data, for each metric type.

Looking at another panel on this page we can see a different view of this data. It shows the Top N metrics (the drop-down allows you to select 10, 15 or 20), sorted by volume of data. In the case above we can see that the largest metric collection by volume over the last 30 days is the information about OS Registered Software on a Host target.

Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It’s a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation.
Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing the most load, if appropriate. The last tab we’ll look at on this page is the Schema tab.

The Schema Tab

Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for the users. Not least because ensuring storage is managed well ensures continued availability of EM for monitoring purposes.

The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend here, showing that storage in the EM Repository has been steadily consumed over the last few days. This is normal, as the EM being used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you’re likely to see a trend of space allocation that plateaus.

The table below the trend chart shows the Top 20 Tables/Indexes, sorted descending by space consumed. You can switch the trend chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region.

The last region to highlight on this page shows information about the Purge policies in effect in the EM Repository. This information is useful to illustrate to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository. Of course, it’s also been a long-requested feature to have the ability to modify these default retention periods, and you can do that from this screen too. As there are interdependencies between some data elements, you can’t modify retention policies on a feature-by-feature basis. Instead, retention policies take categories of information and bundle them together in Groups, and retention policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these can have a significant impact on both the EM Repository’s storage footprint and its performance. For now, we’re just highlighting the feature’s visibility on these new pages.

If you’re a user of EM12c, we hope the new features you see here address some of the feedback that’s been given on these pages over the past few releases. We’ll look out for any comments or feedback you have on these pages!
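
A practical footnote, not taken from the post itself: the three availability questions surfaced in the Repository Details panel can also be sanity-checked at the SQL prompt against the repository database, using standard Oracle dynamic performance views. The sketch below assumes a suitably privileged connection; the ALTER SYSTEM line is purely illustrative of correcting a parameter flagged by the minimum-recommended-values check (the parameter and value are made up for the example, and a static parameter change like this only takes effect after a restart).

-- Archivelog mode and Flashback status of the repository database.
SELECT log_mode, flashback_on FROM v$database;

-- Most recent completed RMAN backup recorded in the control file.
SELECT MAX(end_time) AS last_completed_backup
  FROM v$rman_backup_job_details
 WHERE status = 'COMPLETED';

-- Illustrative only: raise a parameter flagged by the configuration check.
ALTER SYSTEM SET processes = 600 SCOPE = SPFILE;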

    Read the article

  • Is there a standard Mac OS X Snow Leopard home folder naming convention?

    - by JGFMK
    I recently re-installed a fresh copy of Snow Leopard on my Mac. I was a bit surprised to find that, when the process completed, it had changed the name of my home folder to use my name rather than my initials. This messed up all the paths in my Eclipse IDE workspace. I vaguely remember being asked whether I use the machine for home or business use, and wondered if something I did there has a bearing on the way the default name is assigned. Does anyone have more information on this? Links to official Apple install docs, etc. would be appreciated. Cheers.

    Read the article

< Previous Page | 541 542 543 544 545 546 547 548 549 550 551 552  | Next Page >