Search Results

Search found 414 results on 17 pages for 'agents'.

Page 11/17 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • puppet onlyif specified nodes

    - by Valintinr
    I'm trying to write a Puppet template. I have a puppet master and a few puppet agents, and their tasks need to be kept separate; I think the node's hostname is a good way to do this. But when I tried it I ran into the error "puppet-agent[169037]: (/Stage[main]//Exec[adduser]) Could not evaluate: Could not find command 'ru1'" - see the code below: exec { 'adduser': command => 'sudo adduser -m -p pawSfQewWrUAA test -G wheel', path => [ '/bin','/usr/bin' ], onlyif => "$hostname == ru1" } I need to run this task only on the node with the hostname ru1. How can I do this? Thanks. (One possible approach is sketched below.)
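    One possible approach, sketched here as an editorial illustration rather than an answer from the thread: onlyif expects a shell command to run (which is why Puppet tried to execute 'ru1'), so the usual pattern is to wrap the resource in a conditional on the hostname fact instead.

        # assumes the $hostname fact resolves to 'ru1' on the one node that should get the user
        if $hostname == 'ru1' {
          exec { 'adduser':
            command => 'sudo adduser -m -p pawSfQewWrUAA test -G wheel',
            path    => [ '/bin', '/usr/bin', '/usr/sbin' ],
            unless  => 'id test',   # hypothetical guard so the exec only runs while the user is missing
          }
        }

    A node 'ru1' { ... } block in site.pp would achieve the same scoping.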

    Read the article

  • Enterprise Level Monitoring Solution

    - by Garthmeister J.
    My company is currently looking to replace our current solution used for monitoring our web-based enterprise solutions for both up-time and performance. Please note this is not intended to be a network monitoring-type solution (internally we currently use Nagios). If anyone has a provider that they have had a positive experience with, it would be much appreciated. Here is a list of our requirements: • Must have a large number of probes/agents around the globe to be representative of our customer base • Must have a flexible scripting capability to automate multi-step user actions • 24 hour a day monitoring • Flexible alerting system • Report generation capability • Mimic browser specific monitoring (optional, not a must-have)

    Read the article

  • Finding which tiles are intersected by a line, without looping through all of them or skipping any

    - by JustSuds
    I've been staring at this problem for a few days now. I rigged up this graphic to help me visualise the issue: http://i.stack.imgur.com/HxyP9.png (from the graph, we know that the line intersects [1, 1], [1, 2], [2, 2], [2, 3], ending in [3, 3]). I want to step along the line to each grid space and check whether the material of the grid space is solid. I feel like I already know the math involved, but I haven't been able to string it together yet. I'm using this to test line of sight and eliminate nodes after a path is found via my pathfinding algorithms - my agents can't see through a solid block, therefore they can't move through one, and therefore the node is not eliminated from the path, because it is required to navigate a corner. So, I need an algorithm that will step along the line to each grid space that it intersects. Any ideas? I've taken a look at a lot of common algorithms, like Bresenham's, and one that steps at predefined intervals along the line (unfortunately, that method skips tiles if they intersect with a smaller wedge than the step size). I'm populating my whiteboard now with a mass of floor() and ceil() functions - but it's getting overly complicated and I'm afraid it might cause a slowdown. (A sketch of the standard grid-traversal approach follows.)
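    For reference, here is a sketch of the standard approach (the grid traversal usually credited to Amanatides & Woo), which advances exactly one tile per step and therefore never skips a tile the segment clips, however thin the wedge. This is an editorial illustration in C#; the solidity check against your tile data is left as a hypothetical callback.

        using System;
        using System.Collections.Generic;

        static class GridTraversal
        {
            // Every unit-grid cell touched by the segment (x0,y0) -> (x1,y1), in order.
            public static IEnumerable<(int X, int Y)> CellsOnSegment(double x0, double y0, double x1, double y1)
            {
                int x = (int)Math.Floor(x0), y = (int)Math.Floor(y0);
                int xEnd = (int)Math.Floor(x1), yEnd = (int)Math.Floor(y1);
                int stepX = Math.Sign(x1 - x0), stepY = Math.Sign(y1 - y0);

                // Parametric distance (0..1 along the segment) to the next vertical/horizontal
                // grid line, and the spacing between successive grid lines on each axis.
                double tMaxX = stepX != 0 ? (stepX > 0 ? x + 1 - x0 : x0 - x) / Math.Abs(x1 - x0) : double.PositiveInfinity;
                double tMaxY = stepY != 0 ? (stepY > 0 ? y + 1 - y0 : y0 - y) / Math.Abs(y1 - y0) : double.PositiveInfinity;
                double tDeltaX = stepX != 0 ? 1.0 / Math.Abs(x1 - x0) : double.PositiveInfinity;
                double tDeltaY = stepY != 0 ? 1.0 / Math.Abs(y1 - y0) : double.PositiveInfinity;

                yield return (x, y);
                while (x != xEnd || y != yEnd)
                {
                    // Step along whichever axis reaches its next grid line first.
                    if (tMaxX < tMaxY) { x += stepX; tMaxX += tDeltaX; }
                    else               { y += stepY; tMaxY += tDeltaY; }
                    yield return (x, y);

                    // Guard against floating-point drift near the far endpoint.
                    if (tMaxX > 1 && tMaxY > 1) yield break;
                }
            }
        }

    Line of sight is then just iterating those cells and stopping at the first solid one, e.g. CellsOnSegment(ax, ay, bx, by).Any(c => IsSolid(c.X, c.Y)) with IsSolid being your own tile lookup. Segments that pass exactly through a grid corner may need an explicit tie-break if you want both diagonal neighbours treated as blocking.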

    Read the article

  • How do you quantify competency in terms of time (years)?

    - by o.k.w
    While looking for a job via agencies some time ago, I kept getting questions from the recruitment agents, or in the application forms, like: how many years of experience do you have in Oracle, ASP.NET, J2EE, etc. etc.? At first I answered faithfully: 5 yrs, 7 yrs, 2 yrs, none, a few months, etc. Then I thought: I could be doing something shallow for 7 years and not be competent at it, simply because I am just doing minor support for a legacy system running SQL 2000 which has required 10 days of my time over the past 7 years. Eventually I declined to answer such questions, and I wonder why they still get asked. Anyone who has just graduated with a computer science degree can claim 3 to 4 years of experience in anything they 'touched' in the curriculum, which to me can be equivalent to zero or 10 years depending on how you look at it. It might have held true decades ago, when programmers and IT skills were of a very different nature. I might be wrong, but I really doubt 'time' or 'years' is a good gauge of competency or experience anymore. Any opinions/rebuttals are welcome!

    Read the article

  • Server spec for a small business [duplicate]

    - by I'll-Be-Back
    This question already has an answer here: "Can you help me with my capacity planning?" (2 answers). I need to buy a decent server to run Windows Server 2012 and a Linux web server (internal use only - intranet). I will install ESXi with 2 or 3 VMs. There will be about 80-100 agents at work; they will log in (against the domain controller) on client PCs in the morning (between 9:40am and 10:05am). They can only use the IE browser and everything else will be locked down. They will not have any storage space, no email, etc. Is this spec decent enough? 2U Supermicro 825 chassis with X9SCL-F board, 1x Intel E3-1290v2, 16GB DDR3, 2x Intel 520 Series 240GB, 2x 2TB Seagate Barracuda, LSI 4-port SAS RAID controller.

    Read the article

  • High CPU usage from system [on hold]

    - by user195641
    Can anyone explain this? Is it bad for my CPU? Can anyone help me? I don't know what to do! There is a long delay when I open a new task, even though it's an Intel Core Duo Extreme at 3.0 GHz. Thanks. There are about 100 items monitored with the agent. They are also monitored on other, identical hosts where the Zabbix agent does not consume nearly as much CPU. Agents send collected data to a Zabbix proxy. The agent configuration is the default. The host CPU has 8 cores (2.4 GHz). The smallest interval for monitored items is 60 seconds.

    Read the article

  • Openfire and LDAP issues

    - by clsmith
    Thanks in advance for the help. Has anyone seen this issue with Openfire? Currently I run Openfire on Fedora, authenticating against Windows 2003 and using MySQL for the database. When I bring up two clients and they talk to each other, message delivery is slow. Sometimes it can take 5-15 minutes for something sent to reach the other person (this is with only two people on the Openfire server). I ran a tcpdump on port 389 and see that the machine is running thousands of queries against LDAP. When I open the capture in Wireshark I notice that it is transferring the entire contact list, or checking on the status of the entire contact list. When I run debug on Openfire itself I am presented with only this small message in the log:
    2010.06.08 07:01:17 LdapManager: Starting LDAP search...
    2010.06.08 07:01:17 LdapManager: ... search finished
    2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()...
    2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context...
    2010.06.08 07:01:17 LdapManager: ... context created successfully, returning.
    2010.06.08 07:01:17 LdapManager: Trying to find a groups's DN based on it's groupname. cn: Spark agents CLT, Base DN: OU="Hidden",DC="Hidden",DC="net"...
    2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()...
    2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context...
    2010.06.08 07:01:17 LdapManager: ... context created successfully, returning.
    2010.06.08 07:01:17 LdapManager: Starting LDAP search...
    2010.06.08 07:01:17 LdapManager: ... search finished
    2010.06.08 07:01:17 LdapManager: Trying to find a groups's DN based on it's groupname. cn: Spark agents CLT, Base DN: OU="Hidden",DC="Hidden",DC="net"...
    2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()...
    2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context...
    2010.06.08 07:01:17 LdapManager: ... context created successfully, returning.
    2010.06.08 07:01:17 LdapManager: Starting LDAP search...
    2010.06.08 07:01:17 LdapManager: ... search finished
    I thought this was a configuration issue on my end and started to look into the cache settings on the Openfire admin pages. I tweaked the settings as recommended by those pages and still get the same issue. It doesn't seem to cache the contact list - or this might be a feature never fixed or implemented. Has anyone gone through this before? I have searched online and I see others report great experiences with Openfire, with no issues like mine - or is it because no one checked the queries? For the time being I created a new domain controller and moved Openfire to that computer so it can run local queries. This helps reduce the delay a lot, but when I run the server performance monitor I see that, with only two people using that Openfire server, it handles 593.7 requests per second. Thanks for your help; if I didn't provide enough data please let me know what you need and I can find it.

    Read the article

  • Call a member function implementing the {{#linkTo ...}} helper from JavaScript code

    - by gonvaled
    I am trying to replace this navigation menu: <nav> {{#linkTo "nodes" }}<i class="icon-fixed-width icon-cloud icon-2x"></i>&nbsp;&nbsp;{{t generic.nodes}} {{grayOut "(temporal)"}}{{/linkTo}} {{#linkTo "services" }}<i class="icon-fixed-width icon-phone icon-2x"></i>&nbsp;&nbsp;{{t generic.services}}{{/linkTo}} {{#linkTo "agents" }}<i class="icon-fixed-width icon-headphones icon-2x"></i>&nbsp;&nbsp;{{t generic.agents}}{{/linkTo}} {{#linkTo "extensions" }}<i class="icon-fixed-width icon-random icon-2x"></i>&nbsp;&nbsp;{{t generic.extensions}}{{/linkTo}} {{#linkTo "voiceMenus" }}<i class="icon-fixed-width icon-sitemap icon-2x"></i>&nbsp;&nbsp;{{t generic.voicemenus}}{{/linkTo}} {{#linkTo "queues" }}<i class="icon-fixed-width icon-tasks icon-2x"></i>&nbsp;&nbsp;{{t generic.queues}}{{/linkTo}} {{#linkTo "contacts" }}<i class="icon-fixed-width icon-user icon-2x"></i>&nbsp;&nbsp;{{t generic.contacts}}{{/linkTo}} {{#linkTo "accounts" }}<i class="icon-fixed-width icon-building icon-2x"></i>&nbsp;&nbsp;{{t generic.accounts}}{{/linkTo}} {{#linkTo "locators" }}<i class="icon-fixed-width icon-phone-sign icon-2x"></i>&nbsp;&nbsp;{{t generic.locators}}{{/linkTo}} {{#linkTo "phonelocations" }}<i class="icon-fixed-width icon-globe icon-2x"></i>&nbsp;&nbsp;{{t generic.phonelocations}}{{/linkTo}} {{#linkTo "billing" }}<i class="icon-fixed-width icon-euro icon-2x"></i>&nbsp;&nbsp;{{t generic.billing}}{{/linkTo}} {{#linkTo "profile" }}<i class="icon-fixed-width icon-cogs icon-2x"></i>&nbsp;&nbsp;{{t generic.profile}}{{/linkTo}} {{#linkTo "audio" }}<i class="icon-fixed-width icon-music icon-2x"></i>&nbsp;&nbsp;{{t generic.audio}}{{/linkTo}} {{#linkTo "editor" }}<i class="icon-fixed-width icon-puzzle-piece icon-2x"></i>&nbsp;&nbsp;{{t generic.node_editor}}{{/linkTo}} </nav> With a more dynamic version. What I am trying to do is to reproduce the html inside Ember.View.render, but I would like to reuse as much Ember functionality as possible. Specifically, I would like to reuse the {{#linkTo ...}} helper, with two goals: Reuse existing html rendering implemented in the {{#linkTo ...}} helper Get the same routing support that using the {{#linkTo ...}} in a template provides. How can I call this helper from within javascript code? This is my first (incomplete) attempt. The template: {{view SettingsApp.NavigationView}} And the view: var trans = Ember.I18n.t; var MAIN_MENU = [ { 'linkTo' : 'nodes', 'i-class' : 'icon-cloud', 'txt' : trans('generic.nodes') }, { 'linkTo' : 'services', 'i-class' : 'icon-phone', 'txt' : trans('generic.services') }, ]; function getNavIcon (data) { var linkTo = data.linkTo, i_class = data['i-class'], txt = data.txt; var html = '<i class="icon-fixed-width icon-2x ' + i_class + '"></i>&nbsp;&nbsp;' + txt; return html; } SettingsApp.NavigationView = Ember.View.extend({ menu : MAIN_MENU, render: function(buffer) { for (var i=0, l=this.menu.length; i<l; i++) { var data = this.menu[i]; // TODO: call the ember function implementing the {{#linkTo ...}} helper buffer.push(getNavIcon(data)); } return buffer; } });

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free-form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved. Then the client sprung another note: oh, by the way, we have a bunch of customers (real estate agents) who need to post complete HTML documents. Uh oh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway. There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work, and so on. While the 'legit' HTML posted by these agents is basic in nature, it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution. I thought the best solution would be to use an HTML parser - in this case the Html Agility Pack - and then run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input; when you need to allow a larger array of HTML functionality, a blacklist is probably easier to manage, as the vast majority of elements and attributes could be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases.
The Microsoft Web Protection Library (AntiXSS)
My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML encoding and sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically, the Sanitation class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
To give you an example of what didn't work for me with the library here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " + "<a href='http://west-wind.com'>West Wind</a> site. " + "<img src='http://west-wind.com/images/new.gif' /> " ; and the code to sanitize it with the AntiXSS Sanitize class:@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value)) This produced a not so useful sanitized string: Here we go. Visit the <a>West Wind</a> site. While it removed the <script> tag (good) it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the white list, nor is there code available in this 'open source' library on CodePlex. Using Html Agility Pack for HTML Parsing The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the HTML Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model to full HTML documents or HTML fragments that let you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before you'll feel right at home with the Agility Pack. Blacklist based HTML Parsing to strip XSS Code For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument of what's better: A whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so a blacklist includes the obvious <script> tag plus any tag that allows loading of external content including <iframe>, <object>, <embed> and <link> etc. <form>  is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to. 
Here's my HtmlSanitizer implementation:

    using System.Collections.Generic;
    using System.IO;
    using System.Xml;
    using HtmlAgilityPack;

    namespace Westwind.Web.Utilities
    {
        public class HtmlSanitizer
        {
            public HashSet<string> BlackList = new HashSet<string>()
            {
                { "script" }, { "iframe" }, { "form" }, { "object" },
                { "embed" }, { "link" }, { "head" }, { "meta" }
            };

            /// <summary>
            /// Cleans up an HTML string and removes HTML tags in blacklist
            /// </summary>
            /// <param name="html"></param>
            /// <returns></returns>
            public static string SanitizeHtml(string html, params string[] blackList)
            {
                var sanitizer = new HtmlSanitizer();
                if (blackList != null && blackList.Length > 0)
                {
                    sanitizer.BlackList.Clear();
                    foreach (string item in blackList)
                        sanitizer.BlackList.Add(item);
                }
                return sanitizer.Sanitize(html);
            }

            /// <summary>
            /// Cleans up an HTML string by removing elements
            /// on the blacklist and all elements that start
            /// with onXXX.
            /// </summary>
            /// <param name="html"></param>
            /// <returns></returns>
            public string Sanitize(string html)
            {
                var doc = new HtmlDocument();
                doc.LoadHtml(html);
                SanitizeHtmlNode(doc.DocumentNode);

                //return doc.DocumentNode.WriteTo();

                string output = null;

                // Use an XmlTextWriter to create self-closing tags
                using (StringWriter sw = new StringWriter())
                {
                    XmlWriter writer = new XmlTextWriter(sw);
                    doc.DocumentNode.WriteTo(writer);
                    output = sw.ToString();

                    // strip off XML doc header
                    if (!string.IsNullOrEmpty(output))
                    {
                        int at = output.IndexOf("?>");
                        output = output.Substring(at + 2);
                    }
                    writer.Close();
                }
                doc = null;

                return output;
            }

            private void SanitizeHtmlNode(HtmlNode node)
            {
                if (node.NodeType == HtmlNodeType.Element)
                {
                    // check for blacklist items and remove
                    if (BlackList.Contains(node.Name))
                    {
                        node.Remove();
                        return;
                    }

                    // remove CSS Expressions and embedded script links
                    if (node.Name == "style")
                    {
                        if (!string.IsNullOrEmpty(node.InnerText))
                        {
                            if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                                node.ParentNode.RemoveChild(node);
                        }
                    }

                    // remove script attributes
                    if (node.HasAttributes)
                    {
                        for (int i = node.Attributes.Count - 1; i >= 0; i--)
                        {
                            HtmlAttribute currentAttribute = node.Attributes[i];

                            var attr = currentAttribute.Name.ToLower();
                            var val = currentAttribute.Value.ToLower();

                            // remove event handlers
                            if (attr.StartsWith("on"))
                                node.Attributes.Remove(currentAttribute);

                            // remove script links
                            else if (
                                //(attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                                val != null && val.Contains("javascript:"))
                                node.Attributes.Remove(currentAttribute);

                            // Remove CSS Expressions (group the checks so they only apply to style attributes)
                            else if (attr == "style" && val != null &&
                                     (val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")))
                                node.Attributes.Remove(currentAttribute);
                        }
                    }
                }

                // Look through child nodes recursively
                if (node.HasChildNodes)
                {
                    for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                    {
                        SanitizeHtmlNode(node.ChildNodes[i]);
                    }
                }
            }
        }
    }

Please note: use this as a starting point only for your own parsing and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route.
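For orientation, here is a minimal usage sketch of the class above (editorial illustration only; the input string and variable names are made up):

    string userHtml = "<b>Hi</b> <a href=\"javascript:alert('xss')\">x</a> <script>alert('xss')</script>";

    // instance usage with the default blacklist
    var sanitizer = new HtmlSanitizer();
    string clean = sanitizer.Sanitize(userHtml);

    // static helper with a custom blacklist
    string strict = HtmlSanitizer.SanitizeHtml(userHtml, "script", "iframe", "object", "embed", "form", "link");

Both calls return the markup with the blacklisted elements removed and with on* event handlers and javascript: references stripped from the remaining attributes.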
The code above is semi-generic for allowing full featured HTML fragments that only disallow script related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML style self-closing tags which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers. What a Pain To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice and I'm sharing the code here merely as a base line to start with and potentially expand on for specific needs. It's sad that Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor or possibly a third party that provides security tools. Luckily for my application we are dealing with a authenticated and validated users so the user base is fairly well known, and relatively small - this is not a wide open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like MarkDown or even a good HTML Edit control that can provide some limits on what types of input are allowed. Alas in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs. 
So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-)
Resources: Code on GitHub | Html Agility Pack | XSS Cheat Sheet | XSS Prevention Cheat Sheet | Microsoft Web Protection Library (AntiXss)
StackOverflow links: http://stackoverflow.com/questions/341872/html-sanitizer-for-net http://blog.stackoverflow.com/2008/06/safe-html-and-xss/ http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Security, HTML, ASP.NET, JavaScript

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and growing too big. Notably, this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with fixes for bugs which were found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this; among these are: Anu's very important blog post here, which describes both the problem and solutions to handle it - she covers both the Test Attachment Cleaner tool and some QFE/CU releases to fix underlying bugs which prevented the tool from being fully effective. Brian Harry's blog post here describes the problem too. This forum thread describes the problem with some solution hints. Ravi Shanker's blog post here describes best practices on solving this (TBP). Grant Holliday's blog post here describes strategies to use the Test Attachment Cleaner both to detect space problems and to rectify them. The problem can be divided into the following areas: publishing of test results from builds; publishing of manual test results and their attachments in particular; publishing of deployment binaries for use during a test run; and bugs in SQL Server preventing total cleanup of data. (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also the pushing of binaries, which happens for automated test runs - including tests run during a build using code coverage, which will include all the files in the deployment folder - contributes a lot to the size of the attached data. In order to handle this systematically, I have set up a 3-stage process: 1) find out if you have a database space issue; 2) set up your TFS server to minimize potential database issues; 3) if you have the "problem", clean up the database and otherwise keep it clean. Analyze the data. Are your database(s) growing? Are unused test results growing out of proportion? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or, you can use a set of queries. I often find queries faster to use because I can tweak them the way I want. But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them. If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the project collection databases and their sizes, in descending size order.
use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work   Find out which tables are possibly too large Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that the tbl_Content is in the Version Control category, so the major only big issue we have here is the tbl_AttachmentContent.   Find out which team projects have possibly too large attachments In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replace the collection database name with whatever applies in your case:   use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here it is pretty obvious the “Byggtjeneste – Projects” are the main team project to take care of, with the ones on lines 2-4 as the next ones.  Check which attachment types takes up the most space It can be nice to know which attachment types takes up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog. Check which file types, by their extension, takes up the most space Run the following query use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this:   Now you should have collected enough information to tell you what to do – if you got to do something, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.    Get your TFS server and environment properly set up Even if you have got the problem or if have yet not got the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized.  To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use, download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs. 
Binaries will still be uploaded if: code coverage is enabled in the test settings, or you change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE. The hotfix should be installed on: the build servers (the build agents), the machine hosting the Test Controller, local development computers (Visual Studio), and local test computers (MTM). It is not required to install it on the TFS server, test agents or the build controller - it has no effect on these programs. If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2. Workaround: if you suspect hanging ghost files, they can be - with some mental effort - deduced from the ghost counters using the following SQL query:
    use master
    SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
           index_type_desc, ghost_record_count, version_ghost_record_count,
           record_count, avg_record_size_in_bytes
    FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')
The problem is a stalled ghost cleanup process. Stop all components that depend on the SQL Server, like the TFS server and SPS services - that is, all applications that connect to it. Then restart the SQL Server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 branches have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker: "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) - this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed." This rule minimizes the risk of the ghosted hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly. "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system." This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command. Cleaning out the attachments: the TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point out a server URI including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project.
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique; it works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out all items, both in a one-off clean-up operation and in a general scheduled maintenance run. There are two scenarios we need to consider: cleaning up an existing overgrown database, and maintaining a server to avoid an overgrown database using a scheduled TAC. 1. Cleaning up a database which has grown too big due to these attachments. This job is a "once" job. We do this once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and not bother about the smaller stuff; that can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:
    <!-- Scenario : Remove large files -->
    <DeletionCriteria>
      <TestRun />
      <Attachment>
        <SizeInMB GreaterThan="10" />
      </Attachment>
    </DeletionCriteria>
Or like this: if you want to remove only dll's and pdb's above that size, add an Extensions section. Without that section, all extensions will be deleted.
    <!-- Scenario : Remove large files of type dll's and pdb's -->
    <DeletionCriteria>
      <TestRun />
      <Attachment>
        <SizeInMB GreaterThan="10" />
        <Extensions>
          <Include value="dll" />
          <Include value="pdb" />
        </Extensions>
      </Attachment>
    </DeletionCriteria>
Before you start up your scheduled maintenance, you should clear out all older items. 2. Scheduled maintenance using the TAC. Run a schedule every night, remove old items, and remove them in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.
    <!-- Scenario : Remove old items except if they have active or resolved bugs -->
    <DeletionCriteria>
      <TestRun>
        <AgeInDays OlderThan="180" />
      </TestRun>
      <Attachment />
      <LinkedBugs>
        <Exclude state="Active" />
        <Exclude state="Resolved"/>
      </LinkedBugs>
    </DeletionCriteria>
In my experience there are projects which are left with active or resolved work items although no further work is done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step approach: use the schedule above with no LinkedBugs as a sweeper task taking away all data older than you care about, then have another scheduled TAC task to take out more specifically the attachments that you are not likely to use.
This task could be much more specific, and based on your analysis clean out what you know is troublesome data. <!-- Scenario : Remove specific files early --> <DeletionCriteria> <TestRun > <AgeInDays OlderThan="30" /> </TestRun> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="iTrace"/> <Include value="dll"/> <Include value="pdb"/> <Include value="wmv"/> </Extensions> </Attachment> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved" /> </LinkedBugs> </DeletionCriteria> The readme document for the TAC says that it recognizes “internal” extensions, but it does recognize any extension. To run the tool do the following command: tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete   Shrinking the database You could run a shrink database command after the TAC has run in cases where there are a lot of data being deleted.  In this case you SHOULD do it, to free up all that space.  But, after the shrink operation you should do a rebuild indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild indexes, reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it need to add more records.  In fact, it is regarded as a bad practice to shrink the database regularly.  So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you do a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards.  I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task and just execute it when you need to do this.
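As a concrete illustration of that last step (a sketch only - substitute your own collection database and table names, and fold it into your own maintenance plan), the shrink followed by an index rebuild could look like this in T-SQL:

    USE [Tfs_DefaultCollection];
    GO
    -- reclaim the space freed by a large TAC cleanup
    DBCC SHRINKDATABASE (N'Tfs_DefaultCollection');
    GO
    -- rebuild (not just reorganize) the indexes afterwards, e.g. on the attachment table,
    -- because the shrink leaves them heavily fragmented
    ALTER INDEX ALL ON dbo.tbl_AttachmentContent REBUILD;
    GO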

    Read the article

  • CruiseControl [.Net] vs TeamCity for continuous integration?

    - by zappan
    I would like to ask which automated build environment you consider better, based on practical experience. I'm planning to do some .NET and some Java development, so I would like a tool that supports both platforms. I've been reading around and found out about CruiseControl.NET, used for Stack Overflow's development, and TeamCity, with its support for build agents on different OS platforms and for different programming languages. So, if you have practical experience with both, which one do you prefer and why? Currently I'm mostly interested in ease of use and management of the tool, and much less in the fact that CC is open source while TC becomes subject to licensing at some point when you have many projects to run (because I need it for a small number of projects). Also, if there is some other tool that meets the above and you believe it's worth a recommendation, feel free to include it in the discussion.

    Read the article

  • Is there a list of known web crawlers?

    - by J. Pablo Fernández
    I'm trying to get accurate download numbers for some files on a web server. I look at the user agents: some are clearly bots or web crawlers, but for many I'm not sure - they may or may not be web crawlers, and they are causing many downloads, so it's important for me to know. Is there a list of known web crawlers somewhere, with some documentation like user agent, IPs, behaviour, etc.? I'm not interested in the official ones, like Google's, Yahoo's, or Microsoft's; those are generally well behaved and self-identified. (A rough fallback heuristic is sketched below.)
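    Until a definitive list turns up, a rough heuristic many people fall back on is substring matching against the user-agent string. A minimal C# sketch (the substring list below is illustrative and far from exhaustive):

        using System;
        using System.Linq;

        static class CrawlerCheck
        {
            // Illustrative fragments seen in well-known crawler user agents.
            static readonly string[] BotHints =
                { "bot", "crawler", "spider", "slurp", "googlebot", "bingbot", "baiduspider", "yandex" };

            public static bool LooksLikeCrawler(string userAgent) =>
                !string.IsNullOrEmpty(userAgent) &&
                BotHints.Any(h => userAgent.IndexOf(h, StringComparison.OrdinalIgnoreCase) >= 0);
        }

    Anything this misses (or falsely flags) still needs manual review, which is exactly why a maintained list with documented user agents, IPs and behaviour would be valuable.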

    Read the article

  • The specified type member 'EntityKey' is not supported in LINQ to Entities

    - by user300992
    I have two entities, "UserProfile" and "Agent", with a 1-to-many relationship between them. I want to write a query that gets a list of Agents given the userProfileEntityKey. When I run it, I get the error "The specified type member 'EntityKey' is not supported in LINQ to Entities": public IQueryable<Agent> GetAgentListByUserProfile(EntityKey userProfileEntityKey) { ObjectQuery<Agent> agentObjects = this.DataContext.AgentSet; IQueryable<Agent> resultQuery = (from p in agentObjects where p.UserProfile.EntityKey == userProfileEntityKey select p); return resultQuery; } So, what is the correct way to do this? Do I use p.UserProfile.UserId == UserId? If that's the case, it's not conceptual anymore. Or should I write an object query instead of a LINQ query? (A sketch of the property-based version follows.)
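    For what it's worth, a sketch of the property-based comparison the question hints at (assuming UserId is the key property of UserProfile, as the question implies):

        public IQueryable<Agent> GetAgentListByUserProfile(int userProfileId)
        {
            // LINQ to Entities can translate a comparison on a mapped key property,
            // but not on the EntityKey object itself.
            return this.DataContext.AgentSet
                       .Where(a => a.UserProfile.UserId == userProfileId);
        }

    Whether that is "conceptual" enough is a design question, but it is the form the provider can translate to SQL.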

    Read the article

  • Is there any killer application for Ontology/semantics/OWL/RDF yet?

    - by narnirajesh
    Hi guys, I got interested in semantic technologies after reading a lot of books, blogs and articles saying that they would make data machine-understandable, allow intelligent agents to do great reasoning, enable automated and dynamic service composition, etc. I have been reading the same stuff for two years now. The number of articles, blogs and semantic conferences has increased considerably, but I am still unable to see any killer application. Why is that? Or is there some application or product (commercial or open source) already in existence which actually does all that is being boasted of? To put it more precisely: is there any product that leverages semantic technologies (especially RDF/OWL/SPARQL) and delivers functionality, performance or maintainability which would not have been possible with existing (non-semantic) technologies? Some product that is completely dependent on semantic technologies, really adds value for customers, and generates revenue?

    Read the article

  • How can I prevent Rails from "pluralizing" a column name?

    - by Mike
    I'm using dwilkie's foreigner plugin for Rails. I have a table creation statement that looks like: create_table "agents_games", :force => true, :id => false do |t| t.references :agents, :column => :agent_id, :foreign_key => true, :null => false t.references :games, :column => :game_id, :foreign_key => true, :null => false end However, this generates the following SQL: CREATE TABLE "agents_games" ("agents_id" integer NOT NULL, "games_id" integer NOT NULL) I want the columns to be called agent_id and game_id - not agents_id and games_id. How can I prevent Rails from pluralizing the columns? I tried the following in my environment.rb file, which didn't help: ActiveSupport::Inflector.inflections do |inflect| inflect.uncountable "agent_id", "game_id" end (A sketch using the singular form is below.)
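    For reference, a sketch of the singular form (this follows stock Rails behaviour, where t.references derives the column name from the singular symbol you pass, so no inflection overrides should be needed; the :foreign_key option is kept from the question's foreigner setup):

        create_table "agents_games", :force => true, :id => false do |t|
          t.references :agent, :foreign_key => true, :null => false   # => integer agent_id column
          t.references :game,  :foreign_key => true, :null => false   # => integer game_id column
        end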

    Read the article

  • Fail to profile remote java app using TPTP

    - by rperez
    Hi all, I am trying to profile CPU usage using TPTP. The application to profile runs on Linux RH AS5. I installed and configured the Agent Controller as described here, and ran the Java application using the command java '-agentlib:JPIBootLoader=JPIAgent:server=standalone,file=log.trcxml;CGProf' MyApp. The monitoring station is the all-in-one TPTP version 4.6.2. I followed the steps described here in Eclipse: in the "Profile Configuration" I chose a new configuration for "Attach to Agent", set the host to the remote Linux machine where MyApp is running, and the connection test succeeded. But when I get to the "Agents" tab I see "Pending...", a background process "Fetching children for host" keeps running and never finds anything, which makes it impossible to profile. Any idea?

    Read the article

  • Is there a way to avoid the App Store?

    - by Grassino87
    Hi, I need to develop an iPhone application that is a client of a server-side application. This application is not for customers but for sales agents. I know that if I submit it to Apple for the App Store they will reject it, because the application makes no sense as an App Store app. The company is small, so I can't use the Enterprise program. The only option I can use now is Ad Hoc distribution, but in that case every update requires iTunes, and I must find a way to avoid this. Thanks for the help.

    Read the article

  • Basic questions about SNMP

    - by David Hodgson
    Hi, I'm learning about SNMP and writing some applications that use it. I have some basic questions about the protocol: Do the agents store their state on the device itself? If there is a trap set on an agent, can you poll the same OID to get the same information? Without using a MIB file, is there a way to query a device for all of its information at once? If not, and you're writing your own customized manager, do you have to know the structure of what it reports up front? If you're setting up an agent to report, is there usually a way to control how often it sends a trap, or does it send a trap whenever some condition is satisfied? (A small example for the bulk-query question is below.)
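    On the bulk-query question, one common way to dump everything a device exposes without a MIB file is a walk from the MIB-2 root (a sketch assuming the Net-SNMP command line tools and an SNMPv2c community string):

        snmpwalk -v2c -c public 192.0.2.1 1.3.6.1.2.1

    Without the MIB you only get numeric OIDs and raw values back, so you still need to know (or look up) the structure to interpret what the device reports.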

    Read the article

  • Interface for classes that have nothing in common

    - by Tomek Tarczynski
    Let's say I want to make a few classes to determine the behaviour of agents. Good practice would be to make some common interface for them; such an interface (simplified) could look like this: interface IModel { void UpdateBehaviour(); } All, or at least most, of these models would have some parameters, but the parameters of one model might have nothing in common with the parameters of another model. I would still like to have some common way of loading parameters. Question: what is the best way to do that? Is it maybe just adding the method void LoadParameters(object parameters) to IModel? Or creating an empty interface IParameters and adding the method void LoadParameters(IParameters parameters)? Those are the two ideas I came up with, but I don't like either of them. (A third option is sketched below.)
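    For comparison, a third option sometimes used in this situation is a generic parameter type (a sketch only; the flocking names are made up):

        interface IModel
        {
            void UpdateBehaviour();
        }

        // Each model states its own parameter type; no empty marker interface needed.
        interface IModel<TParameters> : IModel
        {
            void LoadParameters(TParameters parameters);
        }

        class FlockingParameters { public double Cohesion; public double Separation; }

        class FlockingModel : IModel<FlockingParameters>
        {
            private FlockingParameters _parameters;
            public void LoadParameters(FlockingParameters parameters) { _parameters = parameters; }
            public void UpdateBehaviour() { /* use _parameters */ }
        }

    The non-generic IModel still lets you keep heterogeneous models in one collection, while parameter loading stays type-safe per model.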

    Read the article

  • Opera Mobile for Windows + Reported Screen Size

    - by Antony Scott
    I know this isn't a direct programming question, but it's kinda relevant as I'm trying to get a good testing environment set up before I embark on my latest project. I'm trying to set up Opera Mobile for Windows to allow me to test a new website. The user agent I get is a fairly generic one, so my workaround is to tweak my mobile.browser file to have the correct screen width and height of the target device. Is it possible to add to the list of "fake" user agents that Opera Mobile for Windows can pretend to be? It currently supports S60, Android and Windows Mobile.

    Read the article

  • Browser Incompatible Please Use Mozilla Firefox 3.2 above OR IE 8.0 above

    - by Satish Kalepu
    Hi, the Chrome browser is showing this: "Browser Incompatible. Please Use Mozilla Firefox 3.2 above OR IE 8.0 above" for the website http://test.theartness.com/... I don't understand what is causing this error; I checked character encoding and everything else I could think of. Looking at it through http://web-sniffer.net/, it seems the server sends that response for Netscape/Mozilla user agents. It works in Internet Explorer 7. Has the server blocked those user agents? Please help me figure out what is wrong with it. Best regards, Satish Kalepu.

    Read the article

  • Prevent users from being able to access a webpage via web browser?

    - by Rob
    My friend and I are working on a program. This program is going to submit GET data to our webpage. However, we don't want users accessing the webpage any other way than through the program. We can prevent users from sharing the program using HWID authentication, but nothing prevents them from using a packet scanner to get the URL of the webpage. We thought about user-agent authentication, which we will implement, but user agents can easily be spoofed. So my question is: how can we prevent users from accessing the webpage directly, instead of through the program? Even if you don't have an answer that will completely work, anything that will help deter them would be nice. Currently we will be implementing: HWID authentication to use the program; user-agent authentication to access the web page; and instant IP blacklisting of anyone accessing the webpage without the proper user agent.

    Read the article

  • Clojure lots of threads

    - by Timothy Baldridge
    I just got done watching Rich Hickey's "Clojure Concurrency" talk, and I have a few questions about threads. Let's say I have a situation with lots of agents - say 10,000 of them running on one machine. I'd rather not have 10,000 CPU threads running at once, but I don't want threads to be blocked by the actions of other threads. In this example I won't really be waiting for replies; instead each agent will send a message or two and then wait until it gets a message. How would I structure a program like this without getting 10k OS threads, which would probably end up slowing the system down? (A small sketch follows.)
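    A minimal sketch of why agent count and thread count are decoupled (editorial illustration): actions dispatched with send run on a small fixed-size thread pool, so 10,000 agents do not imply 10,000 OS threads; send-off is the variant that may grow extra threads for blocking work.

        ;; 10,000 agents, but only a handful of pooled threads doing the work
        (def agents (vec (repeatedly 10000 #(agent 0))))
        (doseq [a agents] (send a inc))     ; queued onto the shared CPU-bound pool
        (apply await-for 5000 agents)       ; wait (with a timeout) for the sends to drain
        (reduce + (map deref agents))       ; => 10000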

    Read the article

  • Zend_Form validation problem

    - by GrumpyCanuck
    I am having problems getting validation to work for a form built using Zend_Form. The idea is this: I have two dropdown. One is a list of players. The other is a list of free agents who play the same position as the player. I am using an onChange javascript callback to run some Ajax code that replaces the free agent list dropdown with a new one at the position of the player they've selected from the player dropdown. Now, perhaps this is the wrong way, but I built the form by creating an instance of Zend_Form and then creating all these setX methods that add elements to the form. My reasoning was that I wanted to display certain elements in specific places on the page, not just output $this-form on my template. The problem appears to be when I get the form post back, the validator seems to not know about the validation rule I set up for the free agent drop down. Here's some relevant code to look at. I'm a relative ZF n00b so feel free to tell me I am not doing things the ZF way if it leaps out at you. The action in the controller: public function indexAction() { if ($this->getRequest()->isPost()) { $form = new Baseball_Form_Transactions(); if ($form->isValid($this->_request->getPost())) { $data = $this->_request->getPost(); $leagueInfo = Doctrine::getTable('League')->findOneByShortName($data['shortLeagueName'])->toArray(); // Create the request top drop an existing player $transactionInfo = array( 'league_id' => $leagueInfo['id'], 'team_id' => $data['teamId'], 'player_id' => $data['players'], 'type' => 'drop', 'target_team_id' => 0, 'transaction_date' => date('Y-m-d H:m:s') ); $transaction = new Transaction(); $transaction->fromArray($transactionInfo); $transaction->save(); // Now we do the request to add a player $transactionInfo['team_id'] = 0; $transactionInfo['player_id'] = $data['freeAgents']; $transactionInfo['target_team_id'] = $data['teamId']; $transactionInfo['type'] = 'add'; $transaction = new Transaction(); $transaction->fromArray($transactionInfo); $transaction->save(); $this->_flashMessenger->addMessage('Added transaction'); } } $options = array( 'teamId' => $this->teamId, 'position' => 'C', 'leagueShortName' => $this->league ); $this->transactionForm->setMyPlayers($options); $this->transactionForm->setFreeAgents($options); $this->transactionForm->setTeamId($options); $this->transactionForm->setShortLeagueName($options); $this->view->transactionForm = $this->transactionForm; $this->view->messages = $this->_flashMessenger->getMessages(); $transaction = new Transaction(); $this->view->transactions = $transaction->byTeam($options); } Next we have the form itself public function setMyPlayers($options) { $data = Doctrine::getTable('Team')->find($options['teamId']); $players = array(); foreach ($data->Players->toArray() as $player) { $players[$player['id']] = "{$player['position']} - {$player['first_name']} {$player['last_name']}"; } $playersSelect = new Zend_Form_Element_Select( 'players', array( 'required' => true, 'label' => 'Players', 'multiOptions' => $players, ) ); $this->addElement($playersSelect); } public function setFreeAgents($options) { $q = Doctrine_Query::create() ->select('CONCAT(p.first_name, " ", p.last_name) as full_name, p.id, p.position') ->from('Player p') ->leftJoin('p.Teams t') ->leftJoin('t.League l ON l.short_name = ?', $options['leagueShortName']) ->where('t.id IS NULL') ->andWhere('p.position = ?', $options['position']) ->orderBy('p.last_name'); $q->setHydrationMode(Doctrine_Core::HYDRATE_ARRAY); $data = $q->execute(); $freeAgents = array(); foreach 
($data as $player) { $freeAgents[$player['id']] = $player['full_name']; } $freeAgentsSelect = new Zend_Form_Element_Select( 'freeAgents', array( 'label' => 'Free Agents', 'multiOptions' => $freeAgents, 'size' => 15 ) ); $freeAgentsSelect->setRequired(true); $this->addElement($freeAgentsSelect); } public function setShortLeagueName($options) { $shortLeagueNameHidden = new Zend_Form_Element_Hidden( 'shortLeagueName', array('value' => $options['leagueShortName']) ); $this->addElement($shortLeagueNameHidden); } public function setTeamId($options) { $teamIdHidden = new Zend_Form_Element_Hidden( 'teamId', array('value' => $options['teamId']) ); $this->addElement($teamIdHidden); } There is no init or __construct() method in the form. My problem seems simple enough: reject the form contents as invalid if they have not selected someone from the free agent list. Right now, it sails through as valid. I've spent some considerable time searching online for an answer, and haven't been able to find it. Thanks in advance for any help.

    Read the article

  • C# Button Text Unicode characters.

    - by Fossaw
    C# doesn't want to put Unicode characters on buttons. If I put \u2129 in the Text attribute of the button, the button displays the literal text \u2129, not the Unicode character (example - I chose 2129 because I could see it in the font currently active on the machine). I saw this question before, link text, but the question isn't really answered, just worked around. I am working on applications which are going all over the world, and I don't want to install all the fonts - more than "don't want": there are so many that I doubt the machine I am working on has sufficient disk space. Our overseas sales agents supply the Unicode character "numbers". Is there another way forward with this? As an aside (out of curiosity), why does it not work? (A sketch of the distinction is below.)
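    As a sketch of the distinction (button1 is a hypothetical WinForms button): the \uXXXX escape is a C# compiler feature, so it is only resolved inside a string literal in source code; the designer's Text property stores whatever you type literally, which is why \u2129 shows up as six characters.

        button1.Text = "\u2129";                        // escape resolved by the C# compiler
        button1.Text = char.ConvertFromUtf32(0x2129);   // build the string from a numeric code point at runtime
        button1.Text = ((char)0x2129).ToString();       // also fine for BMP characters like U+2129

    Whether the glyph actually renders still depends on the button's font containing it (or on font fallback), which is the separate font-installation problem the question raises.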

    Read the article
