Search Results

Search found 19525 results on 781 pages for 'say'.


  • Is Perl's flip-flop operator bugged? It has global state, how can I reset it?

    - by Evan Carroll
    I'm dismayed. OK, so this was probably the most fun Perl bug I've ever found; even today I'm learning new things about Perl. Essentially, the flip-flop operator .., which returns false until the left-hand side returns true and then true until the right-hand side returns false, keeps global state (or that is what I assume). My question is: can I reset it (perhaps this would be a good addition to the perl4-esque, hardly ever used reset())? Or is there no way to use this operator safely? I also don't see this (the global context bit) documented anywhere in perldoc perlop; is this a mistake?

    Code:

        use feature ':5.10';
        use strict;
        use warnings;

        sub search {
            my $arr = shift;
            grep { !( /start/ .. /never_exist/ ) } @$arr;
        }

        my @foo = qw/foo bar start baz end quz quz/;
        my @bar = qw/foo bar start baz end quz quz/;

        say 'first shot - foo';
        say for search \@foo;

        say 'second shot - bar';
        say for search \@bar;

    Spoiler:

        $ perl test.pl
        first shot
        foo
        bar
        second shot
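    One possible workaround (a sketch of my own, not from the original post): emulate the range check with a lexical flag inside the sub, so the state is reset on every call instead of living in the flip-flop operator's hidden global state.

        use feature ':5.10';
        use strict;
        use warnings;

        # Sketch: replace the flip-flop with an explicit per-call flag.
        sub search {
            my $arr      = shift;
            my $in_range = 0;                    # reset every time search() is called
            grep {
                $in_range = 1 if /start/;        # left-hand condition switches it on
                my $keep = !$in_range;
                $in_range = 0 if /never_exist/;  # right-hand condition switches it off (inclusive)
                $keep;
            } @$arr;
        }

    With this version both calls in the question return the same two elements (foo, bar), because nothing persists between calls.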

    Read the article

  • HTTP POST parameters order / REST URLs

    - by pq
    Let's say I'm uploading a large file via an HTTP POST request. Let's also say I have another parameter (other than the file) that names the resource the file is updating. The resource cannot be part of the URL the way you would do it with REST (e.g. foo.com/bar/123); let's say this is due to a combination of technical and political reasons. The server needs to ignore the file if the resource name is invalid or if, say, the IP address and/or the logged-in user are not authorized to update the resource. It looks like, if this POST comes from an HTML form that contains the resource name field first and the file field second, most (all?) browsers preserve this order in the POST request. But it would be naive to rely fully on that, no? In other words, the order of HTTP parameters is insignificant and a client is free to construct the POST in any order. Isn't that true? Which means that, at least in theory, the server may end up storing the whole large file before it can deny the request. It seems to me that this is a clear case where RESTful URLs have an advantage, since you don't have to look at the POST content to perform certain authorization/error checks on the request. Do you agree? What are your thoughts and experiences?
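    A sketch of the mitigation hinted at above (framework-agnostic WSGI; is_authorized and store_file are hypothetical helpers stubbed for the example): carry the resource name in a request header or the query string so the server can reject the request before it ever reads the body.

        def is_authorized(environ, resource):      # hypothetical check, stubbed for the sketch
            return resource is not None

        def store_file(resource, body):            # hypothetical storage call, stubbed
            pass

        def app(environ, start_response):
            resource = environ.get("HTTP_X_RESOURCE_NAME")   # assumed custom header
            if not is_authorized(environ, resource):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"not authorized for this resource"]

            body = environ["wsgi.input"].read()    # only now touch the (large) upload
            store_file(resource, body)
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"stored"]

    Whether the rejected upload is actually discarded before it has been fully received also depends on the web server sitting in front of the application.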

    Read the article

  • VB.NET documentation and exception question

    - by dcp
    Let's say I have this sub in VB.NET:

        ''' <summary>
        ''' Validates that <paramref name="value"/> is not <c>null</c>.
        ''' </summary>
        ''' <param name="value">The object to validate.</param>
        ''' <param name="name">The variable name of the object.</param>
        ''' <exception cref="ArgumentNullException">If <paramref name="value"/> is <c>null</c>.</exception>
        Sub ValidateNotNull(ByVal value As Object, ByVal name As String)
            If value Is Nothing Then
                Throw New ArgumentNullException(name, String.Format("{0} cannot be null.", name))
            End If
        End Sub

    My question is: is it proper to call this ValidateNotNull (which is what I would call it in C#), or should I stick with VB terminology and call it ValidateNotNothing instead? Also, in my exception message, is it proper to say "cannot be null", or would it be better to say "cannot be Nothing"? I sort of like the way I have it, but since this is VB, maybe I should use Nothing. On the other hand, since the exception itself is called ArgumentNullException, it feels weird to make the message say "cannot be Nothing". Anyway, I guess it's pretty nitpicky; I just wondered what you folks thought.

    Read the article

  • How to call a service operation at a REST-style WCF endpoint URI?

    - by Dieter Domanski
    Hi, is it possible to call a service operation at a WCF endpoint URI with a self-hosted service? I want to call some default service operation when the client enters the endpoint URI of the service. In the following sample these URIs correctly call the declared operations (SayHello, SayHi):

        http://localhost:4711/clerk/hello
        http://localhost:4711/clerk/hi

    But the URI http://localhost:4711/clerk does not call the declared SayWelcome operation. Instead it leads to the well-known 'Metadata publishing disabled' page. Enabling mex does not help; in this case the mex page is shown at the endpoint URI.

        private void StartSampleServiceHost()
        {
            ServiceHost serviceHost = new ServiceHost(typeof(Clerk), new Uri("http://localhost:4711/clerk/"));
            ServiceEndpoint endpoint = serviceHost.AddServiceEndpoint(typeof(IClerk), new WebHttpBinding(), "");
            endpoint.Behaviors.Add(new WebHttpBehavior());
            serviceHost.Open();
        }

        [ServiceContract]
        public interface IClerk
        {
            [OperationContract, WebGet(UriTemplate = "")]
            Stream SayWelcome();

            [OperationContract, WebGet(UriTemplate = "/hello/")]
            Stream SayHello();

            [OperationContract, WebGet(UriTemplate = "/hi/")]
            Stream SayHi();
        }

        public class Clerk : IClerk
        {
            public Stream SayWelcome() { return Say("welcome"); }
            public Stream SayHello()   { return Say("hello"); }
            public Stream SayHi()      { return Say("hi"); }

            private Stream Say(string what)
            {
                string page = @"<html><body>" + what + "</body></html>";
                return new MemoryStream(Encoding.UTF8.GetBytes(page));
            }
        }

    Is there any way to disable the mex handling and to enable a declared operation instead? Thanks in advance, Dieter
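    One thing worth trying (an assumption on my part, not something confirmed in the question): host the service with WebServiceHost from System.ServiceModel.Web instead of a plain ServiceHost. WebServiceHost adds the WebHttpBehavior for you and suppresses the help/metadata page at the base address, so the empty UriTemplate can answer GET requests to /clerk.

        using System;
        using System.ServiceModel.Web;

        class Program
        {
            static void Main()
            {
                // WebServiceHost creates a default WebHttpBinding endpoint at the
                // base address when no endpoint is configured explicitly, and does
                // not publish the HTML help/metadata page there.
                var host = new WebServiceHost(typeof(Clerk),
                    new Uri("http://localhost:4711/clerk/"));
                host.Open();
                Console.WriteLine("Clerk service listening; press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }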

    Read the article

  • Is there a reason to use the XML::LibXML::Number object in my XML::LibXML example?

    - by sid_com
    In this example I get '96' two times. Is there a possible case where I would need an XML::LibXML::Number object to achieve the goal?

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use XML::LibXML;

        my $xml_string =<<EOF;
        <?xml version="1.0" encoding="UTF-8"?>
        <filesystem>
          <path>
            <dirname>/var</dirname>
            <files>
              <action>delete</action>
              <age units="hours">10</age>
            </files>
            <files>
              <action>delete</action>
              <age units="hours">96</age>
            </files>
          </path>
        </filesystem>
        EOF
        #/

        my $doc  = XML::LibXML->load_xml( string => $xml_string );
        my $root = $doc->documentElement;

        my $result = $root->find( '//files/age[@units="hours"]' );
        $result = $result->get_node( 1 );
        say ref $result;           # XML::LibXML::Element
        say $result->textContent;  # 96

        $result = $root->find( 'number( //files/age[@units="hours"] )' );
        say ref $result;           # XML::LibXML::Number
        say $result;               # 96
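    One case where the number object shows up naturally (a small illustration of my own, not from the original post): XPath expressions that return a number rather than a node-set, such as count() or sum(), come back from find() as XML::LibXML::Number objects.

        # Assumes the same $root as in the example above.
        my $count = $root->find( 'count(//files/age[@units="hours"])' );
        say ref $count;   # XML::LibXML::Number
        say $count;       # 2

        my $total = $root->find( 'sum(//files/age[@units="hours"])' );
        say $total;       # 106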

    Read the article

  • Unit Testing Private Method in Resource Managing Class (C++)

    - by BillyONeal
    I previously asked this question under another name but deleted it because I didn't explain it very well. Let's say I have a class which manages a file. Let's say that this class treats the file as having a specific file format, and contains methods to perform operations on this file:

        class Foo
        {
            std::wstring fileName_;
        public:
            Foo(const std::wstring& fileName) : fileName_(fileName)
            {
                //Construct a Foo here.
            };

            int getChecksum()
            {
                //Open the file and read some part of it
                //Long method to figure out what checksum it is.
                //Return the checksum.
            }
        };

    Let's say I'd like to be able to unit test the part of this class that calculates the checksum. Unit testing the parts of the class that load in the file and such is impractical, because to test every part of the getChecksum() method I might need to construct 40 or 50 files! Now let's say I'd like to reuse the checksum method elsewhere in the class. I extract the method so that it now looks like this:

        class Foo
        {
            std::wstring fileName_;

            static int calculateChecksum(const std::vector<unsigned char> &fileBytes)
            {
                //Long method to figure out what checksum it is.
            }
        public:
            Foo(const std::wstring& fileName) : fileName_(fileName)
            {
                //Construct a Foo here.
            };

            int getChecksum()
            {
                //Open the file and read some part of it
                return calculateChecksum( something );
            }

            void modifyThisFileSomehow()
            {
                //Perform modification
                int newChecksum = calculateChecksum( something );
                //Apply the newChecksum to the file
            }
        };

    Now I'd like to unit test the calculateChecksum() method because it's easy to test and complicated, and I don't care about unit testing getChecksum() because it's simple and very difficult to test. But I can't test calculateChecksum() directly because it is private. Does anyone know of a solution to this problem?
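    One common way out (a sketch, assuming the checksum logic is pure and the readSomePartOfFile() helper below is hypothetical): promote the calculation to a free function in its own header so tests can call it directly, and keep the file-reading wrapper thin.

        // checksum.h -- pure calculation, easily testable, no file I/O
        #include <vector>

        int calculateChecksum(const std::vector<unsigned char>& fileBytes);

        // foo.h -- thin wrapper that owns the file handling
        #include <string>
        #include <vector>
        #include "checksum.h"

        class Foo {
            std::wstring fileName_;
            std::vector<unsigned char> readSomePartOfFile();  // hypothetical I/O helper
        public:
            explicit Foo(const std::wstring& fileName) : fileName_(fileName) {}

            int getChecksum() {
                return calculateChecksum(readSomePartOfFile());  // delegate to the free function
            }
        };

    A unit test can then feed calculateChecksum() hand-built byte vectors without touching the filesystem; alternatives such as a friend test class or a detail namespace trade off encapsulation differently.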

    Read the article

  • How do I handle both the checked and clicked events in jQuery?

    - by Someone
    Say I have a form, and on load of that form I have two radio buttons, say "yes" and "no". I have a set of actions to perform when either radio button is selected. Say we select "yes":

        // On load of the form
        if (("#yes:checked")) {
            $("#table").show();
            $("#div1").show();
            $("#div2").show();
            alert("Checked on Load");
        }

        // For the click
        $("#yes").bind('click', function() {
            $("#table").show();
            $("#div1").show();
            $("#div2").show();
            alert("Checked by Click");
        });

    I am writing the same repeated lines of code for two similar events. Is there any way to combine the "checked" and "clicked" handling? If I do it the way shown below, the "checked" case does not get handled until a "click" happens. Is there any other way to do it?

        $("#yes").bind('click', function() {
            if (("#yes:checked")) {
                $("#table").show();
                $("#div1").show();
                $("#div2").show();
                alert("Checked by Click");
            }
        });
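    One way to avoid the duplication (a sketch; note that the original check if(("#yes:checked")) tests a plain string, which is always truthy, so the test should be $("#yes").is(":checked")): put the shared work in one function, bind it to click, and call it once on load if the radio is already checked.

        function showYesSections() {
            $("#table, #div1, #div2").show();
        }

        $(function () {
            $("#yes").bind('click', showYesSections);   // runs whenever "yes" is clicked
            if ($("#yes").is(':checked')) {             // handles a pre-checked radio on load
                showYesSections();
            }
            // Equivalent shortcut: if ($("#yes").is(':checked')) { $("#yes").trigger('click'); }
        });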

    Read the article

  • Manipulating an NSTextField via AppleScript

    - by Garry
    A little side project I'm working on is a digital life assistant, much like Project JARVIS. What I'm trying to do is speak to my Mac, have my words translated to text, and then have the text interpreted by my program. Currently my app is very simple, consisting of a single window containing a single wrapped NSTextView. Using MacSpeech Dictate, when I say the custom command "Jeeves", MacSpeech ensures that my app is frontmost, highlights any text in the text field and clears it, then presses the Return key to trigger the textDidEndEditing method of NSTextField. This is done via AppleScript. MacSpeech then switches to dictation mode, and the next sentence I say will appear in the NSTextField. What I can't figure out is how to signify that I have finished saying a command to my program. I could simply say another keyword like "execute" or something similar that would send an AppleScript Return keystroke to my app (thereby triggering the textDidEndEditing event), but this is cumbersome. Is there a notification that happens when text is pasted into an NSTextField? Would a timer work that fires maybe three seconds after my program becomes frontmost (three seconds should be sufficient for me to say a command)? Thanks,

    Read the article

  • How is timezone handled in the lifecycle of an ADO.NET + SQL Server DateTime column?

    - by stimpy77
    Using SQL Server 2008. This is a really junior question and I could really use some elaborate information, but the information on Google seems to dance around the topic quite a bit, and it would be nice if there was some detailed elaboration on how this works...

    Let's say I have a datetime column and in ADO.NET I set it to DateTime.UtcNow.

    1) Does SQL Server store DateTime.UtcNow accordingly, or does it offset it again based on the timezone of where the server is installed, and then return it offset-reversed when queried? I think I know that the answer is "of course it stores it without offsetting it again" but want to be certain.

    So then I query for it and cast it from, say, an IDataReader column to a DateTime. As far as I know, System.DateTime has metadata that internally tracks whether it is a UTC DateTime or an offset DateTime, which may or may not cause .ToLocalTime() and .ToUniversalTime() to behave differently depending on this state. So,

    2) Does this casted System.DateTime object already know that it is a UTC DateTime instance, or does it assume that it has been offset?

    Now let's say I don't use UtcNow; I use DateTime.Now when performing an ADO.NET INSERT or UPDATE.

    3) Does ADO.NET pass the offset to SQL Server, and does SQL Server store DateTime.Now with the offset metadata?

    So then I query for it and cast it from, say, an IDataReader column to a DateTime.

    4) Does this casted System.DateTime object already know that it is an offset time, or does it assume that it is UTC?
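    A small sketch of the reading side (my own illustration, not from the question): SQL Server's datetime type stores no offset or kind at all, and ADO.NET hands the value back with DateTimeKind.Unspecified, so if you wrote DateTime.UtcNow you should re-stamp the kind yourself after reading. SQL Server 2008's datetimeoffset type is the column type that actually preserves an offset.

        using System;
        using System.Data;

        static class DateTimeReading          // hypothetical helper class for the sketch
        {
            public static DateTime ReadStoredUtc(IDataRecord record, int ordinal)
            {
                // The value comes back exactly as stored, but with Kind == Unspecified.
                DateTime raw = record.GetDateTime(ordinal);

                // Declare it UTC without converting it, so ToLocalTime() and
                // ToUniversalTime() behave predictably afterwards.
                return DateTime.SpecifyKind(raw, DateTimeKind.Utc);
            }
        }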

    Read the article

  • Selection issue in ListView or TreeView

    - by Maneesh
    Hello, I have a ListView control. While the form is being loaded I fill the list with approximately 100+ items. While filling it I check some parameters and decide which item/row needs to be selected, and I set the Selected property to true; refer to the code below:

        // some lines here...
        ListViewItem listViewItem = new ListViewItem("COL1");
        listViewItem.SubItems.Add("COL2");
        // check for some condition and then
        listViewItem.Selected = true;
        this.m_lstViewCtrl.Items.Add(listViewItem);

    This does select the item; there are no issues with that. However, say the control is sized to show only some 15 items but the selection is, say, the 35th item. Currently the scroll bar appears and the item is selected, but I have to scroll to see what was selected. Is it possible to scroll to the selected item so that the selection is clearly visible? Will the same apply to a TreeView?
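    A sketch of the usual fix (the wrapper method is hypothetical, but the API calls are standard WinForms): ListViewItem.EnsureVisible() scrolls the control so the item can be seen once it has been added; TreeView has the matching TreeNode.EnsureVisible().

        using System.Windows.Forms;

        static class ListViewHelper           // hypothetical helper class for the sketch
        {
            public static void AddItem(ListView list, bool select)
            {
                ListViewItem item = new ListViewItem("COL1");
                item.SubItems.Add("COL2");
                item.Selected = select;
                list.Items.Add(item);

                if (select)
                    item.EnsureVisible();     // scrolls the ListView so the selected row is visible
            }
        }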

    Read the article

  • More efficient way to update multiple elements in JavaScript and/or jQuery?

    - by Seiverence
    Say I have six divs with IDs "#first", "#second", ..., "#sixth". If I wanted to execute functions on each of those divs, I would set up an array of strings containing the names of the divs I want to update, e.g. ["first", "second", "third"]. If I wanted to apply a function, say to change the background color to red, I would set up a for loop that iterates through each element in the array:

        function updateAllDivsInTheList() {
            for (var i = 0; i < array.length; i++) {
                $("#" + array[i]).changeCssFunction();
            }
        }

    Whenever I create a new div, I would add it to the array. The issue is, if I have a large number of divs that need to get updated, say 1000 out of 1200 divs, it may be a pain/performance tank to sequentially iterate through every single element in the array. Is there some alternative, more efficient way of updating multiple divs without having to sequentially iterate through every element in an array, maybe with some other, more efficient data structure besides an array? Or is what I am doing already the most efficient way? If you can provide some example, that would be great.
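    One alternative sketch (the class name update-target is made up for the example): tag every div that should be updated with a shared class and let a single class selector do the work, so there is no id array to maintain. jQuery still iterates over the matched set internally, but the selection is one call and new divs join simply by carrying the class.

        // Markup: <div id="first" class="update-target">...</div>
        //         <div id="second" class="update-target">...</div>

        function updateAllTargets() {
            $(".update-target").css("background-color", "red");
        }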

    Read the article

  • Developer’s Life – Attitude and Communication – They Can Cause Problems – Notes from the Field #027

    - by Pinal Dave
    [Note from Pinal]: This is a 27th episode of Notes from the Field series. The biggest challenge for anyone is to understand human nature. We human have so many things on our mind at any moment of time. There are cases when what we say is not what we mean and there are cases where what we mean we do not say. We do say and things as per our mood and our agenda in mind. Sometimes there are incidents when our attitude creates confusion in the communication and we end up creating a situation which is absolutely not warranted. In this episode of the Notes from the Field series database expert Mike Walsh explains a very crucial issue we face in our career, which is not technical but more to relate to human nature. Read on this may be the best blog post you might read in recent times. In this week’s note from the field, I’m taking a slight departure from technical knowledge and concepts explained. We’ll be back to it next week, I’m sure. Pinal wanted us to explain some of the issues we bump into and how we see some of our customers arrive at problem situations and how we have helped get them back on the right track. Often it is a technical problem we are officially solving – but in a lot of cases as a consultant, we are really helping fix some communication difficulties. This is a technical blog post and not an “advice column” in a newspaper – but the longer I am a consultant, the more years I add to my experience in technology the more I learn that the vast majority of the problems we encounter have “soft skills” included in the chain of causes for the issue we are helping overcome. This is not going to be exhaustive but I hope that sharing four pieces of advice inspired by real issues starts a process of searching for places where we can be the cause of these challenges and look at fixing them in ourselves. Or perhaps we can begin looking at resolving them in teams that we manage. I’ll share three statements that I’ve either heard, read or said and talk about some of the communication or attitude challenges highlighted by the statement. 1 – “But that’s the SAN Administrator’s responsibility…” I heard that early on in my consulting career when talking with a customer who had serious corruption and no good recent backups – potentially no good backups at all. The statement doesn’t have to be this one exactly, but the attitude here is an attitude of “my job stops here, and I don’t care about the intent or principle of why I’m here.” It’s also a situation of having the attitude that as long as there is someone else to blame, I’m fine…  You see in this case, the DBA had a suspicion that the backups were not being handled right.  They were the DBA and they knew that they had responsibility to ensure SQL backups were good to go – it’s a basic requirement of a production DBA. In my “As A DBA Where Do I start?!” presentation, I argue that is job #1 of a DBA. But in this case, the thought was that there was someone else to blame. Rather than create extra work and take on responsibility it was decided to just let it be another team’s responsibility. This failed the company, the company’s customers and no one won. As technologists – we should strive to go the extra mile. If there is a lack of clarity around roles and responsibilities and we know it – we should push to get it resolved. Especially as the DBAs who should act as the advocates of the data contained in the databases we are responsible for. 2 – “We’ve always done it this way, it’s never caused a problem before!” Complacency. 
I have to say that many failures I’ve been paid good money to help recover from would have not happened had it been for an attitude of complacency. If any thoughts like this have entered your mind about your situation you may be suffering from it. If, while reading this, you get this sinking feeling in your stomach about that one thing you know should be fixed but haven’t done it.. Why don’t you stop and go fix it then come back.. “We should have better backups, but we’re on a SAN so we should be fine really.” “Technically speaking that could happen, but what are the chances?” “We’ll just clean that up as a fast follow” ..and so on. In the age of tightening IT budgets, increased expectations of up time, availability and performance there is no room for complacency. Our customers and business units expect – no demand – the best. Complacency says “we will give you second best or hopefully good enough and we accept the risk and know this may hurt us later. Sometimes an organization will opt for “good enough” and I agree with the concept that at times the perfect can be the enemy of the good. But when we make those decisions in a vacuum and are not reporting them up and discussing them as an organization that is different. That is us unilaterally choosing to do something less than the best and purposefully playing a game of chance. 3 – “This device must accept interference from other devices but not create any” I’ve paraphrased this one – but it’s something the Federal Communications Commission – a federal agency in the United States that regulates electronic communication – requires of all manufacturers of any device that could cause or receive interference electronically. I blogged in depth about this here (http://www.straightpathsql.com/archives/2011/07/relationship-advice-from-the-fcc/) so I won’t go into much detail other than to say this… If we all operated more on the premise that we should do our best to not be the cause of conflict, and to be less easily offended and less upset when we perceive offense life would be easier in many areas! This doesn’t always cause the issues we are called in to help out. Not directly. But where we see it is in unhealthy relationships between the various technology teams at a client. We’ll see teams hoarding knowledge, not sharing well with others and almost working against other teams instead of working with them. If you trace these problems back far enough it often stems from someone or some group of people violating this principle from the FCC. To Sum It Up Technology problems are easy to solve. At Linchpin People we help many customers get past the toughest technological challenge – and at the end of the day it is really just a repeatable process of pattern based troubleshooting, logical thinking and starting at the beginning and carefully stepping through to the end. It’s easy at the end of the day. The tough part of what we do as consultants is the people skills. Being able to help get teams working together, being able to help teams take responsibility, to improve team to team communication? That is the difficult part, and we get to use the soft skills on every engagement. Work on professional development (http://professionaldevelopment.sqlpass.org/) and see continuing improvement here, not just with technology. I can teach just about anyone how to be an excellent DBA and performance tuner, but some of these soft skills are much more difficult to teach. 
If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Building The Right SharePoint Team For Your Organization

    - by Mark Rackley
    I see the question posted fairly often asking what kind SharePoint team an organization should have. How many people do I need? What roles do I need to fill? What is best for my organization? Well, just like every other answer in SharePoint, the correct answer is “it depends”. Do you ever get sick of hearing that??? I know I do… So, let me give you my thoughts and opinions based upon my experience and what I’ve seen and let you come to your own conclusions. What are the possible SharePoint roles? I guess the first thing you need to understand are the different roles that exist in SharePoint (and their are LOTS). Remember, SharePoint is a massive beast and you will NOT find one person who can do it all. If you are hoping to find that person you will be sorely disappointed. For the most part this is true in SharePoint 2007 and 2010. However, generally things are improved in 2010 and easier for junior individuals to grasp. SharePoint Administrator The absolutely positively only role that you should not be without no matter the size of your organization or SharePoint deployment is a SharePoint administrator. These guys are essential to keeping things running and figuring out what’s wrong when things aren’t running well. These unsung heroes do more before 10 am than I do all day. The bad thing is, when these guys are awesome, you don’t even know they exist because everything is running so smoothly. You should definitely invest some time and money here to make sure you have some competent if not rockstar help. You need an admin who truly loves SharePoint and will go that extra mile when necessary. Let me give you a real world example of what I’m talking about: We have a rockstar admin… and I’m sure she’s sick of my throwing her name around so she’ll just have to live with remaining anonymous in this post… sorry Lori… Anyway! A couple of weeks ago our Server teams came to us and said Hi Lori, I’m finalizing the MOSS servers and doing updates that require a restart; can I restart them? Seems like a harmless request from your server team does it not? Sure, go ahead and apply the patches and reboot during our scheduled maintenance window. No problem? right? Sounded fair to me… but no…. not to our fearless SharePoint admin… I need a complete list of patches that will be applied. There is an update that is out there that will break SharePoint… KB973917 is the patch that has been shown to cause issues. What? You mean Microsoft released a patch that would actually adversely affect SharePoint? If we did NOT have a rockstar admin, our server team would have applied these patches and then when some problem occurred in SharePoint we’d have to go through the fun task of tracking down exactly what caused the issue and resolve it. How much time would that have taken? If you have a junior SharePoint admin or an admin who’s not out there staying on top of what’s going on you could have spent days tracking down something so simple as applying a patch you should not have applied. I will even go as far to say the only SharePoint rockstar you NEED in your organization is a SharePoint admin. You can always outsource really complicated development projects or bring in a rockstar contractor every now and then to make sure you aren’t way off track in other areas. For your day-to-day sanity and to keep SharePoint running smoothly, you need an awesome Admin. Some rockstars in this category are: Ben Curry, Mike Watson, Joel Oleson, Todd Klindt, Shane Young, John Ferringer, Sean McDonough, and of course Lori Gowin. 
SharePoint Developer Another essential role for your SharePoint deployment is a SharePoint developer. Things do start to get a little hazy here and there are many flavors of “developers”. Are you writing custom code? using SharePoint Designer? What about SharePoint Branding?  Are all of these considered developers? I would say yes. Are they interchangeable? I’d say no. Development in SharePoint is such a large beast in itself. I would say that it’s not so large that you can’t know it all well, but it is so large that there are many people who specialize in one particular category. If you are lucky enough to have someone on staff who knows it all well, you better make sure they are well taken care of because those guys are ready-made to move over to a consulting role and charge you 3 times what you are probably paying them. :) Some of the all-around rockstars are Eric Shupps, Andrew Connell (go Razorbacks), Rob Foster, Paul Schaeflein, and Todd Bleeker SharePoint Power User/No-Code Solutions Developer These SharePoint Swiss Army Knives are essential for quick wins in your organization. These people can twist the out-of-the-box functionality to make it do things you would not even imagine. Give these guys SharePoint Designer, jQuery, InfoPath, and a little time and they will create views, dashboards, and KPI’s that will blow your mind away and give your execs the “wow” they are looking for. Not only can they deliver that wow factor, but they can mashup, merge, and really help make your SharePoint application usable and deliver an overall better user experience. Before you hand off a project to your SharePoint Custom Code developer, let one of these rockstars look at it and show you what they can do (in probably less time). I would say the second most important role you can fill in your organization is one of these guys. Rockstars in this category are Christina Wheeler, Laura Rogers, Jennifer Mason, and Mark Miller SharePoint Developer – Custom Code If you want to really integrate SharePoint into your legacy systems, or really twist it and make it bend to your will, you are going to have to open up Visual Studio and write some custom code.  Remember, SharePoint is essentially just a big, huge, ginormous .NET application, so you CAN write code to make it do ANYTHING, but do you really want to spend the time and effort to do so? At some point with every other form of SharePoint development you are going to run into SOME limitation (SPD Workflows is the big one that comes to mind). If you truly want to knock down all the walls then custom development is the way to go. PLEASE keep in mind when you are looking for a custom code developer that a .NET developer does NOT equal a SharePoint developer. Just SOME of the things these guys write are: Custom Workflows Custom Web Parts Web Service functionality Import data from legacy systems Export data to legacy systems Custom Actions Event Receivers Service Applications (2010) These guys are also the ones generally responsible for packaging everything up into solution packages (you are doing that, right?). Rockstars in this category are Phil Wicklund, Christina Wheeler, Geoff Varosky, and Brian Jackett. SharePoint Branding “But it LOOKS like SharePoint!” Somebody call the WAAAAAAAAAAAAHMbulance…   Themes, Master Pages, Page Layouts, Zones, and over 2000 styles in CSS.. these guys not only have to be comfortable with all of SharePoint’s quirks and pain points when branding, but they have to know it TWICE for publishing and non-publishing sites.  
Not only that, but these guys really need to have an eye for graphic design and be able to translate the ramblings of business into something visually stunning. They also have to be comfortable with XSLT, XML, and be able to hand off what they do to your custom developers for them to package as solutions (which you are doing, right?). These rockstars include Heater Waterman, Cathy Dew, and Marcy Kellar SharePoint Architect SharePoint Architects are generally SharePoint Admins or Developers who have moved into more of a BA role? Is that fair to say? These guys really have a grasp and understanding for what SharePoint IS and what it can do. These guys help you structure your farms to meet your needs and help you design your applications the correct way. It’s always a good idea to bring in a rockstar SharePoint Architect to do a sanity check and make sure you aren’t doing anything stupid.  Most organizations probably do not have a rockstar architect on staff. These guys are generally brought in at the deployment of a farm, upgrade of a farm, or for large development projects. I personally also find architects very useful for sitting down with the business to translate their needs into what SharePoint can do. A good architect will be able to pick out what can be done out-of-the-box and what has to be custom built and hand those requirements to the development Staff. Architects can generally fill in as an admin or a developer when needed. Some rockstar architects are Rick Taylor, Dan Usher, Bill English, Spence Harbar, Neil Hodgkins, Eric Harlan, and Bjørn Furuknap. Other Roles / Specialties On top of all these other roles you also get these people who specialize in things like Reporting, BDC (BCS in 2010), Search, Performance, Security, Project Management, etc... etc... etc... Again, most organizations will not have one of these gurus on staff, they’ll just pay out the nose for them when they need them. :) SharePoint End User Everyone else in your organization that touches SharePoint falls into this category. What they actually DO in SharePoint is determined by your governance and what permissions you give these guys. Hopefully you have these guys on a fairly short leash and are NOT giving them access to tools like SharePoint Designer. Sadly end users are the ones who truly make your deployment a success by using it, but are also your biggest enemy in breaking it.  :)  We love you guys… really!!! Okay, all that’s fine and dandy, but what should MY SharePoint team look like? It depends! Okay… Are you just doing out of the box team sites with no custom development? Then you are probably fine with a great Admin team and a great No-Code Solution Development team. How many people do you need? Depends on how busy you can keep them. Sorry, can’t answer the question about numbers without knowing your specific needs. I can just tell you who you MIGHT need and what they will do for you. I’ll leave you with what my ideal SharePoint Team would look like for a particular scenario: Farm / Organization Structure Dev, QA, and 2 Production Farms. 
5000 – 10000 Users Custom Development and Integration with legacy systems Team Sites, My Sites, Intranet, Document libraries and overall company collaboration Team Rockstar SharePoint Administrator 2-3 junior SharePoint Administrators SharePoint Architect / Lead Developer 2 Power User / No-Code Solution Developers 2-3 Custom Code developers Branding expert With a team of that size and skill set, they should be able to keep a substantial SharePoint deployment running smoothly and meet your business needs. This does NOT mean that you would not need to bring in contract help from time to time when you need an uber specialist in one area. Also, this team assumes there will be ongoing development for the life of your SharePoint farm. If you are just going to be doing sporadic custom development, it might make sense to partner with an awesome firm that specializes in that sort of work (I can give you the name of a couple if you are interested).  Again though, the size of your team depends on the number of requests you are receiving and how much active deployment you are doing. So, don’t bring in a team that looks like this and then yell at me because they are sitting around with nothing to do or are so overwhelmed that nothing is getting done. I do URGE you to take the proper time to asses your needs and determine what team is BEST for your organization. Also, PLEASE PLEASE PLEASE do not skimp on the talent. When it comes to SharePoint you really do get what you pay for when it comes to employees, contractors, and software.  SharePoint can become absolutely critical to your business and because you skimped on hiring a developer he created a web part that brings down the farm because he doesn’t know what he’s doing, or you hire an admin who thinks it’s fine to stick everything in the same Content Database and then can’t figure out why people are complaining. SharePoint can be an enormous blessing to an organization or it’s biggest curse. Spend the time and money to do it right, or be prepared to spending even more time and money later to fix it.

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again, in Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why Performance Testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies, in Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests as well as why it’s important to follow a goal based pattern while performance testing your application. In part 3 I’ll be discussing Test Result Analysis, Test Result Drill through, Test Report Generation, Test Run Comparison, Asp.net Profiler and some closing thoughts. Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test, lets run the test to see load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways: Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.         After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view, this provides key results in a format that is compact and easy to read. You can also print the load test summary, this is generated after the test has completed or been stopped.         Analyse the load test results of a previously run load test – We’ll see this in the section where i discuss comparison between two test runs. The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, drill down to the user activity chart where you can hover over to see more details of the test run.   Generate Report => Test Run Comparisons The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create excel reports and conduct side by side analysis of two test results or to track trend analysis. The tools also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.   Compare results This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation, I would highly recommend exploring the wealth of knowledge available on MSDN. 
Leaving Thoughts While load testing the application with an excessive load for a longer duration of time, i managed to bring the IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that the IIS had run out of threads as all the threads were busy processing existing request, one easy way of fixing this is by increasing the default number of allocated threads, but this might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. When ever the garbage collection runs it stops processing any pages so all requests that come in during that period are queued up, but realistically the garbage collection completes in fraction of a a second. To understand this better lets look at the .net heap, it is divided into large heap and small heap, anything greater than 85kB in size will be allocated to the Large object heap, the Large object heap is non compacting and remember large objects are expensive to move around, so if you are allocating something in the large object heap, make sure that you really need it! The small object heap on the other hand is divided into generations, so all objects that are supposed to be short-lived are suppose to live in Gen-0 and the long living objects eventually move to Gen-2 as garbage collection goes through.  As you can see in the picture below all < 85 KB size objects are first assigned to Gen-0, when Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started, the process checks for all the dead objects and assigns them as the valid candidate for deletion to free up memory and promotes all the remaining objects in Gen-0 to Gen-1. So in the future when ever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen – 0 again, all of Gen – 1 dead objects are drenched and rest are moved to Gen-2 and Gen-0 objects are moved to Gen-1 to free up Gen-0, but by this time your Garbage collection process has started to take much more time than it usually takes. Now as I mentioned earlier when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why possibly page requests are getting queued up, apart from this it could also be the case that you are waiting for a long running database process to complete.      Lets explore the heap a bit more… What is really a case of crisis is when the objects are living long enough to make it to Gen-2 and then dying, this is definitely a high cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server then write a simple application that chucks in a lot of data in cache, run a load test over it for about 10-15 minutes, forcing a lot of data in memory causing the heap to run out of memory. If you get to such a state where you start running out of memory the IIS as a mode of recovery restarts the worker process. It is great way to free up all your memory in the heap but this would clear the cache. The problem with this is if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user basket will now be empty forcing them either to get frustrated and go to a competitor website or if the customer is really patient, give it another try! 
How can you address this, well two ways of addressing this; 1. Workaround – A x86 bit processor only allows a maximum of 4GB of RAM, this means the machine effectively has around 3.4 GB of RAM available, the OS needs about 1.5 GB of RAM to run efficiently, the IIS and .net framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because Team builds by default build your application in ‘Compile as any mode’ it means the application is build such that it will run in x86 bit mode if run on a x86 bit processor and run in a x64 bit mode if run on a x64 but processor. The problem with this is not all applications are really x64 bit compatible specially if you are using com objects or external libraries. So, as a quick win if you compiled your application in x86 bit mode by changing the compile as any selection to compile as x86 in the team build, you will be able to run your application on a x64 bit machine in x86 bit mode (WOW – By running Windows on Windows) and what that means is, you could use 8GB+ worth of RAM, if you take away everything else your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either build a software for NASA or there is something fundamentally wrong in your application. 2. Solution – Now that you have put a workaround in place the IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs a question “How do I Identify possible memory leaks in my application?” Well i won’t say that there is one single tool that can tell you where the memory leak is, but trust me, ‘Performance Profiling’ is a great start point, it definitely gets you started in the right direction, let’s have a look at how. Performance Wizard - Start the Performance Wizard and select Instrumentation, this lets you measure function call counts and timings. Before running the performance session right click the performance session settings and chose properties from the context menu to bring up the Performance session properties page and as shown in the screen shot below, check the check boxes in the group ‘.NET memory profiling collection’ namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET Object lifetime information’.    Now if you fire off the profiling session on your pages you will notice that the results allows you to view ‘Object Lifetime’ which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, Large heap, etc. Another great feature about the profile is that if your application has > 5% cases where objects die right after making to the Gen-2 storage a threshold alert is generated to alert you. Since you have the option to also view the most expensive methods and by capturing the IntelliTrace data you can drill in to narrow down to the line of code that is the root cause of the problem. Well now that we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best of breed tools in the product. Caching One of the main ways to improve performance is Caching. Which basically means you tell the web server that instead of going to the database for each request you keep the data in the webserver and when the user asks for it you serve it from the webserver itself. BUT that can have consequences! 
Let’s look at some code, trust me caching code is not very intuitive, I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine, first time i get the data from the database and second time data is served from the cache, significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding the lock as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user locks and starts to get the cache no more concurrency issues. But lets say you are processing 10 requests per second, by the time i have locked the operation to get the results from the database, 9 other users came in and found that the cache key is null so after i have come out and populated the cache they will still go in to get the results again. The application will still be faster because the next set of 10 users and so on would continue to get data from the cache. BUT if we added another null check after locking to build the cache and before actual call to the db then the 9 users who follow me would not make the extra trip to the database at all and that would really increase the performance, but didn’t i say that the code won’t be very intuitive, may be you should leave a comment you don’t want another developer to come in and think what a fresher why is he checking for the cache key null twice !!! The downside of caching is, you are storing the data outside of the database and the data could be wrong because the updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well if you only had one way of updating the data lets say only one entry point to the data update you can write some logic to say that every time new data is entered set the cache object to null. But this approach will not work as soon as you have several ways of feeding data to the system or your system is scaled out across a farm of web servers. The perfect solution to this is Micro Caching which means you cache the query for a set time duration and invalidate the cache after that set duration. The advantage is every time the user queries for that data with in the time span for which you have cached the results there are no calls made to the database and the data is served right from the server which makes the response immensely quick. Now figuring out the appropriate time span for which you micro cache the query results really depends on the application. Lets say your website gets 10 requests per second, if you retain the cache results for even 1 minute you will have immense performance gains. You would reduce 90% hits to the database for searching. Ever wondered why when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting and you get an error message telling you that the fare is not valid any more. Yes, exactly => That is a cache failure! These travel sites or price compare engines are not going to hit the database every time you hit the compare button instead the results will be served from the cache, because the query results are micro cached, its a perfect trade-off, by micro caching the results the site gains 100% performance benefits but every once in a while annoys a customer because the fare has expired. 
But the trade off works in the favour of these sites as they are still able to process up to 30+ page requests per second which means cater to the site traffic by may be losing 1 customer every once in a while to a competitor who is also using a similar caching technique what are the odds that the user will not come back to their site sooner or later? Recap   Resources Below are some Key resource you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug ins available on Codeplex might also be of interest to you. The Road Ahead Thank you for taking the time out and reading this blog post, you may also want to read Part I and Part II if you haven’t so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/Feedback/Suggestions, etc please leave a comment. Next ‘Load Testing in the cloud’, I’ll be working on exploring the possibilities of running Test controller/Agents in the Cloud. See you on the other side! Thank You!   Share this post : CodeProject
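    A sketch of the double-checked micro-caching pattern described above (the SearchResults type, the cache key and the QueryDatabase() call are placeholders, not code from the article): check the cache, lock, check again inside the lock, and only then hit the database, caching the result with a short absolute expiration.

        // Requires a reference to System.Web.
        using System;
        using System.Web;
        using System.Web.Caching;

        public class SearchResults { }            // placeholder result type

        public class ResultCache
        {
            private static readonly object CacheLock = new object();

            public SearchResults GetResults(string cacheKey)
            {
                var cached = HttpRuntime.Cache[cacheKey] as SearchResults;
                if (cached != null)
                    return cached;                // fast path: already cached

                lock (CacheLock)
                {
                    // Second check: another request may have filled the cache
                    // while this one was waiting on the lock.
                    cached = HttpRuntime.Cache[cacheKey] as SearchResults;
                    if (cached != null)
                        return cached;

                    SearchResults fromDb = QueryDatabase(cacheKey);   // placeholder data call
                    HttpRuntime.Cache.Insert(cacheKey, fromDb, null,
                        DateTime.UtcNow.AddMinutes(1),                // micro-cache for one minute
                        Cache.NoSlidingExpiration);
                    return fromDb;
                }
            }

            private SearchResults QueryDatabase(string key) { return new SearchResults(); }
        }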

    Read the article

  • Sorting in Pivot Table on how data is summarized, not just the value

    - by user26453
    Often I am creating pivot tables that summarize some count by some category. Let's say I am counting Yes/No responses by some category. I usually add the count field and display it as a "% of row", and then create a pivot chart. However, if I want to sort one of the columns, say "Yes", Excel sorts by the underlying count, not the calculated percentage. Any way around this?

    Read the article

  • Read-only access to MMC functionality

    - by Travis Ingram
    Does anyone know of a Windows Server 2008 role (or roles) that allows read-only access to MMC (Microsoft Management Console) facilities? That is, we would like to be able to view (not add/update/delete) "Administrative Tools" functionality such as IIS websites, the event log, users and groups, status of services, etc. This is so we can connect to, say, a development server (administered by another team) to check configuration before/during deployment via, say, Administrative Tools | Computer Management | Connect to another computer. Many thanks, Travis

    Read the article

  • Best practice for setting up SQL server on a Virtual Machine

    - by CrazyCoderz
    This is my first attempt at virtualizing SQL Server on VMware and I want to make sure I am doing things correctly. Should I have SQL Server installed on the C: drive / same partition as the OS, then add a virtual disk for the data files, say 300 GB, and then another virtual disk for the log files, say 100 GB? Or should I add two 300 GB vdisks for the data files, mirror them in the operating system, and then add a non-mirrored vdisk of 100 GB for log files?

    Read the article

  • Creating a USB stick for installing CentOS 6.x using DVD1 and DVD2 ISO files

    - by user250563
    First, we create two partitions on the USB stick, which is, let's say, 16 GB. The first partition is, let's say, only 1 GB and the second partition is the rest of what is available. After we "w" (write) the changes, the USB now has two partitions, one of 1 GB and one of more than 14 GB, so we have sdb1 and sdb2 now. Now we need to turn these partitions into filesystems. Some say I should run these commands after those procedures:

        mkfs.vfat -F 32 /dev/sdb1
        mkfs.ext3 /dev/sdb2

    but some web pages recommend using:

        mkfs.vfat -n BOOT /dev/sdb1
        mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2

    Which is it? So let's say the DVDs are called CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso. We make a directory and mount the first one:

        mkdir -p /mnt/dvd1
        mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1

    and I suppose we don't make a directory for dvd2 and we don't have to mount it? At this point I do not know what should be done, but I think this step might be next: we make the USB bootable by finding the file named mbr.bin and then writing it there via these commands:

        dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
        parted /dev/sdb set 1 boot on

    In other words we are dd-ing it to 'sdb', not 'sdb1' or 'sdb2', and then we use parted to set the boot flag on for sdb. So far everything looks good? Here is the confusing part: how exactly do I move these ISO files to the USB drive? EVERYTHING BELOW IS A GUESS. So at this point should I copy the folder /mnt/dvd1/isolinux to the USB's sdb1 or sdb2, rename it to syslinux, and then rename the file inside it called isolinux.cfg to syslinux.cfg? And then copy the contents of /mnt/dvd1/images/* to the USB's sdb2? But I think I am also supposed to copy both CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso somewhere onto this USB's sdb2 partition, correct? Almost like a drag-and-drop kind of thing, or do they go into particular folders? CentOS's own web site has some instructions, but those instructions do not work: http://wiki.centos.org/HowTos/InstallFromUSBkey I once got this working but things got ruined; I have to do it again and this time take notes.

    Read the article

  • MMC not starting in Windows Server 2003

    - by Mirage
    I have a Windows Server 2003 machine with AD installed. The file server is used to share folders across the network. However, after an hour I suddenly can't access my shares; it asks me for the password, and when I enter it, it does not let me in and says access denied. Then I try mmc.exe on the server and it says "The application has failed to start because MS.dll was not loaded". How can I fix the problem? Can I re-install MMC?

    Read the article

  • Macbook memory upgrade question

    - by James Evans
    I've read some conflicting articles on MacBooks and memory upgrades. Some say you have to buy "special" Mac memory (bulls$%t), others say manufacturers like Patriot and OCZ will work fine. My MacBook (non-Pro) is about 6 months old with its 2 GB of memory (SO-DIMM 1066 MHz DDR3). Does anyone have any definitive information on what will work? Thanks!

    Read the article

  • Can files deleted with rm -rf be recovered?

    - by Lena
    When I delete folders or files through the OS X terminal using rm -rf, where do they go? I heard some say they are deleted directly, but others say it only "removes the link to the file, making it unable to be found or accessed without special tools" (http://superuser.com/questions/370786/where-do-files-and-directories-go-when-i-run-rm-rf-folder-or-file-name-in-ubu). Someone said something about ext3 being able to recover rm-ed files on Ubuntu, but what about the Mac?

    Read the article
