Search Results

Search found 581 results on 24 pages for 'kelly jones'.

Page 2/24 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Practical Performance Monitoring and Tuning Event

    - by Andrew Kelly
      For any of you who may be interested, or who know of someone in the market for a performance monitoring and tuning class, I have just the ticket for you. It’s a 3-day event that will be held in Atlanta, GA, on January 25th to the 27th, 2011. For those of you that know me or have been to my sessions, you realize I like to provide more than just classroom theory; I like to teach real-world and, above all, practical methodology when it comes to performance in SQL Server. This class covers all the essentials...(read more)

    Read the article

  • Illegal characters for SharePoint 2010 Content Type name

    - by Kelly Jones
    Quick tip: you can’t include a backslash in the name of a SharePoint 2010 Content Type.  In fact, there are several illegal characters:  \  / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.

    What, you didn’t know that after entering one of these characters in the name?  Is it because you saw this screen: Oh, that’s right….you need to turn off custom errors in the layouts folder…See this blog post for details; you’ll also need to turn them off for the web application. Once you do that, you’ll see this:

    I wonder why the SharePoint team just doesn’t let the user know that the content type name contains illegal characters before the user hits the create button.

    Here’s a copy of the complete error (for the search engines):

    Server Error in '/' Application.
    --------------------------------------------------------------------------------
    The content type name 'asdfadsf\asdfasf' cannot contain: \  / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: Microsoft.SharePoint.SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \  / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
    Stack Trace:
    [SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \  / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.]
       Microsoft.SharePoint.SPContentType.ValidateName(String name) +27419522
       Microsoft.SharePoint.SPContentType.ValidateNameWithResource(String strVal, String& strLocalized) +423
       Microsoft.SharePoint.SPContentType.set_Name(String value) +151
       Microsoft.SharePoint.SPContentType.Initialize(SPContentType parentContentType, SPContentTypeCollection collection, String name) +112
       Microsoft.SharePoint.SPContentType..ctor(SPContentType parentContentType, SPContentTypeCollection collection, String name) +132
       Microsoft.SharePoint.ApplicationPages.ContentTypeCreatePage.BtnOK_Click(Object sender, EventArgs e) +497
       System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115
       System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140
       System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29
       System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2981
    --------------------------------------------------------------------------------
    Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927
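    If you are creating content types in code rather than through the UI, a quick pre-check avoids hitting SPInvalidContentTypeNameException at all. Here is a minimal sketch (my own helper, not code from the post; the character list is copied from the error text above, and since it is ambiguous whether the comma in that list is itself illegal, it is not checked):

    // Pre-check sketch: rejects names that SharePoint's own validation would refuse.
    private static readonly char[] IllegalContentTypeChars = "\\/:*?\"#%<>{}|~&\t".ToCharArray();

    public static bool IsValidContentTypeName(string name)
    {
        if (string.IsNullOrEmpty(name)) return false;
        if (name.IndexOfAny(IllegalContentTypeChars) >= 0) return false; // \ / : * ? " # % < > { } | ~ & or a tab
        if (name.Contains("..")) return false;                           // two consecutive periods
        return true;
    }

    Call it before constructing the content type (for example, before new SPContentType(parentType, web.ContentTypes, name)) so the user gets feedback without a round trip to the server.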

    Read the article

  • Can’t eject external USB or Firewire drive in Windows 7

    - by Kelly Jones
    As a SharePoint developer, I work a lot with Virtual Machines (presently using Windows Virtual PC, with Windows 7).  I’m using these VMs with my laptop, and in order to get better performance, I’ve moved them to external hard drives.  (These drives have faster RPMs, larger caches, and a larger capacity than the internal drive.)  I have one large external drive at home, another similar drive at the office, and a small, slow portable drive that I carry with me.

    So, at the end of each day at the office, I copy the files from the external drive to my portable drive, and then once I get home, I copy them from the portable to the larger external drive I leave at home.  I do this for a couple of reasons: first, so I can work at home, and second, so I have backup copies.  (Often, I feel like I’m in the movie “Office Space” when copying the files before I leave the office.)

    Anyway, after the files are copied, I safely eject the external drives, and then hibernate my laptop.  I’ve been doing this for over a year now, but within the last couple of months I started to have issues disconnecting the drives.  Intermittently, some application/process would have a lock on some file on the drive that would keep Windows from safely ejecting it. After looking into it, I found that it was actually the Windows search service that was accessing the drive! Since I wasn’t using Windows search to look for stuff on these drives, I removed them as locations to index.

    To do this in Windows 7, you need to go to Indexing Options (just type “Indexing” into the search box in the Start menu…).  One of the choices displayed will then be Indexing Options, so click on it and you should then see a window similar to this:   Click on the Modify button and you’ll see this window: Notice the different drives listed above.  My “FreeAgent XTreme (F:)” drive was checked for some reason, which was causing the indexing service to scan the drive looking for new files to make available in the search results.  Ever since I unchecked this box, I’ve been able to safely eject the drive.

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 – Part 1 of 2

    - by Kelly Jones
    Last year, I had the opportunity to build a solution that involved integrating a Windows Form application into a SharePoint 2007 (WSS version 3.0) site. In this post, I’ll lay out our architecture thinking, and in part two, I’ll describe the technical details.

    Business Case

    Our challenge was this: we needed an easy way for a small group of our users to upload documents, in batches.  They also needed to quickly set the meta data values, as well as set security on individual files. Using the out-of-the-box uploads just didn’t fit.  The single file upload allows you to set the meta data, but our users would be uploading dozens of files.  The multiple upload would allow our users to upload batches of files, but it doesn’t allow them to set the meta data during upload.  Also, neither upload method allows the users to set the permissions on the file.

    Our Solution

    We looked into building a web control of some kind, but ruled that out due to security complexities (if I remember correctly).  Another option would have been using a technology like Silverlight (or Flash?), but our team didn’t have the skills necessary to build with these. So, after looking at what was technically possible, and also what skills our team had, we settled on a Windows Form application.  We also decided to deliver it to the clients via Click Once, so we would have the ability to easily update the application in the future.

    Lessons Learned

    After deploying our solution, we’ve learned a few lessons.  First, you’ll need to have the .Net Framework installed on the client computers.  We knew this, but we still ran into issues making sure our users had the proper framework version installed.  Second, we had issues with authentication.  Our issues were due to our testing domain being a separate Active Directory domain from the domain that our end users and their workstations were members of.  (See my earlier post about Clearing Saved Passwords for the fix to our problem.) Our third issue was how we dealt with uploading files that were named the same.  Our application would replace the existing file with the new file, which is the way we expected it to work.  However, our users wanted to upload weekly reports, named the same as the previous week.  We solved this by using folders within the document library to keep the sets of reports separate from previous weeks.

    One last thing to consider before implementing a solution like this is what browsers and platforms your users will be working from.  We only needed to support IE and Windows, which works fine.  However, if you need to support Firefox, there are add-ons that allow Click Once to work with Firefox.  This is still a Windows-only solution though.  In order to support Macs, you’d have to focus on either browser techniques (AJAX?) or Silverlight/Flash.

    Summary

    Our users are happy with the Click Once app.  It allowed them to move all of their content to our SharePoint site in under a couple hours, which they were thrilled with.  We’re happy because we can easily deploy updates, our development time was small, and we met all of our business requirements.

    Read the article

  • Can’t install Transport HUB in Exchange 2010

    - by Kelly Jones
    When building my latest SharePoint 2010 demo virtual machines, I decided to try installing Exchange 2010 as well.  Now, I’m not an Exchange admin, but I thought “how hard can this be?”  Well, a little harder than I thought. Pretty early during the install, I got an error saying that it couldn’t “install Transport HUB”.  I double-checked that my VM was meeting all of the requirements, both hardware and software, and everything looked fine. After much researching, it turns out that the error was caused by not having IPv6 enabled on the network adapter inside the virtual machine.  I had turned it off because I thought I wouldn’t need it.  I guess Exchange 2010 does.

    Read the article

  • Updating a SharePoint master page via a solution (WSP)

    - by Kelly Jones
    In my last blog post, I wrote about how to deploy a SharePoint theme using Features and a solution package.  As promised in that post, here is how to update an already deployed master page.

    There are several ways to update a master page in SharePoint.  You could upload a new version to the master page gallery, or you could upload a new master page to the gallery and then set the site to use this new page.  Manually uploading your master page to the master page gallery might be the best option, depending on your environment.  For my client, I did these steps in code, which is what they preferred. (Image courtesy of: http://www.joiningdots.net/blog/2007/08/sharepoint-and-quick-launch.html )

    Before you decide which method you need to use, take a look at your existing pages.  Are they using the SharePoint dynamic token or the static token for the master page reference?  The wha, huh? So, there are four ways to tell an .aspx page hosted in SharePoint which master page it should use:

      - “~masterurl/default.master” – tells the page to use the default.master property of the site
      - “~masterurl/custom.master” – tells the page to use the custom.master property of the site
      - “~site/default.master” – tells the page to use the file named “default.master” in the site’s master page gallery
      - “~sitecollection/default.master” – tells the page to use the file named “default.master” in the site collection’s master page gallery

    For more information about these tokens, take a look at this article on MSDN.

    Once you determine which token your existing pages are pointed to, then you know which file you need to update.  So, if the ~masterurl tokens are used, then you upload a new master page, either replacing the existing one or adding another one to the gallery.  If you’ve uploaded a new file with a new name, you’ll just need to set it as the master page either through the UI (MOSS only) or through code (MOSS or WSS Feature receiver code – or using SharePoint Designer); a small sketch of the code route appears at the end of this post.  If the ~site or ~sitecollection tokens were used, then you’re limited to either replacing the existing master page, or editing all of your existing pages to point to another master page.  In most cases, it probably makes sense to just replace the master page.

    For my project, I’m working with WSS and the existing pages are set to the ~sitecollection token.  Based on this, I decided to just upload a new version of the existing master page (and not modify the dozens of existing pages). Also, since my client prefers Features and solutions, I created a master page Feature and a corresponding Feature Receiver.  For information on creating the elements and feature files, check out this post: http://sharepointmagazine.net/technical/development/deploying-the-master-page .

    This works fine, unless you are overwriting an existing master page, which was my case.  You’ll run into errors because the master page file needs to be checked out, replaced, and then checked in.  I wrote code in my Feature Activated event handler to accomplish these steps.
    Here are the steps necessary in code:

      1. Get the file name from the elements file of the Feature
      2. Check out the file from the master page gallery
      3. Upload the file to the master page gallery
      4. Check in the file to the master page gallery

    Here’s the code in my Feature Receiver:

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        try
        {
            SPElementDefinitionCollection col = properties.Definition.GetElementDefinitions(System.Globalization.CultureInfo.CurrentCulture);

            using (SPWeb curweb = GetCurWeb(properties))
            {
                foreach (SPElementDefinition ele in col)
                {
                    if (string.Compare(ele.ElementType, "Module", true) == 0)
                    {
                        // <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE">
                        //   <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE"
                        //         Path="MasterPages/myMaster.master" />
                        // </Module>
                        string Url = ele.XmlDefinition.Attributes["Url"].Value;
                        foreach (System.Xml.XmlNode file in ele.XmlDefinition.ChildNodes)
                        {
                            string Url2 = file.Attributes["Url"].Value;
                            string Path = file.Attributes["Path"].Value;
                            string fileType = file.Attributes["Type"].Value;

                            if (string.Compare(fileType, "GhostableInLibrary", true) == 0)
                            {
                                //Check out file in document library
                                SPFile existingFile = curweb.GetFile(Url + "/" + Url2);

                                if (existingFile != null)
                                {
                                    if (existingFile.CheckOutStatus != SPFile.SPCheckOutStatus.None)
                                    {
                                        throw new Exception("The master page file is already checked out. Please make sure the master page file is checked in, before activating this feature.");
                                    }
                                    else
                                    {
                                        existingFile.CheckOut();
                                        existingFile.Update();
                                    }
                                }

                                //Upload file to document library
                                string filePath = System.IO.Path.Combine(properties.Definition.RootDirectory, Path);
                                string fileName = System.IO.Path.GetFileName(filePath);
                                char slash = Convert.ToChar("/");
                                string[] folders = existingFile.ParentFolder.Url.Split(slash);

                                if (folders.Length > 2)
                                {
                                    Logger.logMessage("More than two folders were detected in the library path for the master page. Only two are supported.",
                                        Logger.LogEntryType.Information); //custom logging component
                                }

                                SPFolder myLibrary = curweb.Folders[folders[0]].SubFolders[folders[1]];

                                FileStream fs = File.OpenRead(filePath);

                                SPFile newFile = myLibrary.Files.Add(fileName, fs, true);

                                myLibrary.Update();
                                newFile.CheckIn("Updated by Feature", SPCheckinType.MajorCheckIn);
                                newFile.Update();
                            }
                        }
                    }
                }
            }
        }
        catch (Exception ex)
        {
            string msg = "Error occurred during feature activation";
            Logger.logException(ex, msg, "");
        }
    }

    /// <summary>
    /// Using a Feature's properties, get a reference to the Current Web
    /// </summary>
    /// <param name="properties"></param>
    public SPWeb GetCurWeb(SPFeatureReceiverProperties properties)
    {
        SPWeb curweb;

        //Check if the parent of the web is a site or a web
        if (properties != null && properties.Feature.Parent.GetType().ToString() == "Microsoft.SharePoint.SPWeb")
        {
            //Get web from parent
            curweb = (SPWeb)properties.Feature.Parent;
        }
        else
        {
            //Get web from Site
            using (SPSite cursite = (SPSite)properties.Feature.Parent)
            {
                curweb = (SPWeb)cursite.OpenWeb();
            }
        }

        return curweb;
    }

    This did the trick.
    It allowed me to update my existing master page through an easily repeatable process (which is great when you are working with more than one environment and want to do things like TEST it!).  I did run into what I would classify as a strange issue with one of my subsites, but that’s the topic for another blog post.
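    As an aside, for pages that use the ~masterurl tokens, the “set it as the master page through code” option mentioned above can be very small. Here is a minimal sketch of my own (not code from this post) of a feature receiver doing it; the file name myMaster.master matches the module shown earlier, and SPUrlUtility comes from Microsoft.SharePoint.Utilities:

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // Sketch only: point the site's ~masterurl tokens at a master page
        // that is already sitting in the master page gallery.
        SPWeb web = properties.Feature.Parent as SPWeb;
        if (web != null)
        {
            string masterUrl = SPUrlUtility.CombineUrl(web.ServerRelativeUrl,
                "_catalogs/masterpage/myMaster.master");

            web.MasterUrl = masterUrl;        // used by ~masterurl/default.master
            web.CustomMasterUrl = masterUrl;  // used by ~masterurl/custom.master
            web.Update();
        }
    }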

    Read the article

  • Deploying a SharePoint 2007 theme using Features

    - by Kelly Jones
    I recently had a requirement to update the branding on an existing Windows SharePoint Services (WSS version 3.0) site.  I needed to update the theme, along with the master page.  An additional requirement is that my client likes to have all changes bundled up in SharePoint solutions.  This makes it much easier to move code from dev to test to prod and, more importantly, makes it easier to undo code migrations if any issues arise (I agree with this approach).

    Updating the theme was easy enough.  I created a new theme, along with two new features.  The first feature, scoped at the farm level, deploys the theme, adding it to the spthemes.xml file (in the 12 hive –> \Template\layouts\1033 folder).  Here’s the method that I call from the feature activated event:

    private static void AddThemeToSpThemes(string id, string name, string description, string thumbnail, string preview, SPFeatureReceiverProperties properties)
    {
        XmlDocument spThemes = new XmlDocument();

        //use GetGenericSetupPath to find the 12 hive folder
        string spThemesPath = SPUtility.GetGenericSetupPath(@"TEMPLATE\LAYOUTS\1033\spThemes.xml");

        //load the spthemes file into our xmldocument, since it is just xml
        spThemes.Load(spThemesPath);
        XmlNode root = spThemes.DocumentElement;

        //search the themes file to see if our theme is already added
        bool found = false;
        foreach (XmlNode node in root.ChildNodes)
        {
            foreach (XmlNode prop in node.ChildNodes)
            {
                if (prop.Name.Equals("TemplateID"))
                {
                    if (prop.InnerText.Equals(id))
                    {
                        found = true;
                        break;
                    }
                }
            }
            if (found)
            {
                break;
            }
        }

        if (!found) //theme not found, so add it
        {
            //This is what we need to add:
            // <Templates>
            //   <TemplateID>ThemeName</TemplateID>
            //   <DisplayName>Theme Display Name</DisplayName>
            //   <Description>My theme description</Description>
            //   <Thumbnail>images/mythemethumb.gif</Thumbnail>
            //   <Preview>images/mythemepreview.gif</Preview>
            // </Templates>
            StringBuilder sb = new StringBuilder();
            sb.Append("<Templates><TemplateID>");
            sb.Append(id);
            sb.Append("</TemplateID><DisplayName>");
            sb.Append(name);
            sb.Append("</DisplayName><Description>");
            sb.Append(description);
            sb.Append("</Description><Thumbnail>");
            sb.Append(thumbnail);
            sb.Append("</Thumbnail><Preview>");
            sb.Append(preview);
            sb.Append("</Preview></Templates>");

            root.CreateNavigator().AppendChild(sb.ToString());
            spThemes.Save(spThemesPath);
        }
    }

    Just as important is the code that removes the theme when the feature is deactivated:

    private static void RemoveThemeFromSpThemes(string id)
    {
        XmlDocument spThemes = new XmlDocument();
        string spThemesPath = HostingEnvironment.MapPath("/_layouts/") + @"1033\spThemes.xml";
        spThemes.Load(spThemesPath);
        XmlNode root = spThemes.DocumentElement;

        foreach (XmlNode node in root.ChildNodes)
        {
            foreach (XmlNode prop in node.ChildNodes)
            {
                if (prop.Name.Equals("TemplateID"))
                {
                    if (prop.InnerText.Equals(id))
                    {
                        root.RemoveChild(node);
                        spThemes.Save(spThemesPath);
                        break;
                    }
                }
            }
        }
    }

    So, that takes care of deploying the theme.  In order to apply the theme to the web, my feature activated method looks like this:

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        using (SPWeb curweb = (SPWeb)properties.Feature.Parent)
        {
            curweb.ApplyTheme("myThemeName");
            curweb.Update();
        }
    }

    Deactivating is just as simple:

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
    {
        using (SPWeb curweb = (SPWeb)properties.Feature.Parent)
        {
            curweb.ApplyTheme("none");
            curweb.Update();
        }
    }

    Ok, that’s the code necessary to deploy, apply, un-apply, and retract the theme.
    Also, the solution (WSP file) contains the actual theme files. So, next up is the master page, which I’ll cover in my next blog post.

    Read the article

  • Can preventing directory listings in WordPress upload folders cause Google ranking drops when they cause 403 errors in Webmaster Tools?

    - by Kelly
    I recently moved to a new host that blocks crawling of my uploads folders but (hopefully) still allows the files in those folders to be crawled. I now see many 403 errors in my Webmaster Tools, one for each folder under the uploads folder. For example, http://www.rewardcharts4kids.com/wp-content/uploads/2013/07/ shows a 403 error, yet I can access this file: http://www.rewardcharts4kids.com/wp-content/uploads/2013/07/lunch-box-notes.jpg. I just cannot access the folder it is in. My rankings went down after I moved to this host and I am wondering: could this be the reason? And is this how files/folders are supposed to be set up?

    Read the article

  • Speaking in Raleigh NC June 15th

    - by Andrew Kelly
    Just a heads up to those in the area that I will be speaking at the (TriPASS) Raleigh SQL Server user group on the 15th of June 2010. The topic is Storage & I/O Best Practices. The abstract is listed below: SQL Server relies heavily on a well configured storage sub-system to perform at its peak but unfortunately this is one of the most neglected or mis-configured areas of a SQL Server instance. Here we will focus on the best practices related to how SQL Server works with the underlying storage...(read more)

    Read the article

  • Latest SolidQ Journal Plus Giveaways

    - by Andrew Kelly
      You can find the latest edition of the SolidQ Journal here that is always good reading but if you register over the next 3 weeks you may be eligible for a prize including:  One $500 Amazon gift card and 5 $150 gift cards; books from Itzik Ben-Gan, Greg Low, and Erik Veerman/Jay Hackney/Dejan Sarka; and chats with MarkTab and Kevin Boles.   The deadline for the giveaway is January 7th and you can register for it HERE .  So be a good little boy or girl and maybe Santa will bring...(read more)

    Read the article

  • Protecting PDF files and XDO.CFG

    - by Greg Kelly
    Protecting PDF files and XDO.CFG

    Security-related properties can be overridden at runtime through PeopleCode, like all other XMLP properties, using the SetRuntimeProperties() method on the ReportDefn class. This is documented in PeopleBooks. Basically, this method needs to be called right before calling the processReport() method:

    &asPropName = CreateArrayRept("", 0);
    &asPropValue = CreateArrayRept("", 0);
    &asPropName.Push("pdf-open-password");
    &asPropValue.Push("test");
    &oRptDefn.SetRuntimeProperties(&asPropName, &asPropValue);
    &oRptDefn.ProcessReport(&sTemplateId, %Language_User, &dAsOfDate, &sOutputFormat);

    Of course, users should not hardcode the password value in the code; instead, if the password is stored encrypted in the database or somewhere else, they can use the Decrypt() API.

    Read the article

  • Access denied error when adding new server to existing SharePoint 2007 Farm

    - by Kelly Jones
    I recently got an error when adding a new SharePoint 2007 (SP2) server to our existing MOSS farm.  I had run the installation fine, and was walking through the SharePoint Configuration Wizard, when I got an error on step 2: “Resource retrieved id PostSetupConfigurationFailedEventLog is Configuration of SharePoint Products and Technologies failed.” I searched the net, but didn’t really find anything. I then remembered that I had forgotten to run the latest SharePoint updates (for us, the latest applied was December 2009).  I installed the WSS update, and then the MOSS update and both worked fine.  I then ran the Configuration wizard again and it worked without any errors.

    Read the article

  • Corrupted Views when migrating document libraries from SharePoint 2003 to 2007

    - by Kelly Jones
    A coworker of mine ran into this error recently, while migrating a document library from SharePoint 2003 to 2007: “A WebPartZone can only exist on a page which contains a SPWebPartManager. The SPWebPartManager must be placed before any WebPartZones on the page.” He saw this when he tried to see the All Documents view for the library.

    After looking into it, we figured out what had happened.  He was migrating documents using the Explorer View in SharePoint.  He had copied the contents of the library from one server (a remote server that we didn’t have administrative access to) to his desktop.  He then opened an Explorer View of the new library and copied the files to it.  Well, it turns out he had copied the hidden “Forms” folder, which contained the files necessary to display the different views for the library. (He had set his explorer to show hidden files, which made them visible.) So, he had copied the 2003 forms to the 2007 library, which are incompatible.

    We fixed it by simply deleting the new document library, recreating it, and then copying everything except that hidden Forms folder.  Another option might have been to create a new document library on 2007, and copy the Forms folder from it to the broken library.  Since we didn’t need to save anything in the broken library, deleting and recreating it was the simpler fix.

    BTW, I confirmed my suspicion with this blog post: http://palmettotq.com/blog/?p=54

    Read the article

  • SharePoint 2007 Parser Error after updating master page

    - by Kelly Jones
    A few weeks ago I was updating the master page for a SharePoint 2007 (WSS) site.  The client wanted the site updated to reflect the new look and feel that is being applied to another set of sites in the organization. I created a new theme and master page, which I already wrote about here and here.  It worked well, except for a few pages on a subsite.  On those pages, I got the following error:

    Server Error in '/' Application. Parser Error Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Parser Error Message: Code blocks are not allowed in this file.

    I decided to go comb through my new master page and compare it to the existing master page that was already working.  After going through them line by line several times, I had no clue what would be causing the error because they were basically the same! It turns out it was a combination of two things.  First, on a few of the pages in the site, there was some include code (basically an <% EVAL()%> snippet).  This was the code that was triggering my error “Code blocks are not allowed in this file”.  However, this code was working fine with the previous master page.

    The second factor was how the master page was deployed.  I decided to try doing a full deployment of the site with the new master page, and it worked fine!  Apparently, if the master page is deployed using a Feature, then it is granted permission to allow code blocks, but if you upload pages either using the web UI or SharePoint Designer, then the pages won’t be able to use code blocks. I haven’t been able to pin down the rules or official info about this, but I thought others might find it useful anyway.

    Read the article

  • Backup File Naming Convention

    - by Andrew Kelly
      I have been asked this many times before and again just recently, so I figured why not blog about it. None of the information outlined here is rocket science or even new, but it is an area that I don’t think people put enough thought into before implementing.  Sure, everyone chooses some format, but it often doesn’t go far enough in my opinion to get the most bang for the buck. This is the format I prefer to use: ServerName_InstanceName_BackupType_DBName_DateTimeStamp.xxx ServerName_InstanceName...(read more)
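      To make the convention concrete, here is a tiny illustration of composing a name in that format (a sketch only; the example values and the yyyyMMdd_HHmmss timestamp format are my assumptions, since the excerpt is truncated before those details):

      // Illustration only: compose a backup file name following the convention above.
      string serverName   = "SQLPROD01";
      string instanceName = "INST1";
      string backupType   = "FULL";             // e.g. FULL, DIFF, LOG
      string dbName       = "AdventureWorks";
      string stamp        = DateTime.Now.ToString("yyyyMMdd_HHmmss");

      string fileName = string.Format("{0}_{1}_{2}_{3}_{4}.bak",
          serverName, instanceName, backupType, dbName, stamp);
      // e.g. SQLPROD01_INST1_FULL_AdventureWorks_20110125_093000.bak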

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 – Part 2 of 4

    - by Kelly Jones
    In my last post, I explained why we decided to use a Click Once application to solve our business problem. To quickly review, we needed a way for our business users to upload documents, in batches.  They also needed to quickly set the meta data values, as well as set security on individual files.

    Let’s look at the pieces that make up our solution.  First, we have the Windows Form application.  This app is deployed using Click Once and calls SharePoint web services in order to upload files and then calls web services to set the meta data (SharePoint columns and permissions).  Second, we have a custom action.  The custom action is responsible for providing our users a link that will launch the Windows app, as well as passing values to it via the query string.  And lastly, we have the web services that the Windows Form application calls.  For our solution, we used both out of the box web services and a custom web service in order to set the column values in the document library as well as the permissions on the documents.

    Now, let’s look at the technical details of each of these pieces.  (All of the code is downloadable from here: )

    Windows Form application deployed via Click Once

    The Windows Form application, called “Custom Upload”, has just a few classes in it:

      - Custom Upload -- the form
      - FileList.xsd -- the dataset used to track the names of the files and their meta data values
      - SharePointUpload -- this class handles uploading the file

    SharePointUpload uses an HttpWebRequest to transfer the file to the web server. We had to change this code from a WebClient object to the HttpWebRequest object, because we needed to be able to set the time out value.

    public bool UploadDocument(string localFilename, string remoteFilename)
    {
        bool result = true;

        //Need to use an HttpWebRequest object instead of a WebClient object
        // so we can set the timeout (WebClient doesn't allow you to set the timeout!)
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(remoteFilename);
        try
        {
            req.Method = "PUT";
            req.Timeout = 60 * 1000; //convert seconds to milliseconds
            req.AllowWriteStreamBuffering = true;
            req.Credentials = System.Net.CredentialCache.DefaultCredentials;
            req.SendChunked = false;
            req.KeepAlive = true;

            Stream reqStream = req.GetRequestStream();

            FileStream rdr = new FileStream(localFilename, FileMode.Open, FileAccess.Read);

            byte[] inData = new byte[4096];
            int bytesRead = rdr.Read(inData, 0, inData.Length);

            while (bytesRead > 0)
            {
                reqStream.Write(inData, 0, bytesRead);
                bytesRead = rdr.Read(inData, 0, inData.Length);
            }

            reqStream.Close();
            rdr.Close();

            System.Net.HttpWebResponse response = (HttpWebResponse)req.GetResponse();

            if (response.StatusCode != HttpStatusCode.OK &&
                response.StatusCode != HttpStatusCode.Created)
            {
                String msg = String.Format("An error occurred while uploading this file: {0}\n\nError response code: {1}",
                    System.IO.Path.GetFileName(localFilename),
                    response.StatusCode.ToString());
                LogWarning(msg, "2ACFFCCA-59BA-40c8-A9AB-05FA3331D223");
                result = false;
            }
        }
        catch (Exception ex)
        {
            LogException(ex, "{E9D62A93-D298-470d-A6BA-19AAB237978A}");
            result = false;
        }

        return result;
    }

    The class also contains the LogException() and LogWarning() methods. When the application is launched, it parses the query string for some initial values.
    The query string looks like this:

    string queryString = "Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50";

    The Srv value is the path to the server (my Virtual Machine is named “clickonce”), Sec is short for security – meaning HTTPS or HTTP, Doc is the shortcut for which document library to use, and SiteName is the name of the SharePoint site.  Speed is used to calculate an estimate for download speed for each file.  We added this so our users uploading documents would realize how long it might take for clients in remote locations (using slow WAN connections) to download the documents. The last value, Max, is the maximum size that the SharePoint site will allow documents to be.  This allowed us to give users a warning that a file is too large before we even attempt to upload it. (A rough sketch of reading these values appears at the end of this post.)

    Another critical piece is the meta data collection.  We organized our site using SharePoint content types, so when the app loads, it gets a list of the document library’s content types.  The user then selects one of the content types from the drop down list, and then we query SharePoint to get a list of the fields that make up that content type.  We used both an out of the box web service, and one that we custom built, in order to get these values. Once we have the content type fields, we then add controls to the form.  Which type of control we add depends on the data type of the field.  (DateTime pickers for date/time fields, etc.)  We didn’t write code to cover every data type, since we were working with a limited set of content types and field data types.

    Here’s a screen shot of the Form, before and after someone has selected the content types and our code has added the custom controls:

    The other piece of meta data we collect is in the upper right corner of the app, “Users with access”.  This box lists the different SharePoint Groups that we have set up, and by checking the boxes, the user can set the permissions on the uploaded documents. All of this meta data is collected and submitted to our custom web service, which then sets the values on the documents in the list.  We’ll look at these web services in a future post.

    In the next post, we’ll walk through the Custom Action we built.
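    The post doesn’t show its parsing code, but here is a rough sketch of how those start-up values could be read. It assumes the query string arrives on the ClickOnce activation URI (which requires the deployment to be published with URL parameters allowed); the method and variable names are mine, not the post’s:

    using System.Collections.Specialized;
    using System.Deployment.Application;
    using System.Web; // HttpUtility.ParseQueryString

    // Sketch only: read the launch parameters passed on the ClickOnce activation URL.
    static NameValueCollection GetLaunchParameters()
    {
        if (ApplicationDeployment.IsNetworkDeployed &&
            ApplicationDeployment.CurrentDeployment.ActivationUri != null)
        {
            return HttpUtility.ParseQueryString(
                ApplicationDeployment.CurrentDeployment.ActivationUri.Query);
        }
        return new NameValueCollection();
    }

    // Usage:
    // NameValueCollection args = GetLaunchParameters();
    // string server = args["Srv"];                  // e.g. "clickonce"
    // bool useHttps = (args["Sec"] == "Y");
    // string docLib = args["Doc"];                  // which document library to use
    // int speed = int.Parse(args["Speed"] ?? "0");  // for the download-time estimate
    // int maxSize = int.Parse(args["Max"] ?? "0");  // server's maximum upload size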

    Read the article

  • White Paper on Parallel Processing

    - by Andrew Kelly
      I just ran across what I think is a newly published white paper on parallel  processing in SQL Server 2008 R2. The date is October 2010 but this is the first time I have seen it so I am not 100% sure how new it really is. Maybe you have seen it already but if not I recommend taking a look. So far I haven’t had time to read it extensively but from a cursory look it appears to be quite informative and this is one of the areas least understood by a lot of dba’s. It was authored by Don Pinto...(read more)

    Read the article

  • Resources for up-to-date Delphi programming

    - by Dan Kelly
    I'm a developer in a small department and have been using Delphi for the last 10 years. Whilst I've tried to keep up-to-date with movements there are a lot of changes that have occurred between Delphi 7 and (current for us) 2010. Stack Exchange and here have been great for answering the "how do you" questions, but what I'd like is a resource that shows great examples of larger scale programming. For example is there anywhere that hosts examples of well written, multi form applications? Something that can be looked at as a whole to illustrate why things should be done a certain way?

    Read the article

  • Dlink DWA-643 ExpressCard / Atheros AR5008 can't connect to wifi networks

    - by Justin Kelly
    I've just purchased a D-Link DWA-643 Xtreme N ExpressCard Notebook Adapter, but it can't connect to my wireless network. The card is listed on the FSF website; refer to the links below:

    http://www.fsf.org/resources/hw/index_html/net/wireless/index_html/cards.html
    http://www.dlink.com.au/products/?pid=550

    Ubuntu sees the card as using the Atheros AR5008 chipset (refer to the image below). The card lights up and I can see the available wifi networks using this card, so it seems to 'just work' on Ubuntu 12.04, but when I try to connect to my networks, it fails. I've tried setting the network to all the different options (WEP, WPA2, no encryption, etc.; b/g/n) but Ubuntu still can't connect to it. I've also installed wicd but still couldn't connect. Has anyone got a DWA-643 to work in Ubuntu? Or does anyone have any suggestions on how to get it to connect? Any help would be greatly appreciated.

    Note: the laptop has built-in wifi, but it's Broadcom; it works, but with a dialup-speed connection, and I've had nothing but trouble using the Broadcom drivers, so I purchased the FSF-recommended ExpressCard as I hoped it would 'just work' on the latest Ubuntu. I have tried to disable the built-in (Broadcom) wifi, but even with the Broadcom driver uninstalled and unavailable, it didn't help the D-Link to connect. Previously I had MAC address filtering on the router; I've added the D-Link's MAC and also disabled MAC address filtering, but still no luck.

    lspci output below:

    18:00.0 Network controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01)
        Subsystem: D-Link System Inc Device 3a6f
        Flags: bus master, fast devsel, latency 0, IRQ 18
        Memory at e4000000 (64-bit, non-prefetchable) [size=64K]
        Capabilities: [40] Power Management version 2
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit-
        Capabilities: [60] Express Legacy Endpoint, MSI 00
        Capabilities: [90] MSI-X: Enable- Count=1 Masked-
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Virtual Channel
        Kernel driver in use: ath9k
        Kernel modules: ath9k

    Read the article

  • Kicking yourself because you missed the Oracle OpenWorld and Oracle Develop Call for Papers?

    - by Greg Kelly
    Here's a great opportunity! If you missed the Oracle OpenWorld and Oracle Develop Call for Papers, here is another opportunity to submit a paper to present. Submit a paper and ask your colleagues, Oracle Mix community, friends and anyone else you know to vote for your session. Note, only Oracle Mix members are allowed to vote. Voting is open from the end of May through June 20. For the most part, the top voted sessions will be selected for the program (although we may choose sessions in order to balance the content across the program). Please note that Oracle reserves the right to decline sessions that are not appropriate for the conference, such as subjects that are competitive in nature or sessions that cover outdated versions of products. Oracle OpenWorld and Oracle Develop Suggest-a-Session https://mix.oracle.com/oow10/proposals FAQ https://mix.oracle.com/oow10/faq

    Read the article

  • SharePoint 2010: Taxonomy feature (Feature ID "73EF14B1-13A9-416b-A9B5-ECECA2B0604C") has not been activated

    - by Kelly Jones
    I ran into an error message in SharePoint 2010 that took me a few minutes to figure out.  I was working on a demo of SharePoint 2010’s managed metadata and getting an error when I was adding a Managed Metadata column to a library.  A little Google research turned up this blog post: The Taxonomy feature (Feature ID "73EF14B1-13A9-416b-A9B5-ECECA2B0604C") has not been activated. As Michal Pisarek pointed out last June, you get the error because the Taxonomy feature isn’t activated.  Like Michal, I’m not sure how this happened to my installation, but the fix he documented works. (Activating the feature using STSADM)
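    If you would rather not shell out to STSADM, the same activation can be done from the object model. A minimal sketch (my code, not from the post; it assumes the feature is activated at the site collection scope, and the URL is illustrative):

    using (SPSite site = new SPSite("http://yourserver/sites/yoursite"))
    {
        Guid taxonomyFeatureId = new Guid("73EF14B1-13A9-416b-A9B5-ECECA2B0604C");

        // Activate the Taxonomy feature if it is not already active.
        if (site.Features[taxonomyFeatureId] == null)
        {
            site.Features.Add(taxonomyFeatureId);
        }
    }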

    Read the article

  • WSS 3.0 to SharePoint 2010: Tips for delaying the Visual Upgrade

    - by Kelly Jones
    My most recent project has been to migrate a bunch of sites from WSS 3.0 (SharePoint 2007) to SharePoint Server 2010.  The users are currently working with WSS 3.0 and Office 2003, so the new ribbon based UI in 2010 will be completely new.  My client wants to avoid the new SharePoint 2010 look and feel until they’ve had time to train their users, so we’ve been testing the upgrades by keeping them with the 2007 user interface.

    Permission to perform the Visual Upgrade

    One of the first things we noticed was the default permissions for who was allowed to switch the UI from 2007 to 2010.  By default, site collection administrators and site owners can do this.  Since we wanted to more tightly control the timing of the new UI, I added a few lines to the PowerShell script that we are using to perform the migration.  This script creates the web application, sets the User Policy, and then does a Mount-SPDatabase to attach the old 2007 content database to the 2010 farm.  I added the following steps after the Mount-SPDatabase step:

    #Remove the visual upgrade option for site owners
    # it remains for Site Collection administrators
    foreach ($sc in $WebApp.Sites){
        foreach ($web in $sc.AllWebs){
            #Visual Upgrade permissions for the site/subsite (web)
            $web.UIversionConfigurationEnabled = $false;
            $web.Update();
        }
    }

    These script steps loop through each Site Collection in a particular web application ($WebApp), then loop through each subsite ($web) in the Site Collection ($sc) and disable the Site Owner’s permission to perform the Visual Upgrade. This is equivalent to going to the Site Collection administrator settings page –> Visual Upgrade and selecting “Hide Visual Upgrade”. Since only IT people have Site Collection administrator privileges, this will allow IT to control the timing of the new 2010 UI rollout.

    Newly created subsites

    Our next issue was brought to our attention by SharePoint Joel’s blog post last week (http://www.sharepointjoel.com/Lists/Posts/Post.aspx?ID=524 ).  In it, he lists some updates about the 2010 upgrade, and his fourth point was one that I hadn’t seen yet:

    4. If a 2007 upgraded site has not been visually upgraded, the sites created underneath it will look like 2010 sites – While this is something I’ve been aware of, I think many don’t realize how this impacts common look and feel for master pages, and how it impacts good navigation and UI. As well depending on your patch level you may see hanging behavior in the list picker. The site and list creation Silverlight control in Internet Explorer is looking for resources that don’t exist in the galleries in the 2007 site, and hence it continues to spin and spin and eventually time out. The work around is to upgrade to SP1, or use Chrome or Firefox which won’t attempt to render the Silverlight control. When the root site collection is a 2007 site and has it’s set of galleries and the children are 2010 sites there is some strange behavior linked to the way that the galleries work and pull from the parent.

    Our production SharePoint 2010 Farm has SP1 installed, as well as the December 2011 Cumulative Update, so I think the “hanging behavior” he mentions won’t affect us. However, since we want to control the roll out of the UI, we are concerned that new subsites will have the 2010 look and feel, no matter what the parent site has.

    Ok, time to dust off my developer skills. I first looked into using feature stapling, but I couldn’t get that to work (although I’m pretty sure I had everything wired up correctly).
    Then I stumbled upon SharePoint 2010’s web events – a great way to handle this. Using Visual Studio 2010, I created a new SharePoint project and added a Web Event Receiver. In the Event Receiver class, I used the WebProvisioned method to check if the parent site is a 2007 site (UIVersion = 3), and if so, then set the newly created site to 2007:

    /// <summary>
    /// A site was provisioned.
    /// </summary>
    public override void WebProvisioned(SPWebEventProperties properties)
    {
        base.WebProvisioned(properties);

        try
        {
            SPWeb curweb = properties.Web;

            if (curweb.ParentWeb != null)
            {
                //check if the parent website has the 2007 look and feel
                if (curweb.ParentWeb.UIVersion == 3)
                {
                    //since parent site has 2007 look and feel
                    // we'll apply that look and feel to the current web
                    curweb.UIVersion = 3;
                    curweb.Update();
                }
            }
        }
        catch (Exception)
        {
            //TODO: Add logging for errors
        }
    }

    This event is part of a Feature that is scoped to the Site Level (Site Collection).  I added a couple of lines to my migration PowerShell script to activate the Feature for any site collections that we migrate.

    Plan Going Forward

    The plan going forward is to perform the visual upgrade after the users for a particular site collection have gone through 2010 training. If we need to do several site collections at once, we’ll use a PowerShell script to loop through each site collection to update the sites to 2010.  If it’s just one or two, we’ll be using the “Update All Sites” button on the Visual Upgrade page for Site Collection Administrators. The custom code for newly created sites won’t need to be changed, since it relies on the UI version of the parent site.  If the parent is 2010, then the new site will look 2010.
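    For reference, the site-collection-wide switch mentioned above (the PowerShell loop that updates existing sites to 2010) boils down to a couple of object-model calls. A minimal sketch in C#, my code rather than the post’s, with an illustrative URL:

    using (SPSite site = new SPSite("http://yourserver/sites/yoursite"))
    {
        foreach (SPWeb web in site.AllWebs)
        {
            using (web)
            {
                web.UIVersion = 4;                          // switch to the 2010 look and feel
                web.UIVersionConfigurationEnabled = false;  // keep the Visual Upgrade option hidden
                web.Update();
            }
        }
    }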

    Read the article

  • Microsoft Certifications – how to prep? and why?

    - by Kelly Jones
    I often get asked by my colleagues, “how do you prepare for Microsoft exams?” Well, the answer for me is a little complicated, so I thought I’d write up here what I do. The first thing I do is go to Microsoft’s website to find the exam that I need to take.  If you’re looking to get a particular certification, then their site lists the exam or exams that you’ll need to pass.  If you’ve already taken an exam, you can log onto the MCP website and use their certification planner.  This little tool tells you what tests you need, based on the exams you’ve already passed.  It is very helpful with the certifications that are multiple tests and especially ones that have electives. Once you’ve identified the test, you can use Microsoft’s website to see the topics that it covers.  This is a good outline to follow when you study.  I’ll keep this handy to reference back throughout my studying to make sure that I’m covering all the topics I need to know. The next step is probably where I am a little different from others.  IF the exam outline covers material that I’ve already been working with, then I’ll skip a lot of the studying and go directly to the practice tests.  However, if I’m looking at the outline and wondering how in the world do you do that? – then it’s time to hit the books. So, where to find study materials?  Try typing in the exam number into any search engine.  You’ll typically find a ton of resources.  If you’re lucky, you’ll find books that others recommend based on their studying and exam experience.  As a Sogeti employee, I have access to three really good resources: an internal company list of all of the consultants who have passed particular tests (on our Connex website), Books 24x7, and Transcender practice exams. Once my studying is done (either through books or experience), I’ll go through the practice exams.  I find them really helpful in getting my knowledge lined up to the thinking process that the exam writers use.  If I’m relying on my experience, then this really helps me to identify gaps in my knowledge that I’ll need to fill. That’s about it.  If I’m doing ok on the practice exams, then I’ll take the real thing.  I’ve found that the practice exams are usually more difficult than then real thing. Oh – one other thing I do related to Microsoft exams – I try to take any beta exams that Microsoft makes available that fall into my skill set.  Microsoft has started a blog to announce these and the seats usually fill up really quick.  The blog is at http://blogs.technet.com/betaexams/ . You don’t get your results instantly, like a normal exam, instead you have to wait for everyone to finish taking the beta exams and for Microsoft to determine which questions they are using and which they are dropping.  So, be prepared to wait six to eight weeks for your results.

    Read the article

  • Windows 7 Virtual PC - “RPC server unavailable”

    - by Kelly Jones
    I use Windows 7 Virtual PC on my current project and I often bring home the files, so I can work some in the evenings.  Since my VHDs are large, I’ll only copy the undo disks, saved state, and virtual machine config files from my external drive.  I copy them to a small portable drive and once I get home, I’ll copy them to a large external drive. I’ve done this for over a year, but recently I started getting an error when I tried to start the VPC after the copying was finished.  It would open the initial window with the progress bar, but eventually the bar would stop, turn red, and then the error “RPC server unavailable” would appear.  When I first started seeing these, I’d try again, but no luck. After some testing, it turns out that my small portable drive is apparently going bad, so it was corrupting the files.  Lucky for me, that I never overwrote my good copies with corrupted copies, at least not at both the office and at home.

    Read the article

  • When does implementing MVVM not make sense

    - by Kelly Sommers
    I am a big fan of various patterns and enjoy learning new ones all the time however I think with all the evangelism around popular patterns and anti-patterns sometimes this causes blind adoption. I think most things have individual pros and cons and it's important to educate what the cons are and when it doesn't make sense to make a particular choice. The pros are constantly advocated. "It depends" I think applies most times but the industry does a poor job at communicating what it depends ON. Also many patterns surfaced from inheriting values from previous patterns or have derivatives, which each one brings another set of pros and cons to the table. The sooner we are more aware of the trade off's of decisions we make in software architecture the sooner we make better decisions. This is my first challenge to the community. Even if you are a big fan of said pattern, I challenge you to discover the cons and when you shouldn't use it. Define when MVVM (Model-View-ViewModel) may not make sense in a particular piece of software and based on what reasons. MVVM has a set of pros and cons. Let's try to define them. GO! :)

    Read the article
