Search Results

Search found 29473 results on 1179 pages for 'solaris 10'.

  • Prolog Cut Not Working

    - by user2295607
    I'm having a problem with Prolog, since cut is not doing what (I believe) it's supposed to do:

        % line-column handlers
        checkVallEle(_, _, 6, _):- write('FAIL'), !, fail.
        checkVallEle(TABULEIRO, VALUE, LINE, COLUMN):-
            COLUMN > 5,
            NL is LINE + 1,
            checkVallEle(TABULEIRO, VALUE, NL, 0).
        % if this fails, it goes to the next
        checkVallEle(TABULEIRO, VALUE, LINE, COLUMN):-
            ( checkHorizontal(TABULEIRO, VALUE, LINE, COLUMN, 0), write('HORIZONTAL ')
            ; checkVertical(TABULEIRO, VALUE, LINE, COLUMN, 0), write('VERTICAL')
            ; checkDiagonalRight(TABULEIRO, VALUE, LINE, COLUMN, 0), write('DIAGONALRIGHT')
            ; checkDiagonalLeft(TABULEIRO, VALUE, LINE, COLUMN, 0), write('DIAGONALLEFT')
            ),
            write('WIN').
        % goes to the next if above fails
        checkVallEle(TABULEIRO, VALUE, LINE, COLUMN):-
            NC is COLUMN + 1,
            checkVallEle(TABULEIRO, VALUE, LINE, NC).

    What I want is that if execution ever reaches the first clause, that is, if LINE is ever 6, the whole predicate fails (since it went out of range), without checking any more possibilities. But what happens is that when it reaches the first clause, it keeps going to the clauses below and ignores the cut, and I don't see why. I just want the predicate to fail when it reaches that first clause. I also ran an experiment:

        run(6):- write('done'), !, fail.
        run(X):- X1 is X + 1, run(X1).

    And this is what I get from tracing:

        | ?- run(0).
        1  1 Call: run(0) ?
        2  2 Call: _1079 is 0+1 ?
        2  2 Exit: 1 is 0+1 ?
        3  2 Call: run(1) ?
        4  3 Call: _3009 is 1+1 ?
        4  3 Exit: 2 is 1+1 ?
        5  3 Call: run(2) ?
        6  4 Call: _4939 is 2+1 ?
        6  4 Exit: 3 is 2+1 ?
        7  4 Call: run(3) ?
        8  5 Call: _6869 is 3+1 ?
        8  5 Exit: 4 is 3+1 ?
        9  5 Call: run(4) ?
        10  6 Call: _8799 is 4+1 ?
        10  6 Exit: 5 is 4+1 ?
        11  6 Call: run(5) ?
        12  7 Call: _10729 is 5+1 ?
        12  7 Exit: 6 is 5+1 ?
        13  7 Call: run(6) ?
        14  8 Call: write(done) ?
        done
        14  8 Exit: write(done) ?
        13  7 Fail: run(6) ?
        11  6 Fail: run(5) ?
        9  5 Fail: run(4) ?
        7  4 Fail: run(3) ?
        5  3 Fail: run(2) ?
        3  2 Fail: run(1) ?
        1  1 Fail: run(0) ?
        no

    What are all those Fails after the write? Is it still backtracking through the previous calls? Is this behaviour the reason why cut appears to fail in my first code? Please enlighten me.

    Read the article

  • Drawing a rectangular prism using opengl

    - by BadSniper
    I'm trying to learn OpenGL. I wrote some code that draws a rectangular prism. I don't want to draw back faces, so I used glCullFace(GL_BACK) and glEnable(GL_CULL_FACE). But I keep getting back faces drawn even when viewing from the front, and sometimes the sides vanish while rotating. Can someone point me in the right direction?

        glPolygonMode(GL_FRONT, GL_LINE); // draw wireframe polygons
        glColor3f(0, 1, 0);               // set color green
        glCullFace(GL_BACK);              // don't draw back faces
        glEnable(GL_CULL_FACE);           // don't draw back faces
        glTranslatef(-10, 1, 0);          // position
        glBegin(GL_QUADS);
        // face 1
        glVertex3f(0,-1,0); glVertex3f(0,-1,2); glVertex3f(2,-1,2); glVertex3f(2,-1,0);
        // face 2
        glVertex3f(2,-1,2); glVertex3f(2,-1,0); glVertex3f(2,5,0); glVertex3f(2,5,2);
        // face 3
        glVertex3f(0,5,0); glVertex3f(0,5,2); glVertex3f(2,5,2); glVertex3f(2,5,0);
        // face 4
        glVertex3f(0,-1,2); glVertex3f(2,-1,2); glVertex3f(2,5,2); glVertex3f(0,5,2);
        // face 5
        glVertex3f(0,-1,2); glVertex3f(0,-1,0); glVertex3f(0,5,0); glVertex3f(0,5,2);
        // face 6
        glVertex3f(0,-1,0); glVertex3f(2,-1,0); glVertex3f(2,5,0); glVertex3f(0,5,0);
        glEnd();

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager

    - by rajbk
    This post walks you through creating a UI for paging, sorting and filtering a list of data items, making use of the excellent MVCContrib Grid and Pager HTML UI helpers. A sample project is attached at the bottom. The application makes use of the Northwind database. The top portion of the page has a filter region, enclosed in a form tag; the select lists are wired up with jQuery to auto-post-back the form. The page has a pager region at the top and bottom of the product list. The product list has a link to display more details about a given product, and the column headings are clickable for sorting, with an icon showing the sort direction.

    Strongly Typed View Models

    The views are written to expect strongly typed objects. We suffix these strongly typed objects with ViewModel, since they are designed specifically for passing data down to the view. The key ViewModels in our design: a container class called ProductListContainerViewModel holds the nested pieces. Its ProductPagedList is of type IPagination<ProductViewModel>; MvcContrib expects the IPagination<T> interface in order to determine the page number and page size of the collection we are working with, and you can convert any IEnumerable<T> into an IPagination<T> by calling the AsPagination extension method in the MvcContrib library, which also creates a paged set of type ProductViewModel. The ProductFilterViewModel class will hold information about the different select lists and the ProductName being searched on. The ProductViewModel itself is used to hold information about a Product; we use attributes to specify whether a property should be hidden and what its heading in the table should be, and this metadata is used by the MvcContrib Grid to render the table. Some of the properties are hidden from the UI ([ScaffoldColumn(false)]) but are needed because we will be using them for filtering when writing our LINQ query, as in the sketch below.
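    Here is a minimal sketch of what the ProductViewModel might look like. The exact property set, types and display names are assumptions inferred from the filters and columns discussed in this post, not the author's original listing:

        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        public class ProductViewModel
        {
            // Hidden from the grid but needed for filtering in the LINQ query
            [ScaffoldColumn(false)]
            public int ProductID { get; set; }

            [ScaffoldColumn(false)]
            public int? SupplierID { get; set; }

            [ScaffoldColumn(false)]
            public int? CategoryID { get; set; }

            // Rendered as grid columns with friendly headings
            [DisplayName("Product Name")]
            public string ProductName { get; set; }

            [DisplayName("Unit Price")]
            public decimal? UnitPrice { get; set; }
        }

    The [ScaffoldColumn(false)] attribute keeps a property out of the rendered table while leaving it available to the Where clauses in the controller, and [DisplayName] supplies the column heading the Grid renders.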
    The ProductFilterViewModel will also hold the state of any previously selected item in the lists and the previous search criteria (you will recall that this type of state information was stored in ViewState when working with WebForms; with MVC there is no state storage, so all state has to be fetched and passed back to the view). The GridSortOptions type is defined in the MvcContrib library and is used by the Grid to determine the current column being sorted on and the current sort direction.

    The following shows the Index view, which expects a type of ProductListContainerViewModel as described earlier, and the partial views used to render our UI:

        <% Html.RenderPartial("SearchFilters", Model.ProductFilterViewModel); %>
        <% Html.RenderPartial("Pager", Model.ProductPagedList); %>
        <% Html.RenderPartial("SearchResults", Model); %>
        <% Html.RenderPartial("Pager", Model.ProductPagedList); %>

    The view contains a partial view "SearchFilters" and passes it the ProductFilterViewModel; SearchFilters uses this model to render all the search lists and the textbox. The partial view "Pager" uses the ProductPagedList, which implements the IPagination interface, and contains the MvcContrib Pager helper used to render the paging information. This view is repeated twice, since we want the pager UI to be available at the top and bottom of the product list; the Pager partial view is located in the Shared directory so that it can be reused across views. The partial view "SearchResults" uses the ProductListContainer model and contains the MvcContrib Grid, which needs both the ProductPagedList and GridSortOptions to render itself.

    The Controller Action

    An example request looks like this: /Products?productName=test&supplierId=29&categoryId=4. The application receives this GET request and maps it to the Index method of the ProductController. Within the action we create an IQueryable<ProductViewModel> by calling the GetProductsProjected() method.

        /// <summary>
        /// This method takes in a filter list and paging/sort options, and applies
        /// them to an IQueryable of type ProductViewModel
        /// </summary>
        /// <returns>
        /// The return object is a container that holds the sorted/paged list,
        /// state for the filters and state about the current sorted column
        /// </returns>
        public ActionResult Index(string productName, int? supplierID, int? categoryID,
            GridSortOptions gridSortOptions, int? page)
        {
            var productList = productRepository.GetProductsProjected();

            // Set default sort column
            if (string.IsNullOrWhiteSpace(gridSortOptions.Column))
            {
                gridSortOptions.Column = "ProductID";
            }

            // Filter on SupplierID
            if (supplierID.HasValue)
            {
                productList = productList.Where(a => a.SupplierID == supplierID);
            }

            // Filter on CategoryID
            if (categoryID.HasValue)
            {
                productList = productList.Where(a => a.CategoryID == categoryID);
            }

            // Filter on ProductName
            if (!string.IsNullOrWhiteSpace(productName))
            {
                productList = productList.Where(a => a.ProductName.Contains(productName));
            }

            // Create all filter data and set current values, if any.
            // These values will be used to set the state of the select lists and textbox
            // by sending them back to the view.
            var productFilterViewModel = new ProductFilterViewModel();
            productFilterViewModel.SelectedCategoryID = categoryID ?? -1;
            productFilterViewModel.SelectedSupplierID = supplierID ?? -1;
            productFilterViewModel.Fill();

            // Order and page the product list
            var productPagedList = productList
                .OrderBy(gridSortOptions.Column, gridSortOptions.Direction)
                .AsPagination(page ?? 1, 10);

            var productListContainer = new ProductListContainerViewModel
            {
                ProductPagedList = productPagedList,
                ProductFilterViewModel = productFilterViewModel,
                GridSortOptions = gridSortOptions
            };

            return View(productListContainer);
        }

    The supplier, category and product-name filters are applied to this IQueryable if any are present in the request. The ProductPagedList is created by applying a sort order and calling the AsPagination method. Finally, the ProductListContainerViewModel is created and returned to the view.

    You have seen how to use the MvcContrib Grid and Pager to render a clean, lightweight UI with strongly typed views, and how partial views get data from the strongly typed model passed to them by the parent view. The code also shows you how to use jQuery to auto post back. The sample is attached below. Don't forget to change your connection string to point to the server containing the Northwind database.

    NorthwindSales_MvcContrib.zip

    My name is Kobayashi. I work for Keyser Soze.

    Read the article

  • Using design patterns to transform web-service model classes into local model classes and vice versa

    - by Daniil Petrov
    There is a web application built with Play framework 1.2.7. It contains fewer than 10 model classes. The main purpose of the application is lightweight access to a complex remote application (more than 50 model classes). The remote application has its own SOAP API, and we use it for synchronization of data. There is a scheduled job in the web app which makes requests to the remote app: it gets batches of objects from the remote model and populates the corresponding objects of the local model. Currently, there are two groups of classes: the local model and the remote model (generated from the WSDL schema). It is not allowed to make any modifications to the remote model. Transformations are made in the scheduled-job class: when it gets objects from the remote app, it creates local objects. Recently, it was decided to add the possibility to modify the remote objects, which requires more transformations on our side: we need to transform from the remote to the local model when reading objects, and from the local to the remote model when changing objects. I wonder whether it would be possible to use some design patterns to reduce the number of transformations?
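    For illustration, transformations like these are commonly centralized in small mapper classes, one per entity, implementing a shared interface, so that the read path (the scheduled job) and the new write path reuse the same conversion code instead of duplicating it. A minimal sketch of the idea, written in C# for brevity although the application itself is Java/Play; all type names here are hypothetical:

        // Remote types are generated from the WSDL and must not be modified
        public class RemoteCustomer { public int Id; public string FullName; }

        // Local model, kept deliberately small
        public class LocalCustomer { public int Id; public string Name; }

        public interface IModelMapper<TRemote, TLocal>
        {
            TLocal ToLocal(TRemote remote);   // used when synchronizing from the remote app
            TRemote ToRemote(TLocal local);   // used when pushing modifications back
        }

        public class CustomerMapper : IModelMapper<RemoteCustomer, LocalCustomer>
        {
            public LocalCustomer ToLocal(RemoteCustomer remote) =>
                new LocalCustomer { Id = remote.Id, Name = remote.FullName };

            public RemoteCustomer ToRemote(LocalCustomer local) =>
                new RemoteCustomer { Id = local.Id, FullName = local.Name };
        }

    With one mapper per entity, the synchronization job and the modification code both depend on a single place where the field-by-field conversion lives.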

    Read the article

  • Intel Corporation Ethernet Connection does not start properly

    - by Oscar Alejos
    I'm experiencing some problems when trying to connect my PC to the router through a switch. When the PC is directly connected to the router, everything works fine: Ubuntu (14.04) starts normally and the Internet connection comes up immediately. The Ethernet controller is an Intel Corporation Ethernet Connection, as lspci returns:

        $ lspci | grep Eth
        00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-V (rev 04)

    However, when I try to connect through the switch, this is what I get. dmesg returns:

        $ dmesg | grep eth
        [    1.035585] e1000e 0000:00:19.0 eth0: registered PHC clock
        [    1.035587] e1000e 0000:00:19.0 eth0: (PCI Express:2.5GT/s:Width x1) 00:22:4d:a7:be:5d
        [    1.035589] e1000e 0000:00:19.0 eth0: Intel(R) PRO/1000 Network Connection
        [    1.035625] e1000e 0000:00:19.0 eth0: MAC: 11, PHY: 12, PBA No: FFFFFF-0FF
        [    1.357838] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
        [    2.165413] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
        [    2.165574] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
        [    2.641287] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
        [   16.715086] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
        [   16.715090] e1000e 0000:00:19.0 eth0: 10/100 speed: disabling TSO
        [   16.715117] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

    It looks like eth0 is working properly. Indeed, nm-tool returns:

        $ nm-tool
        - Device: eth0 [Conexión cableada] -------------------------------------------
          Type:              Wired
          Driver:            e1000e
          State:             connected
          Default:           yes
          HW Address:        00:22:4D:A7:BE:5D
          Capabilities:
            Carrier Detect:  yes
            Speed:           100 Mb/s
          Wired Properties
            Carrier:         on
          IPv4 Settings:
            Address:         192.168.1.30
            Prefix:          24 (255.255.255.0)
            Gateway:         192.168.1.1
            DNS:             80.58.61.250
            DNS:             80.58.61.254
            DNS:             192.168.1.1

    However, ping returns:

        $ ping 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        From 192.168.1.30 icmp_seq=1 Destination Host Unreachable
        From 192.168.1.30 icmp_seq=2 Destination Host Unreachable
        From 192.168.1.30 icmp_seq=3 Destination Host Unreachable

    The connection is restored by restarting the interface:

        # ifconfig eth0 down
        # ifconfig eth0 up

    From this point on, everything runs smoothly, as if the PC were directly connected to the router. It seems to be an issue related to the integrated LAN adaptor and the Ethernet controller, since my laptop connects through the same switch without any problem. My desktop board is an Intel DB85FL. I'd be grateful if anyone could give me some ideas on how to solve this issue. Thank you in advance.

    Read the article

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
    When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I've loved MSBuild and didn't quite understand the reasons for this change. In fact, given that I've been exclusively using Cruise Control for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I'm starting to become a believer. I'm also starting to see some of the benefits of Workflow Foundation for the overall process, because it gives you constructs not available in MSBuild, such as parallel tasks, better control flow constructs, and a slightly better customization story.

    The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I'd recommend reading Mike Fourie's well-known post on Versioning Code in TFS before you get started. That post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don't use source control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync.

    To do this in TFS 2010, the best post I've found has been Jim Lamb's post on building a custom TFS 2010 workflow activity. Overall, that post is excellent, but the primary issue I have with it is that the assembly version numbers produced are based on a date and look like this: "2010.5.15.1". This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the "1.1 release" or "1.2 release" – which would have an assembly version number of "1.1.317.0", for example. In this post, I'll walk through the process of customizing the assembly version number based on this method – adapting the concepts in Lamb's post to suit my needs. I'll also be combining this with the concepts of Fourie's post – particularly with regard to the standards around how to version the assemblies.

    The first thing I'll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this:

        using System;
        using System.Reflection;

        [assembly: AssemblyVersion("1.1.0.0")]
        [assembly: AssemblyFileVersion("1.1.0.0")]

    I'll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, choosing "Add – Existing Item…", then, when I select the SolutionAssemblyVersionInfo.cs file, making sure I choose "Add As Link". Now Solution Explorer will show our file. We can see that it's a "link" file because of the black arrow in the icon within all our projects. Of course, you'll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid duplicate attributes, since they now live in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique so that all the projects in a solution can be versioned as a unit.

    At this point, we're ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be able to easily control the Major.Minor, and then I want the CI process to add the third number with a unique incremental number. We'll leave the fourth position always "0" for now – it's held in reserve in case the day ever comes when we need to do an emergency patch to Production based on a branched version.
    Writing the Custom Workflow Activity

    Similar to Lamb's post, I'm going to write two custom workflow activities. The "outer" activity (a XAML activity) is pretty straightforward: it checks whether the solution version file exists in the solution root and, if so, delegates the replacement of the version to the AssemblyVersionInfo activity, which is a CodeActivity. Notice that the arguments of this activity are "solutionVersionFile" and "tfsBuildNumber", which will be passed in. The tfsBuildNumber passed in will look something like "CI_MyApplication.4", and we'll need to grab the "4" (i.e., the incremental revision number) and put it in the third position. Then we'll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had "1.1.0.0" for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want the resulting file to have "1.1.4.0".
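    The XAML itself isn't shown here, but a rough code-based equivalent of the outer activity's logic looks like this. This is a sketch, assuming the activity simply checks that the version file exists and then delegates to the AssemblyVersionInfo code activity; the argument names match the dictionary keys used in the unit test that follows:

        using System;
        using System.Activities;
        using System.Activities.Statements;
        using System.IO;

        public class VersionAssemblies : Activity
        {
            // Named to match the arguments the build workflow passes in
            public InArgument<string> tfsBuildNumber { get; set; }
            public InArgument<string> solutionVersionFile { get; set; }
            public OutArgument<string> newAssemblyFileVersion { get; set; }

            public VersionAssemblies()
            {
                Implementation = () => new If
                {
                    // Only touch the file if it actually exists in the solution root
                    Condition = new InArgument<bool>(ctx => File.Exists(solutionVersionFile.Get(ctx))),
                    Then = new AssemblyVersionInfo
                    {
                        FileName = new InArgument<string>(ctx => solutionVersionFile.Get(ctx)),
                        TfsBuildNumber = new InArgument<string>(ctx => tfsBuildNumber.Get(ctx)),
                        NewAssemblyFileVersion = new OutArgument<string>(ctx => newAssemblyFileVersion.Get(ctx))
                    }
                };
            }
        }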
    Let's put together a unit test for all this so we can know whether we got it right:

        [TestMethod]
        public void Assembly_version_should_be_parsed_correctly_from_build_name()
        {
            // arrange
            const string versionFile = "SolutionAssemblyVersionInfo.cs";
            WriteTestVersionFile(versionFile);
            var activity = new VersionAssemblies();
            var arguments = new Dictionary<string, object> {
                { "tfsBuildNumber", "CI_MyApplication.4" },
                { "solutionVersionFile", versionFile }
            };

            // act
            var result = WorkflowInvoker.Invoke(activity, arguments);

            // assert
            Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]);
            var lines = File.ReadAllLines(versionFile);
            Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]"));
            Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]"));
        }

        private void WriteTestVersionFile(string versionFile)
        {
            var fileContents = "using System.Reflection;\n" +
                "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" +
                "[assembly: AssemblyFileVersion(\"1.2.0.0\")]";
            File.WriteAllText(versionFile, fileContents);
        }

    At this point, the code for our AssemblyVersionInfo activity is pretty straightforward:

        [BuildActivity(HostEnvironmentOption.Agent)]
        public class AssemblyVersionInfo : CodeActivity
        {
            [RequiredArgument]
            public InArgument<string> FileName { get; set; }

            [RequiredArgument]
            public InArgument<string> TfsBuildNumber { get; set; }

            public OutArgument<string> NewAssemblyFileVersion { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                var solutionVersionFile = this.FileName.Get(context);

                // Ensure that the file is writeable
                var fileAttributes = File.GetAttributes(solutionVersionFile);
                File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly);

                // Prepare assembly versions
                var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile);
                var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context));
                var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2);
                var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber);
                this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion);

                // Perform the actual replacement
                var contents = this.GetFileContents(newAssemblyVersion, newAssemblyFileVersion);
                File.WriteAllText(solutionVersionFile, contents);

                // Restore the file's original attributes
                File.SetAttributes(solutionVersionFile, fileAttributes);
            }

            #region Private Methods

            private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion)
            {
                var cs = new StringBuilder();
                cs.AppendLine("using System.Reflection;");
                cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion);
                cs.AppendLine();
                cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion);
                return cs.ToString();
            }

            private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath)
            {
                var lines = File.ReadAllLines(filePath);
                var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault();

                if (versionLine == null)
                {
                    throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute");
                }

                return ExtractMajorMinor(versionLine);
            }

            private static Tuple<string, string> ExtractMajorMinor(string versionLine)
            {
                var firstQuote = versionLine.IndexOf('"') + 1;
                var secondQuote = versionLine.IndexOf('"', firstQuote);
                var version = versionLine.Substring(firstQuote, secondQuote - firstQuote);
                var versionParts = version.Split('.');
                return new Tuple<string, string>(versionParts[0], versionParts[1]);
            }

            private string GetNewBuildNumber(string buildName)
            {
                return buildName.Substring(buildName.LastIndexOf(".") + 1);
            }

            #endregion
        }

    The final step is to incorporate this activity into the overall build template. Make a copy of DefaultTemplate.xaml – we'll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happens, drag the VersionAssemblies activity in. Then set the LabelName variable to BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion, since the newAssemblyFileVersion was produced by our activity.

    Configuring CI

    Once you add your solution to source control, you can configure CI with the build definition window. The main difference is that we'll change the Process tab to reflect a different build number format and choose our custom build process file. When the build completes, we'll see the name of our project with the unique revision number. If we look at the detailed build log for the latest build, we'll see the label being created with our custom task. We can now look at the history of labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow). Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties.

    Full Traceability

    We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches, and the label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years for CI – in fact, I've done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern.
    The new features that TFS 2010 gives you make these types of customizations to your build process quite easy once you get over the initial learning curve.

    Read the article

  • How would you rewrite/refactor this?

    - by frostings
    An old application that is used by 50,000–60,000 paying customers; the company is several hundred people strong. The application has a lot of business-critical code (30% of all code) written in classic ASP, and a lot more .NET code, with a COM+ bridge for enabling the ASP code to "talk" to .NET. The organization lacks some (or a lot of) knowledge about what is causing the 10–20% server-reset rate per day (it might be due to COM+?). There is no red line through the application: no architecture, no real patterns, etc. The application has been like this for at least 5 years, and the classic ASP code base is increasing, slowly but certainly.

    I have read refactoring stories, and I know some of the reasons why you should sometimes not rewrite a system. I would love for the old ASP code to vanish, as well as the COM+ component. But the pain is that no one really knows what is going on inside the classic ASP code, and the attitude in all the teams is "this is just how it is". Down the line, this causes a lot of other issues: recruiting, developer efficiency, business needs that cannot be met, scale, etc. Given these facts, is a rewrite of the ASP code and the removal of the COM+ component justified? How would you go about it?

    Read the article

  • Sun Storage 2500-M2 Array and Sun Fire X4470 M2 Server

    - by nospam(at)example.com (Joerg Moellenkamp)
    There is some new hardware in the Oracle portfolio. The first item is the Sun Fire X4470 M2 server. There was a lot of talk about the system before because of benchmark results, but now it's finally announced. It takes two or four Intel Xeon E7-4800 processors and up to 1 TB of memory, as the system provides 64 DIMM slots for 16 GB DDR DIMMs; the memory is placed on riser cards right behind the fans of the chassis. Up to 6 internal drives, all in a 3 RU package.

    The other announcement was the Sun Storage 2500 M2, announced yesterday: from 5 to 48 drives (the latter number with three expansion trays) for up to 28.8 TB of storage (48 drives × 600 GB). The array is SAS-based internally, and you can put 300 GB and 600 GB drives in it. The 2540-M2 provides 4 (optionally 8) FC ports at up to 8 Gbit/s. The 2530-M2 has 4 SAS2 ports at up to 6 Gbit/s. It has 2 integrated controllers providing 2 GB of cache, protected by a power backup for 72 hours. The controllers enable the arrays to deliver RAID levels 0, 1, 10, 3, 5 and 6 (P+Q).

    Read the article

  • Whitepaper: The Socially Enabled Enterprise

    - by Richard Lefebvre
    Sharing the results of our new executive study, which explored the opportunities and challenges global organizations are facing in the transition to becoming socially enabled enterprises. Oracle, Leader Networks, and Social Media Today recently conducted an online survey of over 900 Marketing and IT executives to understand how companies are leveraging social technologies and practices throughout their organizations. Read Now!

    Read the article

  • Default /etc/apt/sources.list?

    - by piemesons
    I need the default sources.list for Ubuntu 10.04. Can anybody help me? Here is mine:

        # Ubuntu supported packages
        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse

        # Canonical Commercial Repository
        deb http://archive.canonical.com/ubuntu lucid partner
        deb http://archive.canonical.com/ubuntu lucid-backports partner
        deb http://archive.canonical.com/ubuntu lucid-updates partner
        deb http://archive.canonical.com/ubuntu lucid-security partner
        deb http://archive.canonical.com/ubuntu lucid-proposed partner
        deb-src http://archive.canonical.com/ubuntu lucid partner
        deb-src http://archive.canonical.com/ubuntu lucid-backports partner
        deb-src http://archive.canonical.com/ubuntu lucid-updates partner
        deb-src http://archive.canonical.com/ubuntu lucid-security partner
        deb-src http://archive.canonical.com/ubuntu lucid-proposed partner

        # medibuntu
        deb http://packages.medibuntu.org/ lucid free non-free
        deb-src http://packages.medibuntu.org/ lucid free non-free

        # PlayOnLinux
        deb http://deb.playonlinux.com/ lucid main

        # opera
        deb http://deb.opera.com/opera/ lenny non-free

        # google
        deb http://dl.google.com/linux/deb/ stable non-free main

        # Dropbox Official Source
        deb http://linux.dropbox.com/ubuntu karmic main

        # Skype
        deb http://download.skype.com/linux/repos/debian/ stable non-free

    This is the error I am getting from sudo apt-get update:

        Get:9 http://dl.google.com stable/main Packages [1,076B]
        Err http://ppa.launchpad.net lucid/main Packages  404 Not Found
        Get:10 http://dl.google.com stable/main Packages [735B]

    and finally:

        Fetched 9,724B in 3s (2,645B/s)
        W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz 404 Not Found
        E: Some index files failed to download, they have been ignored, or old ones used instead.

    Read the article

  • SSH main process ended

    - by Khaled
    I have a running Ubuntu Server 10.04.1. When I tried to log in to the server via SSH, I could not; instead, I got a "connection refused" error. I tried to ping the machine and got a reply! So, the clear reason is that the SSH daemon was stopped. After a reboot, I was able to log in to my server via SSH again. Some time later, I looked at my logs in /var/log/syslog and found the following records:

        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2465) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2469) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2473) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2477) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2481) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2485) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2489) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2493) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2497) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2501) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh respawning too fast, stopped

    I searched for a similar problem/solution. Some people said that this is caused by the SSH daemon trying to start before networking, and they suggest changing ListenAddress in /etc/ssh/sshd_config to 0.0.0.0. I think this is not the cause in my case, because my problem occurs after the system is up and running. Any idea what is causing this? This is an Ubuntu server, and it should be running and accessible remotely via SSH.

    Read the article

  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
    I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS 2008 to TFS 2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS 2008 to TFS 2010 using the RTM version of TFS? One of our clients is taking the plunge with TFS 2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member who starts work when most of the Sydney workers are heading home, as I can do the upgrade without impacting them. The downside is that if you hit any blockers, you can be pretty sure that everyone who could deal with your problem is asleep.

    I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one that did the install, so I thought it prudent to make sure that it was OK.

    Verifying Team Foundation Server 2010

    We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root about for any errors.

    Figure: Even the SQL setup looks good. I don't know how Adam did it!

    Backing up the Team Foundation Server 2008 Databases

    As we are moving from one server to another (the recommended method), we will be taking a backup of our TFS 2008 databases and restoring them to the SQL Server for the new TFS 2010 server. Do not just detach and reattach; this will cause problems with the version of the database. If you are running a test migration, you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you back up the databases. This will stop any inadvertent check-ins or changes to TFS 2008.

    Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database.

    It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade, as there is always one developer who has not finished and starts screaming. This time it was John Liu who needed 10 more minutes to make his changes and check in, so I always give it 30 minutes and see if anyone screams.

        John Liu [SSW] said: are you doing something to TFS :-O
        MrHinsh [SSW UK][VS ALM MVP] said: I have stopped TFS 2008 as per my emails
        John Liu [SSW] said: haven't finish check in @_@ can we have it for 10mins? :)
        MrHinsh [SSW UK][VS ALM MVP] said: TFS 2008 has been started
        John Liu [SSW] said: I love you!

        - IM conversation at TFS upgrade +25 minutes

    After John confirmed that he had everything done, I turned IIS off again and made a cup of tea. There were no more screams, so the upgrade could continue.

    Figure: Back up all of the databases for TFS and include Reporting Services, just in case.

    Figure: Check that all the backups have been taken.

    Once you have your backups, you need to copy them to your new TFS 2010 server and restore them. This is a good way to proceed, because if we have any problems, or just plain run out of time, then you just turn the TFS 2008 server back on, and all you have lost is one upgrade day, not 10 developer days.

    As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to:

        TFS 2008 file count:
        Type    Count
        1       1845
        2       15770

        Areas & Iterations: 139

    You can use this to verify that the upgrade was successful. It should, however, be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process.
    Restore Team Foundation Server 2008 Databases

    Restoring the databases is much more time-consuming than just attaching them, as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am doing latest unless I encounter any problems.

    Figure: Restore each of the databases to either the latest or a specific point in time.

    Figure: Restore all of the required databases.

    Now that all of your databases are restored, you need to upgrade them to Team Foundation Server 2010.

    Upgrade Team Foundation Server 2008 Databases

    This is probably the easiest part of the process. You need to call a fire-and-forget command that will go off to the specified database, find the TFS 2008 databases and upgrade them to 2010. During this process all 6 of the main TFS 2008 databases are merged into the TfsVersionControl database, upgraded, and then the database is renamed to TFS_[CollectionName]. The rename applies only to the database and not the physical files, so it is worth going back and renaming the physical file as well. This keeps everything neat and tidy.

    If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. It is used by the clients, and they can get a little confused if there are two servers with the same one.

    To kick off the upgrade, open a command prompt, change the path to "C:\Program Files\Microsoft Team Foundation Server 2010\Tools" and run the "import" command of "tfsconfig":

        TfsConfig import /sqlinstance:<Previous TFS Data Tier>
                         /collectionName:<Collection Name>
                         /confirmed

        Imports a TFS 2005 or 2008 data tier as a new project collection.
        Important: This command should only be executed after adequate backups have
        been performed. After you import, you will need to configure portal and
        reporting settings via the administration console.

        EXAMPLES
        --------
        TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed
        TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed

        OPTIONS:
        --------
        sqlinstance      The sql instance of the TFS 2005 or 2008 data tier. The TFS
                         databases at that location will be modified directly and will
                         no longer be usable as previous version databases. Ensure you
                         have back-ups.
        collectionName   The name of the new Team Project Collection.
        confirmed        Confirm that you have backed-up databases before importing.

    This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade, as the total database size was under 700 MB. This was unlike the upgrade of SSW's production database, with over 17 GB of data, which took a few hours. At the end of the process you should get no errors and no warnings:

        The Upgrade operation on the ApplicationTier feature has completed. There were 0 errors and 0 warnings.

    As this is a new server and not a pure upgrade, there should not be a problem with the GUID. If you think at any point you will be doing this more than once, for example doing a test migration, or merging many TFS 2008 instances into a single one, then you should go back and rename the physical TfsVersionControl.mdf file to the same name as the new collection.
    This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files, then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this.

    Figure: Stop the collection so TFS does not take a wobbly when we detach the database.

    When you try to start the new collection again, you will get a conflict with project names and will be required to remove the test upgrade collection. This is fine; it just needs to be detached.

    Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new collection again.

    You will now be able to start the new upgraded collection, and you are ready for testing. Do you remember the stats we took off the TFS 2008 server?

        TFS 2008 file count:
        Type    Count
        1       1845
        2       15770

        Areas & Iterations: 139

    Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control.

        TFS 2010 file count:
        Type    Count
        1       19288

        Areas & Iterations: 139

    Lovely: the number of iterations is the same, and the number of files is bigger. Just what we were looking for.

    Testing the Upgraded Team Foundation Server 2010 Project Collection

    Can we connect to the new collection and project?

    Figure: We can connect to the new collection and project.

    Figure: Make sure you can connect to the upgraded projects and that you can see all of the files.

    Figure: Team Web Access is there and working.

    Note that for Team Web Access you now use the same port and URL as for TFS 2010. So in this case, as I am running on the local box, you need to use http://localhost:8080/tfs, which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client, you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname], and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection, and in both VS2008 and VS2005 you will need to install the forward compatibility updates:

        Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
        Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

    To make sure that you have everything up to date, run SSW Diagnostics and get all green ticks.

    Upgrade Done!

    At this point you can send out a notice to everyone that the upgrade is complete and give them the connection details. You need to remember that at this stage we have a 2008 project upgraded to run under TFS 2010, but it is still running under the same process template that it ran under before. You can only "enable" 2010 features in a process template; you can't upgrade it. So what to do? Well, you need to create a new project and migrate the things you want to keep across. Source code is easy, as you can move or branch, but work items are more difficult, since you can't move them between projects. This instance is complicated further because the old project uses the Conchango/EMC Scrum for Team System template, and I will need to write a script/application to get the work items across with their attachments intact. That is my next task!
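    As a starting point for that work-item migration script, a minimal sketch using the TFS 2010 client API might look like the following. The collection URL, project names and WIQL query are placeholders, and the attachment handling is a simplification that would need testing against a real server; this is an assumption about how such a tool could be structured, not the actual script:

        using System;
        using System.IO;
        using System.Net;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class WorkItemMigrator
        {
            static void Main()
            {
                // Connect to the upgraded collection (placeholder URL)
                var collection = new TfsTeamProjectCollection(
                    new Uri("http://localhost:8080/tfs/DefaultCollection"));
                var store = collection.GetService<WorkItemStore>();

                // Pull every work item from the old project (placeholder names)
                var oldItems = store.Query(
                    "SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = 'OldProject'");
                var targetProject = store.Projects["NewProject"];

                foreach (WorkItem oldItem in oldItems)
                {
                    // Copy maps matching fields onto a new work item of the target type.
                    // NOTE: with a template change, a same-named type may not exist, so a
                    // type/field mapping would be needed here instead of reusing the name.
                    var targetType = targetProject.WorkItemTypes[oldItem.Type.Name];
                    WorkItem newItem = oldItem.Copy(targetType);

                    // Re-attach files by downloading and re-uploading each attachment
                    using (var client = new WebClient { UseDefaultCredentials = true })
                    {
                        foreach (Attachment att in oldItem.Attachments)
                        {
                            var localPath = Path.Combine(Path.GetTempPath(), att.Name);
                            client.DownloadFile(att.Uri, localPath);
                            newItem.Attachments.Add(new Attachment(localPath, att.Comment));
                        }
                    }

                    newItem.Save();
                }
            }
        }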

    Read the article

  • Upgrading SharePoint MOSS 2007 Farm to SharePoint 2010: "waiting to get a lock to upgrade the farm"

    - by Wes Weeks
    My first in-place upgrade of a MOSS 2007 farm to SharePoint 2010 went pretty smoothly. I had read the pre-upgrade documentation and was comfortable with the steps, and since it was a fairly new installation of MOSS, changes were minimal and I wasn't anticipating too many problems.

    The one issue I hit came after installing the software on all of the farm's servers. I went to the first machine, which ran SharePoint 2010 Central Administration, and ran the SharePoint 2010 Products Configuration Wizard. I received the message that I would need to run the configuration on each server in the farm. Fair enough; I expected as much. The wizard completed without issue on the first server, but when I tried to run it on the others, it hung with a "waiting to get a lock to upgrade the farm" message. It hung for about 10 minutes, and then the wizard failed. I did a few searches on Google and Bing and got 0 results for that message. None, nothing, zilch. I was on my own...

    For grins, I hit the help button on the configuration wizard, and it seemed to indicate that the configuration wizard needed to be run on all farm servers simultaneously. So I started it again on the first server up to the point where I got the message about needing to run it on all servers in the farm, then started the wizard on the other servers and ran each of them to that same point. I then clicked OK on the first server and afterwards on the subsequent servers. It took a while, and it did hang on the lock message for some time, but then it kicked off and completed successfully on all of them. Yeah!

    Hope this helps someone else! Now there should be at least one post with this error message on it!

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system.

    The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters, and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2 GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration, as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn't possibly have been less pleased than me.

    At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4 GB 64-bit laptop, to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server, and I could function reasonably well with this approach.

    Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results; it might have been just me having issues rather than any failure of those technologies to adequately support the requirements.

    My first run at working with the new BI stack configuration was to set up a 64-bit 4 GB laptop with a dual boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at that TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time, and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't.

    Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010, and decided to follow suit on my Windows 7 Ultimate 64-bit laptop.
    The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way.

    DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and is not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help.

    I won't provide step-by-step instructions in this post, as other resources provide those details, but I will give an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal. Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, mainly to confirm that I was performing the steps in the proper sequence. I didn't perform the steps in Part 1, because those steps apply only to a server operating system, which I am not running on my laptop. For the same reason, the instructions in Part 2 won't work exactly as written. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info:

    - You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client (see the sketch of that edit just after this list).
    - You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install, plus a command-line instruction to execute which enables the required Windows features.
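    For reference, the config.xml edit mentioned in the first bullet is the AllowWindowsClientInstall setting. Assuming the standard MSDN guidance applies, you add one Setting element inside the Configuration element of the config.xml found under the Setup folder of the copied media:

        <Configuration>
          <!-- ...existing Package and Setting elements stay as they are... -->
          <Setting Id="AllowWindowsClientInstall" Value="True"/>
        </Configuration>

    Without this setting, SharePoint's setup refuses to run on a client operating system.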
    I will digress for a moment to save you some grief regarding the sequence of steps to perform. I discovered later that a step missing from the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error:

        Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

    I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.) You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint, because it installs automatically as part of the prerequisites.

    When it was time to start the installation of SharePoint, I deviated from both the MSDN instructions and the PowerPivot-Info instructions:

    - On the "Choose the installation you want" page of the installation wizard, I chose Server Farm.
    - On the Server Type page, I chose Complete.
    - At the end of the installation, I did not run the configuration wizard.

    Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end. It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint, but I have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have it installed. Continue reading to see how I accomplished my objective.

    I uninstalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again and approached the installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:

    1. Install SQL Server 2008 R2 to get a database engine instance installed.
    2. Run the SharePoint configuration wizard to set up the SharePoint databases.
    3. In Central Administration, create a Web application using classic-mode authentication, as per the TechNet article on PowerPivot Authentication and Authorization.
    4. Follow the steps in How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note: you must launch setup using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step, because it tells you where to look in Central Administration to confirm a successful deployment.

    I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled "Step 1: Create a target application and set the credentials", both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon. Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server.
However, I have yet to test this in my current environment.

I did have several stops and starts throughout this process and have edited those out to spare you from reading non-essential information. I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community who will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.

    Read the article

  • Microsoft launches IE9 preview – No support for XP

    - by samsudeen
    Microsoft launched the developer preview version of Internet Explorer 9 (IE9) at the MIX 10 web conference yesterday. This release is aimed at getting feedback from website designers, developers, and the wider community to make IE9 development better than its previous versions. Microsoft will update the developer preview every eight weeks, and the next update is expected in mid-March. So what is new and interesting about IE9?

    Chakra
    Chakra, the new scripting engine of IE9, renders JavaScript much faster compared to IE8 and other browsers, improving performance significantly. According to Microsoft, Chakra renders JavaScript in the background on a separate thread, parallel to the main engine, which is a completely new way of rendering compared to current browser technologies.

    Standards
    Microsoft is determined to make (surprisingly!!!) IE9 compliant with web standards by supporting open standards, such as accelerated HTML5 video and support for new web technologies such as CSS3 and SVG2.

    ACID3 Test
    IE9 scores 55/100 in the latest ACID3 test, which is much better than the IE8 score (22/100), but nowhere near its rivals Chrome, Opera, and Safari, which score 100/100 in ACID3 testing.

    I am a little disappointed that I was not able to download the developer preview on my XP machine. The early comments look very positive for IE9. If you want to explore IE9, check out the Microsoft test drive site at Microsoft IE9 Test-drive. You can also download the IE9 developer preview at Download Preview.

    Read the article

  • Setting Up GLFW3 in Visual Studio

    - by sm81095
    I decided a couple of days ago that I was going to start trying to develop games in C++ with OpenGL, instead of C# Monogame like I have been doing for a while. I was looking around for libraries to use, to make OpenGL a little easier to use. I settled on GLEW and GLFW. GLEW was a super easy copy/paste, but GLFW3 was not. After looking around for a while and fighting with CMake, I got the glfw3.lib file created, added the additional include directories and library directories, and linked my program to the glfw3.lib file I had just created. The problem is, I get these linker errors when I try to run or build my program:

    Error 1 error LNK2019: unresolved external symbol _glfwInit referenced in function _main C:\Codex Interactive\Projects\OGLTest\OGLTest\test.obj OGLTest
    Error 2 error LNK2019: unresolved external symbol _glfwTerminate referenced in function _main C:\Codex Interactive\Projects\OGLTest\OGLTest\test.obj OGLTest
    Error 3 error LNK2019: unresolved external symbol _glfwSetErrorCallback referenced in function _main C:\Codex Interactive\Projects\OGLTest\OGLTest\test.obj OGLTest

    and 10 other LNK2019 errors, all referring to some glfw method, as well as:

    Error 14 error LNK1120: 13 unresolved externals C:\Codex Interactive\Projects\OGLTest\Debug\OGLTest.exe 1 1 OGLTest

    at the very bottom of the error list. I've looked up most of these errors on their own, and the solutions I find either do nothing to solve the problem, or are people commenting on how dumb people are for not being able to solve this linker problem. Any assistance in resolving these errors would be greatly appreciated. Info: I built GLFW3 with CMake for Visual Studio 11, 32 bit and 64 bit, and both threw the same errors. The only extra libraries I linked were opengl32.lib, glu32.lib, and glfw3.lib. Here is the test code (from GLFW3's latest tutorial):
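    The original listing did not survive the copy here. As a stand-in, the following is a minimal sketch along the lines of the GLFW3 quick-start tutorial; the window size and title are placeholders, not the poster's original values. Unresolved externals like those above typically mean the glfw3.lib being linked was built for a different architecture or toolset than the project, rather than a problem in code like this:

      #include <GLFW/glfw3.h>
      #include <cstdio>

      // Report any GLFW errors to stderr.
      static void error_callback(int error, const char* description)
      {
          std::fprintf(stderr, "GLFW error %d: %s\n", error, description);
      }

      int main(void)
      {
          glfwSetErrorCallback(error_callback);

          if (!glfwInit())   // initialize the library
              return -1;

          // Create a window and its OpenGL context (size/title are placeholders).
          GLFWwindow* window = glfwCreateWindow(640, 480, "OGLTest", NULL, NULL);
          if (!window)
          {
              glfwTerminate();
              return -1;
          }
          glfwMakeContextCurrent(window);

          // Basic event/render loop.
          while (!glfwWindowShouldClose(window))
          {
              glClear(GL_COLOR_BUFFER_BIT);  // GL symbols come in via glfw3.h
              glfwSwapBuffers(window);
              glfwPollEvents();
          }

          glfwDestroyWindow(window);
          glfwTerminate();
          return 0;
      }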

    Read the article

  • Sublinear Extra Space MergeSort

    - by hulkmeister
    I am reviewing basic algorithms from a book called Algorithms by Robert Sedgewick, and I came across a problem in MergeSort that I am, sad to say, having difficulty solving. The problem is below:

    Sublinear Extra Space. Develop a merge implementation that reduces the extra space requirement to max(M, N/M), based on the following idea: Divide the array into N/M blocks of size M (for simplicity in this description, assume that N is a multiple of M). Then, (i) considering the blocks as items with their first key as the sort key, sort them using selection sort; and (ii) run through the array merging the first block with the second, then the second block with the third, and so forth.

    My difficulty with the problem is that, based on the idea Sedgewick recommends, the following set of blocks will not be sorted: {0, 10, 12}, {3, 9, 11}, {5, 8, 13}. The algorithm I use is the following:

    Divide the full array into subarrays of size M.
    Run Selection Sort on each of the subarrays.
    Merge each of the subarrays using the method Sedgewick recommends in (ii). (This is where I encounter the problem of where to store the results after the merge; see the sketch after this question.)

    This leads to wanting to increase the size of the auxiliary space needed to handle at least two subarrays at a time (for merging), but based on the specifications of the problem, that is not allowed. I have also considered using the original array as space for one subarray and using the auxiliary space for the second subarray. However, I can't envision a solution that does not end up overwriting the entries of the first subarray. Any ideas on other ways this can be done?

    NOTE: If this is supposed to be on StackOverflow.com, please let me know how I can move it. I posted here because the question is academic.
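    A sketch of the mechanics the exercise seems to intend (hypothetical code, not Sedgewick's): merging two adjacent blocks needs only an M-sized buffer, because copying the left block out lets the merge write back in place, and the write index can never overtake the unread part of the right block. Running it on the counterexample above confirms that a single left-to-right sweep does leave the array unsorted, which suggests the sweep must be repeated or the invariant rethought:

      #include <algorithm>
      #include <cstddef>
      #include <vector>

      // Merge adjacent blocks a[lo, lo+m) and a[lo+m, lo+2m) using only an
      // m-sized buffer: copy the left block out, then merge back in place.
      // The write index w never passes j, so no unread element is clobbered.
      static void mergeAdjacentBlocks(std::vector<int>& a, std::size_t lo, std::size_t m)
      {
          std::vector<int> aux(a.begin() + lo, a.begin() + lo + m);  // extra space: m
          std::size_t i = 0, j = lo + m, w = lo;
          while (i < m)
          {
              if (j < lo + 2 * m && a[j] < aux[i]) a[w++] = a[j++];
              else                                 a[w++] = aux[i++];
          }
          // Whatever remains of the right block is already in its final place.
      }

      // Steps (i) and (ii) of the exercise: sort each block, selection-sort the
      // blocks on their first keys (swapping whole blocks), then merge neighbors.
      void blockMerge(std::vector<int>& a, std::size_t m)
      {
          const std::size_t blocks = a.size() / m;   // assume a.size() % m == 0
          for (std::size_t b = 0; b < blocks; ++b)   // std::sort stands in for
              std::sort(a.begin() + b * m, a.begin() + (b + 1) * m); // selection sort
          for (std::size_t b = 0; b + 1 < blocks; ++b)
          {
              std::size_t min = b;                   // selection sort of blocks
              for (std::size_t k = b + 1; k < blocks; ++k)
                  if (a[k * m] < a[min * m]) min = k;
              if (min != b)
                  std::swap_ranges(a.begin() + b * m, a.begin() + (b + 1) * m,
                                   a.begin() + min * m);
          }
          for (std::size_t b = 0; b + 1 < blocks; ++b)
              mergeAdjacentBlocks(a, b * m, m);      // one left-to-right sweep
      }

    On the question's example with m = 3, blockMerge produces {0, 3, 9, 5, 8, 10, 11, 12, 13} - still unsorted after one sweep, matching the observation above.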

    Read the article

  • Finalized Ubuntu 13.10 Releases are now Available for Download

    - by Akemi Iwaya
    The long wait for the latest stable version of Ubuntu is finally over. Now you can download your favorite UI version of Ubuntu 13.10, try out the Phone Edition, and grab a copy of the official manual using the compiled set of download links we have put together for your convenience.

    Download Links
    Ubuntu 13.10 Unity Edition (Desktop) Note: You may need to scroll down the page part way to find the download link.
    Ubuntu 13.10 GNOME Edition (Desktop)
    Ubuntu 13.10 Kubuntu Edition (Desktop)
    Ubuntu 13.10 Xubuntu Edition (Desktop)
    Ubuntu 13.10 Lubuntu Edition (Desktop)
    Ubuntu 13.10 Server Edition Note: You may need to scroll down the page part way to find the download link.

    Phone Edition
    For those who are adventurous and want to give the Phone Edition a try, you can learn more details about it and download it via the links below. Keep in mind that this particular release is still aimed more at developers, industry partners, and enthusiasts than at general usage at this time.
    Instructions for Installing Ubuntu on a Phone Note: This page also lists the two devices currently supported for installation.
    Download the Ubuntu 13.10 Phone Edition [Ubuntu Phone Edition Reference via The Next Web]

    Bonus! You can download the official manual for the new release as well! When you visit the download page, use the three options/choices to get the particular version of the manual you want.
    Download the ‘Getting Started with Ubuntu 13.10’ Manual [Ubuntu Manual Reference via Softpedia]

    Read the article

  • SQLAuthority News – Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning Training

    - by pinaldave
    Last 3 days to register for the courses. This is a one-time offer with a big discount. The deadline for course registration is 5th May, 2010. Two different courses are offered by Solid Quality Mentors:

    1) Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning – Pinal Dave
    Date: May 12-14, 2010
    Price: Rs. 14,000/person for 3 days
    Discount Code: ‘SQLAuthority.com’
    Effective Price: Rs. 11,000/person for 3 days

    2) SharePoint 2010 – Joy Rathnayake
    Date: May 10-11, 2010
    Price: Rs. 11,000/person for 2 days
    Discount Code: ‘SQLAuthority.com’
    Effective Price: Rs. 8,000/person for 2 days

    Download the complete PDF brochure. To register, either send an email to [email protected] or call +91 95940 43399. Feel free to drop me an email at pinal “at” SQLAuthority.com for any additional information and clarification.

    Training Venue: Abridge Solutions, #90/B/C/3/1, Ganesh GHR & MSY Plaza, Vittalrao Nagar, Near Image Hospital, Madhapur, Hyderabad – 500 081.

    Additionally, there is a special program, SolidQ India Insider. This is only available to the first few registrants of the courses. Read more details about the course here. Read my TechEd India 2010 experience here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • 500 Metro Style WP7 Icons

    - by Bil Simser
    I was inspired by The Noun Project, a project that offers up “Metro-style” icons in SVG format. The project is licensed under a public domain license and while it’s a great project, all of the content is in SVG format. Jon Galloway has a great post (from 2007) talking about the differences between SVG and XAML, so I highly recommend that for some background. I thought it would be helpful to the WPF/Windows Phone 7/Silverlight community to provide the content in alternative formats for use in your applications.

    The Goods
    I’ve put together a package of the 500 icons (502 actually) in PNG, XAML and the original SVG format, along with a couple of sample projects so you can see them in action: a WPF desktop app and a Windows Phone 7 app.

    Building It
    To get all the content, I first wrote up a quick program to suck down the original SVG files. Luckily they’re all in a common path, just named 1.SVG, 2.SVG, and so on. Easy sleazy to grab the contents. Once I had 500 SVG files I used the latest copy of XamlTune, an open source CodePlex project that has a command line conversion tool to convert the directory of SVG files into XAML (the tool also created a PNG file of each SVG, so that’s just icing on the cake).

    Conversions
    The conversion from SVG to XAML isn’t 100%. While you can just drop the content into a WPF app, it doesn’t work that way for WP7. There are just some small adjustments I made to each format, so you’ll have to do the same. Follow the information below or refer to the sample applications. As a sample, here’s an icon we want to use. Here’s the original SVG file:

      <svg version="1.0" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
           x="0px" y="0px" width="100px" height="94.616px" viewBox="0 0 100 94.616"
           enable-background="new 0 0 100 94.616" xml:space="preserve">
        <path d="M25.076,15.639c4.324,0.009,7.824-3.488,7.82-7.82C32.9,3.512,29.4,0.012,25.076,0c-4.313,0.012-7.814,3.512-7.821,7.819 C17.262,12.15,20.763,15.648,25.076,15.639L25.076,15.639z"/>
        <path d="M4.593,43.388h6.861l4.137-15.135h1.716L13.22,43.388h24.318l-4.389-15.135h1.817l2.32,7.415 c1.08,3.131,3.852,3.851,6.003,1.162l8.375-10.142c2.651-3.42-2.104-7.021-4.844-4.035l-4.993,5.952 c0.007,0.095-0.96-3.278-0.96-3.278c-1.135-3.978-4.918-7.903-10.595-7.922H19.576c-5.071,0.019-9.043,4.434-9.888,7.214 L4.593,43.388L4.593,43.388z"/>
        <polygon points="56.206,22.753 56.206,7.163 49.192,7.163 49.192,22.753 56.206,22.753 "/>
        <path d="M79.87,15.738c4.332-0.014,7.831-3.516,7.82-7.82c0.011-4.332-3.488-7.833-7.82-7.82c-4.306-0.013-7.806,3.488-7.821,7.82 C72.064,12.222,75.564,15.725,79.87,15.738L79.87,15.738z"/>
        <path d="M89.759,89.556v-43.19h5.751V22.804c0.007-3.079-2.757-5.448-6.71-5.449H70.436c-3.65,0.001-4.539,1.186-5.551,2.168 L49.597,37.889c-3.098,3.848,2.428,8.333,5.55,4.743L69.88,25.226v64.43c-0.019,6.475,9.06,6.686,9.081,0.201v-36.58h1.765v36.379 C80.748,96.109,89.772,96.13,89.759,89.556L89.759,89.556z"/>
        <polygon points="100,54.035 100,45.155 0,45.155 0,54.035 100,54.035 "/>
      </svg>

    Here’s the XAML that XamlTune created.
    It can be used in any WPF app without any changes:

      <Canvas Name="Layer_1" Width="100" Height="94.616" ClipToBounds="True"
              xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
        <Path Fill="#FF000000">
          <Path.Data>
            <PathGeometry FillRule="Nonzero" Figures="M25.076,15.639C29.4,15.648 32.9,12.151 32.896,7.819 32.9,3.512 29.4,0.012 25.076,0 20.763,0.012 17.262,3.512 17.255,7.819 17.262,12.15 20.763,15.648 25.076,15.639L25.076,15.639z" />
          </Path.Data>
        </Path>
        <Path Fill="#FF000000">
          <Path.Data>
            <PathGeometry FillRule="Nonzero" Figures="M4.593,43.388L11.454,43.388 15.591,28.253 17.307,28.253 13.22,43.388 37.538,43.388 33.149,28.253 34.966,28.253 37.286,35.668C38.366,38.799,41.138,39.519,43.289,36.83L51.664,26.688C54.315,23.268,49.56,19.667,46.82,22.653L41.827,28.605C41.834,28.7 40.867,25.327 40.867,25.327 39.732,21.349 35.949,17.424 30.272,17.405L19.576,17.405C14.505,17.424,10.533,21.839,9.688,24.619L4.593,43.388 4.593,43.388z" />
          </Path.Data>
        </Path>
        <Path Fill="#FF000000">
          <Path.Data>
            <PathGeometry FillRule="Nonzero" Figures="M56.206,22.753L56.206,7.163 49.192,7.163 49.192,22.753 56.206,22.753z" />
          </Path.Data>
        </Path>
        <Path Fill="#FF000000">
          <Path.Data>
            <PathGeometry FillRule="Nonzero" Figures="M79.87,15.738C84.202,15.724 87.701,12.222 87.69,7.918 87.701,3.586 84.202,0.0849999999999991 79.87,0.097999999999999 75.564,0.084999999999999 72.064,3.586 72.049,7.918 72.064,12.222 75.564,15.725 79.87,15.738L79.87,15.738z" />
          </Path.Data>
        </Path>
        <Path Fill="#FF000000">
          <Path.Data>
            <PathGeometry FillRule="Nonzero" Figures="M89.759,89.556L89.759,46.366 95.51,46.366 95.51,22.804C95.517,19.725,92.753,17.356,88.8,17.355L70.436,17.355C66.786,17.356,65.897,18.541,64.885,19.523L49.597,37.889C46.499,41.737,52.025,46.222,55.147,42.632L69.88,25.226 69.88,89.656C69.861,96.131,78.94,96.342,78.961,89.857L78.961,53.277 80.726,53.277 80.726,89.656C80.748,96.109,89.772,96.13,89.759,89.556L89.759,89.556z" />
          </Path.Data>
        </Path>
        <Path Fill="#FF000000">
          <Path.Data>
            <PathGeometry FillRule="Nonzero" Figures="M100,54.035L100,45.155 0,45.155 0,54.035 100,54.035z" />
          </Path.Data>
        </Path>
      </Canvas>

    The XAML works AS-IS in a WPF application, but there are some changes I made to get it to work in a WP7 app.
    Here’s the modified XAML in a WP7 application:

      <Canvas Grid.Row="0" Grid.Column="0" Name="Icon_1" Width="100" Height="94.616">
        <Path Fill="#FF000000" Data="M25.076,15.639C29.4,15.648 32.9,12.151 32.896,7.819 32.9,3.512 29.4,0.012 25.076,0 20.763,0.012 17.262,3.512 17.255,7.819 17.262,12.15 20.763,15.648 25.076,15.639L25.076,15.639z">
        </Path>
        <Path Fill="#FF000000" Data="M4.593,43.388L11.454,43.388 15.591,28.253 17.307,28.253 13.22,43.388 37.538,43.388 33.149,28.253 34.966,28.253 37.286,35.668C38.366,38.799,41.138,39.519,43.289,36.83L51.664,26.688C54.315,23.268,49.56,19.667,46.82,22.653L41.827,28.605C41.834,28.7 40.867,25.327 40.867,25.327 39.732,21.349 35.949,17.424 30.272,17.405L19.576,17.405C14.505,17.424,10.533,21.839,9.688,24.619L4.593,43.388 4.593,43.388z">
        </Path>
        <Path Fill="#FF000000" Data="M56.206,22.753L56.206,7.163 49.192,7.163 49.192,22.753 56.206,22.753z">
        </Path>
        <Path Fill="#FF000000" Data="M79.87,15.738C84.202,15.724 87.701,12.222 87.69,7.918 87.701,3.586 84.202,0.0849999999999991 79.87,0.097999999999999 75.564,0.084999999999999 72.064,3.586 72.049,7.918 72.064,12.222 75.564,15.725 79.87,15.738L79.87,15.738z">
        </Path>
        <Path Fill="#FF000000" Data="M89.759,89.556L89.759,46.366 95.51,46.366 95.51,22.804C95.517,19.725,92.753,17.356,88.8,17.355L70.436,17.355C66.786,17.356,65.897,18.541,64.885,19.523L49.597,37.889C46.499,41.737,52.025,46.222,55.147,42.632L69.88,25.226 69.88,89.656C69.861,96.131,78.94,96.342,78.961,89.857L78.961,53.277 80.726,53.277 80.726,89.656C80.748,96.109,89.772,96.13,89.759,89.556L89.759,89.556z">
        </Path>
        <Path Fill="#FF000000" Data="M100,54.035L100,45.155 0,45.155 0,54.035 100,54.035z">
        </Path>
      </Canvas>

    All I did was take the data portion and put it directly into a Data attribute on the Path. Note that while it does show up in the app (on the emulator or device), it wouldn’t show up in Visual Studio for me. Maybe some XAML guru out there can tell me why. You can just as easily use the PNG files in WP7, but if you want the crispness of vector graphics, go for the XAML version. Of course, with XamlTune being open source, you could always modify the output of that program to cater it to your app. If you do make a change that’s worthy, please consider submitting a patch to the project so everyone can benefit. Hope this helps and happy programming!

    Resources and Links
    Sample Project and Icons
    XamlTune, an open source project to convert SVG to XAML
    The Noun Project, source of the original files
    Jon Galloway's post on SVG and XAML
    StackOverflow question on converting SVG to XAML

    Read the article

  • Enterprise SharePoint 2010 Hosting, SharePoint Foundation 2010 Hosting, SharePoint Standard 2010 Hosting, Michigan

    - by Michael J. Hamilton, Sr.
    Sclera, a Microsoft Hosted Services Provider Partner, is offering key Service Offerings around the Microsoft SharePoint Server 2010 stack. Specifically, if you’re looking for SharePoint Foundation, SharePoint Standard, or Enterprise 2010 hosting provisions, check out the Service Offerings from Sclera Hosting (www.sclerahosting.com) and compare with some of the lowest prices available on the web today. I wanted to post this so you could shop around and compare.

    There are a couple of larger on-demand hosting agencies (247hosting and fpweb hosting) that charge outrageous fees – like $350 a month for SharePoint Foundation 2010 hosting. The most incredible part? This is on a shared domain name – not the client’s domain. It’s hosting on something like http://<yourSiteName>.sharepointsites.com – or something crazy like that.

    Sclera Hosting provides you on demand – SharePoint Foundation, SharePoint Server Standard/Enterprise – 2010 RTM bits – within minutes of your order – ON YOUR DOMAIN – and that is a major perk for me. You have complete SharePoint Designer 2010 integration and complete support for custom assemblies, web parts, you name it – this hosting provider gives you more bang for the buck than any provider on the Net today.

    Now, some teasers – I was in a meeting this week and I heard: SharePoint Foundation – 2010 RTM bits – unlimited users, 10 GB content database quota, full SharePoint Designer 2010 integration/support, all on the client’s domain – sit down and soak this up – $175.00 per month – no kidding. Now, I do not know about you, but I have not seen a deal like that EVER on the Net – so get over to www.sclerahosting.com – or email the Sales Team at Sclera Design, Inc. today for more details. Have a great weekend!

    Read the article

  • How can I install a 32bit python on 64 bit Ubuntu

    - by moose
    I am using Ubuntu 10.10 (Linux pc07 2.6.35-27-generic #48-Ubuntu SMP Tue Feb 22 20:25:46 UTC 2011 x86_64 GNU/Linux) and the default python package (Python 2.6.6). I would like to install python-psyco to improve the performance of one of my scripts, but only python-psyco-doc is available for 64 bit. I tried a virtual machine, but the performance boost is much less on the virtual machine than on a "real" installed 32-bit Ubuntu. So my question is: How can I install a 32-bit Python with psyco on my 64-bit Ubuntu machine?

    edit: I've found this article and did this:

    Download "Python 2.7.1 bzipped source tarball" from http://python.org/download/
    Go into the directory where you decompressed "Python 2.7.1"
    $ OPT=-m32 LDFLAGS=-m32 ./configure --prefix=/opt/pym32
    $ make

    But I got this error:

    gcc -pthread -m32 -Xlinker -export-dynamic -o python \
    Modules/python.o \
    libpython2.7.a -lpthread -ldl -lutil -lm
    libpython2.7.a(posixmodule.o): In function `posix_tmpnam':
    /home/moose/Downloads/Python-2.7.1/./Modules/posixmodule.c:7346: warning: the use of `tmpnam_r' is dangerous, better use `mkstemp'
    libpython2.7.a(posixmodule.o): In function `posix_tempnam':
    /home/moose/Downloads/Python-2.7.1/./Modules/posixmodule.c:7301: warning: the use of `tempnam' is dangerous, better use `mkstemp'
    Segmentation fault
    make: *** [sharedmods] Error 139

    edit2: Now I've found http://indefinitestudies.org/2010/02/08/how-to-build-32-bit-python-on-ubuntu-9-10-x86_64/ and it seems like this worked:

    $ cd Python-2.7.1
    $ CC="gcc -m32" LDFLAGS="-L/lib32 -L/usr/lib32 \
    -L`pwd`/lib32 -Wl,-rpath,/lib32 -Wl,-rpath,/usr/lib32" \
    ./configure --prefix=/opt/pym32
    $ make
    $ sudo make install

    But installing psyco didn't work:

    Download the latest snapshot: http://psyco.sourceforge.net/download.html
    Extract it and go into the folder
    $ python setup.py install

    This error appeared:

    PROCESSOR = 'ivm'
    running install
    running build
    running build_py
    running build_ext
    building 'psyco._psyco' extension
    gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DALL_STATIC=1 -Ic/ivm -I/usr/include/python2.6 -c c/psyco.c -o build/temp.linux-x86_64-2.6/c/psyco.o
    In file included from c/psyco.c:1:
    c/psyco.h:9: fatal error: Python.h: No such file or directory
    compilation terminated.
    error: command 'gcc' failed with exit status 1

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 – Part2

    - by pinaldave
    The best part of having a blog is that the SQL community helps keep it running with new ideas. Earlier I wrote about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative, a very popular article on that subject. I had used variables for the "number of rows" and "number of pages". A blog reader sent me an email saying that in their organization these values are stored in a table, and asking whether the new syntax can read the data from a table. Absolutely YES! Here is the quick script:

    USE AdventureWorks2008R2
    GO
    CREATE TABLE PagingSetting (RowsPerPage INT, PageNumber INT)
    INSERT INTO PagingSetting (RowsPerPage, PageNumber)
    VALUES(10,5)
    GO
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET (SELECT RowsPerPage*PageNumber FROM PagingSetting) ROWS
    FETCH NEXT (SELECT RowsPerPage FROM PagingSetting) ROWS ONLY
    GO

    This is really an easy trick. I also wrote a blog post comparing the performance over here: SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Paging

    Read the article

  • SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010 – Next Pune

    - by pinaldave
    My recent SQL Server Performance Tuning Seminar in Colombo was oversubscribed with a total of 35 attendees. You can read the details over here: SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4 – 5, 2010. I recently completed another seminar in Hyderabad, which was again a blazing success. We had 25 attendees at the seminar and had a wonderful time together.

    There is one thing very different between the usual classroom training and this seminar series. This seminar series is 100% demo-oriented and dives deep into real-world scenarios; we do not do the usual theory talk. The goal of the seminar is to give anybody who attends a jump start and a deep dive on the performance tuning subject. I share many different examples and scenarios from my years of performance tuning experience. The beginning of the second day is always interesting, as I take an attendee's server as the example for the talk, and together we attempt to identify the bottleneck and see if we can resolve it. So far I have received excellent feedback on this unique session, where we pick a database of the attendees and address the issues. I plan to do the same again in the next sessions.

    The next seminar is in Pune, and I am very excited about it.

    Date and Time: December 4-5, 2010. 10 AM to 6 PM
    The Pride Hotel
    05, University Road, Shivaji Nagar, Pune – 411 005
    Tel: 020 255 34567

    Click here for the agenda of the seminar. Instead of writing more details, I will let the photos do the talking for the latest Hyderabad Seminar.

    Photos: Hotel Amrutha Castle; King Arthur's Court; Pinal Presenting Seminar; Pinal Presenting Seminar; Seminar Attendees; Pinal Presenting Seminar; Group Photo of Hyderabad Seminar Attendees; Seminar Support Staff - Nupur and Shaivi

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

< Previous Page | 709 710 711 712 713 714 715 716 717 718 719 720  | Next Page >