Search Results

Search found 5987 results on 240 pages for 'nested sets'.


  • Azure - Part 4 - Table Storage Service in Windows Azure

    - by Shaun
    In the Windows Azure platform there are three storage services we can use to save our data in the cloud: Table, Blob and Queue. Before the Chinese New Year Microsoft announced that Azure SDK 1.1 had been released, and it supports a new type of storage – Drive – which allows us to operate on NTFS files in the cloud. I will cover it in the coming few posts, but for now I would like to talk a bit about the Table Storage.   Concept of Table Storage Service The most common development scenario is to retrieve, create, update and remove data from the data storage. Normally we communicate with a database. When we attempt to move our application over to the cloud, the most common requirement is to have a storage service. Windows Azure provides a built-in service that allows us to store structured data, which is called the Windows Azure Table Storage Service. The data stored in the table service is a collection of entities, and the entities are similar to rows or records in a traditional database. An entity has a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key and the timestamp as the identifier for solving the concurrency problem. Unlike a table in a database, the table service does not enforce a schema for its tables, which means you can have two entities in the same table with different property sets. The partition key is used for load balancing by the Azure OS and for entity group transactions. As you know, in the cloud you never know which machine is hosting your application and your data. It could be moved based on the transaction weight and the number of requests. If the Azure OS finds that there are many requests hitting your Book entities with the partition key equal to “Novel”, it will move them to another idle machine to increase performance. So when choosing the partition key for your entities you need to make sure it indicates the category or group information, so that the Azure OS can perform the load balancing as you wish.   Consuming the Table Although the table service looks like a database, you cannot access it the way you are used to, through either ADO.NET or ODBC. The table service exposes itself through the ADO.NET Data Services protocol, which allows you to consume it in a RESTful style via HTTP requests. The Azure SDK provides a set of classes for us to connect to it. There are two classes we might need: TableServiceContext and TableServiceEntity. The TableServiceContext inherits from the DataServiceContext, which represents the runtime context of the ADO.NET data service. It provides four methods mainly used by us: CreateQuery: creates an IQueryable instance from a given type of entity. AddObject: adds the specified entity into the Table Service. UpdateObject: updates an existing entity in the Table Service. DeleteObject: deletes an entity from the Table Service. Before you operate on the table service you need to provide valid account information. It’s something like the connection string of a database, but with the account name and the account key you received when you created the storage service on the Windows Azure Development Portal. After getting the CloudStorageAccount you can create the CloudTableClient instance, which provides a set of methods for using the table service. A very useful method is CreateTableIfNotExist. It will create the table container for you if it doesn’t exist yet.
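    As a hedged sketch of the setup just described (the original post showed this code only as screenshots, and the table name "Account" is an assumption for illustration), the Azure SDK 1.1 StorageClient calls might look like this:

        // C# sketch: prepare the storage account and table container
        // using the Azure SDK 1.1 StorageClient library.
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public static class TableSetup
        {
            public static CloudTableClient CreateClient()
            {
                // Development storage for now; swap in the real account string later.
                CloudStorageAccount account =
                    CloudStorageAccount.Parse("UseDevelopmentStorage=true");

                // The table client exposes table-level operations such as CreateTableIfNotExist.
                CloudTableClient tableClient = account.CreateCloudTableClient();
                tableClient.CreateTableIfNotExist("Account");
                return tableClient;
            }
        }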
    And then you can operate on the entities in that table through the methods I mentioned above. Let me explain a bit more through an example; we always prefer code to sentences.   Straightforward Access to the Table Here I would like to build a WCF service on the Windows Azure platform, and for now with just one requirement: it should allow the client to create an account entity in the table service. The WCF service will have a method named Register that accepts an instance of the account which the client wants to create. After performing some validation it will add the entity into the table service. So the first thing I should do is to create a Cloud Application in my Visual Studio 2010 RC. (The Azure SDK 1.1 only supports VS2008 and VS2010 RC.) The solution structure should look like the one below. Then I added a configuration item for the storage account through the Settings section under the cloud project. (Double-click the Services file under the Roles folder and navigate to the Settings section.) This setting will be used to retrieve my storage account information. Since for now I am just in the development phase, I will select “UseDevelopmentStorage=true”. And then I navigated to the WebRole.cs file under my WCF project. If you have read my previous posts you will know that this file defines what the process does when the application starts and terminates on the cloud. What I need to do is, when the application starts, set the configuration publisher to load my config file with the config name I specified. So the code is like the snippet below. I removed the original service and contract created by the VS template and added my IAccountService contract and its implementation class - AccountService. And I added the service method Register with the parameters email and password; it returns a boolean value to indicate the result, which is very simple. At this moment if I press F5 the application will be established on my local development fabric and I can see that my service runs well through the browser. Let’s implement the service method Register, which adds a new entity to the table service. As I said before, the entities you want to store in the table service must have 3 properties: partition key, row key and timestamp. You could create a class with these 3 properties yourself, but the Azure SDK provides us a base class for that named TableServiceEntity in the Microsoft.WindowsAzure.StorageClient namespace. So what we need to do is much simpler: create a class named Account and let it derive from TableServiceEntity. And I need to add my own properties: Email, Password, DateCreated and DateDeleted. The DateDeleted is a nullable date-time value to indicate whether this entity has been deleted and when. Did you notice that I missed something here? Yes, it’s the partition key and row key that I didn’t assign. The TableServiceEntity base class defines 2 constructors: one is a parameterless constructor, which will be used to fill values into the properties from the table service when retrieving data; the other takes 2 parameters: partition key and row key. As I said above, the partition key may affect the load balancing and the row key must be unique, so here I would like to use the email as the partition key and the email plus a Guid as the row key. OK, now we have finished the entity class we need to store in the table service. The next step is to create a data access class for us to add it. The Azure SDK gives us a base class for it named TableServiceContext, as I mentioned above. So let’s create a class to operate on the Account entities.
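    A minimal sketch of such an entity class, assuming the member names described above (the original post showed the class only as a screenshot):

        // C# sketch: a table entity derived from the SDK base class.
        using System;
        using Microsoft.WindowsAzure.StorageClient;

        public class Account : TableServiceEntity
        {
            // Parameterless constructor used by the table service when materializing entities.
            public Account()
            {
            }

            // Partition key = email, row key = email plus a Guid, as described in the post.
            public Account(string email)
                : base(email, string.Format("{0}_{1}", email, Guid.NewGuid()))
            {
                Email = email;
            }

            public string Email { get; set; }
            public string Password { get; set; }
            public DateTime DateCreated { get; set; }
            public DateTime? DateDeleted { get; set; }
        }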
    The TableServiceContext needs the storage account information for its constructor. It’s the combination of the storage service URI that we will create on the Windows Azure platform, and the relevant account name and key. The TableServiceContext will use this information to find the related address and verify the account before operating on the storage entities. Hence in my AccountDataContext class I need to override this constructor and pass the storage account into it. All entities are saved in table storage in one or more tables, which we call “table containers”. Before we operate on an entity we need to make sure that the table container has been created on the storage. There’s a method we can use for that: CloudTableClient.CreateTableIfNotExist. So in the constructor I will perform it first, to make sure every method is invoked after the table has been created. Notice that I passed the storage account endpoint URI and the credentials to specify where my storage is located and who I am. Another piece of advice: make your entity class name the same as the table name when you create the table. It will increase the performance when you operate on it over the cloud, especially when querying. Since the Register WCF method will add a new account into the table service, here I will create a relevant method to add the account entity. Before implementing it, I should add a reference - System.Data.Services.Client - to the project. This assembly provides some common methods of the ADO.NET Data Services client which can be used with the Windows Azure Table Service. I will use its AddObject method to create my account entity. Since the table service does not fully implement the ADO.NET Data Services protocol, there are some methods in System.Data.Services.Client that the TableServiceContext doesn’t support, such as AddLinks, etc. Then I implemented the service method to add the account entity through the AccountDataContext. You can see that in the service implementation I load the storage account information through my configuration file and create the account table entity from the parameters. Then I create the AccountDataContext. If it’s the first time I invoke this method, the constructor of the AccountDataContext will create a table container for me. Then I use the Add method to add the account entity into the table. Next, let’s create a fairly simple client application to test this service. I created a Windows console application and added a service reference to my WCF service. The metadata information of the WCF service cannot be retrieved if it’s deployed on Windows Azure, even though <serviceMetadata httpGetEnabled="true"/> has been set. If we need to get its metadata we can deploy it on the local development service and then change the endpoint to the address which is on the cloud. In the client-side app.config file I specified the endpoint to the local development fabric address. And then I just implemented the client to let me input an email and a password, and then invoke the WCF service to add my account. Let’s run my application and see the result. Of course it should return TRUE to me. And in the local SQL Express I can see the data has been saved in the table.   Summary In this post I explained more about the Windows Azure Table Storage Service. I also created a small application for a demonstration of how to connect to and consume it through the ADO.NET Data Services managed library provided within the Azure SDK. I only showed how to create an entity in the storage service.
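    A hedged sketch of the data access class described above; the class shape and member names are assumptions consistent with the post, not the author's exact code:

        // C# sketch: a data access context derived from TableServiceContext (Azure SDK 1.1).
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public class AccountDataContext : TableServiceContext
        {
            // Assumed table name, matching the entity class name as advised in the post.
            private const string TableName = "Account";

            public AccountDataContext(CloudStorageAccount storageAccount)
                : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
            {
                // Make sure the table container exists before any other method is used.
                storageAccount.CreateCloudTableClient().CreateTableIfNotExist(TableName);
            }

            public void Add(Account account)
            {
                // AddObject comes from the underlying ADO.NET Data Services client;
                // SaveChanges sends the insert to the table service.
                this.AddObject(TableName, account);
                this.SaveChanges();
            }
        }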
    In the next post I would like to explain how to query the entities with conditions through LINQ. I also would like to refactor my AccountDataContext class to make it dynamic for any kind of entity.   Hope this helps, Shaun   All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • What is Linq?

    - by Aamir Hasan
    LINQ is a new way data can be retrieved in .NET. LINQ provides a uniform way to retrieve data from any object that implements the IEnumerable<T> interface. With LINQ, arrays, collections, relational data, and XML are all potential data sources. Why LINQ? With LINQ, you can use the same syntax to retrieve data from any data source: var query = from e in employees where e.id == 1 select e.name. The middle level represents the three main parts of the LINQ project: LINQ to Objects is an API that provides methods that represent a set of standard query operators (SQOs) to retrieve data from any object whose class implements the IEnumerable<T> interface. These queries are performed against in-memory data. LINQ to ADO.NET augments the SQOs to work against relational data. It is composed of three parts: LINQ to SQL (formerly DLinq) is used to query relational databases such as Microsoft SQL Server; LINQ to DataSet supports queries by using ADO.NET data sets and data tables; LINQ to Entities is a Microsoft ORM solution, allowing developers to use Entities (an ADO.NET 3.0 feature) to declaratively specify the structure of business objects and use LINQ to query them. LINQ to XML (formerly XLinq) not only augments the SQOs but also includes a host of XML-specific features for XML document creation and queries. What You Need to Use LINQ LINQ is a combination of extensions to .NET languages and the class libraries that support them. To use it, you’ll need the following: Obviously LINQ itself, which is available in the Microsoft .NET Framework 3.5 that you can download at http://go.microsoft.com/?linkid=7755937. You can speed up your application development time with LINQ using Visual Studio 2008, which offers visual tools such as the LINQ to SQL designer and Intellisense support for LINQ’s syntax. Optionally, you can download the Visual C# 2008 Express Edition tool at www.microsoft.com/vstudio/express/download. It is the free edition of Visual Studio 2008 and offers a lot of LINQ support such as Intellisense and the LINQ to SQL designer. To use LINQ to ADO.NET, you need SQL
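    As a small illustrative sketch (not from the original article) of the LINQ to Objects query shape mentioned above, using a hypothetical Employee class:

        // C# sketch: LINQ to Objects over an in-memory collection.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Employee
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var employees = new List<Employee>
                {
                    new Employee { Id = 1, Name = "Aamir" },
                    new Employee { Id = 2, Name = "Hasan" }
                };

                // Same query shape as in the article: filter by id, project the name.
                var query = from e in employees
                            where e.Id == 1
                            select e.Name;

                foreach (var name in query)
                    Console.WriteLine(name);
            }
        }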

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.   Part 2: SQL Server Memory Management As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:   1. Conventional Memory Model   The Conventional model is the default SQL Server memory model and has the following properties: - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions. - OS uses 4K pages – (not to be confused with SQL Server “pages” which are 8K regions of committed memory).- Pageable - Can be paged out to disk by the operating system.   2. Locked Page Model The locked page memory model is set when SQL Server is started with "Lock Pages in Memory" privilege*. It has the following characteristics: - Dynamic - can grow or shrink its working set in the same way as the Conventional model.- OS uses 4K pages - Non-Pageable – When memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system. A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model.   This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.   * Note in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions) the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-Bit standard edition it also requires trace flag 845 to be set, in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.   3. Large Page Model The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by: - Static - memory is allocated at startup and does not change. - OS uses large (>2MB) pages - Non-Pageable The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.   When does SQL Server grow? For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:   - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. 
    In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents page allocations of any size and also managed CLR procedure allocations.   - Memory signals from the OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.   To summarize, for SQL Server to grow you need three conditions: a workload, a max server memory setting higher than the current allocation, and high memory signals from the OS.    When does SQL Server shrink caches? SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.   - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop.   To free memory SQL Server does the following: it frees unused memory, and it notifies Memory Manager Clients to release memory (caches free unreferenced cache objects; the buffer pool releases memory based on oldest access times). The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.    - Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.    Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks.   Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.   How does SQL Server respond to changing OS memory?    In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V dynamic memory.   SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly.   In “Denali”, Standard Edition will also have sqlservr.exe support for hot-add memory when running virtualized (i.e. 
    detecting and acting on Hyper-V Dynamic Memory allocations).   How does a SQL Server workload in a guest VM impact Hyper-V dynamic memory scheduling?   When a SQL workload causes the sqlserver.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?   If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above).    This raises another question. Can we make SQL Server release memory more readily and hence behave more "dynamically" without compromising performance? In certain circumstances where the application workload is predictable it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability but it is something we're looking into.   What Memory Management changes are there in SQL Server “Denali”?   In SQL Server “Denali” (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes page allocations of any size and managed CLR procedure allocations, so it now represents a closer approximation to total sqlserver.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.   Another important change is that there is no more AWE or hot-add support for the 32-bit edition. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).   In part 3 we’ll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Update on ASP.NET MVC 3 RC2 (and a workaround for a bug in it)

    - by ScottGu
    Last week we published the RC2 build of ASP.NET MVC 3.  I blogged a bunch of details about it here. One of the reasons we publish release candidates is to help find those last “hard to find” bugs. So far we haven’t seen many issues reported with the RC2 release (which is good) - although we have seen a few reports of a metadata caching bug that manifests itself in at least two scenarios: Nullable parameters in action methods have problems: When you have a controller action method with a nullable parameter (like int? – or a complex type that has a nullable sub-property), the nullable parameter might always end up being null - even when the request contains a valid value for the parameter. [AllowHtml] doesn’t allow HTML in model binding: When you decorate a model property with an [AllowHtml] attribute (to turn off HTML injection protection), the model binding still fails when HTML content is posted to it. Both of these issues are caused by an over-eager caching optimization we introduced very late in the RC2 milestone.  This issue will be fixed for the final ASP.NET MVC 3 release.  Below is a workaround step you can implement to fix it today. Workaround You Can Use Today You can fix the above issues with the current ASP.NET MVC 3 RC2 release by adding one line of code to the Application_Start() event handler within the Global.asax class of your application (shown in the sketch at the end of this excerpt). That line sets the ModelMetadataProviders.Current property to use the DataAnnotationsModelMetadataProvider.  This causes ASP.NET MVC 3 to use a meta-data provider implementation that doesn’t have the more aggressive caching logic we introduced late in the RC2 release, and prevents the caching issues that cause the above issues to occur.  You don’t need to change any other code within your application.  Once you make this change the above issues are fixed.  You won’t need to have this line of code within your applications once the final ASP.NET MVC 3 release ships (although keeping it in also won’t cause any problems). Hope this helps – and please keep any reports of issues coming our way, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
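    The original post showed that single line inside a screenshot; as a hedged reconstruction, the Application_Start() in Global.asax.cs would look roughly like this (the area, filter and route registration calls are the standard MVC 3 project-template ones, included only for context):

        // C# sketch: Global.asax.cs workaround for the RC2 metadata caching bug.
        protected void Application_Start()
        {
            // Use the non-caching metadata provider to avoid the RC2 caching issues.
            ModelMetadataProviders.Current = new DataAnnotationsModelMetadataProvider();

            AreaRegistration.RegisterAllAreas();
            RegisterGlobalFilters(GlobalFilters.Filters);
            RegisterRoutes(RouteTable.Routes);
        }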

    Read the article

  • Is there a way to track data structure dependencies from the database, through the tiers, all the way out to a web page?

    - by Sean Mickey
    When we design applications, we generally end up with the same tiered sets of data structures: A persistent data structure that is described using DDL and implemented as RDBMS tables and columns. A set of domain objects that consist primarily of data structures, usually combined with business-rule level logic, that are implemented in a programming language such as Java. A set of service layer interfaces that directly support use case implementations (which use the domain data structures as parameters), implemented as EJBs or something equivalent in another programming language. UI screens that allow users to Create, Retrieve, Update, and (maybe) Delete all manner of data structures and graphs of data structures, with numerous screens and with multiple UI widgets, all structured to support the same data structures. But if you want to change the data structures in any of these tiers, it always seems extremely difficult to assess the impact(s) the change will have across the application. UML can help, but tracing through diagram after diagram is not a real solution to this problem. The best I have ever seen was a homespun data tracking spreadsheet document that listed all of the data structures and walked the relationships from tier to tier. Is there a tool or accepted approach that makes it easy to identify a data structure in any tier and easily obtain a list of all dependent: database table and column data structures; domain object data structures; service layer interface methods and parameter data structures; screen & UI component data structures?

    Read the article

  • Formatting Keywords to UPPERCASE In Oracle SQL Developer

    - by thatjeffsmith
    I received this question from a customer today, and it took me more than a few minutes to remember where this preference was located in SQL Developer. That tells me the topic is ripe for blogging. How do I go FROM: select * from scott.emp where ename like '%JEFF%' TO: SELECT * FROM scott.emp WHERE ename LIKE '%JEFF%' It’s all in the formatting You need to access the formatting preferences under the Tools menu. It takes a bit of navigating to get there, so bear with me: Tools > Database > SQL Formatter > Oracle Formatting > click ‘Edit’ on the profile > Other > Case change: ‘Keywords Uppercase’. It’s easy to find once you know where to look. You can tell it to leave the case alone, uppercase everything, uppercase only the keywords, or lowercase everything. Accessing the Formatter Options We allow separate formatting options for different RDBMSes. You need to make sure you’re accessing the ‘Oracle Formatting’ page in the preferences. You can then choose to edit the default options OR you can do what I have done – save the defaults as a new set of options. I’ve called my profile ‘JeffCustom.’ I can now switch back and forth through different sets of formatting options. You need to hit the ‘Edit’ button to get to the formatting options editor. A good number of people seem to miss this. Select your profile, then hit the ‘Edit’ button.

    Read the article

  • Idea for a physics–computer science joint curriculum and textbook

    - by Ami
    (I apologize in advance if this question is off topic or too vague) I want to write (and have started outlining) a physics textbook which assumes its reader is a competent computer programmer. Normal physics textbooks teach physical formulas and give problems that are solved with pen, paper and calculator. I want to provide a book that emphasizes computational physics, how computers can model physical systems, and gives problems of the kind: write a program that can solve a set of physics problems based on user input. Third party open source libraries would be used to handle most of the computation and I want to use a high-level language like Java or C#. Besides the fact that I'd enjoy working on this, I think a physics-computer science joint curriculum should be offered in schools, and this is part of a larger agenda to make this happen. I think physics students (like myself) should be learning how to use and leverage computers to solve abstract problems and sets of problems. I think programming languages should be thought of as a useful medium for engaging in many areas of inquiry. Is this an idea worth pursuing? Is the merger of these two subjects in the form of an undergraduate college curriculum feasible? Are there any specific tools I should be leveraging or pitfalls I should be aware of? Has anyone heard of college courses or otherwise that assume this methodology? Are there any books/textbooks out there like the one I'm describing (for physics or any other subject)?

    Read the article

  • Vision, Integration, Ability—Oracle is once again positioned as an E-Commerce Leader

    - by Jeri Kelley
    The new Gartner report is the fifth successive Magic Quadrant for E-Commerce to position Oracle as a leader. We’re proud of the result, but we’re not too surprised. Oracle Commerce’s functionality is uniquely aligned with a number of the major market trends Gartner describes in its report: from customers ‘expecting a seamless buying experience across all channels’, to organizations seeking to consolidate ‘B2B and B2C applications with a single underlying platform’. What we think sets Oracle Commerce apart Why are we a leader? We believe the key strengths of Oracle Commerce include: Outstanding Scalability and Versatility: Oracle has a long and enviable track record of delivering B2B and B2C e-commerce solutions, and the Oracle Commerce solution supports a broad range of vertical industries – from retail to telecom, and manufacturing to distribution. Additionally, Oracle Commerce is engineered to scale simply and quickly to meet the changing needs of the enterprise. Oracle Integration: Our commitment to seamless solutions integration allows customers to get the most from our ever evolving range of e-commerce and CX products—and deliver consistent, relevant, and personalized cross-channel buying experiences that drive customer satisfaction, and boost revenue. Experience and Vision: Oracle has a long and impressive history of delivering B2B and B2C e-commerce solutions to the world’s best brands. We’re constantly putting this experience to good use, and making our solutions even smarter. With powerful merchandising and business tools, and advanced promotions capabilities, Oracle Commerce is one of the most forward-thinking e-commerce solutions around. Read the report: You can read Gartner’s full report here, or click here to find out more about our celebrated platform.

    Read the article

  • Frederick .NET User Group June 2010 Meeting

    - by John Blumenauer
    FredNUG is pleased to announce our June speaker will be Pete Brown.  Pete was one of FredNUG’s first speakers when the group started and we’re very happy to have him visiting us again to present on Silverlight!  On June 15th @ 6:30 PM, we’ll start with a Visual Studio 2010 Launch with pizza, swag and a presentation about what makes Visual Studio 2010 great.  Then, starting at 7 PM, Pete Brown will present “What’s New in Silverlight 4.”  It looks like an evening filled with newness!   The scheduled agenda is:   6:30 PM - 7:15 PM – Visual Studio 2010 Launch Event plus Pizza/Social Networking/Announcements 7:15 PM - 8:30 PM - Main Topic: What’s New in Silverlight 4 with Pete Brown  Main Topic:  What’s New in Silverlight 4? Speaker Bio: Pete Brown is a Senior Program Manager with Microsoft on the developer community team led by Scott Hanselman, as well as a former Microsoft Silverlight MVP, INETA speaker, and RIA Architect for Applied Information Sciences, where he worked for over 13 years. Pete's focus at Microsoft is the community around client application development (WPF, Silverlight, Windows Phone, Surface, Windows Forms, C++, Native Windows API and more). From his first sprite graphics and custom character sets on the Commodore 64 to 3d modeling and design through to Silverlight, Surface, XNA, and WPF, Pete has always had a deep interest in programming, design, and user experience. His involvement in Silverlight goes back to the Silverlight 1.1 alpha application that he co-wrote and put into production in July 2007. Pete has been programming for fun since 1984, and professionally since 1992. In his spare time, Pete enjoys programming, blogging, designing and building his own woodworking projects and raising his two children with his wife in the suburbs of Maryland. Pete's site and blog is at 10rem.net, and you can follow him on Twitter at http://twitter.com/pete_brown Twitter: http://twitter.com/pete_brown Facebook: http://www.facebook.com/pmbrown Pete is a founding member of the CapArea .NET Silverlight SIG. (Visit the CapArea .NET Silverlight SIG here)    8:30 PM - 8:45 PM – RAFFLE! Please join us and get involved in our .NET developers community!

    Read the article

  • eBay Leads Mobile Commerce

    - by David Dorf
    For the first time, more smartphones were shipped than PCs. This important milestone helps reinforce that retailers need a strong mobile commerce strategy. IDC reported that for the 4th quarter of 2010, manufacturers shipped 100.9 million devices versus 92.1 million PCs. One early adopter in the retail industry is eBay, the popular online auction and shopping site. In July 2008 they released their first mobile app and have increased investments ever since. In 2002 they bought PayPal for use with its online channel, but it's becoming a force in the mobile world as well. In June 2010 they acquired RedLaser, the popular barcode scanning mobile app. Both pieces of technology enhance the mobile experience, and are available to other retailers as well. More recently, in December 2010 they acquired Critical Path Software, the developer of their eBay, StubHub, and Shopping.com mobile applications. Taking their mobile development in-house was a clear signal that mobile commerce is important to their strategy. Pop on over to eBay Inc's mobile commerce stats page to see just how well they are doing. You can use the animated map to see where people are using the app on any given day, and you can compare sales of the different categories. eBay's hottest category is Cars & Trucks, garnering 16.5% of the total $2B (yes, billion) in mobile sales in 2010. To understand why that category is so large, let's look at the top 10 most expensive cars sold on eBay mobile in 2010: $240,001 Mercedes-Benz: SLR McLaren; $209,888 Lamborghini: Gallardo; $208,500 Ferrari: 430; $199,900 Lamborghini: Gallardo; $189,000 Lamborghini: Murcielago; $185,000 Ferrari: 430; $175,000 Porsche: 911; $170,000 Ferrari: 550; $160,000 Bentley: Continental GT; $159,900 Lamborghini: Gallardo. eBay claims they sell 3-4 Ferraris on their mobile app each month. Yes, mobile commerce is not limited to small items. While I would wait to get home and fire up the PC, the current generation that has grown up with mobile phones has no issue satisfying their impulses. Dave Sikora of Digby told me he's seen people buy furniture sets, mattresses, and diamonds via their mobile phones. I guess mobile commerce is rapidly becoming the norm.

    Read the article

  • How do I set up pairing email addresses?

    - by James A. Rosen
    Our team uses the Ruby gem hitch to manage pairing. You set it up with a group email address (e.g. [email protected]) and then tell it who is pairing: $ hitch james tiffany Hitch then sets your Git author configuration so that our commits look like commit 629dbd4739eaa91a720dd432c7a8e6e1a511cb2d Author: James and Tiffany <[email protected]> Date: Thu Oct 31 13:59:05 2013 -0700 Unfortunately, we've only been able to come up with two options: [email protected] doesn't exist. The downside is that if Travis CI tries to notify us that we broke the build, we don't see it. [email protected] does exist and forwards to all the developers. Now the downside is that everyone gets spammed with every broken build by every pair. We have too many possible pairs to do any of the following: set up actual [email protected] email addresses or groups (n^2 email addresses) set up forwarding rules for [email protected] (n^2 forwarding rules) set up forwarding rules for [email protected] (n forwarding rules for each of n developers) Does anyone have a system that works for them?

    Read the article

  • CodePlex Daily Summary for Friday, June 11, 2010

    CodePlex Daily Summary for Friday, June 11, 2010New ProjectsBIxPress Community Edition: SSIS Toolset BIDS Addin,Audit,Notify,Deploy,Template: BI xPress is a BIDS Addin/standalone application for SQL Developer/DBA. This tool has many features including Auditing, Notification, Deployment, P...C# Shell (cash): Cash is a command-line interpreter (shell), written in C#. It is part of an project to produce tools which replace the traditional GNU/Linux user-l...Chernarus Life Revivved: Chernarus life revivved for arma 2 multiplayerDKAL: DKAL is a distributed authorization policy language. This project contains an engine for running DKAL policies. It is implemented primarly in F#.ImageResizer for Albulle: Application permettant de déployer le contenu pour une la galerie photos php Albulle. L'objectif est de faciliter et automatiser le redimensionnem...Intraweb Active Directory Authentication Demo: A simple demo created using Delphi 7 and Intraweb 9.0.42 with the Active Directory Helper interface to authenticate a user to Active Directory. ...Lokad CQRS - build scalable web sites and enterprise solutions on Windows Azure: Lokad CQRS helps to build scalable cloud applications for Windows Azure. It provides time-proven guidance and .NET Application Blocks to help arc...LRS: Write a concise, reader-focused summary. Write a concise, reader-focused summary. Write a concise, reader-focused summary. ManagementPeople99: ...MEDILIG - MEDICAL LIFE GUARD: Cross-platform EHR/EMR software for the design, implementation and use of autonomous, open database models for multilingual clinical data managemen...NETris - ASP .NET, AJAX, Web Service based Tetris Game: Tetris game with business logic provided via Web Services.Peace Through Force: Peace Through Force is a 2D side scrolling tactical shooter developed in XNA Game Studio 3.1 using C#. The game is targeted to be made available o...Powwa: Util to show battery status and cpu speed on laptop. Uses jWMI's for interfacing with Windows' WMI.Resource Management System: Resource Management system for Internal purpose using WPF,WCF and SilverlighttChat - ASP.NET, Ajax & Web Service based Chat Room: Simple chat room application. Technology: ASP .NET, Ajax, Web Services and MS SQL Server Database. Uses ASP .NET authentication mechanism.Test Project (ignore): This is used to demonstrate CodePlex at meetings. Please ignore this project.Transform Config: Transfom Config lets you use the new configuration transformation feature in Visual Studio 2010 without performing a publish on a web application p...unsocialcity: trying to make a game for facebook called <projectname> - just seemed like a fun idea http://apps.facebook.com/unsocialcityVorbisPlayer: VorbisPlayer is the audio user control for Silverlight games. It plays loop-sets seamless, it solves the short sound problem, and it can play sound...New ReleasesA Guide to Parallel Programming: Drop 5 - Guide Preface, Chapters 1 - 7, and code: This is Drop 5 with Guide Preface, Chapters 1 - 7, Appendix B, Glossary, and References, and the accompanying code samples. This drop requires Visu...CC.Hearts Screen Saver: CC.Hearts Screen Saver 1.0.10.610: The third release of CC.Hearts Screen Saver. Key features are: Further performance enhancements Keyboard commands (Press ? for help) Help Popu...Fiddler Delayed Responses Extension: v1 beta: UI improvement . drag and drop . session markers . icons . layout Algorithm review . 
Performance issues Thanks to Eric Lawrence for his ideas.FontViewer 2010: FontViewer 2010 (Codename Eraser): This is the installer for the development version ("Eraser") of FontViewer 2010. Because many of the features are under development, functionality...Genuilder: Genuilder 1.2: First release of Genuilder.Extensibility.ImageResizer for Albulle: ImageResizer1.0-bin: Première version. Permet de déployer des images dans un système de fichier (local).imdb movie downloader: myImdb 0.9.5: myImdb 0.9.5KooBoo Image Gallery: Beta 4: This new version has a new example using s3slider script http://www.serie3.info/s3slider/demonstration.html thanks to mshimao Now there are two pl...LogikBug's IoC Container: LogikBug's IoC Container v1.1.1: In this release: I extended the Extensibility namespace. Fixed a few minor issues with the Extensibility namespace.LRS: jlrs: asdfaLRS: jlrs src: jlrs srcMiniTwitter: 1.14: MiniTwitter 1.14 更新内容 修正 リストのインポートをキャンセルしてもタイムラインの名前がリストの名前になるバグを修正 OAuth の承認を取り消した後、再ログインできないバグを修正 インポートするリストを選択せずにインポートをクリックすると落ちるバグを修正 追加 ...NETris - ASP .NET, AJAX, Web Service based Tetris Game: NETris - Source Code and Documentation: Fully functional prototype. Please note that documentation is written in the form of a report as the project was an assignment at Coventry University.Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.10: Minor release, incremental changes to sample website and UI templates.Opalis Community Releases: Integration Pack for Standard OIS Logging: The Integration Pack for Standard OIS Logging provides extended Policy Logging functions to OIS and MSSQL. This Integration Pack adds the followin...PicassoCms: 0.6: More intuitive UI, new controlsPowerAuras: PowerAuras-3.0.0K-beta2: New Auras: Item Name Equipment Slot Tracking Changes from beta1 5 new aura textures Fixed Tracking bug Added graphical equipment slot sele...PowerPivot Sample Data: PowerPivot for Excel Tutorial Sample Data-v.2: The PowerPivot Tutorial Sample Data-Version 2 download includes a variety of data sources that you can use to complete the tutorial in the PowerPiv...Powwa: First build: Unpack & built with NetBean 6.8Quick Performance Monitor: Version 1.4: Added functionality to add and remove performance counters at run time. Also added saving and loading to file so sets of performance counters can b...Resonance: TrainNode Client Library: Libraries to access the TrainNode ServiceSharePoint 2010 Taxonomy Import Utility: TaxonomyBuilder Version 1.0.2: New Features Added support for additional term labels per term Added support for Term Set Owners Added support for Term Set stakeholders Upda...Silverlight for Umbraco Media Objects (SUMO): Community Tech Preview: The CTP for SUMO is now live, feedback appreciated!Silverlight Reporting: Initial Release: This is the first release of the code. 
It includes the source code from Pete's blog post article on Silverlight reporting.Simple.NET: Simple.Mocking 1.0.0.7: Initial version of a new mocking framework for .NET Revision 1: Expect.AnyInocationOn<T>(T target) changed to Expect.AnyInocationOn(object target...Smart Voice: Smart Voice 0.2.2: Changelog: Fixed more bugs Added a readme into the archiveSoulHackers Demon Unite(Chinese version): WPFClient pre alpha 2: pre alpha 2, need your feedbackSquiggle - A Free open source Lan Messenger: Squiggle 1.5: File Transfer capability added (With drag/drop support) Message text box maintains history of last 10 messages and you can retrieve them by CTRL+U...SSIS Expression Editor & Tester: Expression Editor and Tester v1.0.1.0: Minor updated release of expression editor tool and editor control. Download and extract the files to get started, no install required. Changes Si...StreamInsight Samples: GregLow HighwayMonitor Samples: Initial Upload of GregLow HighwayMonitor StreamInsight samples. These samples are used in the upcoming free eClinic for StreamInsight and have been...tChat - ASP.NET, Ajax & Web Service based Chat Room: tChat Source Code and Documentation: Functional prototype. T-SQL scripts can be found in the SQL folder. Please note that the documentation is written in a format of a report for a "du...Transform Config: Initial Release: This is the initial release of the project. It's all been thrown together quickly so it's lacking error handling etc, but it's still fully function...UrzaGatherer: UrzaGatherer v2.0.2: Integrate the support of SQL Server Compact Edition 3.5 SP2 for a better portability.Value Injecter: map anything to anything anyway you might imagine: ValueInjecter 1.9: Features map anything to anything flattening unflattening includes sample projects for: asp.net mvc asp.net web-forms win-formsVCC: Latest build, v2.1.30610.0: Automatic drop of latest buildVisual Studio DSite: Picture SlideShow Viewer (Visual C++ 2008): A picture slidershow viewer.VorbisPlayer: VorbisPlayer: The first release of the Silverlight VorbisPlayer, including source code and example files.WCF 4 Templates for Visual Studio 2010: AnonymousOverHttps Template: Produces a WCF service application configured for anonymous calls over HTTPS/SSL. Supplies a BasicHttpBinding default configured for Transport secu...Most Popular ProjectsHaoRan_TokyoTyrantClient.NET Transactional File ManagerSOLID by exampleMemetic NPC Behavior ToolkitSharpotify - Spotify .Net LibraryWCF 4 Templates for Visual Studio 2010SFTP Component for .NET CSharp, VB.NET, and ASP.NETUltimate FTP Component for .NET C#, VB.NET and ASP.NETAnurag Pallaprolu's Code RepositoryBigfootMVCMost Active ProjectsCommunity Forums NNTP bridgejQuery Library for SharePoint Web ServicesRhyduino - Arduino and Managed Codepatterns & practices – Enterprise LibraryNB_Store - Free DotNetNuke Ecommerce Catalog ModuleCassandraemonBlogEngine.NETMediaCoder.NETAndrew's XNA HelpersStyleCop

    Read the article

  • SQL SERVER – 2012 RC0 Various Resources and Downloads

    - by pinaldave
    Microsoft SQL Server 2012 Release Candidate 0 (RC0) Microsoft SQL Server 2012 RC0 enables a cloud-ready information platform that will help organizations unlock breakthrough insights across the organization. Microsoft SQL Server 2012 Express RC Microsoft SQL Server 2012 Express RC0 is a powerful and reliable free data management system that delivers a rich set of features, data protection, and performance for embedded applications, lightweight Web Sites, applications, and local data stores. Microsoft SQL Server 2012 Semantic Language Statistics RC0 The Semantic Language Statistics Database is a required component for the Statistical Semantic Search feature in Microsoft SQL Server 2012 Semantic Language Statistics RC0. Microsoft SQL Server 2012 Release Candidate 0 (RC0) Manageability Tool Kit The Microsoft SQL Server 2012 Release Candidate 0 (RC0) Manageability Tool Kit is a collection of stand-alone packages which provide additional value for Microsoft SQL Server 2012 Release Candidate 0 (RC0). Microsoft SQL Server 2012 PowerPivot for Microsoft Excel 2010 Release Candidate 0 (RC0) Microsoft PowerPivot for Microsoft Excel 2010 provides ground-breaking technology; fast manipulation of large data sets, streamlined integration of data, and the ability to effortlessly share your analysis through Microsoft SharePoint Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Disaster Recovery Plan&ndash;Rebuild System Disk (Dell Server 2900 with PERC RAID controller)

    - by Jim Lahman
    Goal: Since the system disk is a RAID 1 mirrored set, we can rebuild the shadow set by replacing one of the good sets with a blank disk. Steps: Shut down and power down the server. Remove the disk from bay 9, which is part of the system shadow set, and put this disk on the shelf. Insert a blank/old disk into the empty bay (label the new disk before inserting it). Power up the server. During the booting process, the following message appears: “Some configured disks have been removed from your system…” Press ‘C’ to load the Configuration utility, then press ‘Y’ to confirm loading the foreign configuration. In this example, the system shadow set is Disk Group 2. (Before proceeding, confirm this is the disk group in your case.) Expanding the physical disks shows a disk in bay 8 and a missing disk in bay 9. This is correct. Now we have to include the newly inserted disk in this group; the RAID controller reports bay 9 as empty. There may be times when the new disk is seen as a foreign disk. In this case, do the following: the foreign disk is reported in bay 9; press CTRL-N (Next Page) to go to Foreign Mgt; all the disk groups will be displayed, and typically the disk group containing the foreign disk will be grey. To remove the foreign disk, highlight the Controller, press F2, select Foreign, then select Clear (do NOT import the configuration!) to clear the foreign configuration. Now the disk can be brought into the system shadow set disk group as a hot spare. To include the newly inserted disk in the system shadow set disk group, it must be brought in as a hot spare: highlight Disk Group 2 (VD Management), hit F2, select ‘Manage Ded. HS’ (manage dedicated hot spare), select the disk in bay 9 (hit the space bar to select), tab to ‘OK’ and hit the return key. This selects the hot spare to bring into the RAID 1 mirror set, and the rebuild automatically commences. Restart now or restart after the rebuild is completed.

    Read the article

  • TFS Build 2010: BuildNumber and DropLocation

    - by javarg
    Automatic builds for application releases are a current practice in every major development factory nowadays. Using Team Foundation Server Build 2010 to accomplish this offers many opportunities to improve the quality of your releases. The following approach allows us to generate build drop folders that include the BuildNumber and the Changeset or Label provided. Using this procedure we can quickly identify the generated binaries on the drop server with the corresponding version. Branch the DefaultTemplate.xaml and rename it to CustomDefaultTemplate.xaml. Open it for edit (check out). Go to the Set Drop Location activity and edit the DropLocation property. Write the following expression: BuildDetail.DropLocationRoot + "\" + BuildDetail.BuildDefinition.Name + "\" + If(String.IsNullOrWhiteSpace(GetVersion), BuildDetail.SourceGetVersion, GetVersion) + "_" + BuildDetail.BuildNumber Check in the branched template. Now create a build definition named TestBuildForDev using the new template. The previous expression sets the DropLocation with the following format: (ChangesetNumber|LabelName)_BuildName_BuildNumber The first part of the folder name will be the changeset number or the label name (if triggered using labels). Folder names will be generated as follows: C1850_TestBuildForDev_20111117.1 (changesets start with the letter C) LLabelname_TestBuildForDev_20111117.1 (labels start with the letter L) Try launching a build from a changeset and from a label. You can specify a Label in the GetVersion parameter in the Queue New Build wizard by going to the Parameters tab (for labels add the “L” prefix):

    Read the article

  • Software Engineering Practices &ndash; Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum, indeed a large number of projects should probably fall somewhere within the middle; and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining it’s place within the SEMS we can see why: Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss). Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that access (and/or Excel) can do with minimal custom development required. Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below). Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the access database and re-writing it as an actual web or windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. 
    We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).
    At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple of years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.
    The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.
    Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and then come back and re-assess its position, it will almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).
    Developer Skills
    Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer’s skills from junior to intermediate to senior and beyond, they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS.
    We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (exactly which practices and techniques that translates to is constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but, more importantly, being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be to make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Desktop Fun: Music Icon Packs

    - by Asian Angel
    If you really love music and want to liven up your desktop then get ready to create a desktop concert with our Music Icon Packs collection. Note: To customize the icon setup on your Windows 7 & Vista systems see our article here. Using Windows XP? We have you covered here. Sneak Preview: For our desktop example we decided to go with a touch of anime musical fun. The icons used are from the Guitar Icons set shown below. Note: Wallpaper can be found here. An up close look at the icons that we used…
    Notes icon set 1 *.png format only Download
    Notes icon set 2 *.png format only Download
    Notes icon set 3 *.png format only Download
    Notes icon set 4 *.png format only Download
    Big Band Set 1 *.ico format only Download
    Big Band Set 2 *.ico format only Download
    Acoustic Guitars *.ico and .png format Download
    Acoustic Guitar *.ico format only Download
    Guitar Icons *.ico format only Download
    Guiter Skulll *.ico format only Download
    Dented Music *.ico format only Download
    Music Icons *.ico format only Download
    Ipod Mini *.ico format only Download
    MP3 Players Icons *.ico format only Download
    MusicPhones icon *.ico and .png format Download
    Wanting more great icon sets to look through? Be certain to visit our Desktop Fun section for more icon goodness!

    Read the article

  • "Oracle", "Sybase", "SQL Server" vs just "SQL/JDBC" in the CV

    - by bobah
    How would you define a testable measure of expertise that, if you’re honest with yourself, lets you write the words "Oracle", "Sybase", or "SQL Server" in your software developer's CV, and not just "Relational Databases, SQL, JDBC"? What should every XXX developer (XXX being a vendor name) know? The question is similar to http://stackoverflow.com/questions/2119859/questions-every-good-database-sql-developer-should-be-able-to-answer but is vendor-specific. Below is a start of the list as an example, to demonstrate what kind of answers I am hoping to get. If you are an expert in X then you know Y (X - Y below):
    Sybase/SQL Server - they are very similar; Sybase is much more expensive
    Sybase/SQL Server - for Java you can use either the native Sybase/JSQLDB driver or jTDS, which uses the TDS protocol and can connect to SQL Server as well; TDS traffic can be dumped and analyzed with the hexdump command
    Sybase/SQL Server - for C++ you can use FreeTDS to connect to either; same for Perl
    Sybase/SQL Server - a query can return multiple result sets and return codes; all need to be processed, otherwise errors can happen
    Sybase/SQL Server - sp_help, sp_helptext
    Sybase/SQL Server - your tables/views/procedures are under DBName/dbo/...
    Sybase - for C++ on Linux you can use the Sybase client API to connect (at least until recently)
    SQL Server - the JDBC driver has a configurable transparent failover capability
    Oracle - for C++ on Linux one can use OTLv4, a very powerful yet lightweight wrapper around the Oracle client API
    Oracle compilation (contributors: ammoQ): PL/SQL; Java Stored Procedures; '' is null; Hierarchical Query; Analytic Functions; Oracle Text
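    To make the "multiple result sets and return codes" item above concrete, here is a small hedged JDBC sketch (the jTDS URL, credentials and stored procedure name are made-up placeholders, not taken from the question): it consumes every result set and every update count a call produces, which is the discipline that item warns about, since leaving results unread is a classic way to trigger driver errors later.
    import java.sql.*;
    public class ConsumeAllResults {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection details; a jTDS URL is shown purely as an example.
            String url = "jdbc:jtds:sybase://dbhost:5000/mydb";
            try (Connection con = DriverManager.getConnection(url, "user", "secret");
                 Statement stmt = con.createStatement()) {
                // A stored procedure (or batch) may produce several result sets and update counts.
                boolean isResultSet = stmt.execute("exec dbo.some_report_proc");
                while (true) {
                    if (isResultSet) {
                        try (ResultSet rs = stmt.getResultSet()) {
                            while (rs.next()) {
                                System.out.println(rs.getString(1)); // first column of each row
                            }
                        }
                    } else {
                        int count = stmt.getUpdateCount();
                        if (count == -1) {
                            break; // no more result sets and no more update counts
                        }
                        System.out.println("rows affected: " + count);
                    }
                    isResultSet = stmt.getMoreResults(); // advance to the next result, if any
                }
            }
        }
    }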

    Read the article

  • Convert your Hash keys to object properties in Ruby

    - by kerry
    Being a Ruby noob (and having a background in Groovy), I was a little surprised that you cannot access hash objects using dot notation.  I am writing an application that relies heavily on XML and JSON data.  This data will need to be displayed, and I would rather use book.author.first_name over book['author']['first_name'].  A quick search on Google yielded this post on the subject. So, taking the DRYOO (Don’t Repeat Yourself Or Others) concept, I came up with this:
    class ::Hash
      # add accessors to the hash for each key
      def to_obj
        self.each do |k, v|
          v.to_obj if v.kind_of? Hash
          k = k.to_s.gsub(/\.|\s|-|\/|\'/, '_').downcase.to_sym
          self.instance_variable_set("@#{k}", v)  ## create and initialize an instance variable for this key/value pair
          self.class.send(:define_method, k, proc { self.instance_variable_get("@#{k}") })  ## create the getter that returns the instance variable
          self.class.send(:define_method, "#{k}=", proc { |v| self.instance_variable_set("@#{k}", v) })  ## create the setter that sets the instance variable
        end
        return self
      end
    end
    This works pretty well.  It converts each of your keys to properties of the Hash.  However, it doesn’t sit very well with me because I probably will not use 90% of the properties most of the time.  Why should I go through the performance overhead of creating instance variables for all of the unused ones?
    Enter the ‘magic method’ #method_missing:
    class ::Hash
      def method_missing(name, *args, &block)
        return self[name] if key? name
        self.each { |k, v| return v if k.to_s.to_sym == name }
        super
      end
    end
    This is a much cleaner method for my purposes.  Quite simply, it checks to see if there is a key with the given symbol, and if not, loops through the keys and attempts to find one. I am a Ruby noob, so if there is something I am overlooking, please let me know.

    Read the article

  • Google Analytics setting cookies on static content despite being on entirely separate domain

    - by Donald Jenkins
    I recently decided to comply with the YSlow recommendation that static content be hosted on a cookieless domain. As I already use the root of my domain (donaldjenkins.com) to host my website—on which Google Analytics sets a few cookies—that meant I had to move the CNAME URL for the CDN serving the static files from cdn.donaldjenkins.com to an entirely separate, dedicated domain. I purchased cdn.dj (yes, it's a real Djibouti domain name), hosted the files on the root (which contains nothing else, other than a robots.txt file) and set a CNAME of e.cdn.dj for the CDN. This setup works, but I was rather surprised to find that YSlow was still flagging the static files for not being cookie-free: here's a screenshot: The cdn.dj domain was new, and was never used for anything other than hosting these static files. Running httpfox on the site shows the _utma and _utmz Google Analytics cookies are being set on the static files listed above—despite their being hosted on an entirely separate, dedicated domain. Here's my Google Analytics code:
    //Google Analytics tracking code
    var _gaq = [['_setAccount', 'UA-5245947-5'], ['_trackPageview']];
    (function(d, t) {
      var g = d.createElement(t), s = d.getElementsByTagName(t)[0];
      g.src = ('https:' == location.protocol ? '//ssl' : '//www') + '.google-analytics.com/ga.js';
      s.parentNode.insertBefore(g, s);
    }(document, 'script'));
    // [END] Google Analytics tracking code
    I'm not obsessing about this issue—I know it's not really affecting server performance—but I'd like to just understand what is causing it not to go away...

    Read the article

  • Oracle Utilities Application Framework V4.1 Group Fix 4 available

    - by ACShorten
    Oracle Utilities Application Framework V4.1 Group Fix 4 is available from My Oracle Support as Patch 13523301. This Group Fix contains a number of enhancements and brings the fixes up to the latest patch level. The enhancements included in this Group Fix are:
    UI Hints - In previous group fixes of the Oracle Utilities Application Framework the infrastructure to support UI Hints was introduced. This group fix completes the release of this functionality. Prior to this enhancement, products and implementers typically would build at least one UI Map per Business Object to display and/or maintain the object. Whilst this can be generated using the UI Map maintenance function and stored, this enhancement allows additional tags and elements to be added to the Business Object directly to allow dynamic generation of the UI Map for maintaining and viewing the object. This reduces the need to generate and build a UI Map at all for that object, and in turn reduces the effort of maintaining the product and the implementation by eliminating the need to maintain the HTML for the UI Map. This also allows lower-skilled personnel to maintain the system. Help and working examples are available from the View schema attributes and node names option in the Schema Tips dashboard zone. Note: For examples of the hints, refer to any of the following Business Objects: F1_OutcomeStyleLookup, F1-TodoSumEmailType, F1-BOStatusReason or F1-BIGeneralMasterConfig.
    Setting batch log file names - By default the batch infrastructure supplied with the Oracle Utilities Application Framework sets the name and location of the log files to fixed values. In Group Fix 4 a set of user exits has been added to allow implementers and partners to set their own file name and location. Refer to the Release Notes in the download for more details.

    Read the article

  • SQL SERVER – A Funny Cartoon on Index

    - by pinaldave
    Performance Tuning has been my favorite subject and I have done it for many years now. Today I will list one of the most common conversations about indexes that I have heard in my life. Every single time I am at a consultation for performance tuning, I hear the following conversation among various team members. I want to ask you: does this kind of conversation happen in your organization? Anyway, if you think an index solves all of your performance problems, I think that is not true. There are many other factors one has to consider along with indexes. For example, I consider the following topics ones that need to be understood for performance tuning:
    Logical Query Processing
    Efficient Join Techniques
    Query Tuning Considerations
    Avoiding Common Performance Tuning Issues
    Statistics and Best Practices
    TempDB Tuning
    Hardware Planning
    Understanding the Query Processor
    Using SQL Server 2005 and 2008 Updated Feature Sets
    CPU, Memory, I/O Bottlenecks
    Index Tuning (of course)
    Many more…
    Well, I have written this blog post intending to keep it a bit easy and not load it up. I will discuss other performance tuning concepts in the future. Let me know what you think about the cartoon I made. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Humor, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • NetBeans Podcast #61

    - by TinuA
    Download mp3: 39 minutes – 31.6 MB Subscribe to the NetBeans Podcast on iTunes
    NetBeans Community News with Geertjan and Tinu
    What's NEW? The Smarter and NOW FASTER NetBeans IDE 7.2, available since July. Is it faster for you too? Tell us about it on Twitter! (#netbeans)
    NetBeans Community Day at JavaOne is BACK!!! Join the NetBeans team in San Francisco on Sunday, September 30th for a full day of sessions about how various Java EE, JavaFX, and NetBeans Platform experts are using NetBeans in the real world. NetBeans Community Day is just the start of the fun at JavaOne 2012; check out the full listing of ALL NetBeans-related sessions at the conference.
    NetBeans Governance Board elections are around the corner. Nominate yourself or someone who you think can represent the interests of the NetBeans Community. Email us at nbpodcast at netbeans dot org to get on the ballot in September.
    Community Interview: Çagatay Çivici, PrimeFaces
    Çagatay Çivici is the lead architect and founder of PrimeFaces, the popular JSF component library. Find out what the project is about, its inception, how to create PrimeFaces-based applications inside NetBeans IDE, and more. Learn more about PrimeFaces at NetBeans Community Day at JavaOne 2012. Dig deeper into PrimeFaces at JavaOne 2012: CON6139 - Lessons Learned in Building Enterprise and Desktop Applications with the NetBeans IDE
    Community Interview: Timon Veenstra, Agrosense
    Timon Veenstra is the architect behind Agrosense, an open-source farm management system built on the NetBeans Platform. Find out how Agrosense helps farms run more efficiently and productively, and why NetBeans is the platform of choice for Timon and the Agrosense team. Catch a demo of Agrosense at NetBeans Community Day at JavaOne 2012.
    API Design with Jarda Tulach
    Geertjan has been using the Lookup API incorrectly; Jarda sets him on the right path.
    *Have ideas for NetBeans Podcast topics? Send them to nbpodcast at netbeans dot org. *Subscribe to the official NetBeans page on Facebook! Check us out as well on Twitter, YouTube, and Google+.

    Read the article

  • Configuring Request-Reply in JMSAdapter

    - by [email protected]
    Request-Reply is a new feature in the 11g JMSAdapter that helps you achieve the following:
    - It allows you to combine Request and Reply in a single step. In prior releases of the Oracle SOA Suite, you would need to configure two distinct adapters.
    - It performs automatic correlation without you needing to configure BPEL "correlation sets". This works seamlessly in Mediator and BPMN as well.
    In order to configure the JMSAdapter Request-Reply, please follow these steps:
    1) Drag and drop a JMSAdapter onto the "External References" swim lane in your composite editor.
    2) Enter default values for the first few screens in the JMS Adapter wizard till you hit the screen where the wizard prompts you to enter the operation name. Select "Request-Reply" as the "Operation Type" and Asynchronous as the "Operation Name".
    3) Select the Request and Reply queues in the following screens of the wizard. The message will be enqueued in the "Request" queue and the reply will be returned in the "Reply" queue. The reason I have used such a selector is that the back-end system that reads from the request queue and generates the response in the response queue actually generates more than one response, and hence I must use a filter to exclude the unwanted responses.
    4) Select the message schema for the request as well as the response.
    5) Add an <invoke> activity in BPEL corresponding to the JMS Adapter partner link. Please note that I am setting an additional header as my third-party application requires this.
    6) Add a <receive> activity just after the <invoke> and select the "Reply" operation. Please make sure that the "Create Instance" option is unchecked.
    Your completed BPEL process will look something like this:
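    For readers who want to see what the adapter is automating at the JMS API level, here is a hedged plain-Java sketch of the classic request-reply pattern (the JNDI names, payload, and 30-second timeout are assumptions for illustration only, not part of the SOA Suite configuration): the request carries a JMSReplyTo destination and a correlation ID, and the reply consumer uses a message selector on that correlation ID, which is also how unwanted responses in a shared reply queue can be filtered out.
    import javax.jms.*;
    import javax.naming.InitialContext;
    import java.util.UUID;
    public class JmsRequestReplySketch {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext(); // assumes a jndi.properties on the classpath
            ConnectionFactory cf = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory"); // assumed JNDI name
            Queue requestQueue = (Queue) jndi.lookup("jms/RequestQueue");                    // assumed JNDI name
            Queue replyQueue = (Queue) jndi.lookup("jms/ReplyQueue");                        // assumed JNDI name
            Connection con = cf.createConnection();
            con.start();
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // Send the request, telling the back end where to reply and tagging it with a correlation id.
            TextMessage request = session.createTextMessage("<payload>...</payload>");
            request.setJMSReplyTo(replyQueue);
            String correlationId = UUID.randomUUID().toString();
            request.setJMSCorrelationID(correlationId);
            session.createProducer(requestQueue).send(request);
            // Receive only the reply that carries our correlation id; other responses in the queue are ignored.
            String selector = "JMSCorrelationID = '" + correlationId + "'";
            MessageConsumer consumer = session.createConsumer(replyQueue, selector);
            TextMessage reply = (TextMessage) consumer.receive(30000); // wait up to 30 seconds
            System.out.println(reply == null ? "no reply received" : reply.getText());
            con.close();
        }
    }
    Roughly this correlation and selector plumbing is what the Request-Reply operation takes care of for you, which is why no BPEL correlation set needs to be configured.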

    Read the article

  • Understanding Process Scheduling in Oracle Solaris

    - by rickramsey
    The process scheduler in the Oracle Solaris kernel allocates CPU resources to processes. By default, the scheduler tries to give every process relatively equal access to the available CPUs. However, you might want to specify that certain processes be given more resources than others. That's where classes come in. A process class defines a scheduling policy for a set of processes. These three resources will help you understand and manage process classes:
    Blog: Overview of Process Scheduling Classes in the Oracle Solaris Kernel, by Brian Bream. Timesharing, interactive, fair-share scheduler, fixed priority, system, and real time. What are these? Scheduling classes in the Solaris kernel. Brian Bream describes them and how the kernel manages them through context switching.
    Blog: Process Scheduling at the Thread Level, by Brian Bream. The Fair Share Scheduler allows you to dispatch processes not just to a particular CPU, but to CPU threads. Brian Bream explains how to use it and provides examples.
    Docs: Overview of the Fair Share Scheduler, by the Oracle Solaris Documentation Team. This official Oracle Solaris documentation set provides the nitty-gritty details for setting up classes and managing your processes. Covers: Introduction to the Scheduler; CPU Share Definition; CPU Shares and Process State; CPU Share Versus Utilization; CPU Share Examples; FSS Configuration; FSS and Processor Sets; Combining FSS With Other Scheduling Classes; Setting the Scheduling Class for the System; Scheduling Class on a System with Zones Installed; Commands Used With FSS.
    -Rick
    Follow me on: Blog | Facebook | Twitter | Personal Twitter | YouTube | The Great Peruvian Novel

    Read the article

< Previous Page | 120 121 122 123 124 125 126 127 128 129 130 131  | Next Page >