Search Results

Search found 6690 results on 268 pages for 'resource kit'.


  • Parallelism in .NET – Part 15, Making Tasks Run: The TaskScheduler

    - by Reed
    In my introduction to the Task class, I specifically made mention that the Task class does not directly provide its own execution.  In addition, I made a strong point that the Task class itself is not directly related to threads or multithreading.  Rather, the Task class is used to implement our decomposition of tasks.  Once we’ve implemented our tasks, we need to execute them.  In the Task Parallel Library, the execution of Tasks is handled via an instance of the TaskScheduler class. The TaskScheduler class is an abstract class which provides a single function: it schedules the tasks and executes them within an appropriate context.  This class is the class which actually runs individual Task instances.  The .NET Framework provides two (internal) implementations of the TaskScheduler class. Since a Task, based on our decomposition, should be a self-contained piece of code, parallel execution makes sense when executing tasks.  The default implementation of the TaskScheduler class, and the one most often used, is based on the ThreadPool.  This can be retrieved via the TaskScheduler.Default property, and is, by default, what is used when we just start a Task instance with Task.Start(). Normally, when a Task is started by the default TaskScheduler, the task will be treated as a single work item, and run on a ThreadPool thread.  This pools tasks, and provides Task instances all of the advantages of the ThreadPool, including thread pooling for reduced resource usage, and an upper cap on the number of work items.  In addition, .NET 4 brings us a much improved thread pool, providing work stealing and reduced locking within the thread pool queues.  By using the default TaskScheduler, our Tasks are run asynchronously on the ThreadPool. There is one notable exception to my above statements when using the default TaskScheduler.  If a Task is created with the TaskCreationOptions set to TaskCreationOptions.LongRunning, the default TaskScheduler will generate a new thread for that Task, at least in the current implementation.  This is useful for Tasks which will persist for most of the lifetime of your application, since it prevents your Task from starving the ThreadPool of one of its worker threads. The Task Parallel Library provides one other implementation of the TaskScheduler class.  In addition to providing a way to schedule tasks on the ThreadPool, the framework allows you to create a TaskScheduler which works within a specified SynchronizationContext.  This scheduler can be retrieved within a thread that provides a valid SynchronizationContext by calling the TaskScheduler.FromCurrentSynchronizationContext() method. This implementation of TaskScheduler is intended for use with user interface development.  Windows Forms and Windows Presentation Foundation both require any access to user interface controls to occur on the same thread that created the control.  For example, if you want to set the text within a Windows Forms TextBox, and you’re working on a background thread, that UI call must be marshaled back onto the UI thread.  The most common way this is handled depends on the framework being used.  In Windows Forms, Control.Invoke or Control.BeginInvoke is most often used.  In WPF, the equivalent calls are Dispatcher.Invoke or Dispatcher.BeginInvoke. As an example, say we’re working on a background thread, and we want to update a TextBlock in our user interface with a status label.  The code would typically look something like:
        // Within background thread work...
        string status = GetUpdatedStatus();
        Dispatcher.BeginInvoke(DispatcherPriority.Normal, new Action(() =>
        {
            statusLabel.Text = status;
        }));
        // Continue on in background method

    This works fine, but forces your method to take a dependency on WPF or Windows Forms.  There is an alternative option, however.  Both Windows Forms and WPF, when initialized, set up a SynchronizationContext in their thread, which is available on the UI thread via the SynchronizationContext.Current property.  This context is used by classes such as BackgroundWorker to marshal calls back onto the UI thread in a framework-agnostic manner. The Task Parallel Library provides the same functionality via the TaskScheduler.FromCurrentSynchronizationContext() method.  When setting up our Tasks, as long as we’re working on the UI thread, we can construct a TaskScheduler via:

        TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

    We then can use this scheduler on any thread to marshal data back onto the UI thread.  For example, our code above can then be rewritten as:

        string status = GetUpdatedStatus();
        (new Task(() =>
        {
            statusLabel.Text = status;
        })).Start(uiScheduler);
        // Continue on in background method

    This is nice since it allows us to write code that isn’t tied to Windows Forms or WPF, but is still fully functional with those technologies.  I’ll discuss even more uses for the SynchronizationContext based TaskScheduler when I demonstrate task continuations, but even without continuations, this is a very useful construct. In addition to the two implementations provided by the Task Parallel Library, it is possible to implement your own TaskScheduler.  The ParallelExtensionsExtras project within the Samples for Parallel Programming provides nine sample TaskScheduler implementations.  These include schedulers which restrict the maximum number of concurrent tasks, run tasks on a single threaded apartment thread, use a new thread per task, and more.
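
    Pulling those pieces together, here is a minimal sketch of the pattern (assumed to live in WPF code-behind; statusLabel and GetUpdatedStatus() are the hypothetical members from the excerpt above, not part of the Task Parallel Library):

        using System.Threading.Tasks;

        // Capture the UI scheduler while still on the UI thread.
        TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

        // Long-running background work: with the LongRunning hint the default
        // scheduler currently gives this task its own thread rather than a
        // ThreadPool thread.
        var work = new Task(() =>
        {
            string status = GetUpdatedStatus();

            // Marshal the UI update back onto the UI thread without taking a
            // direct dependency on Dispatcher or Control.Invoke.
            new Task(() => statusLabel.Text = status).Start(uiScheduler);
        }, TaskCreationOptions.LongRunning);

        work.Start(); // started on the default ThreadPool-based scheduler here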

    Read the article

  • Database users in the Oracle Utilities Application Framework

    - by Anthony Shorten
    I mentioned the product database users fleetingly in the last blog post and they deserve a better mention. This applies to all versions of the Oracle Utilities Application Framework. The Oracle Utilities Application Framework uses up to three users initially as part of the base operations of the product. The type of database supported (the framework supports Oracle, IBM DB2 and Microsoft SQL Server) dictates the number of users used and their permissions. For brevity I will outline what is available for the Oracle database and, in summary, mention where it differs for the other databases supported. For Oracle database customers we ship three distinct database users: Administration User (SPLADM or CISADM by default) - This is the database user that actually owns the schema. This user is not used by the product to do any DML (Data Manipulation Language) SQL other than what is necessary for maintenance of the database. This database user performs all the DCL (Data Control Language) and DDL (Data Definition Language) against the database. It is typically reserved for Database Administration use only. Product Read Write User (SPLUSER or CISUSER by default) - This is the database user used by the product itself to execute DML (Data Manipulation Language) statements against the schema owned by the Administration user. This user has the appropriate read and write permissions to objects within the schema owned by the Administration user. For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user. Product Read User (SPLREAD or CISREAD by default) - This is the database user that has read-only permission to the schema owned by the Administration user. It is used for reporting or any part of the product or interface that requires read permissions to the database (for example, products that have ConfigLab and Archiving use this user for remote access). For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user. You may notice the words by default in the list above. The values supplied with the installer are the default and can be changed to what the site standard or implementation wants to use (as long as they conform to the standards supported by the underlying database). You can even create multiples of each within the same database pointing to the same schema. To manage the permissions for the users, there is a utility provided with the installation (oragensec (Oracle), db2gensec (DB2) or msqlgensec (SQL Server)) that generates the security definitions for the above users. That can be executed a number of times for each schema to give users appropriate permissions. For example, it is possible to define more than one read/write user to access the database. This is a common technique used by implementations to have a different user per access mode (to separate online and batch). In fact you can also allocate additional security (such as resource profiles in Oracle) to limit the impact of specific users at the database. To facilitate users and permissions, in Oracle for example, we create a CISREAD role (read-only role) and a CISUSER role (read/write role) that can be allocated to the appropriate database user. When the security permissions utility, oragensec in this case, is executed it uses the role to determine the permissions. 
    To give you a case study, my underpowered laptop has multiple installations of multiple products on it but I have one database. I create a different schema for each product and each version (with my own naming convention to help me manage the databases). I create individual users on each schema and run oragensec to maintain the permissions for each appropriately. It works fine as long as I have set up the user IDs appropriately. This means: Creating the users with the appropriate roles. I use the common CISUSER and CISREAD roles across versions and across Oracle Utilities Application Framework products. Just remember to associate the CISUSER role with the database user you want to use for read/write operations and the CISREAD role with the user you wish to use for the read-only operations. The role is treated as a tag that indicates to the oragensec utility which permissions to assign to the user. The utilities for the other database types essentially do the same, obviously using the technology available within those databases. Run oragensec for the read/write user and read-only user against the appropriate administration user (I will abbreviate the user to ADM user). This ensures the right permissions are allocated to the right users for the right products. To help me there, I use the same prefix on the user name for the same product. For example, my Oracle Utilities Application Framework V4 environment has the administration user set to FW4ADM and the associated FW4USER and FW4READ as the users for the product to use. For my MWM environment I used MWMADM for the administration user and MWMUSER and MWMREAD for my associated users. You get the picture. When I run oragensec (once for each ADM user), I know what other users to associate with it. Remember to rerun oragensec against the users if I apply upgrades, service packs or database-based single fixes. This ensures that the users stay in sync with the ADM user. As a side note, for those who do not understand the difference between DML, DCL and DDL: DDL (Data Definition Language) - These are SQL statements that define the database schema and the structures within. SQL statements such as CREATE and DROP are examples of DDL SQL statements. DCL (Data Control Language) - These are the SQL statements that define the database-level permissions to DDL-maintained objects within the database. SQL statements such as GRANT and REVOKE are examples of DCL SQL statements. DML (Data Manipulation Language) - These are SQL statements that alter the data within the tables. SQL statements such as SELECT, INSERT, UPDATE and DELETE are examples of DML SQL statements. Hope this has clarified the database user support. Remember in Oracle Utilities Application Framework V4 we enhanced this by also supporting CLIENT_IDENTIFIER to allow the database to still use the administration user for the main processing but make the database session more traceable.
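
    As a purely illustrative sketch of the setup described above (names follow the FW4 example; passwords and tablespaces are placeholder assumptions, the CISUSER/CISREAD roles are assumed to already exist from the installation, and the object-level grants themselves are generated by oragensec rather than written by hand):

        -- Illustrative only: create the three database users and tag the
        -- read/write and read-only users with the roles oragensec uses to
        -- decide which permissions to generate.
        CREATE USER fw4adm  IDENTIFIED BY "placeholder" DEFAULT TABLESPACE users;  -- schema owner (ADM user)
        CREATE USER fw4user IDENTIFIED BY "placeholder" DEFAULT TABLESPACE users;  -- product read/write user
        CREATE USER fw4read IDENTIFIED BY "placeholder" DEFAULT TABLESPACE users;  -- product read-only user

        GRANT CISUSER TO fw4user;
        GRANT CISREAD TO fw4read;

        -- oragensec is then run once against the FW4ADM schema to generate the
        -- actual object grants for FW4USER and FW4READ.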

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" Blog Series just posted by Ari Mozes, ODM Dev. Manager.  In it, Ari provides detailed guidance in how to leverage various strengths of Oracle Data Mining including the ability to: mine Star Schemas and join tables and views together to obtain a complete 360 degree view of a customer combine transactional data e.g. call record detail (CDR) data, etc. define complex data transformation, model build and model deploy analytical methodologies inside the Database  His blog is posted in a multi-part series.  Below are some opening excerpts for the first 3 blog entries.  This is an excellent resource for any novice to skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database.  Many thanks Ari! Mining a Star Schema: Telco Churn Case Study (1 of 3) One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort.  Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns.  While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns.  Both of these aspects - demographics and usage patterns - can provide insight into behavior.Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base.  One case study1 describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema.  That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts...... Mining a Star Schema: Telco Churn Case Study (2 of 3) This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data.  Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.1) Handling missing values for call data recordsThe CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff).  For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer.  This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values.  Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem.  In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero.  In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries.  The referenced case study takes the latter approach.  
    The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

        select cust_id, cdre.tariff, cdre.month, mins
        from cdr_t cdr partition by (cust_id)
        right outer join
            (select distinct tariff, month from cdr_t) cdre
            on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

    ....... Mining a Star Schema: Telco Churn Case Study (3 of 3) Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups).  To reduce the data to a single segment, a filter can be applied:

        create or replace view churn_data_high as
        select * from churn_prep where value_band = 'HIGH';

    It is simple to take a quick look at the predictive aspects of the data on a univariate basis.  While this does not capture the more complex multi-variate effects as would occur with the full-blown data mining algorithms, it can give a quick feel as to the predictive aspects of the data as well as validate the data preparation steps.  Oracle Data Mining includes a predictive analytics package which enables quick analysis.

        begin
          dbms_predictive_analytics.explain(
            'churn_data_high', 'churn_m6', 'expl_churn_tab');
        end;
        /
        select * from expl_churn_tab where rank <= 5 order by rank;

        ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME  EXPLANATORY_VALUE  RANK
        -------------------- -----------------  -----------------  ----
        LOS_BAND                                       .069167052     1
        MINS_PER_TARIFF_MON  PEAK-5                    .034881648     2
        REV_PER_MON          REV-5                     .034527798     3
        DROPPED_CALLS                                  .028110322     4
        MINS_PER_TARIFF_MON  PEAK-4                    .024698149     5

    From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0).  The strongest uni-variate predictor of churn appears to be the customer's (binned) length of service.  The second strongest churn indicator appears to be the number of peak minutes used in the most recent month.  The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post.  By using the object relational approach, many related predictors are included within a single top-level column. .....   NOTE:  These are just EXCERPTS.  Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.    
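
    To make the replacement step above concrete, here is a hedged sketch of one way to express it in SQL (column names follow the excerpt; the case study's exact replacement expression may differ):

        -- Sketch: generate the missing type-month rows per customer and fill
        -- them with the average minutes for the same customer/tariff combination.
        select cust_id,
               cdre.tariff,
               cdre.month,
               nvl(mins, avg(mins) over (partition by cust_id, cdre.tariff)) mins
        from cdr_t cdr partition by (cust_id)
        right outer join
             (select distinct tariff, month from cdr_t) cdre
             on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);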

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now. Scene setting A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS lookup component (when using FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible. General tips Here is a simple tick list you can follow in order to tune your lookups: Use a SQL statement to charge your cache, don’t just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?) Only pick the columns that you need, ignore everything else Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length. Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR so if you can use DT_STR, consider doing so. Same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8. Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause! Thinking outside the box It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing: There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow. Know your data. 
    If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset and the remaining data that doesn’t find a match could be passed to a second lookup that perhaps uses a NoCache lookup thus negating the need to pull all of that least-used lookup data into memory. Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop. Taking the previous data partitioning idea further … a dataflow can contain more than one data path so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition. Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow. Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond. Above all, experiment and be creative with different combinations. You may be surprised at what works. Final thoughts If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet
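
    Returning to the cache-charging tips above, here is a hypothetical example of what a tuned lookup query might look like (table and column names are invented for illustration):

        -- Hypothetical cache-charging query: only the columns the lookup needs,
        -- types narrowed as far as the data allows, and rows filtered to the
        -- partition currently being processed.
        SELECT  CustomerKey,                                        -- join column
                CAST(CustomerCode AS VARCHAR(10)) AS CustomerCode   -- narrowed from a wider source column
        FROM    dbo.DimCustomer
        WHERE   Region = 'EMEA';   -- in practice this literal would be built dynamically
                                   -- (e.g. from a variable) for each ForEach loop iteration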

    Read the article

  • Remote Desktop to Your Azure Virtual Machine

    - by Shaun
    The Windows Azure Team has just published their new development portal this week along with SDK 1.3. Within this new release there are a lot of cool features available. The one I’m looking forward to is Remote Desktop Access to your running Windows Azure Virtual Machine.   Configuring Remote Desktop Access It is very simple to enable remote desktop access for an Azure service. First of all let’s create a new Windows Azure project from Visual Studio. In this example I just created a normal MVC 2 web role without any modifications. Then we right-click the Azure project node in the solution explorer window and select “Publish”. Then let’s select the “Deploy your Windows Azure project to Windows Azure” radio button at the top. And then select the credential, deployment service/slot, storage and label as usual. You must have the Management API Certificates uploaded to your Windows Azure account, and the certificate installed on your machine, in order to use this one-click deployment feature. If you are familiar with this dialog you will notice that there’s a link named “Configure Remote Desktop connections”. This is where you enable the remote desktop feature for this service. After clicking this link we will set the remote desktop access authorization information. There are 4 steps we need to take to configure our access. Certificates: We need to either create or select a certificate file in order to encrypt the access credentials. In this example I will use the certificate file for my Management API. Username: The remote desktop user name to access the virtual machine. Password: The password for the access. Expiration: The access credentials expire after 1 month by default but we can amend that here. After that we click the OK button to go back to the publish dialog.   The next step is to go back to the new Windows Azure portal and navigate to the hosted services list. I created a new hosted service and uploaded the certificate file to this service. The user name and password used to access the Azure machine are encrypted on the local machine, sent to the Windows Azure platform, and then decrypted on the Azure side by the same certificate. This is why we need to upload the certificate file to Azure. We navigated to “Hosted Services, Storage Accounts & CDN” from the left panel, created a new hosted service named “SDK13” and selected the “Certificates” node. Then we clicked the “Add Certificates” button. Then we selected the local certificate file and entered the password to install it into this Azure service.   The final step is to go back to Visual Studio and, in the publish dialog, just click the OK button. Visual Studio will upload our package and the configuration to our service with the remote desktop settings.   Remote Desktop Access to the Azure Virtual Machine With all of that done, let’s have a look back at the Windows Azure Development Portal. If I select the web role that I have just published we can see a section named “Remote Access” on the toolbar. In this section the Enable checkbox has been checked, which means this role has the Remote Desktop Access feature enabled. If we want to modify the access credentials we can simply click the Configure button. Then we can update the user name, password, certificates and the expiration date.   Let’s select the instance node under the web role. In this case I just created one instance for the demo. 
    We can see that when we select the instance node, the Connect button becomes enabled. After clicking this button an RDP file will be downloaded. This is a Remote Desktop configuration file that we can use to access our Azure virtual machine. Let’s download it to our local machine and execute it. We input the user name and password we specified when we published our application to Azure and then click OK. There might be a certificate warning dialog that appears. This is because the certificate we used for encryption is not signed by a trusted provider. Just select OK in these cases as we know the certificate is safe for us. Finally, the virtual machine of Windows Azure appears.   A Quick Look into the Azure Virtual Machine Let’s just have a very quick look into our virtual machine. There are 3 disks available for us: C, D and E. Disk C: Stores the local resources, diagnostics information, etc. Disk D: System disk which contains the OS, IIS, the .NET Framework, etc. Disk E: Stores our application code. The IIS instance hosting our website on Azure. The IP configuration of the Azure virtual machine.   Summary In this post I covered one of the new features of the Azure SDK 1.3 – Remote Desktop Access. We can set the access per service and all of the instances of this service can be accessed through the remote desktop tool. With this feature we can dig into the virtual machines of our instances to see internal information such as the system events, IIS logs, system information, etc. But we should pay attention when modifying the system settings, for 2 reasons that I know of for now: 1. If we have more than one instance in our service we should ensure that all system settings we modify are applied to all instances/virtual machines. Otherwise, as the machines are behind the Azure load balancer, our application may not work correctly due to the different settings between the instances. 2. When the virtual machine encounters some problem and needs to be moved to another physical machine, all settings we made would disappear.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 3

    - by SQLOS Team
    In parts 1 and 2 of this series we looked at the basics of Hyper-V Dynamic Memory and SQL Server memory management. In this part Serdar looks at configuration guidelines for SQL Server memory management. Part 3: Configuration Guidelines for Hyper-V Dynamic Memory and SQL Server Now that we understand SQL Server Memory Management and Hyper-V Dynamic Memory basics, let’s take a look at general configuration guidelines in order to utilize the benefits of Hyper-V Dynamic Memory in your SQL Server VMs. Requirements Host Operating System Requirements The Hyper-V Dynamic Memory feature was introduced with Windows Server 2008 R2 SP1. Therefore in order to use Dynamic Memory for your virtual machines, you need to have Windows Server 2008 R2 SP1 or Microsoft Hyper-V Server 2008 R2 SP1 on your Hyper-V host. Guest Operating System Requirements In addition to this, Dynamic Memory is only supported in the Standard, Web, Enterprise and Datacenter editions of Windows running inside VMs. Make sure that your VM is running one of these editions. For additional requirements on each operating system see “Dynamic Memory Configuration Guidelines” here. SQL Server Requirements All versions of SQL Server support Hyper-V Dynamic Memory. However, only certain editions of SQL Server are aware of dynamically changing system memory. To have a truly dynamic environment for your SQL Server VMs make sure that you are running one of the SQL Server editions listed below:

        - SQL Server 2005 Enterprise
        - SQL Server 2008 Enterprise / Datacenter Editions
        - SQL Server 2008 R2 Enterprise / Datacenter Editions

    Configuration guidelines for other versions of SQL Server are covered below in the FAQ section. Guidelines for configuring Dynamic Memory Parameters Here is how to configure Dynamic Memory for your SQL VMs in a nutshell:

        Hyper-V Dynamic Memory Parameter | Recommendation
        Startup RAM                      | 1 GB + SQL Min Server Memory
        Maximum RAM                      | > SQL Max Server Memory
        Memory Buffer %                  | 5
        Memory Weight                    | Based on performance needs

    Startup RAM In order to ensure that your SQL Server VMs can start correctly, ensure that Startup RAM is higher than the configured SQL Min Server Memory for your VMs. Otherwise the SQL Server service will need to do paging in order to start since it will not be able to see enough memory during startup. Also note that Startup Memory will always be reserved for your VMs. This will guarantee a certain level of performance for your SQL Servers, however setting this too high will limit the consolidation benefits you’ll get out of your virtualization environment. Maximum RAM This one is obvious. If you’ve configured SQL Max Server Memory for your SQL Server, make sure that the Dynamic Memory Maximum RAM configuration is higher than this value. Otherwise your SQL Server will not grow to memory values higher than the value configured for Dynamic Memory. Memory Buffer % The memory buffer configuration is used to provision file cache to virtual machines in order to improve performance. Due to the fact that SQL Server is managing its own buffer pool, the Memory Buffer setting should be configured to the lowest value possible, 5%. Configuring a higher memory buffer will prevent low resource notifications from the Windows Memory Manager and it will prevent reclaiming memory from SQL Server VMs. Memory Weight The memory weight configuration defines the importance of memory to a VM. Configure higher values for the VMs that have higher performance requirements. 
    VMs with higher memory weight will have more memory under high memory pressure conditions on your host. Questions and Answers Q1 – Which SQL Server memory model is best for Dynamic Memory? The best SQL Server memory model for Dynamic Memory is the “Locked Page Memory Model”. This memory model ensures that SQL Server memory is never paged out and it’s also adaptive to dynamically changing memory in the system. This will be extremely useful when Dynamic Memory is attempting to remove memory from SQL Server VMs, ensuring no SQL Server memory is paged out. You can find instructions on configuring the “Locked Page Memory Model” for your SQL Servers here. Q2 – What about other SQL Server Editions, how should I configure Dynamic Memory for them? Other editions of SQL Server do not adapt to dynamically changing environments. They will determine how much memory they should allocate during startup and don’t change this value afterwards. Therefore make sure that you configure a higher startup memory for your VM because that will be all the memory that SQL Server will utilize. Tune Maximum Memory and Memory Buffer based on the other workloads running on the system. If there are no other workloads consider using Static Memory for these editions. Q3 – What if I have multiple SQL Server instances in a VM? Having multiple SQL Server instances in a VM is not a general recommendation for predictable performance, manageability and isolation. In order to achieve predictable behavior make sure that you configure SQL Min Server Memory and SQL Max Server Memory for each instance in the VM. And make sure that:

        - Dynamic Memory Startup Memory is greater than the sum of the SQL Min Server Memory values for the instances in the VM
        - Dynamic Memory Maximum Memory is greater than the sum of the SQL Max Server Memory values for the instances in the VM

    Q4 – I’m using Large Page Memory Model for my SQL Server. Can I still use Dynamic Memory? The short answer is no. SQL Server does not dynamically change its memory size when configured with the Large Page Memory Model. In virtualized environments Hyper-V provides large page support by default. Most of the time, the Large Page Memory Model doesn’t bring any benefits to a SQL Server if it’s running in virtualized environments. Q5 – How do I monitor SQL performance when I’m trying Dynamic Memory on my VMs? Use the performance counters below to monitor memory performance for SQL Server: Process - Working Set: This counter is available in the VM via process performance counters. It represents the actual amount of physical memory being used by the SQL Server process in the VM. SQL Server – Buffer Cache Hit Ratio: This counter is available in the VM via SQL Server counters. This represents the paging being done by SQL Server. A rate of 90% or higher is desirable. Conclusion These blog posts are a quick start to a story that will be developing more in the near future. We’re still continuing our testing and investigations to provide more detailed configuration guidelines with example performance numbers in a white paper in the upcoming months. Now it’s time to give SQL Server and Hyper-V Dynamic Memory a try. Use these guidelines to kick-start your environment. See what you think about it and let us know of your experiences. - Serdar Sutay Originally posted at http://blogs.msdn.com/b/sqlosteam/
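
    For the SQL Server side of the alignment described above, the min/max server memory settings can be set with sp_configure; the numbers below are placeholders to illustrate the relationship to the Dynamic Memory values, not recommendations:

        -- Placeholder values: keep min server memory below the VM's Startup RAM
        -- and max server memory below the VM's Maximum RAM.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'min server memory (MB)', 2048;
        EXEC sp_configure 'max server memory (MB)', 12288;
        RECONFIGURE;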

    Read the article

  • ODI 12c - Parallel Table Load

    - by David Allan
    In this post we will look at the ODI 12c capability of parallel table load from the aspect of the mapping developer and the knowledge module developer - two quite different viewpoints. This is about parallel table loading which isn't to be confused with loading multiple targets per se. It supports the ability for ODI mappings to be executed concurrently especially if there is an overlap of the datastores that they access, so any temporary resources created may be uniquely constructed by ODI. Temporary objects can be anything basically - common examples are staging tables, indexes, views, directories - anything in the ETL to help the data integration flow do its job. In ODI 11g users found a few workarounds (such as changing the technology prefixes - see here) to build unique temporary names but it was more of a challenge in error cases. ODI 12c mappings by default operate exactly as they did in ODI 11g with respect to these temporary names (this is also true for upgraded interfaces and scenarios) but can be configured to support the uniqueness capabilities. We will look at this feature from two aspects; that of a mapping developer and that of a developer (of procedures or KMs). 1. Firstly as a Mapping Developer..... 1.1 Control when uniqueness is enabled A new property is available to set unique name generation on/off. When unique names have been enabled for a mapping, all temporary names used by the collection and integration objects will be generated using unique names. This property is presented as a check-box in the Property Inspector for a deployment specification. 1.2 Handle cleanup after successful execution Provided that all temporary objects that are created have a corresponding drop statement then all of the temporary objects should be removed during a successful execution. This should be the case with the KMs developed by Oracle. 1.3 Handle cleanup after unsuccessful execution If an execution failed in ODI 11g then temporary tables would have been left around and cleaned up in the subsequent run. In ODI 12c, KM tasks can now have a cleanup-type task which is executed even after a failure in the main tasks. These cleanup tasks will be executed even on failure if the property 'Remove Temporary Objects on Error' is set. If the agent was to crash and not be able to execute this task, then there is an ODI tool (OdiRemoveTemporaryObjects here) you can invoke to cleanup the tables - it supports date ranges and the like. That's all there is to it from the aspect of the mapping developer it's much, much simpler and straightforward. You can now execute the same mapping concurrently or execute many mappings using the same resource concurrently without worrying about conflict.  2. Secondly as a Procedure or KM Developer..... In the ODI Operator the executed code shows the actual name that is generated - you can also see the runtime code prior to execution (introduced in 11.1.1.7), for example below in the code type I selected 'Pre-executed Code' this lets you see the code about to be processed and you can also see the executed code (which is the default view). References to the collection (C$) and integration (I$) names will be automatically made unique by using the odiRef APIs - these objects will have unique names whenever concurrency has been enabled for a particular mapping deployment specification. It's also possible to use name uniqueness functions in procedures and your own KMs. 
2.1 New uniqueness tags  You can also make your own temporary objects have unique names by explicitly including either %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG in the name passed to calls to the odiRef APIs. Such names would always include the unique tag regardless of the concurrency setting. To illustrate, let's look at the getObjectName() method. At <% expansion time, this API will append %UNIQUE_STEP_TAG to the object name for collection and integration tables. The name parameter passed to this API may contain  %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. This API always generates to the <? version of getObjectName() At execution time this API will replace the unique tag macros with a string that is unique to the current execution scope. The returned name will conform to the name-length restriction for the target technology, and its pattern for the unique tag. Any necessary truncation will be performed against the initial name for the object and any other fixed text that may have been specified. Examples are:- <?=odiRef.getObjectName("L", "%COL_PRFEMP%UNIQUE_STEP_TAG", "D")?> SCOTT.C$_EABH7QI1BR1EQI3M76PG9SIMBQQ <?=odiRef.getObjectName("L", "EMP%UNIQUE_STEP_TAG_AE", "D")?> SCOTT.EMPAO96Q2JEKO0FTHQP77TMSAIOSR_ Methods which have this kind of support include getFrom, getTableName, getTable, getObjectShortName and getTemporaryIndex. There are APIs for retrieving this tag info also, the getInfo API has been extended with the following properties (the UNIQUE* properties can also be used in ODI procedures); UNIQUE_STEP_TAG - Returns the unique value for the current step scope, e.g. 5rvmd8hOIy7OU2o1FhsF61 Note that this will be a different value for each loop-iteration when the step is in a loop. UNIQUE_SESSION_TAG - Returns the unique value for the current session scope, e.g. 6N38vXLrgjwUwT5MseHHY9 IS_CONCURRENT - Returns info about the current mapping, will return 0 or 1 (only in % phase) GUID_SRC_SET - Returns the UUID for the current source set/execution unit (only in % phase) The getPop API has been extended with the IS_CONCURRENT property which returns info about an mapping, will return 0 or 1.  2.2 Additional APIs Some new APIs are provided including getFormattedName which will allow KM developers to construct a name from fixed-text or ODI symbols that can be optionally truncate to a max length and use a specific encoding for the unique tag. It has syntax getFormattedName(String pName[, String pTechnologyCode]) This API is available at both the % and the ? phase.  The format string can contain the ODI prefixes that are available for getObjectName(), e.g. %INT_PRF, %COL_PRF, %ERR_PRF, %IDX_PRF alongwith %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. The latter tags will be expanded into a unique string according to the specified technology. Calls to this API within the same execution context are guaranteed to return the same unique name provided that the same parameters are passed to the call. e.g. <%=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")%> <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")?> C$_MY_TAB7wDiBe80vBog1auacS1xB_AE <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG.log", "FILE")?> C2_MY_TAB7wDiBe80vBog1auacS1xB.log 2.3 Name length generation  As part of name generation, the length of the generated name will be compared with the maximum length for the target technology and truncation may need to be applied. 
    When a unique tag is included in the generated string it is important that uniqueness is not compromised by truncation of the unique tag. When a unique tag is NOT part of the generated name, the name will be truncated by removing characters from the end - this is the existing 11g algorithm. When a unique tag is included, the algorithm will first truncate the <postfix> and if necessary the <prefix>. It is recommended that users ensure there is sufficient uniqueness in the <prefix> section to ensure uniqueness of the final resultant name. SUMMARY To summarize, ODI 12c makes it much simpler to utilize mappings in concurrent cases and provides APIs to help develop any procedures or custom knowledge modules in such a way that they can be used in highly concurrent, parallel scenarios. 

    Read the article

  • On Her Majesty's Secret Source Code: .NET Reflector 7 Early Access Builds Now Available

    - by Bart Read
    Dodgy Bond references aside, I'm extremely happy to be able to tell you that we've just released our first .NET Reflector 7 Early Access build. We're going to make these available over the coming weeks via the main .NET Reflector download page at: http://reflector.red-gate.com/Download.aspx Please have a play and tell us what you think in the forum we've set up. Also, please let us know if you run into any problems in the same place. The new version so far comes with numerous decompilation improvements including (after 5 years!) support for iterator blocks - i.e., the yield statement first seen in .NET 2.0. We've also done a lot of work to solidify the support for .NET 4.0. Clive's written about the work he's done to support iterator blocks in much more detail here, along with the odd problem he's encountered when dealing with compiler generated code: http://www.simple-talk.com/community/blogs/clivet/96199.aspx. On the UI front we've started what will ultimately be a rewrite of the entire front-end, albeit broken into stages over two or three major releases. The most obvious addition at the moment is tabbed browsing, which you can see in Figure 1. Figure 1. .NET Reflector's new tabbed decompilation feature. Use CTRL+Click on any item in the assembly browser tree, or any link in the source code view, to open it in a new tab. This isn't by any means finished. I'll be tying up loose ends for the next few weeks, with a major focus on performance and resource usage. .NET Reflector has historically been a largely single-threaded application which has been fine up until now but, as you might expect, the addition of browser-style tabbing has pushed this approach somewhat beyond its limit. You can see this if you refresh the assemblies list by hitting F5. This shows up another problem: we really need to make Reflector remember everything you had open before you refreshed the list, rather than just the last item you viewed - I discovered that it's always done the latter, but it used to hide all panes apart from the treeview after a Refresh, including the decompiler/disassembler window. Ultimately I've got plans to add the whole VS/Chrome/Firefox style ability to drag a tab into the middle of nowhere to spawn a new window, but I need to be mindful of the add-ins, amongst other things, so it's possible that might slip to a 7.5 or 8.0 release. You'll also notice that .NET Reflector 7 now needs .NET 3.5 or later to run. We made this jump because we wanted to offer ourselves a much better chance of adding some really cool functionality to support newer technologies, such as Silverlight and Windows Phone 7. We've also taken the opportunity to start using WPF for UI development, which has frankly been a godsend. The learning curve is practically vertical but, I kid you not, it's just a far better world. Really. Stop using WinForms. Now. Why are you still using it? I had to go back and work on an old WinForms dialog for an hour or two yesterday and it really made me wince. The point is we'll be able to move the UI in some exciting new directions that will make Reflector easier to use whilst continuing to develop its functionality without (and this is key) cluttering the interface. The 3.5 language enhancements should also enable us to be much more productive over the longer term. 
    I know most of you have .NET Fx 3.5 or 4.0 already but, if you do need to install a new version, I'd recommend you jump straight to 4.0 because, for one thing, it's faster, and if you're starting afresh there's really no reason not to. Despite the Fx version jump the Visual Studio add-in should still work fine in Visual Studio 2005, and obviously will continue to work in Visual Studio 2008 and 2010. If you do run into problems, again, please let us know here. As before, we continue to support every edition of Visual Studio except the Express Editions. Speaking of Visual Studio, we've also been improving the add-in. You can now open and explore decompiled code for any referenced assembly in any project in your solution. Just right-click on the reference, then click Decompile and Explore on the context menu. Reflector will pop up a progress box whilst it decompiles your assembly (Figure 2) - you can move this out of the way whilst you carry on working. Figure 2. Decompilation progress. This isn't modal so you can just move it out of the way and carry on working. Once it's done you can explore your assembly in the Reflector treeview (Figure 3), also accessible via the .NET Reflector Explore Decompiled Assemblies main menu item. Double-click on any item to open decompiled source in the Visual Studio source code view. Use right-click and Go To Definition on the source view context menu to navigate through the code. Figure 3. Using the .NET Reflector treeview within Visual Studio. Double-click on any item to open decompiled source in the source code view. There are loads of other changes and fixes that have gone in, often under the hood, which I don't have room to talk about here, and plenty more to come over the next few weeks. I'll try to keep you abreast of new functionality and changes as they go in. There are a couple of smaller things worth mentioning now though. Firstly, we've reorganised the menus and toolbar in Reflector itself to more closely mirror what you might be used to in other applications. Secondly, we've tried to make some of the functionality more discoverable. For example, you can now switch decompilation target framework version directly from the toolbar - and the default is now .NET 4.0. I think that about covers it for the moment. As I said, please use the new version, and send us your feedback. Here's that download URL again: http://reflector.red-gate.com/Download.aspx. Until next time!

    Read the article

  • CodePlex Daily Summary for Monday, March 22, 2010

    CodePlex Daily Summary for Monday, March 22, 2010New Projects[Tool] Vczh Non-public DLL Classes Caller: Generate C# code for you to call non-public classes in DLLs very easily.Artefact Animator: Artefact Animator provides an easy to use framework for procedural time-based animations in Silverlight and WPF.cacheroo: Cacheroo is a social networking community that will make it easier for people who love geocaching to get connected.Data Processing Toolkit: An utility app to collected data from different sources (i.e. bugzilla bug reports) in a structured way. We are currently setting up the site. Mo...eXternal SQL Bridge (PHP): The eXternal SQL Bridge (XSB) allows you to bridge two websites together in a secure manner through pre-shared keys. XSB is resilient against repla...'G' - Language to Define Gestures for Touch Based Applications: A cross plat form multi-touch application framework with a language to define gestures. The application is build on Silverlight 4.0 and the languag...IIS Network Diagnostic Tools: Web implementation of "looking glass" like services (ping, traceroute) as HTTP modules for Internet Information Services.Interop Router: This project establishes a communication framework and job dispatcher for a mixed operating system cluster environment.L2 Commander: L2Commander makes it easier for both new and old l2j users to manage your server.You no longer have to waste time on finding the files you need and...MediaHelper: A utility to help clean up empty/unwanted files and folders in your filesystem.mhinze: matt hinze stuffOneMan: Focus on Silverlight and WCF technology.Rss Photo Frame Android Widget: RSS Photo Frame Android Widget permits showing pictures from any RSS feed on your Android device's desktopSingle Web Session: Web Tool Kits Current project provide developer with different tools that help to enhance web site performance, security, and other common functio...Work Item Visualization: Use DGML to visualize and analyze your TFS Work Items. Included is the ability to perform basic risk/impact analysis. It helps answer the question,...New Releases[Tool] Vczh Non-public DLL Classes Caller: Wrapper Coder (beta): Click "<Click Me To Open Assembly File>", WrapperCoder will load the assembly and referenced assembly. Check the non-public classes that you want...APS - Automatic Print Screen: APS 1.0: APS automatizes the tasks of paste the image in Paint and save it after print screen or alt+print screen. Choose directory, name and file extension...BTP Tools: e-Sword generator build 20100321: 1. Modify the indent after subtitle. 2. Add 2 spaces after subtitle.Combres - WebForm & MVC Client-side Resource Combine Library: Combres 2.0: Changes since last version (1.2) Support ignore Combres pipeline in debug mode - see issue #6088 Debug mode generates comment helping identify in...Desafio Office 2010 Brasil: DesafioOutlook: Controlando um robo com o Outlook 2010dylan.NET: dylan.NET v. 9.4: Adding Platform Invocation Services Support, full Managed Pointer Support, Charset,Dllimport,Callconv setting for P/Invoke, MarshalAs for parametersFamily Tree Analyzer: Version 1.3.2.0: Version 1.3.2.0 Add open folder button to IGI Search Form Fixes to Fact Location processing - IGIName renamed to RegionID Fix if Region ID not fou...Fasterflect - A Fast and Simple Reflection API: Fasterflect 2.0: We are pleased to release version 2.0 of Fasterflect, which contains a lot of additions and improvements from the previous version. 
Please refer t...IIS Network Diagnostic Tools: 1.0: Initial public release.Informant: Informant (Desktop) v0.1: This release allows users to send sms messages to 1-Many Groups or 1-Many contacts. It is a very basic release of the application. No styling has b...InfoService: InfoService v1.5 - MPE1 Package: InfoService Release v1.5.0.65 Please read Plugin installation for installation instructions.InfoService: InfoService v1.5 - RAR Package: InfoService Release v1.5.0.65 Please read Plugin installation for installation instructions.L2 Commander: Source Code Link: Where to find our source.ModularCMS: ModularCMS 1.2: Minor bug fixes.NMTools: NMTools-v40b0-20100321-0: The most noticeable aspect of this release is that NMTools is now an independent project. It will no longer tied to OpenSLIM. Nevertheless, OpenSLI...SharePoint LogViewer: SharePoint LogViewer 1.5.3: Log loading performance enhanced. Search text box now has auto complete feature.Single Web Session: Single Web Session: !Single Web Session! <httpModules> <add name="SingleSession" type="SingleWebSession.Model.WebSessionModule, SingleWebSession"/> </httpModules>Sprite Sheet Packer: 2.1 Release: Made a few crucial fixes from 2.0: - Fixed error with paths having spaces. - Fixed error with UI not unlocking. - Fixed NullReferenceException on ...uManage - AD Self-Service Portal: uManage v1.1 (.NET 4.0 RC): Updated Releasev1.1 Adds the primary ability to setup and configure the application through a setup wizard. The setup wizard will continue to evol...VCC: Latest build, v2.1.30321.0: Automatic drop of latest buildVS ChessMania: VS ChessMania V2 March Beta: Second Beta Release with move correction and making application more safe for user. New features will be added soon.WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.9.00: Whats New Added New Toolbar Plugin (By Kent Safransk) 'MediaEmbed' to Include Embed Media from Youtube, Vimeo, etc. Media Embed Plugin Added New ...WeatherBar: WeatherBar 1.0 [No Installation]: Extract the ZIP archive and run WeatherBar.exe. Current release contains some bugs that will be fixed in the next version. Check the Issue Tracker...Work Item Visualization: Release 1.0: This is the initial release of the Work Item Visualization tool. There are no known issues when it comes to the visualization aspects of the tool b...WPF Application Framework (WAF): WPF Application Framework (WAF) 1.0.0.10: Version: 1.0.0.10 (Milestone 10): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requi...WPF AutoComplete TextBox Control: Version 1.2: What's Newadds AutoAppend feature adds a new provider: UrlHistoryDataProvider sample application is updated to reflect the new things Bug Fixe...ZoomBarPlus: V2 (Beta): - Fixed bug: if the active window changed while you were in the middle of a single tap delay, long tap delay, or swipe-repeat, it would continue re...Most Popular ProjectsMetaSharpSavvy DateTimeRawrWBFS ManagerSilverlight ToolkitASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)Most Active ProjectsLINQ to TwitterRawrOData SDK for PHPjQuery Library for SharePoint Web ServicesDirectQPHPExcelFarseer Physics Enginepatterns & practices – Enterprise LibraryBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • SSH new connection begins to hang (not reject or terminate) after a day or so on Ubuntu 13.04 server

    - by kross
    Recently we upgraded the server from 12.04 LTS server to 13.04. All was well, including after a reboot. With all packages updated we began to see a strange issue, ssh works for a day or so (unclear on timing) then a later request for SSH hangs (cannot ctrl+c, nothing). It is up and serving webserver traffic etc. Port 22 is open (ips etc altered slightly for posting): nmap -T4 -A x.acme.com Starting Nmap 6.40 ( http://nmap.org ) at 2013-09-12 16:01 CDT Nmap scan report for x.acme.com (69.137.56.18) Host is up (0.026s latency). rDNS record for 69.137.56.18: c-69-137-56-18.hsd1.tn.provider.net Not shown: 998 filtered ports PORT STATE SERVICE VERSION 22/tcp open ssh OpenSSH 6.1p1 Debian 4 (protocol 2.0) | ssh-hostkey: 1024 54:d3:e3:38:44:f4:20:a4:e7:42:49:d0:a7:f1:3e:21 (DSA) | 2048 dc:21:77:3b:f4:4e:74:d0:87:33:14:40:04:68:33:a6 (RSA) |_256 45:69:10:79:5a:9f:0b:f0:66:15:39:87:b9:a1:37:f7 (ECDSA) 80/tcp open http Jetty 7.6.2.v20120308 | http-title: Log in as a Bamboo user - Atlassian Bamboo |_Requested resource was http://x.acme.com/userlogin!default.action;jsessionid=19v135zn8cl1tgso28fse4d50?os_destination=%2Fstart.action Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 12.89 seconds Here is the ssh -vvv: ssh -vvv x.acme.com OpenSSH_5.9p1, OpenSSL 0.9.8x 10 May 2012 debug1: Reading configuration data /Users/tfergeson/.ssh/config debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to x.acme.com [69.137.56.18] port 22. debug1: Connection established. debug3: Incorrect RSA1 identifier debug3: Could not load "/Users/tfergeson/.ssh/id_rsa" as a RSA1 public key debug1: identity file /Users/tfergeson/.ssh/id_rsa type 1 debug1: identity file /Users/tfergeson/.ssh/id_rsa-cert type -1 debug1: identity file /Users/tfergeson/.ssh/id_dsa type -1 debug1: identity file /Users/tfergeson/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.1p1 Debian-4 debug1: match: OpenSSH_6.1p1 Debian-4 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "x.acme.com" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:10 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 130/256 debug2: bits set: 503/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA dc:21:77:3b:f4:4e:74:d0:87:33:14:40:04:68:33:a6 debug3: load_hostkeys: loading entries for host "x.acme.com" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:10 debug3: load_hostkeys: loaded 1 keys debug3: load_hostkeys: loading entries for host "69.137.56.18" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:6 debug3: load_hostkeys: loaded 1 keys debug1: Host 'x.acme.com' is known and matches the RSA host key. 
debug1: Found key in /Users/tfergeson/.ssh/known_hosts:10 debug2: bits set: 493/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /Users/tfergeson/.ssh/id_rsa (0x7ff189c1d7d0) debug2: key: /Users/tfergeson/.ssh/id_dsa (0x0) debug1: Authentications that can continue: publickey debug3: start over, passed a different list publickey debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /Users/tfergeson/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 277 debug2: input_userauth_pk_ok: fp 3c:e5:29:6c:9d:27:d1:7d:e8:09:a2:e8:8e:6e:af:6f debug3: sign_and_send_pubkey: RSA 3c:e5:29:6c:9d:27:d1:7d:e8:09:a2:e8:8e:6e:af:6f debug1: read PEM private key done: type RSA debug1: Authentication succeeded (publickey). Authenticated to x.acme.com ([69.137.56.18]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: fd 3 setting TCP_NODELAY debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug3: Ignored env ATLAS_OPTS debug3: Ignored env rvm_bin_path debug3: Ignored env TERM_PROGRAM debug3: Ignored env GEM_HOME debug3: Ignored env SHELL debug3: Ignored env TERM debug3: Ignored env CLICOLOR debug3: Ignored env IRBRC debug3: Ignored env TMPDIR debug3: Ignored env Apple_PubSub_Socket_Render debug3: Ignored env TERM_PROGRAM_VERSION debug3: Ignored env MY_RUBY_HOME debug3: Ignored env TERM_SESSION_ID debug3: Ignored env USER debug3: Ignored env COMMAND_MODE debug3: Ignored env rvm_path debug3: Ignored env COM_GOOGLE_CHROME_FRAMEWORK_SERVICE_PROCESS/USERS/tfergeson/LIBRARY/APPLICATION_SUPPORT/GOOGLE/CHROME_SOCKET debug3: Ignored env JPDA_ADDRESS debug3: Ignored env APDK_HOME debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env Apple_Ubiquity_Message debug3: Ignored env __CF_USER_TEXT_ENCODING debug3: Ignored env rvm_sticky_flag debug3: Ignored env MAVEN_OPTS debug3: Ignored env LSCOLORS debug3: Ignored env rvm_prefix debug3: Ignored env PATH debug3: Ignored env PWD debug3: Ignored env JAVA_HOME debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env JPDA_TRANSPORT debug3: Ignored env rvm_version debug3: Ignored env M2_HOME debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env rvm_ruby_string debug3: Ignored env LOGNAME debug3: Ignored env M2_REPO debug3: Ignored env GEM_PATH debug3: Ignored env AWS_RDS_HOME debug3: Ignored env rvm_delete_flag debug3: Ignored env EC2_PRIVATE_KEY debug3: Ignored env RUBY_VERSION debug3: Ignored env SECURITYSESSIONID debug3: Ignored env EC2_CERT debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 I can hard reboot (only mac monitors at that location) and it will again be 
accessible. This now happens every single time. It is imperative that I get it sorted. The strange thing is that it behaves initially, then starts to hang after several hours. I perused logs previously and nothing stood out. From the auth.log, I can see that it has allowed me in, but still I get nothing back on the client side: Sep 20 12:47:50 cbear sshd[25376]: Accepted publickey for tfergeson from 10.1.10.14 port 54631 ssh2 Sep 20 12:47:50 cbear sshd[25376]: pam_unix(sshd:session): session opened for user tfergeson by (uid=0) UPDATES: Still occurring even after setting UseDNS no and commenting out #session optional pam_mail.so standard noenv. This does not appear to be a network/DNS-related issue, as all services running on the machine are as responsive and accessible as ever, with the exception of sshd. Any thoughts on where to start?
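For reference, the next thing I plan to try (a rough sketch; the spare port number is arbitrary) is running a second sshd instance in debug mode, so that when the hang reappears I can watch exactly where the handshake stalls:

# on the server console, start a one-off sshd in debug mode on a spare port
sudo /usr/sbin/sshd -d -p 2222

# from the client, connect to that instance with verbose output
ssh -vvv -p 2222 x.acme.com

If there is a better starting point than that, I'm happy to hear it.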

    Read the article

  • Creating a multi-column rollover image gallery with HTML 5

    - by nikolaosk
I know it has been a while since I blogged about HTML 5. I have two posts in this blog about HTML 5. You can find them here and here. I am creating a small content website (only text, images and a contact form) for a friend of mine. He wanted to create a rollover gallery. The whole concept is that we have some small thumbnails on a page; the user hovers over them and they appear enlarged in a designated container/placeholder on the page. I am trying not to use JavaScript when adding effects to a web page, and that is what I will be doing in this post. Well, some people will say that HTML 5 is not supported in all browsers. That is true, but most modern browsers support most of its recommendations. For people who still use IE6 some hacks must be devised. Well, to be totally honest, I cannot understand why anyone at this day and time is using IE 6.0. That really is beyond me. The point of having a web browser is to be able to ENJOY the great experience that the Web offers today. Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr. In this hands-on example I will be using Expression Web 4.0. This application is not a free application. You can use any HTML editor you like. You can use Visual Studio 2012 Express edition. You can download it here. In order to be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools. Another excellent resource is HTML 5 Doctor. For the people who are not convinced yet that they should invest time and resources in becoming experts on HTML 5, I should point out that HTML 5 websites will be ranked higher than others. Search engines will be able to better locate the content of our site and its relevance/importance since it is using semantic tags. Let's move now to the actual hands-on example. In this case (since I am a mad Liverpool supporter) I will create a rollover image gallery of Liverpool F.C. legends. I create a folder on my desktop.
I name it Liverpool Gallery.Then I create two subfolders in it, large-images (I place the large images in there) and thumbs (I place the small images in there).Then I create an empty .html file called LiverpoolLegends.html and an empty .css file called style.css.Please have a look at the HTML Markup that I typed in my fancy editor package below<!doctype html><html lang="en"><head><title>Liverpool Legends Gallery</title><meta charset="utf-8"><link rel="stylesheet" type="text/css" href="style.css"></head><body><header><h1>A page dedicated to Liverpool Legends</h1><h2>Do hover over the images with the mouse to see the full picture</h2></header><ul id="column1"><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/john-barnes.jpg" alt=""><img class="large" src="large-images/john-barnes-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/ian-rush.jpg" alt=""><img class="large" src="large-images/ian-rush-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/graeme-souness.jpg" alt=""><img class="large" src="large-images/graeme-souness-large.jpg" alt=""></a></li></ul><ul id="column2"><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/steven-gerrard.jpg" alt=""><img class="large" src="large-images/steven-gerrard-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/kenny-dalglish.jpg" alt=""><img class="large" src="large-images/kenny-dalglish-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/robbie-fowler.jpg" alt=""><img class="large" src="large-images/robbie-fowler-large.jpg" alt=""></a></li></ul><ul id="column3"><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/alan-hansen.jpg" alt=""><img class="large" src="large-images/alan-hansen-large.jpg" alt=""></a></li><li><a href="http://weblogs.asp.net/controlpanel/blogs/posteditor.aspx?SelectedNavItem=Posts§ionid=1153&postid=8927200#"><img src="thumbs/michael-owen.jpg" alt=""><img class="large" src="large-images/michael-owen-large.jpg" alt=""></a></li></ul></body></html> It is very easy to follow the markup. Please have a look at the new doctype and the new semantic tag <header>. 
I have 3 columns and I place my images in there. There is a class called "large". I will use this class in my CSS code to hide the large image when the mouse is not hovering over an image. Make sure you validate your HTML 5 page in the validator found here. Have a look at the CSS code below that makes it all happen.

img { border: none; }
#column1 { position: absolute; top: 30px; left: 100px; }
li { margin: 15px; list-style-type: none; }
#column1 a img.large { position: absolute; top: 0; left: 700px; visibility: hidden; }
#column1 a:hover { background: white; }
#column1 a:hover img.large { visibility: visible; }
#column2 { position: absolute; top: 30px; left: 195px; }
li { margin: 5px; list-style-type: none; }
#column2 a img.large { position: absolute; top: 0; left: 510px; margin-left: 0; visibility: hidden; }
#column2 a:hover { background: white; }
#column2 a:hover img.large { visibility: visible; }
#column3 { position: absolute; top: 30px; left: 400px; width: 108px; }
li { margin: 5px; list-style-type: none; }
#column3 a img.large { width: 260px; height: 260px; position: absolute; top: 0; left: 315px; margin-left: 0; visibility: hidden; }
#column3 a:hover { background: white; }
#column3 a:hover img.large { visibility: visible; }

In the first line of the CSS code I set the images to have no border. Then I place the first column on the page and then remove the bullets from the list elements. Then I use the large CSS class to create a position for the large image and hide it. Finally, when the hover event takes place, I make the image visible. I repeat the process for the next two columns. I have tested the page with IE 10 and the latest versions of Opera, Chrome and Firefox. Feel free to style your HTML 5 gallery any way you want through the magic of CSS. I did not bother adding background colors and borders because that was beyond the scope of this post. Hope it helps!!!!

    Read the article

  • B2B and B2C Commerce are alike… but a little different – Oracle Commerce named Leader in Forrester B2B Commerce Wave

    - by Katrina Gosek
    We weren’t surprised to see Oracle Commerce positioned as a Leader in Forrester’s first Commerce Wave focused on B2B, released earlier this month. The reports validates much of what we’ve heard from our largest customers – the world’s largest distribution, manufacturing and high-tech customers who sell billions of dollars of goods and services to other businesses through their Web channels. More importantly, the report confirms something very important: B2B and B2C Commerce are alike… but a little different. B2B and B2C Commerce are alike… Clearly, B2C experiences have set expectations for B2B. Every B2B buyer is a consumer at home and brings the same expectations to a website selling electronic components, aftermarket parts, or MRO products. Forrester calls these rich consumer-based capabilities that help B2B customers do their jobs “table stakes”: search & navigation, promotions, cross-channel commerce and mobile: “Whether they are just beginning to sell online or are in the late stages of launching a next-generation site, B2B eCommerce operations today must: offer a customer experience standard comparable to what leading b2c sites now offer; address the growing influence that mobile devices are having in the workplace; make a qualitative and quantitative business case that drives sustained investment.” Just five years ago, many of our B2B customers’ online business comprised only 5-10% of their total revenue. Today, when we speak to those same brands, we hear about double and triple digit growth in their online channels. Many have seen the percentage of the business they perform in their web channels cross the 30-50% threshold. You can hear first-hand from several Oracle Commerce B2B customers about the success they are seeing, and what they’re trying to accomplish (Carolina Biological, Premier Farnell, DeliXL, Elsevier). This momentum is likely the reason Forrester broke out the separate B2B Commerce Wave from the B2C Wave. In fact, B2B is becoming the larger force in commerce, expected to collect twice the online dollars of B2C this year ($559 billion). But a little different… Despite the similarities, there is a key and very important difference between B2C and B2B. Unlike a consumer shopping for shoes, a business shopper buying from a distributor or manufacturer is coming to the Web channel as a part of their job. So in addition to a rich, consumer-like experience this shopper expects, these B2B buyers need quoting tools and complex pricing capabilities, like eProcurement, bulk order entry, and other self-service tools such as account, contract and organization management.  Forrester also is emphasizing three additional “back-end” tools and capabilities their clients say they need to drive growth in their B2B online channels: i) product information management (PIM), which provides a single system of record for large part lists and product catalogs; ii) web content management (WCM), needed to manage large volumes of unstructured marketing information, and iii) order management systems (OMS), which manage and orchestrate the complex B2B order life cycle from quote through approval, submission to manufacturing, distribution and delivery.  We would like to expand on each of these 3 areas: As Forrester highlights, back-end PIM is definitely needed by B2B Commerce providers. 
Most B2B companies have made significant investments in enterprise-grade PIMs, given the importance of product data management for aggregation and syndication of content, product attribution, analytics, and handling of complex workflows. While in principle it may sound appealing to have a PIM as part of a commerce offering (especially for SMBs who have to do more with less), our customers have typically found that PIM in a commerce platform is largely redundant with what they already have in place, and is not fully featured or robust enough to handle the complexity of the product data sets that B2B distributors and manufacturers usually handle. To meet the PIM needs for commerce, Oracle offers enterprise PIM (Product Hub/Fusion PIM) and a robust enterprise data quality product (EDQP) integrated with the Oracle Commerce solution. These are key differentiators of our offering, and these capabilities are becoming even more tightly integrated with Oracle Commerce over time. For commerce, what customers really need is a robust product catalog and content management system for enabling business users to further enrich and ready catalog and content data to be presented and sold online. This has been a significant area of investment in the Oracle Commerce platform, which continues to get stronger. We see this combination of capabilities as best meeting the needs of our customers for a commerce platform without adding a largely redundant, less functional PIM in the commerce front-end. On the topic of web content management, we were pleased to see Forrester recognize Oracle's unique functional capabilities in this area and the "unique opportunity in the market to lead the convergence of commerce and content management with the amalgamation of Oracle Commerce with WebCenter Sites (formerly FatWire)." Strong content management capabilities are critical for distributors and manufacturers who are frequently serving an engineering audience coming to their websites to conduct product research in search of technical data sheets, drawings, videos and more. The convergence of content, commerce, and experience is critical for B2B brands selling online. Regarding order management, Forrester notes that many businesses use their existing back-end enterprise resource planning (ERP) systems to manage order life cycles. We hear the same from most of our B2B customers, as they already have an ERP system—if not several of them—and are not interested in yet another one. So what do we take away from the Wave results? Forrester notes that the Oracle Commerce Platform "has always had strong B2B commerce capabilities and Oracle has an exhaustive list of B2B customers using the solution." What makes us excited about developing leading B2B solutions are the close relationships with our customers and the clear opportunity in the market – which we'll address in an exciting new release in the coming months. Oracle has one of the world's largest B2B customer bases, providing leading solutions across key business-to-business functions – from marketing, sales automation, and service to master data management and ERP.
To learn more about Oracle’s Commerce product vision and strategy, visit our website and check out these other B2B Commerce Resources: - 2013 B2B Commerce Trends Report - B2B Commerce Whitepaper: Consumerization, Complexity, Change - B2B Commerce Webcast: What Industry Trend Setters Do Right - Internet Retailer, Web Drives Sales for B2B Companies - Internet Retailer, The Web Means Business: B2B Companies Beef Up Their Websites, borrowing from b2c retailers and breaking new ground - Internet Retailer, B2B e-Commerce is poised for growth ----------THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT 

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure. SQL Azure is nothing but a SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. To see an introduction on SQL Azure, check out the post by Pinal here. In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add some features that are not available with SQL Azure currently, or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud. An Azure worker role can perform background processes, and to handle processes such as synchronization and backup, it becomes our ideal tool. First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial: scaling out access over multiple databases enables us to handle workload efficiently; and, since a SQL Azure database can currently be hosted in any one of six datacenters, synchronizing databases located in different data centers extends the data by enabling access to geographically distributed data. Let us also see some scenarios in which SQL Server to SQL Azure database synchronization is beneficial: backing up a SQL Azure database on local infrastructure; handling increased workloads in the cloud rather than investing in local infrastructure; and extending data to different datacenters located across the world to enable efficient data access from remote locations. Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. Now, in this article, I aim to provide a bird's eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. Now, if you newly add a worker role to the cloud-based project, this is what the code will look like. (Note: I have added comments to the skeleton code to point out the modifications that will be required in the code to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf ) Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned and the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata to enable synchronization. This is a one-time step. The code for this is put in the Setup() function, which is called once when the worker role starts. The second step is continuous (or on-demand) synchronization of SQL Azure databases by propagating changes between databases. This is done on a continuous basis by calling the Sync() function in the while loop. The code logic to synchronize changes between SQL Azure databases should be put in the Sync() function. Discussing the coding part step by step is out of the scope of this article.
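Still, to give a rough idea of the overall shape, here is a minimal, hypothetical sketch of such a worker role in C#: Setup() runs once when the role starts, and Sync() runs repeatedly inside the run loop. The class name, the empty method bodies and the five-minute sleep interval are placeholders of my own, not the code from the linked snippet:

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // One-time step: provision the SQL Azure databases for synchronization
        // (tracking tables, triggers, stored procedures, metadata tables).
        Setup();

        while (true)
        {
            // Continuous step: propagate changes between the databases.
            Sync();

            // Wait before the next synchronization pass.
            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }

    private void Setup()
    {
        // Sync Framework provisioning logic goes here.
    }

    private void Sync()
    {
        // Sync Framework change-propagation logic goes here.
    }
}

The actual provisioning and change-propagation logic is the part that needs real Sync Framework code, which is beyond the scope of this sketch.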
Therefore, let me suggest you a resource, which is given here. Also, note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here). Further, you will reference some libraries before you start coding. Details regarding the same are available in the article that I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here. Currently, a tool named Data Sync, which is built on top of the Sync Framework, is available as a CTP that allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization (without writing a single line of code); however, in some cases, the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in the SQL Azure Data Sync CTP2; if you wish to have such functionality now, then you have the option of developing custom code using the Sync Framework. Now, this code can be easily extended to synchronize on a schedule. Let us say we want the databases to get synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) Don't you think that by writing such code, we are imitating the functionality provided by the SQL Server agent for a SQL Server? Think about it. We are scheduling our administrative task by writing custom code – in other words, we have developed a "lightweight SQL Server agent for SQL Azure!" Since the SQL Server agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! Now if you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, and we will need to store the state of the job ourselves; the choice that you have is SQL Azure or Azure tables. Note that this solution requires custom code and also it is not UI driven; however, for now, it can act as a temporary solution until the SQL Server agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that a SQL Server agent provides, but it does open up an interesting avenue to schedule some tasks, such as backup and synchronization of SQL Azure databases, by writing some custom code in the Azure worker role. Now, let us see one more possibility – i.e., running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload them locally, then consider the data transfer cost. If you upload them to blobs residing in the same datacenter, then no transfer cost applies, but the cost of blob storage applies. So, before choosing the option, you need to evaluate your preferences keeping the cost associated with each option in mind. In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases. Moreover, a lightweight SQL Server agent for SQL Azure can be developed. Also, we discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles.
So at the end of the day, you need to ask – am I willing to build custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or you can mail me at Paras[at]student-partners[dot]com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Developer’s Life – Disaster Lessons – Notes from the Field #039

    - by Pinal Dave
    [Note from Pinal]: This is a 39th episode of Notes from the Field series. What is the best solution do you have when you encounter a disaster in your organization. Now many of you would answer that in this scenario you would have another standby machine or alternative which you will plug in. Now let me ask second question – What would you do if you as an individual faces disaster?  In this episode of the Notes from the Field series database expert Mike Walsh explains a very crucial issue we face in our career, which is not technical but more to relate to human nature. Read on this may be the best blog post you might read in recent times. Howdy! When it was my turn to share the Notes from the Field last time, I took a departure from my normal technical content to talk about Attitude and Communication.(http://blog.sqlauthority.com/2014/05/08/developers-life-attitude-and-communication-they-can-cause-problems-notes-from-the-field-027/) Pinal said it was a popular topic so I hope he won’t mind if I stick with Professional Development for another of my turns at sharing some information here. Like I said last time, the “soft skills” of the IT world are often just as important – sometimes more important – than the technical skills. As a consultant with Linchpin People – I see so many situations where the professional skills I’ve gained and use are more valuable to clients than knowing the best way to tune a query. Today I want to continue talking about professional development and tell you about the way I almost got myself hit by a train – and why that matters in our day jobs. Sometimes we can learn a lot from disasters. Whether we caused them or someone else did. If you are interested in learning about some of my observations in these lessons you can see more where I talk about lessons from disasters on my blog. For now, though, onto how I almost got my vehicle hit by a train… The Train Crash That Almost Was…. My family and I own a little schoolhouse building about a 10 mile drive away from our house. We use it as a free resource for families in the area that homeschool their children – so they can have some class space. I go up there a lot to check in on the property, to take care of the trash and to do work on the property. On the way there, there is a very small Stop Sign controlled railroad intersection. There is only two small freight trains a day passing there. Actually the same train, making a journey south and then back North. That’s it. This road is a small rural road, barely ever a second car driving in the neighborhood there when I am. The stop sign is pretty much there only for the train crossing. When we first bought the building, I was up there a lot doing renovations on the property. Being familiar with the area, I am also familiar with the train schedule and know the tracks are normally free of trains. So I developed a bad habit. You see, I’d approach the stop sign and slow down as I roll through it. Sometimes I’d do a quick look and come to an “almost” stop there but keep on going. I let my impatience and complacency take over. And that is because most of the time I was going there long after the train was done for the day or in between the runs. This habit became pretty well established after a couple years of driving the route. The behavior reinforced a bit by the success ratio. I saw others doing it as well from the neighborhood when I would happen to be there around the time another car was there. Well. You already know where this ends up by the title and backstory here. 
A few months ago I came to that little crossing, and I started to do the normal routine. I’d pretty much stopped looking in some respects because of the pattern I’d gotten into.  For some reason I looked and heard and saw the train slowly approaching and slammed on my brakes and stopped. It was an abrupt stop, and it was close. I probably would have made it okay, but I sat there thinking about lessons for IT professionals from the situation once I started breathing again and watched the cars loaded with sand and propane slowly labored down the tracks… Here are Those Lessons… It’s easy to get stuck into a routine – That isn’t always bad. Except when it’s a bad routine. Momentum and inertia are powerful. Once you have a habit and a routine developed – it’s really hard to break that. Make sure you are setting the right routines and habits TODAY. What almost dangerous things are you doing today? How are you almost messing up your production environment today? Stop doing that. Be Deliberate – (Even when you are the only one) – Like I said – a lot of people roll through that stop sign. Perhaps the neighbors or other drivers think “why is he fully stopping and looking… The train only comes two times a day!” – they can think that all they want. Through deliberate actions and forcing myself to pay attention, I will avoid that oops again. Slow down. Take a deep breath. Be Deliberate in your job. Pay attention to the small stuff and go out of your way to be careful. It will save you later. Be Observant – Keep your eyes open. By looking around, observing the situation and understanding what your servers, databases, users and vendors are doing – you’ll notice when something is out of place. But if you don’t know what is normal, if you don’t look to make sure nothing has changed – that train will come and get you. Where can you be more observant? What warning signs are you ignoring in your environment today? In the IT world – trains are everywhere. Projects move fast. Decisions happen fast. Problems turn from a warning sign to a disaster quickly. If you get stuck in a complacent pattern of “Everything is okay, it always has been and always will be” – that’s the time that you will most likely get stuck in a bad situation. Don’t let yourself get complacent, don’t let your team get complacent. That will lead to being proactive. And a proactive environment spends less money on consultants for troubleshooting problems you should have seen ahead of time. You can spend your money and IT budget on improving for your customers. If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • High CPU usage with Team Speak 3.0.0-rc2

    - by AlexTheBird
    The CPU usage is always around 40 percent. I use push-to-talk and I had uninstalled pulseaudio. Now I use Alsa. I don't even have to connect to a Server. By simply starting TS the cpu usage goes up 40 percent and stays there. The CPU usage of 3.0.0-rc1 [Build: 14468] is constantly 14 percent. This is the output of top, mpstat and ps aux while I am running TS3 ... of course: alexandros@alexandros-laptop:~$ top top - 18:20:07 up 2:22, 3 users, load average: 1.02, 0.85, 0.77 Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie Cpu(s): 5.3%us, 1.9%sy, 0.1%ni, 91.8%id, 0.7%wa, 0.1%hi, 0.1%si, 0.0%st Mem: 2061344k total, 964028k used, 1097316k free, 69116k buffers Swap: 3997688k total, 0k used, 3997688k free, 449032k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2714 alexandr 20 0 206m 31m 24m S 37 1.6 0:12.78 ts3client_linux 868 root 20 0 47564 27m 10m S 8 1.4 3:21.73 Xorg 1 root 20 0 2804 1660 1204 S 0 0.1 0:00.53 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.01 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.45 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1 7 root 20 0 0 0 0 S 0 0.0 0:00.08 ksoftirqd/1 8 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1 9 root 20 0 0 0 0 S 0 0.0 0:01.17 events/0 10 root 20 0 0 0 0 S 0 0.0 0:00.81 events/1 11 root 20 0 0 0 0 S 0 0.0 0:00.00 cpuset 12 root 20 0 0 0 0 S 0 0.0 0:00.00 khelper 13 root 20 0 0 0 0 S 0 0.0 0:00.00 async/mgr 14 root 20 0 0 0 0 S 0 0.0 0:00.00 pm 16 root 20 0 0 0 0 S 0 0.0 0:00.00 sync_supers 17 root 20 0 0 0 0 S 0 0.0 0:00.00 bdi-default 18 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/0 19 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/1 20 root 20 0 0 0 0 S 0 0.0 0:00.05 kblockd/0 21 root 20 0 0 0 0 S 0 0.0 0:00.02 kblockd/1 22 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpid 23 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_notify 24 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_hotplug 25 root 20 0 0 0 0 S 0 0.0 0:00.99 ata/0 26 root 20 0 0 0 0 S 0 0.0 0:00.92 ata/1 27 root 20 0 0 0 0 S 0 0.0 0:00.00 ata_aux 28 root 20 0 0 0 0 S 0 0.0 0:00.00 ksuspend_usbd 29 root 20 0 0 0 0 S 0 0.0 0:00.00 khubd alexandros@alexandros-laptop:~$ mpstat Linux 2.6.32-32-generic (alexandros-laptop) 16.06.2011 _i686_ (2 CPU) 18:20:15 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle 18:20:15 all 5,36 0,09 1,91 0,68 0,07 0,06 0,00 0,00 91,83 alexandros@alexandros-laptop:~$ ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 2804 1660 ? Ss 15:58 0:00 /sbin/init root 2 0.0 0.0 0 0 ? S 15:58 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S 15:58 0:00 [migration/0] root 4 0.0 0.0 0 0 ? S 15:58 0:00 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S 15:58 0:00 [watchdog/0] root 6 0.0 0.0 0 0 ? S 15:58 0:00 [migration/1] root 7 0.0 0.0 0 0 ? S 15:58 0:00 [ksoftirqd/1] root 8 0.0 0.0 0 0 ? S 15:58 0:00 [watchdog/1] root 9 0.0 0.0 0 0 ? S 15:58 0:01 [events/0] root 10 0.0 0.0 0 0 ? S 15:58 0:00 [events/1] root 11 0.0 0.0 0 0 ? S 15:58 0:00 [cpuset] root 12 0.0 0.0 0 0 ? S 15:58 0:00 [khelper] root 13 0.0 0.0 0 0 ? S 15:58 0:00 [async/mgr] root 14 0.0 0.0 0 0 ? S 15:58 0:00 [pm] root 16 0.0 0.0 0 0 ? S 15:58 0:00 [sync_supers] root 17 0.0 0.0 0 0 ? S 15:58 0:00 [bdi-default] root 18 0.0 0.0 0 0 ? S 15:58 0:00 [kintegrityd/0] root 19 0.0 0.0 0 0 ? S 15:58 0:00 [kintegrityd/1] root 20 0.0 0.0 0 0 ? S 15:58 0:00 [kblockd/0] root 21 0.0 0.0 0 0 ? S 15:58 0:00 [kblockd/1] root 22 0.0 0.0 0 0 ? S 15:58 0:00 [kacpid] root 23 0.0 0.0 0 0 ? 
S 15:58 0:00 [kacpi_notify] root 24 0.0 0.0 0 0 ? S 15:58 0:00 [kacpi_hotplug] root 25 0.0 0.0 0 0 ? S 15:58 0:00 [ata/0] root 26 0.0 0.0 0 0 ? S 15:58 0:00 [ata/1] root 27 0.0 0.0 0 0 ? S 15:58 0:00 [ata_aux] root 28 0.0 0.0 0 0 ? S 15:58 0:00 [ksuspend_usbd] root 29 0.0 0.0 0 0 ? S 15:58 0:00 [khubd] root 30 0.0 0.0 0 0 ? S 15:58 0:00 [kseriod] root 31 0.0 0.0 0 0 ? S 15:58 0:00 [kmmcd] root 34 0.0 0.0 0 0 ? S 15:58 0:00 [khungtaskd] root 35 0.0 0.0 0 0 ? S 15:58 0:00 [kswapd0] root 36 0.0 0.0 0 0 ? SN 15:58 0:00 [ksmd] root 37 0.0 0.0 0 0 ? S 15:58 0:00 [aio/0] root 38 0.0 0.0 0 0 ? S 15:58 0:00 [aio/1] root 39 0.0 0.0 0 0 ? S 15:58 0:00 [ecryptfs-kthrea] root 40 0.0 0.0 0 0 ? S 15:58 0:00 [crypto/0] root 41 0.0 0.0 0 0 ? S 15:58 0:00 [crypto/1] root 48 0.0 0.0 0 0 ? S 15:58 0:03 [scsi_eh_0] root 50 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_1] root 53 0.0 0.0 0 0 ? S 15:58 0:00 [kstriped] root 54 0.0 0.0 0 0 ? S 15:58 0:00 [kmpathd/0] root 55 0.0 0.0 0 0 ? S 15:58 0:00 [kmpathd/1] root 56 0.0 0.0 0 0 ? S 15:58 0:00 [kmpath_handlerd] root 57 0.0 0.0 0 0 ? S 15:58 0:00 [ksnapd] root 58 0.0 0.0 0 0 ? S 15:58 0:03 [kondemand/0] root 59 0.0 0.0 0 0 ? S 15:58 0:02 [kondemand/1] root 60 0.0 0.0 0 0 ? S 15:58 0:00 [kconservative/0] root 61 0.0 0.0 0 0 ? S 15:58 0:00 [kconservative/1] root 213 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_2] root 222 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_3] root 234 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_4] root 235 0.0 0.0 0 0 ? S 15:58 0:01 [usb-storage] root 255 0.0 0.0 0 0 ? S 15:58 0:00 [jbd2/sda5-8] root 256 0.0 0.0 0 0 ? S 15:58 0:00 [ext4-dio-unwrit] root 257 0.0 0.0 0 0 ? S 15:58 0:00 [ext4-dio-unwrit] root 290 0.0 0.0 0 0 ? S 15:58 0:00 [flush-8:0] root 318 0.0 0.0 2316 888 ? S 15:58 0:00 upstart-udev-bridge --daemon root 321 0.0 0.0 2616 1024 ? S<s 15:58 0:00 udevd --daemon root 526 0.0 0.0 0 0 ? S 15:58 0:00 [kpsmoused] root 528 0.0 0.0 0 0 ? S 15:58 0:00 [led_workqueue] root 650 0.0 0.0 0 0 ? S 15:58 0:00 [radeon/0] root 651 0.0 0.0 0 0 ? S 15:58 0:00 [radeon/1] root 652 0.0 0.0 0 0 ? S 15:58 0:00 [ttm_swap] root 654 0.0 0.0 2612 984 ? S< 15:58 0:00 udevd --daemon root 656 0.0 0.0 0 0 ? S 15:58 0:00 [hd-audio0] root 657 0.0 0.0 2612 916 ? S< 15:58 0:00 udevd --daemon root 674 0.6 0.0 0 0 ? S 15:58 0:57 [phy0] syslog 715 0.0 0.0 34812 1776 ? Sl 15:58 0:00 rsyslogd -c4 102 731 0.0 0.0 3236 1512 ? Ss 15:58 0:02 dbus-daemon --system --fork root 740 0.0 0.1 19088 3380 ? Ssl 15:58 0:00 gdm-binary root 744 0.0 0.1 18900 4032 ? Ssl 15:58 0:01 NetworkManager avahi 749 0.0 0.0 2928 1520 ? S 15:58 0:00 avahi-daemon: running [alexandros-laptop.local] avahi 752 0.0 0.0 2928 544 ? Ss 15:58 0:00 avahi-daemon: chroot helper root 753 0.0 0.1 4172 2300 ? S 15:58 0:00 /usr/sbin/modem-manager root 762 0.0 0.1 20584 3152 ? Sl 15:58 0:00 /usr/sbin/console-kit-daemon --no-daemon root 836 0.0 0.1 20856 3864 ? Sl 15:58 0:00 /usr/lib/gdm/gdm-simple-slave --display-id /org/gnome/DisplayManager/Display1 root 856 0.0 0.1 4836 2388 ? S 15:58 0:00 /sbin/wpa_supplicant -u -s root 868 2.3 1.3 36932 27924 tty7 Rs+ 15:58 3:22 /usr/bin/X :0 -nr -verbose -auth /var/run/gdm/auth-for-gdm-a46T4j/database -nolisten root 891 0.0 0.0 1792 564 tty4 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty4 root 901 0.0 0.0 1792 564 tty5 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty5 root 908 0.0 0.0 1792 564 tty2 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty2 root 910 0.0 0.0 1792 568 tty3 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty3 root 913 0.0 0.0 1792 564 tty6 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty6 root 917 0.0 0.0 2180 1072 ? 
Ss 15:58 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket daemon 924 0.0 0.0 2248 432 ? Ss 15:58 0:00 atd root 927 0.0 0.0 2376 900 ? Ss 15:58 0:00 cron root 950 0.0 0.0 11736 1372 ? Ss 15:58 0:00 /usr/sbin/winbindd root 958 0.0 0.0 11736 1184 ? S 15:58 0:00 /usr/sbin/winbindd root 974 0.0 0.1 6832 2580 ? Ss 15:58 0:00 /usr/sbin/cupsd -C /etc/cups/cupsd.conf root 1078 0.0 0.0 1792 564 tty1 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty1 gdm 1097 0.0 0.0 3392 772 ? S 15:58 0:00 /usr/bin/dbus-launch --exit-with-session root 1112 0.0 0.1 19216 3292 ? Sl 15:58 0:00 /usr/lib/gdm/gdm-session-worker root 1116 0.0 0.1 5540 2932 ? S 15:58 0:01 /usr/lib/upower/upowerd root 1131 0.0 0.1 6308 3824 ? S 15:58 0:00 /usr/lib/policykit-1/polkitd 108 1163 0.0 0.2 16788 4360 ? Ssl 15:58 0:01 /usr/sbin/hald root 1164 0.0 0.0 3536 1300 ? S 15:58 0:00 hald-runner root 1188 0.0 0.0 3612 1256 ? S 15:58 0:00 hald-addon-input: Listening on /dev/input/event6 /dev/input/event5 /dev/input/event2 root 1194 0.0 0.0 3612 1224 ? S 15:58 0:00 /usr/lib/hal/hald-addon-rfkill-killswitch root 1200 0.0 0.0 3608 1240 ? S 15:58 0:00 /usr/lib/hal/hald-addon-generic-backlight root 1202 0.0 0.0 3616 1236 ? S 15:58 0:02 hald-addon-storage: polling /dev/sr0 (every 2 sec) root 1204 0.0 0.0 3616 1236 ? S 15:58 0:00 hald-addon-storage: polling /dev/sdb (every 2 sec) root 1211 0.0 0.0 3624 1220 ? S 15:58 0:00 /usr/lib/hal/hald-addon-cpufreq 108 1212 0.0 0.0 3420 1200 ? S 15:58 0:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket 1000 1222 0.0 0.1 24196 2816 ? Sl 15:58 0:00 /usr/bin/gnome-keyring-daemon --daemonize --login 1000 1240 0.0 0.3 28228 7312 ? Ssl 15:58 0:00 gnome-session 1000 1274 0.0 0.0 3284 356 ? Ss 15:58 0:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session gnome-session 1000 1277 0.0 0.0 3392 772 ? S 15:58 0:00 /usr/bin/dbus-launch --exit-with-session gnome-session 1000 1278 0.0 0.0 3160 1652 ? Ss 15:58 0:00 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session 1000 1281 0.0 0.2 8172 4636 ? S 15:58 0:00 /usr/lib/libgconf2-4/gconfd-2 1000 1287 0.0 0.5 24228 10896 ? Ss 15:58 0:03 /usr/lib/gnome-settings-daemon/gnome-settings-daemon 1000 1290 0.0 0.1 6468 2364 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd 1000 1293 0.0 0.6 38104 13004 ? S 15:58 0:03 metacity 1000 1296 0.0 0.1 30280 2628 ? Ssl 15:58 0:00 /usr/lib/gvfs//gvfs-fuse-daemon /home/alexandros/.gvfs 1000 1301 0.0 0.0 3344 988 ? S 15:58 0:03 syndaemon -i 0.5 -k 1000 1303 0.0 0.1 8060 3488 ? S 15:58 0:00 /usr/lib/gvfs/gvfs-gdu-volume-monitor root 1306 0.0 0.1 15692 3104 ? Sl 15:58 0:00 /usr/lib/udisks/udisks-daemon 1000 1307 0.4 1.0 50748 21684 ? S 15:58 0:34 python -u /usr/share/screenlets/DigiClock/DigiClockScreenlet.py 1000 1308 0.0 0.9 35608 18564 ? S 15:58 0:00 python /usr/share/screenlets-manager/screenlets-daemon.py 1000 1309 0.0 0.3 19524 6468 ? S 15:58 0:00 /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 1000 1311 0.0 0.5 37412 11788 ? S 15:58 0:01 gnome-power-manager 1000 1312 0.0 1.0 50772 22628 ? S 15:58 0:03 gnome-panel 1000 1313 0.1 1.5 102648 31184 ? Sl 15:58 0:10 nautilus root 1314 0.0 0.0 5188 996 ? S 15:58 0:02 udisks-daemon: polling /dev/sdb /dev/sr0 1000 1315 0.0 0.6 51948 12464 ? SL 15:58 0:01 nm-applet --sm-disable 1000 1317 0.0 0.1 16956 2364 ? Sl 15:58 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor 1000 1318 0.0 0.3 20164 7792 ? S 15:58 0:00 bluetooth-applet 1000 1321 0.0 0.1 7260 2384 ? S 15:58 0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor 1000 1323 0.0 0.5 37436 12124 ? 
S 15:58 0:00 /usr/lib/notify-osd/notify-osd 1000 1324 0.0 1.9 197928 40456 ? Ssl 15:58 0:06 /home/alexandros/.dropbox-dist/dropbox 1000 1329 0.0 0.3 20136 7968 ? S 15:58 0:00 /usr/bin/gnome-screensaver --no-daemon 1000 1331 0.0 0.1 7056 3112 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-trash --spawner :1.6 /org/gtk/gvfs/exec_spaw/0 root 1340 0.0 0.0 2236 1008 ? S 15:58 0:00 /sbin/dhclient -d -sf /usr/lib/NetworkManager/nm-dhcp-client.action -pf /var/run/dhcl 1000 1348 0.0 0.1 42252 3680 ? Ssl 15:58 0:00 /usr/lib/bonobo-activation/bonobo-activation-server --ac-activate --ior-output-fd=19 1000 1384 0.0 1.7 80244 35480 ? Sl 15:58 0:02 /usr/bin/python /usr/lib/deskbar-applet/deskbar-applet/deskbar-applet --oaf-activate- 1000 1388 0.0 0.5 26196 11804 ? S 15:58 0:01 /usr/lib/gnome-panel/wnck-applet --oaf-activate-iid=OAFIID:GNOME_Wncklet_Factory --oa 1000 1393 0.1 0.5 25876 11548 ? S 15:58 0:08 /usr/lib/gnome-applets/multiload-applet-2 --oaf-activate-iid=OAFIID:GNOME_MultiLoadAp 1000 1394 0.0 0.5 25600 11140 ? S 15:58 0:03 /usr/lib/gnome-applets/cpufreq-applet --oaf-activate-iid=OAFIID:GNOME_CPUFreqApplet_F 1000 1415 0.0 0.5 39192 11156 ? S 15:58 0:01 /usr/lib/gnome-power-manager/gnome-inhibit-applet --oaf-activate-iid=OAFIID:GNOME_Inh 1000 1417 0.0 0.7 53544 15488 ? Sl 15:58 0:00 /usr/lib/gnome-applets/mixer_applet2 --oaf-activate-iid=OAFIID:GNOME_MixerApplet_Fact 1000 1419 0.0 0.4 23816 9068 ? S 15:58 0:00 /usr/lib/gnome-panel/notification-area-applet --oaf-activate-iid=OAFIID:GNOME_Notific 1000 1488 0.0 0.3 20964 7548 ? S 15:58 0:00 /usr/lib/gnome-disk-utility/gdu-notification-daemon 1000 1490 0.0 0.1 6608 2484 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-burn --spawner :1.6 /org/gtk/gvfs/exec_spaw/1 1000 1510 0.0 0.1 6348 2084 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-metadata 1000 1531 0.0 0.3 19472 6616 ? S 15:58 0:00 /usr/lib/gnome-user-share/gnome-user-share 1000 1535 0.0 0.4 77128 8392 ? Sl 15:58 0:00 /usr/lib/evolution/evolution-data-server-2.28 --oaf-activate-iid=OAFIID:GNOME_Evoluti 1000 1601 0.0 0.5 69576 11800 ? Sl 15:59 0:00 /usr/lib/evolution/2.28/evolution-alarm-notify 1000 1604 0.0 0.7 33924 15888 ? S 15:59 0:00 python /usr/share/system-config-printer/applet.py 1000 1701 0.0 0.5 37116 11968 ? S 15:59 0:00 update-notifier 1000 1892 4.5 7.0 406720 145312 ? Sl 17:11 3:09 /opt/google/chrome/chrome 1000 1896 0.0 0.1 69812 3680 ? S 17:11 0:02 /opt/google/chrome/chrome 1000 1898 0.0 0.6 91420 14080 ? S 17:11 0:00 /opt/google/chrome/chrome --type=zygote 1000 1916 0.2 1.3 140780 27220 ? Sl 17:11 0:12 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1918 0.7 1.8 155720 37912 ? Sl 17:11 0:31 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1921 0.0 1.0 135904 21052 ? Sl 17:11 0:02 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1927 6.5 3.6 194604 74960 ? Sl 17:11 4:32 /opt/google/chrome/chrome --type=renderer --disable-client-side-phishing-detection -- 1000 2156 0.4 0.7 48344 14896 ? Rl 18:03 0:04 gnome-terminal 1000 2157 0.0 0.0 1988 712 ? S 18:03 0:00 gnome-pty-helper 1000 2158 0.0 0.1 6504 3860 pts/0 Ss 18:03 0:00 bash 1000 2564 0.2 0.1 6624 3984 pts/1 Ss+ 18:17 0:00 bash 1000 2711 0.0 0.0 4208 1352 ? S 18:19 0:00 /bin/bash /home/alexandros/Programme/TeamSpeak3-Client-linux_x86_back/ts3client_runsc 1000 2714 36.5 1.5 210872 31960 ? 
SLl 18:19 0:18 ./ts3client_linux_x86 1000 2743 0.0 0.0 2716 1068 pts/0 R+ 18:20 0:00 ps aux Output of vmstat: alexandros@alexandros-laptop:~$ vmstat procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 1093324 69840 449496 0 0 27 10 476 667 6 2 91 1 Output of lsusb alexandros@alexandros-laptop:~$ lspci 00:00.0 Host bridge: Silicon Integrated Systems [SiS] 671MX 00:01.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge 00:02.0 ISA bridge: Silicon Integrated Systems [SiS] SiS968 [MuTIOL Media IO] (rev 01) 00:02.5 IDE interface: Silicon Integrated Systems [SiS] 5513 [IDE] (rev 01) 00:03.0 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) 00:03.1 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) 00:03.3 USB Controller: Silicon Integrated Systems [SiS] USB 2.0 Controller 00:05.0 IDE interface: Silicon Integrated Systems [SiS] SATA Controller / IDE mode (rev 03) 00:06.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge 00:07.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge 00:0d.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) 00:0f.0 Audio device: Silicon Integrated Systems [SiS] Azalia Audio Controller 01:00.0 VGA compatible controller: ATI Technologies Inc Mobility Radeon X2300 02:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev 01) The Team Speak log file : 2011-06-19 19:04:04.223522|INFO | | | Logging started, clientlib version: 3.0.0-rc2 [Build: 14642] 2011-06-19 19:04:04.761149|ERROR |SoundBckndIntf| | /home/alexandros/Programme/TeamSpeak3-Client-linux_x86_back/soundbackends/libpulseaudio_linux_x86.so error: NOT_CONNECTED 2011-06-19 19:04:05.871770|INFO |ClientUI | | Failed to init text to speech engine 2011-06-19 19:04:05.894623|INFO |ClientUI | | TeamSpeak 3 client version: 3.0.0-rc2 [Build: 14642] 2011-06-19 19:04:05.895421|INFO |ClientUI | | Qt version: 4.7.2 2011-06-19 19:04:05.895571|INFO |ClientUI | | Using configuration location: /home/alexandros/.ts3client/ts3clientui_qt.conf 2011-06-19 19:04:06.559596|INFO |ClientUI | | Last update check was: Sa. Jun 18 00:08:43 2011 2011-06-19 19:04:06.560506|INFO | | | Checking for updates... 2011-06-19 19:04:07.357869|INFO | | | Update check, my version: 14642, latest version: 14642 2011-06-19 19:05:52.978481|INFO |PreProSpeex | 1| Speex version: 1.2rc1 2011-06-19 19:05:54.055347|INFO |UIHelpers | | setClientVolumeModifier: 10 -8 2011-06-19 19:05:54.057196|INFO |UIHelpers | | setClientVolumeModifier: 11 2 Thanks for taking the time to read my message. UPDATE: Thanks to nickguletskii's link I googled for "alsa cpu usage" (without quotes) and it brought me to a forum. A user wrote that by directly selecting the hardware with "plughw:x.x" won't impact the performance of the system. I have selected it in the TS 3 configuration and it worked. But this solution is not optimal because now no other program can access the sound output. If you need any further information or my question is unclear than please tell me.
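One idea I still want to try (an untested sketch; the card and device numbers are guesses for my hardware) is to point TS3 at a dmix-based ALSA device defined in ~/.asoundrc, so that other programs can share the sound card again:

pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1024          # any key unique on this machine
    slave {
        pcm "hw:0,0"      # first card, first device - adjust to your hardware
    }
}

ctl.!default {
    type hw
    card 0
}

If someone knows whether this plays nicely with the TS3 ALSA backend, please let me know.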

    Read the article

  • CodePlex Daily Summary for Monday, May 03, 2010

    CodePlex Daily Summary for Monday, May 03, 2010New Projects.radiko: エアログラス採用のシンプルなradiko(http://radiko.jp/)クライアントです。タスクトレイのアイコンからラジオ局の切り替えができます。7Scale: EmptyB2C MVC Plattform: The B2C MVC Plattform aims to be pluggable site framework to help small busisness accomplish basic tasks between business and customers.ElValWeb: The goal of the project to create full featured implementation of ModelValidatorProvider for Enterprise Library Application Validation Block, wich ...esatis yazilimi: asp.net yazılımı ile satış magazasi websitesi kur.IEnumerable.It sample code: IEnumerable.It sample codejQuery MicroAjax for ASP.NET: MicroAjax is a set of jQuery plugins and .NET components designed to provide simple, powerful and efficient Ajax centric web application design pat...Karbon VOS: Karbon VOS is an advanced Virtual Operating System Template for Visual Basic Express. It's developed in Visual Basic. Karbon VOS hopes to one day b...LINQ Mapper: LINQ Mapper translates simple LINQ queries between different sources. It allows you to write queries against your domain model, but have them run ...Meccano Silverlight Framework: Meccano is a new generation of frameworks for creation of LOB Silverlight applications based on MEF, RX, WCF, ADO.NET Data Services etc. It is inte...Multiuse Model View (MMV) Library: This project is an open source library for the Multiuse Model View (MMV) pattern for building robust WPF and ASP.Net applications. Visit my blog ht...Process Affinity Control: Process Affinity Control allows to set the affinity masks of processes based on rules.SilverSpatial: This project helps bridge the gap between Silverlight and Geo-Spatial data type (such as SQL Spatial). It implements the Well-Known-Binary (WKB) fo...StageAssets: Application for storing data about "things" and people in theatre. For example equipment, actors and so on.Stratosphere: Mono compatible library with set of primitives to work with scalable table, queue and block containers with corresponding implementations for Amazo...TRX Web-Viewer: A simple web-based application to upload and view VSTS 2008 and VSTS 2010 test result files with some basic lookup and feature-wise management of r...WDT2: WDT 2 is the school project to begin learning .NET enviroment, The main focus is on learning the use of almost all the componenets.WPF Behavior Library: WPF Behavior Library is a set of additional actions for WPF that allow you to add extra behaviors to a control quickly and easily. Currently the on...YouTubeEmbeddedVideo WebControl for ASP.NET: A Control to embed YouTube videos in ASP.NET pages. Works in C# and VB.NETNew Releases.radiko: beta: 東京局のみ対応 あとは手抜きActiveWorlds Managed .NET SDK: AwManaged Technology Preview - WIN32 (Alpha): This WIN32 release contains the Server Console Application. The Setup executable should be run as administrator on O.S. using UAC (Vista/Win7)AJAX Control Framework: v1.0.1.0: v1.0.1.0 - Contains a Bing Maps sample project, a number of bug fixes and a few performance improvements. - AJAX enable ANY custom control that der...App_Code (and Usercontrol) Editor (ACE): v1.0.0 alpha: The first alpha release of the AppCode Editor for Umbraco 4.0.3 is now available to download! 
Tested to work with usercontrols - pre-compilation wi...ElValWeb: ElValWeb 0.0.1.0: Version 0.0.1.0 contains client validation support forAndCompositeValidator ContainsCharactersValidator DomainValidator NotNullValidator Or...esatis yazilimi: magaza: magazanın yazılımları ve veri tabanının yazılımlarıGrunty OS: Grunty OS USB: Download Grunty OS for USBGrunty OS: Grunty OS.ISO: Grunty OS ISOKarbon VOS: Milestone 1 (Kaptua): Milestone 1...Live Meeting API Wrapper: LiveMeetingAPIWrapperV1.2: Added get meeting and update meeting.Multiuse Model View (MMV) Library: v0.3: first alpha release. Medium amount of functionality and some use cases tested.MVC Foolproof Validation: Beta 0.9.3774: Adds resource provided error messages, regular expression operators and a new RegularExpressionIf attribute.Process Affinity Control: Version 1.0.0: This is the first release. Planned features for the next release: No administrative privileges needed to run the manager Select the active scena...SharePoint 2010 Service Manager: SharePoint 2010 Service Manager 1.1: Added support to run under UAC with automatic security elevationSharePoint Event Handler Manager: Event Handler Manager 2.0: Please download the application here: http://www.ackermantech.com/registerevents.aspxSkyDrive Synchronizer: SkyDrive Sync Beta 0.1: Beta release includes: Upload and download Synchronize updated files Delete files on web/locally if not in source Split larger files into sma...Stratosphere: Stratosphere 1.0.0.0: Initial beta releaseSuggested Resources for .NET Developers: 0.8.0.0 VS2010 - focus on displaying content: This is the first release of Suggested Resources that can be downloaded from the internet. While there is still a lot of work to be done this rele...TRX Web-Viewer: TRX Web-Viewer V1.0: First working versionVCC: Latest build, v2.1.30502.0: Automatic drop of latest buildWatchersNET.TagCloud: WatchersNET.TagCloud 01.04.00: !Whats New New Tag Mode: Search Referrers (Shows Search Tags From Google, Ask, Bing, Yahoo and the Dnn Site Search) Taxonomy Tags now contains L...Web/Cloud Applications Development Framework | Visual WebGui: 6.4 Beta 2e: Fully featured beta version of Visual WebGui Web/Cloud Applicaiton Development FrameworkWPF Behavior Library: WPF Behavior Library 0.1 Release: First alpha release of the WPF Behavior Library. It should be stable but doesn't have all of the features it will have in the future and the API ma...xvanneste: Sharepoint Social Network Client: Client permettant d'avoir accés au social network de sharepoint a l'exterieur du navigateur.Most Popular ProjectsRawrWBFS ManagerAJAX Control Toolkitpatterns & practices – Enterprise LibraryMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)iTuner - The iTunes CompanionASP.NETDotNetNuke® Community EditionMost Active ProjectsIonics Isapi Rewrite Filterpatterns & practices – Enterprise LibraryRawrHydroServer - CUAHSI Hydrologic Information System ServerAJAX Control Frameworkpatterns & practices: Azure Security GuidanceTinyProjectBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog ModuleDambach Linear Algebra Framework

    Read the article

  • CodePlex Daily Summary for Saturday, May 15, 2010

    CodePlex Daily Summary for Saturday, May 15, 2010New ProjectsBizTalk EDI Guidance: BizTalk EDI Guidance is intended to simplify the delivery of EDI solutions by leveraging the ESB Toolkit. This project is currently Alpha and sh...Continues Integration Sample: I'm providing a series of blog post to show a complete CI process using CruiseControl.Net and msbuild. The source code for this series is hosted here.DioM2D: My Dragons in our Midst RPG. Runs on my custom Starlight Engine.Ethical Hacking ASP.NET: Security tools and guidelines for white-hat hacking and protecting ASP.NET web applications.Farseer Engine with XNATouch: Farseer is great engine for game physics. This implementation uses XNATouch framework.Feature Builder Guidance Extensions: Feature Builder Guidance Extensions are Feature Extensions which extend the guidance for the Feature Building experience. Each FBGX will be suppli...Microsoft Office Document Security: MODS is a plugin for office 2007 thats includes Hash Encryption, Hex Convertion and more. Plugins: MODS For Word still working on (MODS for Excel ...Minimize Engine (XNA): The Minimize Engine is a basic 3D Games Engine created using XNA, with its primary focus around Grid Based games.MSForge TownCrier: This project is meant to build a notification and calling system for MSForge.net User Groups.NatureProtector: Silverlight 4 project.OutSync: OutSync is a free Windows desktop application that syncs photos of your Facebook friends with matching contacts in Microsoft Outlook. It allows you...Quick Save Images, Clipboard save to file, Quick save, bmp, png, jpeg, Image: ClipSa is a very small tool for very quick picture saving. You put some picture into the clipboard (PrintScrn/Alt-PrintScrn/Ctrl-C), ClipSa saves ...ResHelper Manager: Resource strings management tool that creates localization files for any type of localization target (asp.net, wpf and so on...)SecureCookieHttpModule: Secure your session cookie (and other session-based) cookies for replay attacks using this easy to use ASP.NET HttpModule.simpleChMS: A Church Management System (ChMS) designed for churches or ministries like youth groups that want to facilitate better care or theie membership. Fo...sMAPtool: -SPDomainObject: mapping strong type objects to sp listsSQL Trim: This project aims at developing a universal trim function for Microsoft SQL Server. It trims: 1) pre spaces 2) post spaces 3) double spaces 3) subs...TurretGunner: mt-experienceNew ReleasesBeanProxy: BeanProxy 3.0: BeanProxy is a C# (.NET 3.5) library housing classes that facilitates unit testing. Any non-static, public interface/class or abstract class can be...Blueset Studio Opensource Projects: 蓝色之风记事本 0.2 Alpha: 一个超级Bug版本……CSharp Intellisense: V2.1: - Bug fix (Pascal Casing)DioM2D: DioM2D0.01: http://www.dragonsinourmidst.com/forums/showthread.php?p=690058#post690058Ethical Hacking ASP.NET: Version 1.0.0.1: This is the initial release of the project. Read more about the available tests and features on the Documentation tab. You need the full .NET Frame...Event Scavenger: Collector service update - version 3.2.4: Added check if the database connection string is set up in the config file.Feature Builder Guidance Extensions: FBGX-Binaries: This release consists of a zip file containing all the VSIXs resulting from building each of the FBGX packages found here as source. 
This will mak...Floe IRC Client: Floe IRC Client 2010-05 R2: - Detaching windows (right click on the tabs to detach them) - Highlight lines with your nick or other patterns - Fixed several bugs - Tabs can now...Free language translator and file converter: Free Language Translator 1.96: Fixed some minor bugs and improved the UI a bit. If you can not install the msi file you might be missing some prerequisites. You can try running t...Geocache Downloader: release 1.0: This is the first release.kp.net: Alpha release is avalable: The goal of this alpha release is to try the code in some production scenarios and find out what features should be tuned.Live-Exchange Calendar Sync: Live-Exchange Calendar Sync: Live-Exchange Calendar Sync Beta May 14, 2010 release of Live-Exchange Calendar Sync 1.0 BETA. (Version 45334) Getting StartedInfo about installat...MAPILab Explorer for SharePoint: MAPILab Explorer for SharePoint ver 2.1.1: 1) Small bug fixed that appears on first start (when earliers versions wasn't installed). How to install:Download ZIP file and extract it on Sha...Microsoft Office Document Security: MODS 4 WORD (SOURCE INCLUDED): Includes Source CodeMoonyDesk (windows desktop widgets): MoonyDesk Alpha: MoonyDesk Alpha (some memory improvements)OnTopReplica: Release 2.9.3: Some bugfixes and improvements. Czech translation added (thanks René Mihula).OutSync: OutSync v1.0.100.0: OutSync v1.0.100.0 is the final release by Mel before the move to CodePlex. I have tested it on Windows 7 32bit and 64bit with Office 2007 and it ...Quick Save Images, Clipboard save to file, Quick save, bmp, png, jpeg, Image: Clipsa v 0.1: Download and extract to any place 2 files - clipSa.exe and clipSa.exe.config Run clipSa.exe. That's all.ResHelper Manager: ResHelperManager: List of changes applied to this version of ResHelper is included in main download zip package. Example sourcesIn Source Code tab are sources of De...Rx Contrib: V1.3: - Bug Fix - BufferWithTimeOrCount with flexible time period setting when ever the time period elapsed...SharePoint DVK Integration: SharePoint 2007 DVK integration v1.0.3: Fixes Fixed default field bindings. I rebound too many fields on every page load. Fixed extension replacing on creating target url (threw it out)...ShoutcastStast for DotNetNuke: DNN_ShoutcastStats alpha 05.00.495: First Alpha release of ShoutcastStats Module for DotNetNuke This first alpha version of the ShoutcastStats Module for DotNetNuke is still in devel...SilverPart 2.1: SilverPart 2.1: SilverPart 2.1 This interim release fixes some major bugs related to Firefox and anonymous access. - Fix for Issue ID 4005 - SilverPart does not w...sMAPtool: sMAPedit v0.7c (Base Release with Maps): Fixed: force a gargabe collection update to prevent pictureBox's memory leak Added: essential map pack with all basic maps in jpg format Added:...SQL Trim: Trim: Initial releaseSSIS Multiple Hash: Multiple Hash V1.2.1: This is version 1.2.1 of the Multiple Hash SSIS Component. It supports SQL 2005 and SQL 2008, although you have to download the correct install pa...StreamInsight Yahoo Finance input adapter example: StockTicker_v1_0_RTM: Updated for StreamInsight RTM.Update Controls .NET: 2.1.0.0: Automatic dependency management for WPF and Silverlight data binding. 
This release combines both the WPF and Silverlight assemblies into one insta...VCC: Latest build, v2.1.30514.0: Automatic drop of latest buildMost Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryMirror Testing SystemRawrPHPExcelBlogEngine.NETMicrosoft Biology FoundationCustomer Portal Accelerator for Microsoft Dynamics CRMWindows Azure Command-line Tools for PHP DevelopersShake - C# MakeStyleCop

    Read the article

  • SQL Monitor’s data repository

    - by Chris Lambrou
    As one of the developers of SQL Monitor, I often get requests passed on by our support people from customers who are looking to dip into SQL Monitor’s own data repository, in order to pull out bits of information that they’re interested in. Since there’s clearly interest out there in playing around directly with the data repository, I thought I’d write some blog posts to start to describe how it all works. The hardest part for me is knowing where to begin, since the schema of the data repository is pretty big. Hmmm… I guess it’s tricky for anyone to write anything but the most trivial of queries against the data repository without understanding the hierarchy of monitored objects, so perhaps my first post should start there. I always imagine that whenever a customer fires up SSMS and starts to explore their SQL Monitor data repository database, they become immediately bewildered by the schema – that was certainly my experience when I did so for the first time. The following query shows the number of different object types in the data repository schema:

SELECT type_desc, COUNT(*) AS [count]
FROM sys.objects
GROUP BY type_desc
ORDER BY type_desc;

type_desc                          count
DEFAULT_CONSTRAINT                    63
FOREIGN_KEY_CONSTRAINT               181
INTERNAL_TABLE                         3
PRIMARY_KEY_CONSTRAINT               190
SERVICE_QUEUE                          3
SQL_INLINE_TABLE_VALUED_FUNCTION     381
SQL_SCALAR_FUNCTION                    2
SQL_STORED_PROCEDURE                 100
SYSTEM_TABLE                          41
UNIQUE_CONSTRAINT                     54
USER_TABLE                           193
VIEW                                 124

With 193 tables, 124 views, 100 stored procedures and 381 table valued functions, that’s quite a hefty schema, and when you browse through it using SSMS, it can be a bit daunting at first. So, where to begin? Well, let’s narrow things down a bit and only look at the tables belonging to the data schema. That’s where all of the collected monitoring data is stored by SQL Monitor. The following query gives us the names of those tables:

SELECT sch.name + '.' + obj.name AS [name]
FROM sys.objects obj
JOIN sys.schemas sch ON sch.schema_id = obj.schema_id
WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data'
ORDER BY sch.name, obj.name;

This query still returns 110 tables. I won’t show them all here, but let’s have a look at the first few of them:

data.Cluster_Keys
data.Cluster_Machine_ClockSkew_UnstableSamples
data.Cluster_Machine_Cluster_StableSamples
data.Cluster_Machine_Keys
data.Cluster_Machine_LogicalDisk_Capacity_StableSamples
data.Cluster_Machine_LogicalDisk_Keys
data.Cluster_Machine_LogicalDisk_Sightings
data.Cluster_Machine_LogicalDisk_UnstableSamples
data.Cluster_Machine_LogicalDisk_Volume_StableSamples
data.Cluster_Machine_Memory_Capacity_StableSamples
data.Cluster_Machine_Memory_UnstableSamples
data.Cluster_Machine_Network_Capacity_StableSamples
data.Cluster_Machine_Network_Keys
data.Cluster_Machine_Network_Sightings
data.Cluster_Machine_Network_UnstableSamples
data.Cluster_Machine_OperatingSystem_StableSamples
data.Cluster_Machine_Ping_UnstableSamples
data.Cluster_Machine_Process_Instances
data.Cluster_Machine_Process_Keys
data.Cluster_Machine_Process_Owner_Instances
data.Cluster_Machine_Process_Sightings
data.Cluster_Machine_Process_UnstableSamples
…

There are two things I want to draw your attention to: The table names describe a hierarchy of the different types of object that are monitored by SQL Monitor (e.g. clusters, machines and disks). For each object type in the hierarchy, there are multiple tables, ending in the suffixes _Keys, _Sightings, _StableSamples and _UnstableSamples.
Not every object type has a table for every suffix, but the _Keys suffix is especially important and a _Keys table does indeed exist for every object type. In fact, if we limit the query to return only those tables ending in _Keys, we reveal the full object hierarchy:

SELECT sch.name + '.' + obj.name AS [name]
FROM sys.objects obj
JOIN sys.schemas sch ON sch.schema_id = obj.schema_id
WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data' AND obj.name LIKE '%_Keys'
ORDER BY sch.name, obj.name;

data.Cluster_Keys
data.Cluster_Machine_Keys
data.Cluster_Machine_LogicalDisk_Keys
data.Cluster_Machine_Network_Keys
data.Cluster_Machine_Process_Keys
data.Cluster_Machine_Services_Keys
data.Cluster_ResourceGroup_Keys
data.Cluster_ResourceGroup_Resource_Keys
data.Cluster_SqlServer_Agent_Job_History_Keys
data.Cluster_SqlServer_Agent_Job_Keys
data.Cluster_SqlServer_Database_BackupType_Backup_Keys
data.Cluster_SqlServer_Database_BackupType_Keys
data.Cluster_SqlServer_Database_CustomMetric_Keys
data.Cluster_SqlServer_Database_File_Keys
data.Cluster_SqlServer_Database_Keys
data.Cluster_SqlServer_Database_Table_Index_Keys
data.Cluster_SqlServer_Database_Table_Keys
data.Cluster_SqlServer_Error_Keys
data.Cluster_SqlServer_Keys
data.Cluster_SqlServer_Services_Keys
data.Cluster_SqlServer_SqlProcess_Keys
data.Cluster_SqlServer_TopQueries_Keys
data.Cluster_SqlServer_Trace_Keys
data.Group_Keys

The full object type hierarchy looks like this:

Cluster
    Machine
        LogicalDisk
        Network
        Process
        Services
    ResourceGroup
        Resource
    SqlServer
        Agent
            Job
                History
        Database
            BackupType
                Backup
            CustomMetric
            File
            Table
                Index
        Error
        Services
        SqlProcess
        TopQueries
        Trace
Group

Okay, but what about the individual objects themselves represented at each level in this hierarchy? Well that’s what the _Keys tables are for. This is probably best illustrated by way of a simple example – how can I query my own data repository to find the databases on my own PC for which monitoring data has been collected? Like this:

SELECT clstr._Name AS cluster_name, srvr._Name AS instance_name, db._Name AS database_name
FROM data.Cluster_SqlServer_Database_Keys db
JOIN data.Cluster_SqlServer_Keys srvr ON db.ParentId = srvr.Id -- Note here how the parent of a Database is a Server
JOIN data.Cluster_Keys clstr ON srvr.ParentId = clstr.Id -- Note here how the parent of a Server is a Cluster
WHERE clstr._Name = 'dev-chrisl2' -- This is the hostname of my own PC
ORDER BY clstr._Name, srvr._Name, db._Name;

cluster_name   instance_name   database_name
dev-chrisl2                    SqlMonitorData
dev-chrisl2                    master
dev-chrisl2                    model
dev-chrisl2                    msdb
dev-chrisl2                    mssqlsystemresource
dev-chrisl2                    tempdb
dev-chrisl2    sql2005         SqlMonitorData
dev-chrisl2    sql2005         TestDatabase
dev-chrisl2    sql2005         master
dev-chrisl2    sql2005         model
dev-chrisl2    sql2005         msdb
dev-chrisl2    sql2005         mssqlsystemresource
dev-chrisl2    sql2005         tempdb
dev-chrisl2    sql2008         SqlMonitorData
dev-chrisl2    sql2008         master
dev-chrisl2    sql2008         model
dev-chrisl2    sql2008         msdb
dev-chrisl2    sql2008         mssqlsystemresource
dev-chrisl2    sql2008         tempdb

These results show that I have three SQL Server instances on my machine (a default instance, one named sql2005 and one named sql2008), and each instance has the usual set of system databases, along with a database named SqlMonitorData. Basically, this is where I test SQL Monitor on different versions of SQL Server, when I’m developing. There are a few important things we can learn from this query: Each _Keys table has a column named Id. This is the primary key.
Each _Keys table has a column named ParentId. A foreign key relationship is defined between each _Keys table and its parent _Keys table in the hierarchy. There are two exceptions to this, Cluster_Keys and Group_Keys, because clusters and groups live at the root level of the object hierarchy. Each _Keys table has a column named _Name. This is used to uniquely identify objects in the table within the scope of the same shared parent object. Actually, that last item isn’t always true. In some cases, the _Name column is actually called something else. For example, the data.Cluster_Machine_Services_Keys table has a column named _ServiceName instead of _Name (sorry for the inconsistency). In other cases, a name isn’t sufficient to uniquely identify an object. For example, right now my PC has multiple processes running, all sharing the same name, Chrome (one for each tab open in my web-browser). In such cases, multiple columns are used to uniquely identify an object within the scope of the same shared parent object. Well, that’s it for now. I’ve given you enough information for you to explore the _Keys tables to see how objects are stored in your own data repositories. In a future post, I’ll try to explain how monitoring data is stored for each object, using the _StableSamples and _UnstableSamples tables. If you have any questions about this post, or suggestions for future posts, just submit them in the comments section below.
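
As a small companion to the last query above, here is a rough C# sketch (my own illustration, not part of the original post) of running the same databases-per-cluster query through ADO.NET. The connection string and the 'dev-chrisl2' host name are placeholders to swap for your own data repository and monitored machine.

using System;
using System.Data.SqlClient;

class DataRepositoryExample
{
    static void Main()
    {
        // Placeholder connection string - point this at your own SQL Monitor data repository.
        const string connectionString = "Server=.;Database=SqlMonitorData;Integrated Security=true";

        // Same join pattern as in the article: Database -> SqlServer -> Cluster via ParentId.
        const string sql = @"
            SELECT clstr._Name AS cluster_name, srvr._Name AS instance_name, db._Name AS database_name
            FROM data.Cluster_SqlServer_Database_Keys db
            JOIN data.Cluster_SqlServer_Keys srvr ON db.ParentId = srvr.Id
            JOIN data.Cluster_Keys clstr ON srvr.ParentId = clstr.Id
            WHERE clstr._Name = @clusterName
            ORDER BY clstr._Name, srvr._Name, db._Name;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            // Placeholder host name - use the name of a machine you actually monitor.
            command.Parameters.AddWithValue("@clusterName", "dev-chrisl2");

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // The default instance may have an empty or NULL instance name.
                    string instance = reader.IsDBNull(1) ? "(default)" : reader.GetString(1);
                    Console.WriteLine("{0} \\ {1} : {2}", reader.GetString(0), instance, reader.GetString(2));
                }
            }
        }
    }
}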

    Read the article

  • Silverlight for Windows Embedded Tutorial (step 5 and a bit of Windows Phone 7)

    - by Valter Minute
    If you haven’t spent the last week in the middle of the Sahara desert or traveling on a sled in the north pole area you should have heard something about the launch of Windows Phone 7 Series (or Windows Phone Series 7, or Windows Series Phone 7 or something like that). Even if you are in the middle of the desert or somewhere around the north pole you may have been reached by the news, since it seems that WP7S (using the full name will kill my available bandwidth!) is generating a lot of buzz in the development and IT communities. One of the most important aspects of this new platform is that it will be programmed using a new set of tools and frameworks, completely different from the ones used on older releases of Windows Mobile (or SmartPhone, or PocketPC or whatever…). WP7S applications can be developed using Silverlight or XNA. If you want to learn something more about WP7S development you can download the preview of Charles Petzold’s book about it: http://www.charlespetzold.com/phone/index.html Charles Petzold is also the author of “Programming Windows”, the first book I ever read about programming on Windows (it was Windows 3.0 at that time!). The fact that even I was able to learn how to develop Windows application is a proof of the quality of Petzold’s work. This book is up to his standards and the 150pages preview is already rich in technical contents without being boring or complicated to understand. I may be able to become a Windows Phone developer thanks to mr. Petzold. Mr. Petzold uses some nice samples to introduce the basic concepts of Silverlight development on WP7S. On this new platform you’ll use managed code to develop your application, so those samples can’t be ported on Windows CE R3 as they are, but I would like to take one of the first samples (called “SilverlightTapHello1”) and adapt it to Silverlight for Windows Embedded to show that even plain old native code can be used to develop “cool” user interfaces! The sample shows the standard WP7S title header and a textbox with an hello world message inside it. When the user touches the textbox, it will change its color. When the user touches the background (Grid) behind it, its default color (plain old White) will be restored. Let’s see how we can implement the same features on our embedded device! I took the XAML code of the sample (you can download the book samples here: http://download.microsoft.com/download/1/D/B/1DB49641-3956-41F1-BAFA-A021673C709E/CodeSamples_DRAFTPreview_ProgrammingWindowsPhone7Series.zip) and changed it a little bit to remove references to WP7S or managed runtime. If you compare the resulting files you will see that I was able to keep all the resources inside the App.xaml files and the structure of  MainPage.XAML almost intact. 
This is the Silverlight for Windows Embedded version of MainPage.XAML: <UserControl x:Class="SilverlightTapHello1.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:phoneNavigation="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Navigation" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignWidth="480" d:DesignHeight="800" FontFamily="{StaticResource PhoneFontFamilyNormal}" FontSize="{StaticResource PhoneFontSizeNormal}" Foreground="{StaticResource PhoneForegroundBrush}" Width="640" Height="480">   <Grid x:Name="LayoutRoot" Background="{StaticResource PhoneBackgroundBrush}"> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="*"/> </Grid.RowDefinitions>   <!--TitleGrid is the name of the application and page title--> <Grid x:Name="TitleGrid" Grid.Row="0"> <TextBlock Text="SILVERLIGHT TAP HELLO #1" x:Name="textBlockPageTitle" Style="{StaticResource PhoneTextPageTitle1Style}"/> <TextBlock Text="main page" x:Name="textBlockListTitle" Style="{StaticResource PhoneTextPageTitle2Style}"/> </Grid>   <!--ContentGrid is empty. Place new content here--> <Grid x:Name="ContentGrid" Grid.Row="1" MouseLeftButtonDown="ContentGrid_MouseButtonDown" Background="{StaticResource PhoneBackgroundBrush}"> <TextBlock x:Name="TextBlock" Text="Hello, Silverlight for Windows Embedded!" HorizontalAlignment="Center" VerticalAlignment="Center" /> </Grid> </Grid> </UserControl> If you compare it to the WP7S sample (not reported here to avoid any copyright issue) you’ll notice that I had to replace the original phoneNavigation:PhoneApplicationPage with UserControl as the root node. This make sense because there is not support for phone applications on CE 6. I also had to specify width and height of my main page (on the WP7S device this will be adjusted by the OS) and I had to replace the multi-touch event handler with the MouseLeftButtonDown event (no multitouch support for Windows CE R3, still). I also changed the hello message, of course. I used XAML2CPP to generate the boring part of our application and then added the initialization code to WinMain: int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) { if (!XamlRuntimeInitialize()) return -1;   HRESULT retcode;   IXRApplicationPtr app; if (FAILED(retcode=GetXRApplicationInstance(&app))) return -1; XRXamlSource dictsrc;   dictsrc.SetResource(hInstance,TEXT("XAML"),IDR_XAML_App);   if (FAILED(retcode=app->LoadResourceDictionary(&dictsrc,NULL))) return -1;   MainPage page;   if (FAILED(page.Init(hInstance,app))) return -1;   UINT exitcode;   if (FAILED(page.GetVisualHost()->StartDialog(&exitcode))) return -1;   return exitcode; }   You may have noticed that there is something different from the previous samples. I added the code to load a resource dictionary. Resources are an important feature of XAML that allows you to define some values that could be replaced inside any XAML file loaded by the runtime. You can use resources to define custom styles for your fonts, backgrounds, controls etc. and to support internationalization, by providing different strings for different languages. The rest of our WinMain isn’t that different. It creates an instances of our MainPage object and displays it. 
The MainPage class implements an event handler for the MouseLeftButtonDown event of the ContentGrid: class MainPage : public TMainPage<MainPage> { public:   HRESULT ContentGrid_MouseButtonDown(IXRDependencyObject* source,XRMouseButtonEventArgs* args) { HRESULT retcode; IXRSolidColorBrushPtr brush; IXRApplicationPtr app;   if (FAILED(retcode=GetXRApplicationInstance(&app))) return retcode;   if (FAILED(retcode=app->CreateObject(IID_IXRSolidColorBrush,&brush))) return retcode;   COLORREF color=RGBA(0xff,0xff,0xff,0xff);   if (args->pOriginalSource==TextBlock) color=RGBA(rand()&0xFF,rand()&0xFF,rand()&0xFF,0xFF);   if (FAILED(retcode=brush->SetColor(color))) return retcode;   if (FAILED(retcode=TextBlock->SetForeground(brush))) return retcode; return S_OK; } }; As you can see this event is generated when a used clicks inside the grid or inside one of the objects it contains. Since our TextBlock is inside the grid, we don’t need to provide an event handler for its MouseLeftButtonDown event. We can just use the pOriginalSource member of the event arguments to check if the event was generated inside the textblock. If the event was generated inside the grid we create a white brush,if it’s inside the textblock we create some randomly colored brush. Notice that we need to use the RGBA macro to create colors, specifying also a transparency value for them. If we use the RGB macro the resulting color will have its Alpha channel set to zero and will be transparent. Using the SetForeground method we can change the color of our control. You can compare this to the managed code that you can find at page 40-41 of Petzold’s preview book and you’ll see that the native version isn’t much more complex than the managed one. As usual you can download the full code of the sample here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/SilverlightTapHello1.zip And remember to pre-order Charles Petzold’s “Programming Windows Phone 7 series”, I bet it will be a best-seller! Technorati Tags: Silverlight for Windows Embedded,Windows CE

    Read the article

  • B2B and B2C alike… but a little different – Oracle Commerce named Leader in Forrester B2B Commerce Wave

    - by Katrina Gosek
    We weren’t surprised to see Oracle Commerce positioned as a Leader in Forrester Research, Inc.’s first Commerce Wave focused on B2B, “The Forrester Wave™: B2B Commerce Suites, Q4 2013,” released earlier this month. We believe that the report validates much of what we’ve heard from our largest customers – the world’s largest distribution, manufacturing and high-tech customers who sell billions of dollars of goods and services to other businesses through their Web channels. More importantly, we feel that the report confirms something very important: B2B and B2C Commerce are alike… but a little different. B2B and B2C Commerce are alike… Clearly, B2C experiences have set expectations for B2B. Every B2B buyer is a consumer at home and brings the same expectations to a website selling electronic components, aftermarket parts, or MRO products. Forrester calls these rich consumer-based capabilities that help B2B customers do their jobs “table stakes”: front-office content, community, and commerce features that meet customer expectations for 24x7x365 ordering, real-time customer service, and expedited shipping — both online and on mobile devices: “Whether they are just beginning to sell online or are in the late stages of launching a next-generation site, B2B eCommerce operations today must: offer a customer experience standard comparable to what leading b2c sites now offer; address the growing influence that mobile devices are having in the workplace; make a qualitative and quantitative business case that drives sustained investment.” Just five years ago, many of our B2B customers’ online business comprised only 5-10% of their total revenue. Today, when we speak to those same brands, we hear about double and triple digit growth in their online channels. Many have seen the percentage of the business they perform in their web channels cross the 30-50% threshold. You can hear first-hand from several Oracle Commerce B2B customers about the success they are seeing, and what they’re trying to accomplish (Carolina Biological, Premier Farnell, DeliXL, Elsevier). It seems that this market momentum is likely the reason Forrester broke out the separate B2B Commerce Wave from the B2C Wave. In fact, B2B is becoming the larger force in commerce, expected to collect twice the online dollars of B2C this year ($559 billion). But a little different… Despite the similarities, there is a key and very important difference between B2C and B2B. Unlike a consumer shopping for shoes, a business shopper buying from a distributor or manufacturer is coming to the Web channel as a part of their job. So in addition to a rich, consumer-like experience this shopper expects, these B2B buyers need quoting tools and complex pricing capabilities, like eProcurement, bulk order entry, and other self-service tools such as account, contract and organization management. Forrester also is emphasizing three additional “back-end” tools and capabilities their clients say they need to drive growth in their B2B online channels: i) product information management (PIM), which provides a single system of record for large part lists and product catalogs; ii) web content management (WCM), needed to manage large volumes of unstructured marketing information, and iii) order management systems (OMS), which manage and orchestrate the complex B2B order life cycle from quote through approval, submission to manufacturing, distribution and delivery. 
We would like to expand on each of these 3 areas: As Forrester suggests, back-end PIM is definitely needed by B2B Commerce providers. Most B2B companies have made significant investments in enterprise-grade PIMs, given the importance of product data management for aggregation and syndication of content, product attribution, analytics, and handling of complex workflows. While in principle it may sound appealing to have a PIM as part of a commerce offering (especially for SMBs who have to do more with less), our customers have typically found that PIM in a commerce platform is largely redundant with what they already have in-place, and is not fully-featured or robust enough to handle the complexity of the product data sets that B2B distributors and manufacturers usually handle. To meet the PIM needs for commerce, Oracle offers enterprise PIM (Product Hub/Fusion PIM) and a robust enterprise data quality product (EDQP) integrated with the Oracle Commerce solution. These are key differentiators of our offering and these capabilities are becoming even more tightly integrated with Oracle Commerce over time. For Commerce, what customers really need is a robust product catalog and content management system for enabling business users to further enrich and ready catalog and content data to be presented and sold online.  This has been a significant area of investment in the Oracle Commerce platform , which continue to get stronger. We see this combination of capabilities as best meeting the needs of our customers for a commerce platform without adding a largely redundant, less functional PIM in the commerce front-end.  On the topic of web content management, we were pleased to see Forrester cite Oracle’s differentiated digital experience capability in this area and the “unique opportunity in the market to lead the convergence of commerce and content management with the amalgamation of Oracle Commerce with WebCenter Sites (formally FatWire).” Strong content management capabilities are critical for distributors and manufacturers who are frequently serving an engineering audience coming to their websites to conduct product research in search of technical data sheets, drawings, videos and more. The convergence of content, commerce, and experience is critical for B2B brands selling online. Regarding order management, Forrester notes that many businesses use their existing back-end enterprise resource planning (ERP) systems to manage order life cycles.  We hear the same from most of our B2B customers, as they already have an ERP system—if not several of them—and are not interested in yet another one. So what do we take away from the Wave results? Forrester notes that the Oracle Commerce Platform “has always had strong B2B commerce capabilities and Oracle certainly has an exhaustive list of B2B customers using the solution.”  What makes us excited about developing leading B2B solutions are the close relationships with our customers and the clear opportunity in the market – which we'll address in an exciting new release planned for the next 12 months. Oracle has one of the world’s largest B2B customer bases, providing leading solutions across key business-to-business functions – from marketing, sales automation, and service to master data management, and ERP. 
To learn more about Oracle’s Commerce product vision and strategy, visit our website and check out these other B2B Commerce Resources: -       2013 B2B Commerce Trends Report -       B2B Commerce Whitepaper: Consumerization, Complexity, Change -       B2B Commerce Webcast: What Industry Trend Setters Do Right -       Internet Retailer, Web Drives Sales for B2B Companies -       Internet Retailer Article, The Web Means Business: B2B Companies Beef Up Their Websites,        borrowing from b2c retailers and breaking new ground -       Internet Retailer Article, B2B e-Commerce is poised for growth

    Read the article

  • Career development as a Software Developer without becoming a manager.

    - by albertpascual
    I’m a developer, I like to write new exciting code everyday, my perfect day at work is a day that when I wake up, I know that I have to write some code that I haven’t done before or to use a new framework/language/platform that is unknown to me. The best days in the office is when a project is waiting for me to architect or write. In my 15 years in the development field, I had to in order to get a better salary to manage people, not just to lead developers, to actually manage people. Something that I found out when I get into a management position is that I’m not that good at managing people, and not afraid to say it. I do not enjoy that part of the job, the worse one, takes time away from what I really like. Leading developers and managing people are very different things. I do like teaching and leading developers in a project. Yet most people believe, and is true in most companies, the way to get a better salary is to be promoted to a manager position. In order to advance in your career you need to let go of the everyday writing code and become a supervisor or manager. This is the path for developers after they become senior developers. As you get older and your family grows, the only way to hit your salary requirements is to advance your career to become a manager and get that manager salary. That path is the common in most companies, the most intelligent companies out there, have learned that promoting good developers mean getting a crappy manager and losing a good resource. Now scratch everything I said, because as I previously stated, I don’t see myself going to the office everyday and just managing people until is time to go home. I like to spend hours working in some code to accomplish a task, learning new platforms and languages or patterns to existing languages. Being interrupted every 15 minutes by emails or people stopping by my office to resolve their problems, is not something I could enjoy. All the sudden riding my motorcycle to work one cold morning over the Redlands Canyon and listening to .NET Rocks podcast, Michael “Doc” Norton explaining how to take control of your development career without necessary going to the manager’s track. I know, I should not have headphones under my helmet when riding a motorcycle in California. His conversation with Carl Franklin and Richard Campbell was just confirming everything I have ever did with actually more details and assuring that there are other paths. His method was simple yet most of us, already do many of those steps, Mr. Michael “Doc” Norton believe that it pays off on the long run, that finally companies prefer to pay higher salaries to those developers, yet I would actually think that many companies do not see developers that way, this is not true for bigger companies. However I do believe the value of those developers increase and most of the time, changing companies could increase their salary instead of staying in the same one. In short without even trying to get into the shadow of Mr. Norton and without following the steps in the order; you should love to learn new technologies, and then teach them to other geeks. I personally have learn many technologies and I haven’t stop doing that, I am a professor at UCR where I teach ASP.NET and Silverlight. Mr Norton continues that after than, you want to be involve in the development community, user groups, online forums, open source projects. 
I personally talk to user groups, I’m very active in forums asking and answering questions as well as for those I got awarded the Microsoft MVP for ASP.NET. After you accomplish all those, you should also expose yourself for what you know and what you do not know, learning a new language will make you humble again as well as extremely happy. There is no better feeling that learning a new language or pattern in your daily job. If you love your job everyday and what you do, I really recommend you to follow Michael’s presentation that he kindly share it on the link below. His confirmation is a refreshing, knowing that my future is not behind a desk where the computer screen is on my right hand side instead of in front of me. Where I don’t have to spent the days filling up performance forms for people and the new platforms that I haven’t been using yet are just at my fingertips. Presentation here. http://www.slideshare.net/LeanDog/take-control-of-your-development-career-michael-doc-norton?from=share_email_logout3 Take Control of Your Development Career Welcome! Michael “Doc” Norton @DocOnDev http://docondev.blogspot.com/ [email protected] Recovering Post Technical I love to learn I love to teach I love to work in teams I love to write code I really love to write code What about YOU? Do you love your job? Do you love your Employer? Do you love your Boss? What do you love? What do you really love? Take Control Take Control • Get Noticed • Get Together • Get Your Mojo • Get Naked • Get Schooled Get Noticed Get Noticed Know Your Business Get Noticed Get Noticed Understand Management Get Noticed Get Noticed Do Your Existing Job Get Noticed Get Noticed Make Yourself Expendable Get Together Get Together Join a User Group Get Together Help Run a User Group Get Together Start a User Group Get Your Mojo Get Your Mojo Kata Get Your Mojo Koans Get Your Mojo Breakable Toys Get Your Mojo Open Source Get Naked Get Naked Run with Group A Get Naked Do Something Different Get Naked Own Your Mistakes Get Naked Admit You Don’t Know Get Schooled Get Schooled Choose a Mentor Get Schooled Attend Conferences Get Schooled Teach a New Subject Get Started Read These (Again) Take Control of Your Development Career Thank You! Michael “Doc” Norton @DocOnDev http://docondev.blogspot.com/ [email protected] In a short summary, I recommend any developer to check his blog and more important his presentation, I haven’t been lucky enough to watch him live, I’m looking forward the day I have the opportunity. He is giving us hope in the future of developers, when I see some of my geek friends moving to position that in short years they begin to regret, I get more unsure of my future doing what I love. I would say that now is looking at the spectrum of companies that understand and appreciate developers. There are a few there, hopefully with time code sweat shops will start disappearing and being a developer will feed a family of 4. Cheers Al tweetmeme_url = 'http://weblogs.asp.net/albertpascual/archive/2010/12/07/career-development-as-a-software-developer-without-becoming-a-manager.aspx'; tweetmeme_source = 'alpascual';

    Read the article

  • UPK Pre-Built Content Update

    - by Karen Rihs
    UPK pre-built content development efforts are always underway and growing. Over the last few months, the following new, upgraded, and revised modules became available:
NEW CONTENT RELEASES E-Business Suite 12.1 Install Base Process Manufacturing, Process Quality Fundamentals for EBS Fusion 11g Release 1 Receivables Assets Purchasing Distributed Order Orchestration Payables Functional Setup Manager Project Portfolio Management Self Service Procurement JDE E1 9.0 Accounts Payable 9.0 with 9.1 Tools Fundamentals 9.0 with 9.1 Tools General Ledger 9.0 with 9.1 Tools Accounts Receivable 9.0 with 9.1 Tools Procurement and Subcontract Management 9.0 with 9.1 Tools Oracle Utilities Customer Care and Billing 2.3.1 Administrative Setup User Tasks Primavera Primavera Contract Management 14 Primavera P6 Enterprise Project Portfolio Management 8.2
UPK CONTENT UPGRADES Agile CNM 1.2 Customer Needs Management E-Business Suite 12.1 Project Foundation JDE E1 9.1 Fixed Assets Accounting General Ledger Fundamentals Inventory Management Sales Order Management PeopleSoft 9.1 Reporting Tools for PeopleTools 8.5.2
UPK CONTENT REVISIONS Oracle Utilities for Meter Data Management 2.0.1 Administrative Setup User Tasks VEE and Usage Rules Working with Measurement Data PeopleSoft 9.0 and 9.1 Enterprise Learning Management Reporting Tools for HCM (previously Reporting Tools for HRMS) PeopleSoft 9.1 Expenses General Ledger Inventory Contracts Grants Strategic Sourcing
For a list of modules currently available for each product line, visit the UPK Resource Library on Oracle.com.
For more information on how your organization can take advantage of UPK pre-built content, see our previous blog, The Value of UPK Pre-Built Content. - Karen Rihs, UPK Outbound Product Management

    Read the article

  • Dynamic Code for type casting Generic Types 'generically' in C#

    - by Rick Strahl
    C# is a strongly typed language and while that's a fundamental feature of the language there are more and more situations where dynamic types make a lot of sense. I've written quite a bit about how I use dynamic for creating new type extensions: Dynamic Types and DynamicObject References in C# Creating a dynamic, extensible C# Expando Object Creating a dynamic DataReader for dynamic Property Access Today I want to point out an example of a much simpler usage for dynamic that I use occasionally to get around potential static typing issues in C# code especially those concerning generic types. TypeCasting Generics Generic types have been around since .NET 2.0 I've run into a number of situations in the past - especially with generic types that don't implement specific interfaces that can be cast to - where I've been unable to properly cast an object when it's passed to a method or assigned to a property. Granted often this can be a sign of bad design, but in at least some situations the code that needs to be integrated is not under my control so I have to make due with what's available or the parent object is too complex or intermingled to be easily refactored to a new usage scenario. Here's an example that I ran into in my own RazorHosting library - so I have really no excuse, but I also don't see another clean way around it in this case. A Generic Example Imagine I've implemented a generic type like this: public class RazorEngine<TBaseTemplateType> where TBaseTemplateType : RazorTemplateBase, new() You can now happily instantiate new generic versions of this type with custom template bases or even a non-generic version which is implemented like this: public class RazorEngine : RazorEngine<RazorTemplateBase> { public RazorEngine() : base() { } } To instantiate one: var engine = new RazorEngine<MyCustomRazorTemplate>(); Now imagine that the template class receives a reference to the engine when it's instantiated. This code is fired as part of the Engine pipeline when it gets ready to execute the template. It instantiates the template and assigns itself to the template: var template = new TBaseTemplateType() { Engine = this } The problem here is that possibly many variations of RazorEngine<T> can be passed. I can have RazorTemplateBase, RazorFolderHostTemplateBase, CustomRazorTemplateBase etc. as generic parameters and the Engine property has to reflect that somehow. So, how would I cast that? My first inclination was to use an interface on the engine class and then cast to the interface.  Generally that works, but unfortunately here the engine class is generic and has a few members that require the template type in the member signatures. So while I certainly can implement an interface: public interface IRazorEngine<TBaseTemplateType> it doesn't really help for passing this generically templated object to the template class - I still can't cast it if multiple differently typed versions of the generic type could be passed. I have the exact same issue in that I can't specify a 'generic' generic parameter, since there's no underlying base type that's common. In light of this I decided on using object and the following syntax for the property (and the same would be true for a method parameter): public class RazorTemplateBase :MarshalByRefObject,IDisposable { public object Engine {get;set; } } Now because the Engine property is a non-typed object, when I need to do something with this value, I still have no way to cast it explicitly. 
What I really would need is: public RazorEngine<> Engine { get; set; } but that's not possible. Dynamic to the Rescue Luckily with the dynamic type this sort of thing can be mitigated fairly easily. For example here's a method that uses the Engine property and uses the well known class interface by simply casting the plain object reference to dynamic and then firing away on the properties and methods of the base template class that are common to all templates:/// <summary> /// Allows rendering a dynamic template from a string template /// passing in a model. This is like rendering a partial /// but providing the input as a /// </summary> public virtual string RenderTemplate(string template,object model) { if (template == null) return string.Empty; // if there's no template markup if(!template.Contains("@")) return template; // use dynamic to get around generic type casting dynamic engine = Engine; string result = engine.RenderTemplate(template, model); if (result == null) throw new ApplicationException("RenderTemplate failed: " + engine.ErrorMessage); return result; } Prior to .NET 4.0  I would have had to use Reflection for this sort of thing which would have a been a heck of a lot more verbose, but dynamic makes this so much easier and cleaner and in this case at least the overhead is negliable since it's a single dynamic operation on an otherwise very complex operation call. Dynamic as  a Bailout Sometimes this sort of thing often reeks of a design flaw, and I agree that in hindsight this could have been designed differently. But as is often the case this particular scenario wasn't planned for originally and removing the generic signatures from the base type would break a ton of other code in the framework. Given the existing fairly complex engine design, refactoring an interface to remove generic types just to make this particular code work would have been overkill. Instead dynamic provides a nice and simple and relatively clean solution. Now if there were many other places where this occurs I would probably consider reworking the code to make this cleaner but given this isolated instance and relatively low profile operation use of dynamic seems a valid choice for me. This solution really works anywhere where you might end up with an inheritance structure that doesn't have a common base or interface that is sufficient. In the example above I know what I'm getting but there's no common base type that I can cast to. All that said, it's a good idea to think about use of dynamic before you rush in. In many situations there are alternatives that can still work with static typing. Dynamic definitely has some overhead compared to direct static access of objects, so if possible we should definitely stick to static typing. In the example above the application already uses dynamics extensively for dynamic page page templating and passing models around so introducing dynamics here has very little additional overhead. The operation itself also fires of a fairly resource heavy operation where the overhead of a couple of dynamic member accesses are not a performance issue. So, what's your experience with dynamic as a bailout mechanism? 
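
As an aside (my own sketch, not from the article), the pre-.NET 4.0 alternative hinted at above would have meant reaching for reflection. The snippet below assumes only the members already shown - an Engine object exposing a RenderTemplate(string, object) method and an ErrorMessage property - and would sit in the same template class as the RenderTemplate method above:

// Reflection-based equivalent of the dynamic call above - noticeably more verbose.
public virtual string RenderTemplateViaReflection(string template, object model)
{
    if (template == null)
        return string.Empty;
    if (!template.Contains("@"))
        return template;

    Type engineType = Engine.GetType();

    // Look up and invoke RenderTemplate(string, object) by name.
    var renderMethod = engineType.GetMethod("RenderTemplate", new[] { typeof(string), typeof(object) });
    string result = (string)renderMethod.Invoke(Engine, new[] { template, model });

    if (result == null)
    {
        // Read the ErrorMessage property by name as well.
        var errorProperty = engineType.GetProperty("ErrorMessage");
        throw new ApplicationException("RenderTemplate failed: " + (string)errorProperty.GetValue(Engine, null));
    }

    return result;
}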
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp

    Read the article

  • API Message Localization

    - by Jesse Taber
    In my post, “Keep Localizable Strings Close To Your Users” I talked about the internationalization and localization difficulties that can arise when you sprinkle static localizable strings throughout the different logical layers of an application. The main point of that post is that you should have your localizable strings reside as close to the user-facing modules of your application as possible. For example, if you’re developing an ASP .NET web forms application all of the localizable strings should be kept in .resx files that are associated with the .aspx views of the application. In this post I want to talk about how this same concept can be applied when designing and developing APIs. An API Facilitates Machine-to-Machine Interaction You can typically think about a web, desktop, or mobile application as a collection “views” or “screens” through which users interact with the underlying logic and data. The application can be designed based on the assumption that there will be a human being on the other end of the screen working the controls. You are designing a machine-to-person interaction and the application should be built in a way that facilitates the user’s clear understanding of what is going on. Dates should be be formatted in a way that the user will be familiar with, messages should be presented in the user’s preferred language, etc. When building an API, however, there are no screens and you can’t make assumptions about who or what is on the other end of each call. An API is, by definition, a machine-to-machine interaction. A machine-to-machine interaction should be built in a way that facilitates a clear and unambiguous understanding of what is going on. Dates and numbers should be formatted in predictable and standard ways (e.g. ISO 8601 dates) and messages should be presented in machine-parseable formats. For example, consider an API for a time tracking system that exposes a resource for creating a new time entry. The JSON for creating a new time entry for a user might look like: 1: { 2: "userId": 4532, 3: "startDateUtc": "2012-10-22T14:01:54.98432Z", 4: "endDateUtc": "2012-10-22T11:34:45.29321Z" 5: }   Note how the parameters for start and end date are both expressed as ISO 8601 compliant dates in UTC. Using a date format like this in our API leaves little room for ambiguity. It’s also important to note that using ISO 8601 dates is a much, much saner thing than the \/Date(<milliseconds since epoch>)\/ nonsense that is sometimes used in JSON serialization. Probably the most important thing to note about the JSON snippet above is the fact that the end date comes before the start date! The API should recognize that and disallow the time entry from being created, returning an error to the caller. You might inclined to send a response that looks something like this: 1: { 2: "errors": [ {"message" : "The end date must come after the start date"}] 3: }   While this may seem like an appropriate thing to do there are a few problems with this approach: What if there is a user somewhere on the other end of the API call that doesn’t speak English?  What if the message provided here won’t fit properly within the UI of the application that made the API call? What if the verbiage of the message isn’t consistent with the rest of the application that made the API call? What if there is no user directly on the other end of the API call (e.g. this is a batch job uploading time entries once per night unattended)? The API knows nothing about the context from which the call was made. 
    There are steps you could take to give the API some context (e.g. allow the caller to send along a language code indicating the language that the end user speaks), but that will only get you so far. As the designer of the API you could make some assumptions about how the API will be called, but if we start making assumptions we could very easily make the wrong assumptions. In this situation it’s best to make no assumptions and simply design the API in such a way that the caller has the responsibility to convey error messages in a manner that is appropriate for the context in which the error was raised. You could work around some of these problems by allowing callers to add metadata to each request describing the context from which the call is being made (e.g. accepting a ‘locale’ parameter denoting the desired language), but that will add needless clutter and complexity. It’s better to keep the API simple and push those context-specific concerns down to the caller whenever possible. For our very simple time entry example, this can be done by simply changing our error message response to look like this:

    {
      "errors": [ {"code": 100}]
    }

    By changing our error response from exposing a string to a numeric code that is easily parseable by another application, we’ve placed all of the responsibility for conveying the actual meaning of the error message on the caller. It’s best to have the caller be responsible for conveying this meaning because the caller understands the context much better than the API does. Now the caller can see error code 100, know that it means that the end date submitted falls before the start date, and take appropriate action. Now all of the problems listed out above are non-issues because the caller can simply translate the error code of ‘100’ into the proper action and message for the current context (a sketch of such a mapping on the caller’s side appears at the end of this post). The numeric code representation of the error is a much better way to facilitate the machine-to-machine interaction that the API is meant to facilitate.

    An API Does Have Human Users

    While APIs should be built for machine-to-machine interaction, people still need to wire these interactions together. As a programmer building a client application that will consume the time entry API, I would find it frustrating to have to go dig through the API documentation every time I encounter a new error code (assuming the documentation exists and is accurate). The numeric error code approach hurts the discoverability of the API and makes it painful to integrate with. We can help ease this pain by merging our two approaches:

    {
      "errors": [ {"code": 100, "message" : "The end date must come after the start date"}]
    }

    Now we have an easily parseable numeric error code for the machine-to-machine interaction that the API is meant to facilitate and a human-readable message for programmers working with the API. The human-readable message here is not intended to be viewed by end-users of the API, and as such it is not really a “localizable string” in my opinion. We could opt to expose a locale parameter for all API methods and store translations for all error messages, but that’s a lot of extra effort and overhead that doesn’t add a lot of real value to the API. I might be a bit of an “ugly American”, but I think it’s probably fine to have the API return English messages when the target for those messages is a programmer.
When resources are limited (which they always are), I’d argue that you’re better off hard-coding these messages in English and putting more effort into building more useful features, improving security, tweaking performance, etc.
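    As promised above, here is a minimal sketch of what pushing that responsibility down to the caller might look like. It is not from the original post; the error codes beyond 100, the resource keys, and the translator type are all hypothetical, and a real client would wire this into whatever localization mechanism it already uses.

    using System.Collections.Generic;
    using System.Globalization;
    using System.Resources;

    public static class TimeEntryErrorTranslator
    {
        // Map API error codes to resource keys owned by the client application.
        // The keys resolve against the client's own .resx files, so the message
        // ends up in the end user's language and the application's own voice.
        private static readonly Dictionary<int, string> ResourceKeysByCode =
            new Dictionary<int, string>
            {
                { 100, "TimeEntry_EndDateBeforeStartDate" },
                { 101, "TimeEntry_UserNotFound" } // another hypothetical code
            };

        public static string ToUserMessage(int errorCode, ResourceManager resources, CultureInfo culture)
        {
            string key;
            if (ResourceKeysByCode.TryGetValue(errorCode, out key))
            {
                return resources.GetString(key, culture);
            }

            // Unknown code: fall back to a generic message rather than leaking API internals.
            return resources.GetString("TimeEntry_UnknownError", culture);
        }
    }

    A batch job consuming the same API could use the same codes to decide whether to retry, skip, or log an entry without ever touching a human-readable string, which is exactly the kind of caller-specific decision the API itself cannot make.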

    Read the article

  • TFS 2010 SDK: Smart Merge - Programmatically Create your own Merge Tool

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010,TFS SDK,TFS API,TFS Merge Programmatically,TFS Work Items Programmatically,TFS Administration Console,ALM

    The information available in the Merge window in Team Foundation Server 2010 is very important to the decision making during the merging process. However, at present the merge window shows very limited information; more often than not you are interested in knowing the work items, files modified, code reviewer notes, policies overridden, etc. associated with the changeset. Our friends at Microsoft are working hard to change the game again with vNext, but because at present the merge window is a modal window, you have to cancel the merge process and go back, one after the other, to check the additional information you need. If you can relate to what I am saying, you will enjoy this blog post! I will show you how to programmatically create your own merging window using the TFS 2010 API. A few screen shots of the WPF TFS 2010 API – Custom Merging Application that we will be creating programmatically… Excited??? Let’s start coding…

    1. Get All Team Project Collections for the TFS Server

    You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically

    public static ReadOnlyCollection<CatalogNode> GetAllTeamProjectCollections()
    {
        TfsConfigurationServer configurationServer =
            TfsConfigurationServerFactory.GetConfigurationServer(new Uri("http://xxx:8080/tfs/"));

        CatalogNode catalogNode = configurationServer.CatalogNode;
        return catalogNode.QueryChildren(new Guid[] { CatalogResourceTypes.ProjectCollection },
            false, CatalogQueryOptions.None);
    }

    2. Get All Team Projects for the selected Team Project Collection

    You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically

    public static ReadOnlyCollection<CatalogNode> GetTeamProjects(string instanceId)
    {
        ReadOnlyCollection<CatalogNode> teamProjects = null;

        TfsConfigurationServer configurationServer =
            TfsConfigurationServerFactory.GetConfigurationServer(new Uri("http://xxx:8080/tfs/"));

        CatalogNode catalogNode = configurationServer.CatalogNode;
        var teamProjectCollections = catalogNode.QueryChildren(new Guid[] { CatalogResourceTypes.ProjectCollection },
            false, CatalogQueryOptions.None);

        foreach (var teamProjectCollection in teamProjectCollections)
        {
            if (string.Compare(teamProjectCollection.Resource.Properties["InstanceId"], instanceId, true) == 0)
            {
                teamProjects = teamProjectCollection.QueryChildren(new Guid[] { CatalogResourceTypes.TeamProject },
                    false, CatalogQueryOptions.None);
            }
        }

        return teamProjects;
    }

    3. Get All Branches within a Team Project Programmatically

    I will be passing the name of the Team Project for which I want to retrieve all the branches. The ‘Version Control Service’ exposes the method QueryRootBranchObjects; you need to pass the recursion type => none, one, or full. Full implies you are interested in all branches under that root branch.
    public static List<BranchObject> GetParentBranch(string projectName)
    {
        var branches = new List<BranchObject>();

        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>"));
        var versionControl = tfs.GetService<VersionControlServer>();

        var allBranches = versionControl.QueryRootBranchObjects(RecursionType.Full);

        foreach (var branchObject in allBranches)
        {
            if (branchObject.Properties.RootItem.Item.ToUpper().Contains(projectName.ToUpper()))
            {
                branches.Add(branchObject);
            }
        }

        return branches;
    }

    4. Get All Branches associated to the Parent Branch Programmatically

    Now that we have the parent branch, it is important to retrieve all child branches of that parent branch. Let’s see how we can achieve this using the TFS API.

    public static List<ItemIdentifier> GetChildBranch(string parentBranch)
    {
        var branches = new List<ItemIdentifier>();

        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>"));
        var versionControl = tfs.GetService<VersionControlServer>();

        var i = new ItemIdentifier(parentBranch);
        var allBranches = versionControl.QueryBranchObjects(i, RecursionType.None);

        foreach (var branchObject in allBranches)
        {
            foreach (var childBranch in branchObject.ChildBranches)
            {
                branches.Add(childBranch);
            }
        }

        return branches;
    }

    5. Get Merge Candidates between two Branches Programmatically

    Now that we have the parent and the child branch that we are interested to perform a merge between, we will use the method ‘GetMergeCandidates’ in the namespace ‘Microsoft.TeamFoundation.VersionControl.Client’ => http://msdn.microsoft.com/en-us/library/bb138934(v=VS.100).aspx

    public static MergeCandidate[] GetMergeCandidates(string fromBranch, string toBranch)
    {
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>"));
        var versionControl = tfs.GetService<VersionControlServer>();

        return versionControl.GetMergeCandidates(fromBranch, toBranch, RecursionType.Full);
    }

    6. Get Changeset Details Programmatically

    Now that we have the changeset id that we are interested in, we can get details of the changeset. The Changeset object contains these properties => http://msdn.microsoft.com/en-us/library/microsoft.teamfoundation.versioncontrol.client.changeset.aspx

    - Changes: Gets or sets an array of Change objects that comprise this changeset.
    - CheckinNote: Gets or sets the check-in note of the changeset.
    - Comment: Gets or sets the comment of the changeset.
    - PolicyOverride: Gets or sets the policy override information of this changeset.
    - WorkItems: Gets an array of work items that are associated with this changeset.

    public static Changeset GetChangeSetDetails(int changeSetId)
    {
        var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>"));
        var versionControl = tfs.GetService<VersionControlServer>();

        return versionControl.GetChangeset(changeSetId);
    }

    7. Possibilities

    In future posts I will try and extend this idea to explore further possibilities, but a few features that I am sure will further help during the merge decision making process would be:

    - View changed files
    - Compare modified file with current/previous version
    - Merge Preview
    - Last Merge date

    Any other features that you can think of?
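    To show how the pieces above might fit together, here is a minimal console-style sketch. It is not part of the original post: the branch paths are placeholders, the method is assumed to live alongside the helper methods shown in steps 5 and 6, and it relies on MergeCandidate.Changeset and the WorkItem Id/Title properties as I understand the TFS 2010 client API exposes them.

    using System;

    // Enumerate merge candidates between two branches and print the extra context
    // (comment, associated work items) that the standard merge window hides.
    public static void PrintMergeCandidateDetails()
    {
        // Hypothetical branch paths; substitute the parent/child branches returned
        // by GetParentBranch / GetChildBranch above.
        const string fromBranch = "$/MyTeamProject/Dev";
        const string toBranch = "$/MyTeamProject/Main";

        foreach (var candidate in GetMergeCandidates(fromBranch, toBranch))
        {
            // Re-fetch the changeset via step 6 so the full details, including
            // work item associations, are loaded.
            var changeset = GetChangeSetDetails(candidate.Changeset.ChangesetId);

            Console.WriteLine("Changeset {0}: {1}", changeset.ChangesetId, changeset.Comment);

            foreach (var workItem in changeset.WorkItems)
            {
                Console.WriteLine("    Work Item {0}: {1}", workItem.Id, workItem.Title);
            }
        }
    }

    A WPF front end like the one described above would bind these same objects to its views instead of writing them to the console, but the flow through the API is identical.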

    Read the article
