Search Results

Search found 19017 results on 761 pages for 'purchase order'.

Page 147 of 761

  • Forcing an External Activation with Service Broker

    - by Davide Mauri
    In these last days I’ve been working quite a lot with Service Broker, a technology I’m really happy to work with, since it can give a lot of satisfaction. The scale-out solution one can easily build is simply astonishing. I’m helping a company to build a very scalable – and yet almost inexpensive – invoicing system that has to be able to scale out using commodity hardware. To offload the work from the main server to satellite “compute nodes” (yes, I’ve borrowed this term from PDW) we’re using Service Broker and the External Activator application available in the SQL Server Feature Pack. For those who are not used to working with SSB, External Activation is a feature that allows you to intercept the arrival of a message in a queue right from your application code. http://msdn.microsoft.com/en-us/library/ms171617.aspx (Look for “Event-Based Activation”) In order to make life even easier, Microsoft released the External Activator application that saves you from writing even this code. http://blogs.msdn.com/b/sql_service_broker/archive/tags/external+activator/ The External Activator application can be configured to execute your own application so that each time a message – an invoice in my case – arrives in the target queue, the configured application is executed and the invoice is calculated. A very nice feature of the External Activator is that it can automatically launch as many instances of the configured application as needed to process as many messages as your system can handle. This helps a lot in creating a scale-out solution, leaving the developer only a fraction of the problems that usually come with asynchronous programming. Developers are also shielded from Service Broker since everything can be encapsulated in Stored Procedures, so that – for them – developing such a scale-out asynchronous solution is not much more complex than just executing a bunch of Stored Procedures. Now, if everything works correctly, you don’t have to bother about anything else. You put messages in the queue and your application, invoked by the External Activator, processes them. But what happens if, for some reason, your application fails to process the messages – for example, if it crashes? The message is safe in the queue, so you just need to process it again. But your application is invoked by the External Activator application, so now the question is: how do you wake up that app? Service Broker will engage the activation process only if certain conditions are met: http://msdn.microsoft.com/en-us/library/ms171601.aspx But how can we invoke the activation process manually, without having to wait for another message to arrive (the arrival of a new message is one of the conditions that can fire the activation process)?
    The “trick” is to do manually what the activation process does: send a system message to the queue in charge of handling External Activation messages:

      declare @conversationHandle uniqueidentifier;
      declare @n xml = N'
      <EVENT_INSTANCE>
        <EventType>QUEUE_ACTIVATION</EventType>
        <PostTime>' + CONVERT(CHAR(24),GETDATE(),126) + '</PostTime>
        <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
        <ServerName>[your_server_name]</ServerName>
        <LoginName>[your_login_name]</LoginName>
        <UserName>[your_user_name]</UserName>
        <DatabaseName>[your_database_name]</DatabaseName>
        <SchemaName>[your_queue_schema_name]</SchemaName>
        <ObjectName>[your_queue_name]</ObjectName>
        <ObjectType>QUEUE</ObjectType>
      </EVENT_INSTANCE>'

      begin dialog conversation @conversationHandle
          from service [<your_initiator_service_name>]
          to service   '<your_event_notification_service>'
          on contract  [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
          with encryption = off, lifetime = 6000;

      send on conversation @conversationHandle
          message type [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

      end conversation @conversationHandle;

    That’s it! Put the code in a Stored Procedure and you can add to your application a button that says “Force Queue Processing” (or something similar) in order to start the activation process whenever you need it (which should not happen too frequently, but it may happen). PS: I know that the “fire-and-forget” technique (ending the conversation without waiting for an answer) is not a best practice, but in this case I don’t see how it can hurt, so I decided to stay very close to the KISS principle.
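    For context, here is a minimal, hypothetical sketch (not taken from the post) of the kind of console application the External Activator might launch: it drains the target queue with WAITFOR (RECEIVE …) inside a transaction and exits when no more messages arrive. The queue name, database and connection string are made-up placeholders, and the invoice calculation itself is elided.

      // Hypothetical worker process started by the External Activator.
      // Queue/database names and connection string are illustrative only.
      using System;
      using System.Data.SqlClient;

      class QueueWorker
      {
          static void Main()
          {
              const string receiveSql =
                  @"WAITFOR (RECEIVE TOP (1)
                                 conversation_handle,
                                 message_type_name,
                                 CAST(message_body AS XML) AS message_body
                             FROM dbo.InvoiceQueue), TIMEOUT 5000;";

              using (var conn = new SqlConnection("Server=.;Database=Invoicing;Integrated Security=SSPI"))
              {
                  conn.Open();
                  while (true)
                  {
                      using (SqlTransaction tx = conn.BeginTransaction())
                      using (var cmd = new SqlCommand(receiveSql, conn, tx))
                      {
                          bool gotMessage = false;
                          using (SqlDataReader rdr = cmd.ExecuteReader())
                          {
                              if (rdr.Read())
                              {
                                  gotMessage = true;
                                  string messageType = rdr.GetString(1);
                                  // A real worker would also END CONVERSATION on EndDialog messages.
                                  if (messageType != "http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog")
                                  {
                                      string invoiceXml = rdr.GetSqlXml(2).Value;
                                      Console.WriteLine("Processing invoice: " + invoiceXml);
                                      // ... invoice calculation would go here ...
                                  }
                              }
                          }
                          tx.Commit();      // committing removes the message from the queue
                          if (!gotMessage)
                              break;        // queue is empty, let the process exit
                      }
                  }
              }
          }
      }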

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2006 SQL SERVER – Cursor to Kill All Process in Database I indeed wrote this cursor and when I look back, I wonder how naive I was to write this. The reason for writing this cursor was to free up my database from any existing connection so I could do database operations. This worked fine, but there can be a potentially big issue if any important transaction is killed by this process. There is another way to achieve the same thing, where we can use the ALTER syntax to take the database into single-user mode. Read more about that over here and here. 2007 Rules of Third Normal Form and Normalization Advantage – 3NF The rules of 3NF are mentioned here: Make a separate table for each set of related attributes, and give each table a primary key. If an attribute depends on only part of a multi-valued key, remove it to a separate table. If attributes do not contribute to a description of the key, remove them to a separate table. Correct Syntax for Stored Procedure SP Sometimes a simple question is the most important question. I often see incorrectly written Stored Procedures in industry. A few write code after the outermost BEGIN…END and a few write code after the GO statement. In this brief blog post, I have attempted to explain the same. 2008 Switch Between Result Pane and Query Pane – SQL Shortcut Many times when I am writing a query I have to scroll the result displayed in the result set. Most developers use the mouse to switch between the Query Pane and the Result Pane. There are a few developers who are crazy about keyboard shortcuts. F6 is the key which can be used to switch between the query pane and the tabs of the result pane. Interesting Observation – Use of Index and Execution Plan Query Optimization is a complex game and it has its own rules. From the example in the article we discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the Primary Key. When all the rows and columns are selected, the Primary Key should be used to select data as it provides optimal performance. 2009 Interesting Observation – TOP 100 PERCENT and ORDER BY If you pull up any application or system where more than 100 SQL Server Views are created – I am very confident that at one or two places you will notice the scenario wherein the ORDER BY clause is used in a View with TOP 100 PERCENT. SQL Server 2008 does not throw an error for a VIEW with an ORDER BY clause; moreover, it does not acknowledge its presence either. In this article we have taken three perfect examples and demonstrated which clause we should use when. Comma Separated Values (CSV) from Table Column A very common question – how do you create comma separated values from a table in the database? The answer is also very common if we use XML. Check out this article for quick learning on the same subject. Azure Start Guide – Step by Step Installation Guide Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in this article are not old. They are still valid and many of the functions still work as mentioned in the article. I believe this one article will put you on the track to use Azure! 
    Size of Index Table for Each Index – Solution Earlier I posted a small question on this blog and requested help from readers to participate and provide a solution. The puzzle was to write a query that returns the size of each index on any particular table. We needed a query that returns an additional column in the previously listed query containing the size of the index. This article presents two of the best solutions from the puzzle. 2010 Well, this week in 2010 was the week of puzzles as I posted three interesting puzzles. To this day I am noticing pretty good interest in the puzzles. They are tricky but for sure bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of the answers? I am confident that reading the following three blog posts will for sure help you enhance your experience with T-SQL. SQL SERVER – Challenge – Puzzle – Usage of FAST Hint SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists 2011 DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012 Have you ever faced a situation where something does not work? When you try to fix it – you enjoy fixing it and start to appreciate the breaking changes. Well, this is exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now that the disclaimer is over – I suggest you read the original story – you will love it! Get Directory Structure using Extended Stored Procedure xp_dirtree Here is a question for you – why would you do something in SQL Server when you can do the same task in the command prompt much more easily? Well, the answer is that sometimes there are real use cases when we have to do such things. This is a similar example where I have demonstrated how in SQL Server 2012 we can use an extended stored procedure to retrieve the directory structure. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Messaging Systems – Handshaking, Reconciliation and Tracking for Data Transparency

    - by Ahsan Alam
    As many corporations build business partnerships with other organizations, the need to share information becomes necessary. Sharing large amounts of data using snail mail, email and/or fax is quickly becoming a thing of the past. More and more organizations are relying heavily on FTP and/or Web Services to exchange data. Corporations apply a wide range of technologies and techniques based on available resources and data transfer needs. Sometimes it involves simple home-grown applications. Other times, large investments are made in products like BizTalk, TIBCO etc. The complexity of information management also varies significantly from one organization to another. Some may deal with a handful of simple steps to process and manage shared data, whereas others may rely on fairly complex processes with heavy interaction with internal and external systems in order to serve the business needs. It is not surprising that many of these systems end up becoming black boxes over a period of time. Consequently, people and the business start to rely more and more on developers and support personnel just to extract simple information, adding to the loss of productivity. One of the most important factors in any business is transparency of data, irrespective of technology preferences and the complexity of business processes. Not knowing the state of data could become very costly to the business. Having been involved in messaging systems for some time now, I have heard the same type of questions over and over again. Did we transmit messages successfully? Did we get responses back? What is the expected turn-around time? Did the system experience any errors? When one company transmits data to one or more companies, it may invoke a set of processes that could complete in a matter of seconds, or it could take days. As data travels from one organization to another, the uncertainty grows, and the longer it takes to track the uncertain state of the data, the costlier it gets for the business. So, in every business scenario, it's extremely important to be aware of the state of the data.   Architects of messaging systems can take several steps to aid data transparency. Some form of data handshaking and reconciliation mechanism, as well as extensive data tracking, can be incorporated into the system to provide clear visibility into the data. What do I mean by handshaking and reconciliation? Some might consider these to be a single concept; however, I like to consider them as two unique categories. Handshaking serves as a message receipt or acknowledgment. When one party transmits messages to another, the receiver must acknowledge each message by sending an immediate response for each transaction. Whenever we use Web Services, handshaking is often achieved using the request/reply pattern. Similarly, if FTP is used, a receiver can acknowledge by dropping messages for the sender as soon as the files are picked up. These forms of handshaking or acknowledgment inform the message sender and receiver that a successful transaction has occurred. I mentioned earlier that it could take anywhere from a few seconds to a number of days before shared data is completely processed. In addition, whenever a batched transaction is used, the processing time for each data element inside the batch could also vary significantly. So, in order to successfully manage data processing, reconciliation becomes extremely important; otherwise it may result in data loss or, in some cases, a hefty penalty. Reconciliation can be done in many ways. 
    Partner organizations can share and compare ad hoc reports to achieve reconciliation. On the other hand, partners can agree on some type of systematic reconciliation messages. Systems within the responsible parties can trigger messages to partners as soon as data processing completes (see the sketch below).   The next step in data transparency is extensive data tracking. Some products such as BizTalk and TIBCO provide built-in functionality for data tracking; however, built-in functionality may not always be adequate. Sometimes an additional tracking system (or database) needs to be built in order to monitor all types of data flow including message transactions, handshaking, reconciliation, system errors and many more. If these types of data are captured, then they can be presented to business users in any form or fashion. When business users are empowered with such information, the reliance on developers and support teams decreases dramatically.   In today's collaborative world of information sharing, data transparency is key to the success of every business. The state of business data will constantly change. However, when people have easier access to the various states of data, it allows them to make better and quicker decisions. Therefore, I feel that data handshaking, reconciliation and tracking are very important aspects of messaging systems.
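    As a purely illustrative sketch of the handshaking idea described above (not tied to any specific product or to the author's systems), a receiver might return an acknowledgment like the following the moment a message is accepted; the type and property names here are hypothetical.

      // Illustrative only: a minimal acknowledgment ("handshake") a receiver could
      // return immediately for every message it accepts. Names are hypothetical.
      using System;

      public class MessageAcknowledgment
      {
          public string MessageId { get; set; }       // correlates the ack to the original message
          public DateTime ReceivedUtc { get; set; }   // when the receiver accepted the message
          public bool Accepted { get; set; }          // false if the message failed basic validation
          public string Detail { get; set; }          // optional error or status text
      }

      public static class Receiver
      {
          // Called as soon as a message arrives; the returned ack is the "receipt" that
          // tells the sender a successful hand-off occurred. Actual processing (and any
          // later reconciliation messages) happens asynchronously afterwards.
          public static MessageAcknowledgment Acknowledge(string messageId, bool passedValidation)
          {
              return new MessageAcknowledgment
              {
                  MessageId = messageId,
                  ReceivedUtc = DateTime.UtcNow,
                  Accepted = passedValidation,
                  Detail = passedValidation ? "Received" : "Rejected: validation failed"
              };
          }
      }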

    Read the article

  • Find only physical network adapters with WMI Win32_NetworkAdapter class

    - by Mladen Prajdic
    WMI is the Windows Management Instrumentation infrastructure for managing data and machines. We can access it by using WQL (the WMI query language, or SQL for WMI). One thing to remember from the WQL link is that it doesn't support ORDER BY. This means that when you do SELECT * FROM wmiObject, the returned order of the objects is not guaranteed. It can return adapters in a different order based on the logged-in user, the permissions of that user, etc… This is not documented anywhere that I've looked and is derived just from my observations. To get network adapters we have to query the Win32_NetworkAdapter class. This returns all network adapters that Windows detects, real and virtual ones; however, it only supplies IPv4 data. I've tried various methods of combining properties that are common on all systems since Windows XP. The first thing to do is to remove all virtual adapters (like tunneling, WAN miniports, etc…) created by Microsoft. We do this by adding WHERE Manufacturer!='Microsoft' to our WMI query. This greatly narrows the number of adapters we have to work with. Just on my machine it went from 20 adapters to 5. What was left was one real physical Realtek LAN adapter, 2 virtual adapters installed by VMware and 2 virtual adapters installed by VirtualBox. If you read the Win32_NetworkAdapter help page you'd notice that there's an AdapterType property that enumerates various adapter types like LAN or Wireless, and an AdapterTypeID property that gives you the same information as AdapterType, only in integer form. The dirty little secret is that these 2 properties don't work. They are both hardcoded, AdapterTypeID to "0" and AdapterType to "Ethernet 802.3". The only exceptions I've seen so far are adapters that have no values at all for the two properties, the "RAS Async Adapter" that has values of AdapterType = "Wide Area Network" and AdapterTypeID = "3", and various tunneling adapters that have values of AdapterType = "Tunnel" and AdapterTypeID = "15". In the help docs there isn't even a value for 15. So this property was of no help. The next property to give hope is NetConnectionId. This is the name of the network connection as it appears in Control Panel -> Network Connections. The problem is that this value is localized into various languages and can have different names for different connections, so this property was also of no help – and we haven't even started talking about eliminating virtual adapters. The next two properties I checked were ConfigManagerErrorCode and NetConnectionStatus, in hopes of finding disabled and disconnected adapters. If an adapter is enabled but disconnected, ConfigManagerErrorCode = 0 with a different NetConnectionStatus. If the adapter is disabled, it reports ConfigManagerErrorCode = 22. This looked like a win by using (ConfigManagerErrorCode=0 or ConfigManagerErrorCode=22) in our condition. This way we get enabled (connected and disconnected) adapters. The problem with all of the above properties is that none of them filter out the virtual adapters installed by virtualization software like VMware and VirtualBox. The last property to give hope is PNPDeviceID. There's an interesting observation about physical and virtual adapters with this property. Every virtual adapter's PNPDeviceID starts with "ROOT\". Even the VMware and VirtualBox ones. There were some really, really old physical adapters that had a PNPDeviceID starting with "ROOT\", but those were from the pre-Windows XP era, AFAIK. 
    Since my minimum system to check was Windows XP SP2, I didn't have to worry about those. The only virtual adapter I've seen that doesn't have a PNPDeviceID starting with "ROOT\" is the RAS Async Adapter for Wide Area Network. But because it is made by Microsoft, we've already eliminated it with the first condition on the manufacturer. Using the PNPDeviceID has so far proven to be really effective and I've tested it on over 20 different computers of various configurations, from Windows XP laptops with wireless and bluetooth cards to virtualized Windows 2008 R2 servers. So far it has always worked as expected. I will appreciate you letting me know if you find a configuration where it doesn't work. Let's see some C# code that does this:

      ManagementObjectSearcher mos = null;

      // WHERE Manufacturer!='Microsoft' removes all of the Microsoft provided virtual
      // adapters like tunneling, miniports, and Wide Area Network adapters.
      mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft'");

      // Trying the ConfigManagerErrorCode and NetConnectionStatus variations proved to
      // still not be enough and it returns adapters installed by the virtualization
      // software like VMWare and VirtualBox.
      // ConfigManagerErrorCode = 0 -> Device is working properly. This covers enabled and/or disconnected devices.
      // ConfigManagerErrorCode = 22 AND NetConnectionStatus = 0 -> Device is disabled and disconnected.
      // Some virtual devices report ConfigManagerErrorCode = 22 (disabled) and some other NetConnectionStatus than 0.
      mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft' AND (ConfigManagerErrorCode = 0 OR (ConfigManagerErrorCode = 22 AND NetConnectionStatus = 0))");

      // Final solution with filtering on the Manufacturer and PNPDeviceID not starting with "ROOT\".
      // Physical devices have PNPDeviceID starting with "PCI\" or something else besides "ROOT\".
      mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft' AND NOT PNPDeviceID LIKE 'ROOT\\%'");

      // Get the physical adapters and sort them by their index.
      // This is needed because they're not sorted by default.
      IList<ManagementObject> managementObjectList = mos.Get()
                                                        .Cast<ManagementObject>()
                                                        .OrderBy(p => Convert.ToUInt32(p.Properties["Index"].Value))
                                                        .ToList();

      // Let's just show all the properties for all physical adapters.
      foreach (ManagementObject mo in managementObjectList)
      {
          foreach (PropertyData pd in mo.Properties)
              Console.WriteLine(pd.Name + ": " + (pd.Value ?? "N/A"));
      }

    That's it. Hope this helps you in some way.

    Read the article

  • MVVM Light V4 preview 2 (BL0015) #mvvmlight

    - by Laurent Bugnion
    Over the past few weeks, I have worked hard on a few new features for MVVM Light V4. Here is a second early preview (consider this pre-alpha if you wish). The features are unit-tested, but I am now looking for feedback and there might be bugs! Bug correction: Messenger.CleanupList is now thread safe This was an annoying bug that is now corrected: in some circumstances, an exception could be thrown when the Messenger’s recipients list was cleaned up (i.e. the “dead” instances were removed). The method is called now and then and the exception was thrown apparently at random. In fact it was really a multi-threading issue, which is now corrected. Bug correction: AllowPartiallyTrustedCallers prevents EventToCommand from working This is a particularly annoying regression bug that was introduced in BL0014. In order to allow MVVM Light to work in XBAPs too, I added the AllowPartiallyTrustedCallers attribute to the assemblies. However, we just found out that this causes issues when using EventToCommand. In order to allow EventToCommand to continue working, I reverted to the previous state by removing the AllowPartiallyTrustedCallers attribute for now. I will work with my friends at Microsoft to try and find a solution. Stay tuned. Bug correction: XML documentation file is now generated in Release configuration The XML documentation file was not generated for the Release configuration. This was a simple flag in the project file that I had forgotten to set. This is corrected now. Applying EventToCommand to non-FrameworkElements This feature was requested in order to be able to execute a command when a Storyboard completes. I implemented this, but unfortunately found out that EventToCommand can only be added to Storyboards in Silverlight 3 and Silverlight 4, but not in WPF or in Windows Phone 7. This obviously limits the usefulness of this change, but I decided to publish it anyway, because it is pretty damn useful in Silverlight… Why not in WPF? In WPF, Storyboards added to a resource dictionary are frozen. This is a feature of WPF which allows certain objects to be optimized for performance: freezing them is a contract where we say “this object will not be modified anymore, so do your perf optimization on it without worrying too much”. Unfortunately, adding a Trigger (such as EventTrigger) to an object in resources does not work if this object is frozen… and unfortunately, there is no way to tell WPF not to freeze the Storyboard in the resources… so there is no way around that (at least none I can see). In Silverlight, objects are not frozen, so an EventTrigger can be added without problems. Why not in WP7? In Windows Phone 7, there is a totally different issue: adding a Trigger can only be done on a FrameworkElement, which Storyboard is not. Here I think that we might see a change in a future version of the framework, so maybe this small trick will work in the future. Workaround? Since you cannot use EventToCommand on a Storyboard in WPF and in WP7, the workaround is pretty obvious: handle the Completed event in the code behind, and call the Command on the ViewModel from there. This object can be obtained by casting the DataContext to the ViewModel type. This means that the View needs to know about the ViewModel, but I never had issues with that anyway. New class: NotifyPropertyChanged Sometimes when you implement a model object (for example Customer), you would like to have it implement INotifyPropertyChanged, but without having all the frills of a ViewModelBase. 
    A new class named NotifyPropertyChanged allows you to do that. This class is a simple implementation of INotifyPropertyChanged (with all the overloads of RaisePropertyChanged that were implemented in BL0014). In fact, ViewModelBase inherits from NotifyPropertyChanged. ViewModelBase does not implement IDisposable anymore The IDisposable interface and the Dispose method had already been marked obsolete in the ViewModelBase class in V3. Now they have been removed. Note: by this, I do not mean that IDisposable is a bad interface, or that it shouldn’t be used on viewmodels. On the contrary, I know that this interface is very useful in certain circumstances. However, I think that having it by default on every instance of ViewModelBase was sending the wrong message. This interface has a strong meaning in .NET: after Dispose has been executed, the instance should not be used anymore, and should be ready for garbage collection. What I really wanted to have on ViewModelBase was rather a simple cleanup method, something that can be executed now and then during runtime. This is fulfilled by the ICleanup interface and its Cleanup method. If your ViewModels need IDisposable, you can still use it! You will just have to implement the interface on the class itself, because it is not available on ViewModelBase anymore. What’s next? I have a couple of exciting new features implemented already, but they need more testing before they go live… Just stay tuned, and by MIX11 (12-14 April 2011) we should see at least one major addition to the MVVM Light Toolkit, as well as another smaller feature which is pretty cool nonetheless. More about this later! Happy Coding, Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
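    As a rough sketch of the Storyboard workaround described above (not code from the post itself): handle the Completed event in the View's code-behind and forward it to a command on the ViewModel. "MainPage", "MainViewModel" and "StoryboardCompletedCommand" are hypothetical names; only the general pattern comes from the post.

      // Hypothetical ViewModel stub – in a real app this would typically be an
      // MVVM Light ViewModelBase exposing a RelayCommand.
      using System;
      using System.Windows.Controls;
      using System.Windows.Input;

      public class MainViewModel
      {
          public ICommand StoryboardCompletedCommand { get; set; }
      }

      public class MainPage : UserControl
      {
          // Wired up in XAML or in the constructor:
          //   introStoryboard.Completed += IntroStoryboard_Completed;
          private void IntroStoryboard_Completed(object sender, EventArgs e)
          {
              var vm = DataContext as MainViewModel;   // the View knows the ViewModel type here
              if (vm != null && vm.StoryboardCompletedCommand != null &&
                  vm.StoryboardCompletedCommand.CanExecute(null))
              {
                  vm.StoryboardCompletedCommand.Execute(null);
              }
          }
      }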

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #053 – Final Post in Series

    - by Pinal Dave
    It has been a fantastic journey to write the memory lane series for an entire year. This series gave me the opportunity to go back and see what I have contributed to this blog throughout the last 7 years. This was indeed a fantastic series as it provided me the opportunity to witness how technology has grown throughout the years and how I have progressed in my career while writing this blog. This series was also a fantastic experience for readers, as many joined during the last few years and were not sure what they had missed from earlier years. Let us continue with the final episode of the Memory Lane Series. Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2007 Get Current User – Get Logged In User Here is the straight script which lists logged-in SQL Server users. Disable All Triggers on a Database – Disable All Triggers on All Servers Question: How do you disable all the triggers for a database? Additionally, how do you disable all the triggers for all servers? For the answer, execute the script in the blog post. Importance of Master Database for SQL Server Startup I have received the following questions many times. I will list all the questions here and answer them together. What is the purpose of the Master database? Should we back up the Master database? Which database is a must-have database for SQL Server startup? Which are the default system databases created when SQL Server 2005 is installed for the first time? What happens if the Master database is corrupted? The answers to all of these questions are very much related. 2008 DECLARE Multiple Variables in One Statement SQL Server is a great product and it has many features which are very unique to SQL Server. Regarding the feature of SQL Server where multiple variables can be declared in one statement – it is absolutely possible to do. 2009 How to Enable Index – How to Disable Index – Incorrect syntax near ‘ENABLE’ Many times I have seen that the index is disabled when there is a large update operation on the table. A bulk insert of a very large file into any table using SSIS is usually preceded by disabling the index and followed by enabling the index. I have seen many developers running the following query to disable the index. 2010 List of all the Views from Database I received many emails suggesting that readers have hundreds of views and now have no clue what is going on – how many of them have indexes and how many do not have an index. Some even asked me if there is any way they can get a list of the views with the Index property along with it. Here is the quick script which does exactly that. You can also include many other columns from the same view. Minimum Maximum Memory – Server Memory Options I was recently reading about SQL Server Memory Options over here. While reading, one line really caught my attention: the minimum value allowed for the maximum memory option. The default setting for min server memory is 0, and the default setting for max server memory is 2147483647. The minimum amount of memory you can specify for max server memory is 16 megabytes (MB). 2011 Fundamentals of Columnstore Index There are two kinds of storage in a database: Row Store and Column Store. 
    Row store does exactly as the name suggests – it stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search – instead of a query searching all the data in an entire row whether the data are relevant or not, column store queries need only search a much smaller number of columns. How to Ignore Columnstore Index Usage in Query In summary, the question in simple words is: “How can we ignore using the column store index in selective queries?” Very interesting question – I can understand that there may be cases when the column store index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the column store index. The SQL Server Engine will use whichever other index is best after ignoring the column store index. 2012 Storing Variable Values in Temporary Array or Temporary List SQL Server does not support arrays or a dynamic-length storage mechanism like a list. There are absolutely some clever workarounds and a few extraordinary solutions, but not everybody can come up with such a solution. Additionally, sometimes the requirements are so simple that extraordinary coding is not required. Here is the simple case. Move Database Files MDF and LDF to Another Location It is not common to keep the database in the same location where the OS is installed. Usually database files are on a SAN, a separate disk array or on SSDs. This is usually done for performance reasons and from a manageability perspective. Now the challenge comes up when a database which was installed at a non-preferred default location needs to move to a different location. Here is a quick tutorial on how you can do it. UNION ALL and ORDER BY – How to Order Table Separately While Using UNION ALL If your requirement is such that you want the top and bottom queries of the UNION result set independently sorted, but in the same result set, you can add an additional static column and order by that column. Let us re-create the same scenario. Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video http://www.youtube.com/watch?v=FVWIA-ACMNo Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 &ndash; Part 2 of 4

    - by Kelly Jones
    In my last post, I explained why we decided to use a ClickOnce application to solve our business problem. To quickly review, we needed a way for our business users to upload documents to a SharePoint 2007 document library in bulk, set the meta data, set the permissions per document, and to do so easily. Let’s look at the pieces that make up our solution.  First, we have the Windows Forms application.  This app is deployed using ClickOnce and calls SharePoint web services in order to upload files, and then calls web services to set the meta data (SharePoint columns and permissions).  Second, we have a custom action.  The custom action is responsible for providing our users a link that will launch the Windows app, as well as passing values to it via the query string.  And lastly, we have the web services that the Windows Forms application calls.  For our solution, we used both out of the box web services and a custom web service in order to set the column values in the document library as well as the permissions on the documents. Now, let’s look at the technical details of each of these pieces.  (All of the code is downloadable from here: )   Windows Forms application deployed via ClickOnce The Windows Forms application, called “Custom Upload”, has just a few classes in it: Custom Upload – the form; FileList.xsd – the dataset used to track the names of the files and their meta data values; SharePointUpload – the class that handles uploading the file. SharePointUpload uses an HttpWebRequest to transfer the file to the web server. We had to change this code from a WebClient object to the HttpWebRequest object, because we needed to be able to set the timeout value.

      public bool UploadDocument(string localFilename, string remoteFilename)
      {
          bool result = true;

          // Need to use an HttpWebRequest object instead of a WebClient object
          // so we can set the timeout (WebClient doesn't allow you to set the timeout!)
          HttpWebRequest req = (HttpWebRequest)WebRequest.Create(remoteFilename);
          try
          {
              req.Method = "PUT";
              req.Timeout = 60 * 1000; // convert seconds to milliseconds
              req.AllowWriteStreamBuffering = true;
              req.Credentials = System.Net.CredentialCache.DefaultCredentials;
              req.SendChunked = false;
              req.KeepAlive = true;

              Stream reqStream = req.GetRequestStream();
              FileStream rdr = new FileStream(localFilename, FileMode.Open, FileAccess.Read);
              byte[] inData = new byte[4096];
              int bytesRead = rdr.Read(inData, 0, inData.Length);
              while (bytesRead > 0)
              {
                  reqStream.Write(inData, 0, bytesRead);
                  bytesRead = rdr.Read(inData, 0, inData.Length);
              }
              reqStream.Close();
              rdr.Close();

              System.Net.HttpWebResponse response = (HttpWebResponse)req.GetResponse();
              if (response.StatusCode != HttpStatusCode.OK &&
                  response.StatusCode != HttpStatusCode.Created)
              {
                  String msg = String.Format(
                      "An error occurred while uploading this file: {0}\n\nError response code: {1}",
                      System.IO.Path.GetFileName(localFilename),
                      response.StatusCode.ToString());
                  LogWarning(msg, "2ACFFCCA-59BA-40c8-A9AB-05FA3331D223");
                  result = false;
              }
          }
          catch (Exception ex)
          {
              LogException(ex, "{E9D62A93-D298-470d-A6BA-19AAB237978A}");
              result = false;
          }
          return result;
      }

    The class also contains the LogException() and LogWarning() methods. When the application is launched, it parses the query string for some initial values.  
    The query string looks like this:

      string queryString = "Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50";

    Srv is the path to the server (my Virtual Machine is named “clickonce”), Sec is short for security – meaning HTTPS or HTTP, Doc is the shortcut for which document library to use, and SiteName is the name of the SharePoint site.  Speed is used to calculate an estimate of the download speed for each file.  We added this so our users uploading documents would realize how long it might take for clients in remote locations (using slow WAN connections) to download the documents. The last value, Max, is the maximum size that the SharePoint site will allow documents to be.  This allowed us to give users a warning that a file is too large before we even attempt to upload it. Another critical piece is the meta data collection.  We organized our site using SharePoint content types, so when the app loads, it gets a list of the document library’s content types.  The user then selects one of the content types from the drop down list, and then we query SharePoint to get a list of the fields that make up that content type.  We used both an out of the box web service, and one that we custom built, in order to get these values. Once we have the content type fields, we then add controls to the form.  Which type of control we add depends on the data type of the field.  (DateTime pickers for date/time fields, etc.)  We didn’t write code to cover every data type, since we were working with a limited set of content types and field data types. (The original post includes a screen shot of the Form, before and after someone has selected the content type and our code has added the custom controls.) The other piece of meta data we collect is in the upper right corner of the app, “Users with access”.  This box lists the different SharePoint Groups that we have set up, and by checking the boxes, the user can set the permissions on the uploaded documents. All of this meta data is collected and submitted to our custom web service, which then sets the values on the documents in the list.  We’ll look at these web services in a future post. In the next post, we’ll walk through the Custom Action we built.
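    As a hedged sketch (not the article's actual code) of how a ClickOnce app can read query-string values such as Srv, Sec, Doc, SiteName, Speed and Max: the activation URI is exposed through System.Deployment.Application, assuming the deployment is configured to allow URL parameters. The class and method names below are my own.

      using System;
      using System.Collections.Generic;
      using System.Deployment.Application;

      public static class LaunchParameters
      {
          public static Dictionary<string, string> Read()
          {
              var values = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

              if (!ApplicationDeployment.IsNetworkDeployed ||
                  ApplicationDeployment.CurrentDeployment.ActivationUri == null)
              {
                  return values;   // running outside ClickOnce (e.g. from Visual Studio)
              }

              // Query looks like "?Srv=clickonce&Sec=N&Doc=DMI&..."
              string query = ApplicationDeployment.CurrentDeployment.ActivationUri.Query.TrimStart('?');
              foreach (string pair in query.Split(new[] { '&' }, StringSplitOptions.RemoveEmptyEntries))
              {
                  string[] parts = pair.Split(new[] { '=' }, 2);
                  values[Uri.UnescapeDataString(parts[0])] =
                      parts.Length > 1 ? Uri.UnescapeDataString(parts[1]) : string.Empty;
              }
              return values;
          }
      }

      // Usage (hypothetical):
      //   var args = LaunchParameters.Read();
      //   string server = args.ContainsKey("Srv") ? args["Srv"] : "localhost";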

    Read the article

  • Some non-generic collections

    - by Simon Cooper
    Although the collections classes introduced in .NET 2, 3.5 and 4 cover most scenarios, there are still some .NET 1 collections that don't have generic counterparts. In this post, I'll be examining what they do, why you might use them, and some things you'll need to bear in mind when doing so. BitArray System.Collections.BitArray is conceptually the same as a List<bool>, but whereas List<bool> stores each boolean in a single byte (as that's what the backing bool[] does), BitArray uses a single bit to store each value, and uses various bitmasks to access each bit individually. This means that BitArray is eight times smaller than a List<bool>. Furthermore, BitArray has some useful functions for bitmasks, like And, Xor and Not, and it's not limited to 32 or 64 bits; a BitArray can hold as many bits as you need. However, it's not all roses and kittens. There are some fundamental limitations you have to bear in mind when using BitArray: It's a non-generic collection. The enumerator returns object (a boxed boolean), rather than an unboxed bool. This means that if you do this: foreach (bool b in bitArray) { ... } Every single boolean value will be boxed, then unboxed. And if you do this: foreach (var b in bitArray) { ... } you'll have to manually unbox b on every iteration, as it'll come out of the enumerator an object. Instead, you should manually iterate over the collection using a for loop: for (int i=0; i<bitArray.Length; i++) { bool b = bitArray[i]; ... } Following on from that, if you want to use BitArray in the context of an IEnumerable<bool>, ICollection<bool> or IList<bool>, you'll need to write a wrapper class, or use the Enumerable.Cast<bool> extension method (although Cast would box and unbox every value you get out of it). There is no Add or Remove method. You specify the number of bits you need in the constructor, and that's what you get. You can change the length yourself using the Length property setter though. It doesn't implement IList. Although not really important if you're writing a generic wrapper around it, it is something to bear in mind if you're using it with pre-generic code. However, if you use BitArray carefully, it can provide significant gains over a List<bool> for functionality and efficiency of space. OrderedDictionary System.Collections.Specialized.OrderedDictionary does exactly what you would expect - it's an IDictionary that maintains items in the order they are added. It does this by storing key/value pairs in a Hashtable (to get O(1) key lookup) and an ArrayList (to maintain the order). You can access values by key or index, and insert or remove items at a particular index. The enumerator returns items in index order. However, the Keys and Values properties return ICollection, not IList, as you might expect; CopyTo doesn't maintain the same ordering, as it copies from the backing Hashtable, not ArrayList; and any operations that insert or remove items from the middle of the collection are O(n), just like a normal list. In short; don't use this class. If you need some sort of ordered dictionary, it would be better to write your own generic dictionary combining a Dictionary<TKey, TValue> and List<KeyValuePair<TKey, TValue>> or List<TKey> for your specific situation. ListDictionary and HybridDictionary To look at why you might want to use ListDictionary or HybridDictionary, we need to examine the performance of these dictionaries compared to Hashtable and Dictionary<object, object>. 
For this test, I added n items to each collection, then randomly accessed n/2 items: So, what's going on here? Well, ListDictionary is implemented as a linked list of key/value pairs; all operations on the dictionary require an O(n) search through the list. However, for small n, the constant factor that big-o notation doesn't measure is much lower than the hashing overhead of Hashtable or Dictionary. HybridDictionary combines a Hashtable and ListDictionary; for small n, it uses a backing ListDictionary, but switches to a Hashtable when it gets to 9 items (you can see the point it switches from a ListDictionary to Hashtable in the graph). Apart from that, it's got very similar performance to Hashtable. So why would you want to use either of these? In short, you wouldn't. Any gain in performance by using ListDictionary over Dictionary<TKey, TValue> would be offset by the generic dictionary not having to cast or box the items you store, something the graphs above don't measure. Only if the performance of the dictionary is vital, the dictionary will hold less than 30 items, and you don't need type safety, would you use ListDictionary over the generic Dictionary. And even then, there's probably more useful performance gains you can make elsewhere.
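    As a minimal sketch of the wrapper class mentioned in the BitArray section above (the class name is my own, not part of the .NET framework), a BitArray can be exposed as an IEnumerable<bool> without boxing each value:

      using System.Collections;
      using System.Collections.Generic;

      public sealed class BitArrayEnumerable : IEnumerable<bool>
      {
          private readonly BitArray bits;

          public BitArrayEnumerable(BitArray bits)
          {
              this.bits = bits;
          }

          public IEnumerator<bool> GetEnumerator()
          {
              // Index directly into the BitArray so each value comes out as an
              // unboxed bool, avoiding BitArray's object-typed enumerator.
              for (int i = 0; i < bits.Length; i++)
                  yield return bits[i];
          }

          IEnumerator IEnumerable.GetEnumerator()
          {
              return GetEnumerator();
          }
      }

      // Usage (hypothetical):
      //   var flags = new BitArray(16);
      //   foreach (bool b in new BitArrayEnumerable(flags)) { /* no boxing here */ }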

    Read the article

  • Sort Data in Windows Phone using Collection View Source

    - by psheriff
    When you write a Windows Phone application you will most likely consume data from a web service somewhere. If that service returns data to you in a sort order that you do not want, you have an easy alternative to sort the data without writing any C# or VB code. You use the built-in CollectionViewSource object in XAML to perform the sorting for you. This assumes that you can get the data into a collection that implements the IEnumerable or IList interfaces. For this example, I will be using a simple Product class with two properties, and a list of Product objects using the Generic List class. Try this out by creating a Product class as shown in the following code:

      public class Product
      {
          public Product(int id, string name)
          {
              ProductId = id;
              ProductName = name;
          }

          public int ProductId { get; set; }
          public string ProductName { get; set; }
      }

    Create a collection class that initializes a property called DataCollection with some sample data as shown in the code below:

      public class Products : List<Product>
      {
          public Products()
          {
              InitCollection();
          }

          public List<Product> DataCollection { get; set; }

          List<Product> InitCollection()
          {
              DataCollection = new List<Product>();
              DataCollection.Add(new Product(3, "PDSA .NET Productivity Framework"));
              DataCollection.Add(new Product(1, "Haystack Code Generator for .NET"));
              DataCollection.Add(new Product(2, "Fundamentals of .NET eBook"));
              return DataCollection;
          }
      }

    Notice that the data added to the collection is not in any particular order. Create a Windows Phone page and add two XML namespaces to the Page.

      xmlns:scm="clr-namespace:System.ComponentModel;assembly=System.Windows"
      xmlns:local="clr-namespace:WPSortData"

    The 'local' namespace is an alias to the name of the project that you created (in this case WPSortData). The 'scm' namespace references the System.Windows.dll and is needed for the SortDescription class that you will use for sorting the data. Create a phone:PhoneApplicationPage.Resources section in your Windows Phone page that looks like the following:

      <phone:PhoneApplicationPage.Resources>
        <local:Products x:Key="products" />
        <CollectionViewSource x:Key="prodCollection"
            Source="{Binding Source={StaticResource products},
                             Path=DataCollection}">
          <CollectionViewSource.SortDescriptions>
            <scm:SortDescription PropertyName="ProductName"
                                 Direction="Ascending" />
          </CollectionViewSource.SortDescriptions>
        </CollectionViewSource>
      </phone:PhoneApplicationPage.Resources>

    The first line of code in the resources section creates an instance of your Products class. The constructor of the Products class calls the InitCollection method which creates three Product objects and adds them to the DataCollection property of the Products class. Once the Products object is instantiated you now add a CollectionViewSource object in XAML using the Products object as the source of the data to this collection. A CollectionViewSource has a SortDescriptions collection that allows you to specify a set of SortDescription objects. Each object can set a PropertyName and a Direction property. As you see in the above code you set the PropertyName equal to the ProductName property of the Product object and tell it to sort in an Ascending direction. All you have to do now is to create a ListBox control and set its ItemsSource property to the CollectionViewSource object. 
    The ListBox displays the data in sorted order by ProductName and you did not have to write any LINQ queries or write other code to sort the data!

      <ListBox
          ItemsSource="{Binding Source={StaticResource prodCollection}}"
          DisplayMemberPath="ProductName" />

    Summary In this blog post you learned that you can sort any data without having to change the source code of where the data comes from. Simply feed the data into a CollectionViewSource in XAML and set some sort descriptions in XAML and the rest is done for you! This comes in very handy when you are consuming data from a source where the data is given to you and you do not have control over the sorting. NOTE: You can download this article and many samples like the one shown in this blog entry at my website. http://www.pdsa.com/downloads. Select “Tips and Tricks”, then “Sort Data in Windows Phone using Collection View Source” from the drop down list. Good Luck with your Coding, Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS ** We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive.  Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase.  Transaction logging works by writing transactions to a log store.  The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place.  The following information is recorded within a transaction log entry: the sequence ID, username, start time, end time and request type. A request type can be one of the following categories: calculations, including the default calculation as well as both server and client side calculations; data loads, including data imports as well as data loaded using a load rule; data clears as well as outline resets; and locking and sending data from SmartView and the Spreadsheet Add-In (changes from Planning web forms are also tracked, since a lock and send operation occurs during this process). You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries. Enabling Transaction Logging Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting.  The following is the TRANSACTIONLOGLOCATION syntax:

      TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file.  For example:

      TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
      TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

    The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application.  As a result, transaction logging will be enabled for all applications except the Sample application. A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance. Configuring Transaction Log Replay Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions.  The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file.  Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases. Replaying the Transaction Log and Transaction Log Security Considerations To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar.  Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs. The default when replaying transactions is to use the security settings of the user who originally performed the transaction.  However, if that user no longer exists or that user's username was changed, the replay operation will fail. 
Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation.  REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation.  REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user’s security settings cannot be used. Removing Transaction Logs and Archived Replay Data Load and Rules Files Transaction logs and archived replay data load and rules files are not automatically removed and are only removed manually.  Since these files can consume a considerable amount of space, the files should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously.  The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest.  In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file. Partitioned Database Considerations For partitioned databases, partition commands such as synchronization commands cannot be replayed.  When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases. References For additional information, please see the Oracle EPM System Backup and Recovery Guide.  For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle.  He shares with us a sample session using ROracle with the TimesTen In-Memory database.  Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response times and high throughput through its memory-centric architecture.  TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database. ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC. The latest ROracle enhancements include: Support for Oracle TimesTen In-Memory Database Support for Date-Time using R's POSIXct/POSIXlt data types RAW, BLOB and BFILE data type support Option to specify number of rows per fetch operation Option to prefetch LOB data Break support using Ctrl-C Statement caching support TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries: Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE Analytic clauses: OVER PARTITION BY and OVER ORDER BY Multidimensional grouping operators: Grouping clauses: GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS Grouping functions: GROUP, GROUPING_ID, GROUP_ID WITH clause, which allows repeated references to a named subquery block Aggregate expressions over DISTINCT expressions General expressions that return a character string in the source or a pattern within the LIKE predicate Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause) Note: Some functionality is only available with Oracle Exalytics, refer to the TimesTen product licensing document for details. Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver. > install.packages("ROracle") > library(ROracle) Loading required package: DBI > drv <- dbDriver("Oracle") Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user. > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct") You have the option to report the server type - Oracle or TimesTen? > print (paste ("Server type =", dbGetInfo (conn)$serverType)) [1] "Server type = TimesTen IMDB" To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session. > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE) [1] TRUE Verify that the newly created IRIS table is available in the database. 
To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively. > dbListTables (conn) [1] "IRIS" > dbListFields (conn, "IRIS") [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES" To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time. > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)') > iris.summary SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH 1 setosa 5.006000 3.428000 1.462 0.246000 2 versicolor 5.936000 2.770000 4.260 1.326000 3 virginica 6.588000 2.974000 5.552 2.026000 4 <NA> 5.843333 3.057333 3.758 1.199333 Finally, disconnect from the TimesTen Database. > dbCommit (conn) [1] TRUE > dbDisconnect (conn) [1] TRUE We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.
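As a rough JVM-side companion to the R session above (not part of the original post), the same summary query can be run over plain JDBC. This is only a sketch: it assumes the standard TimesTen JDBC driver class com.timesten.jdbc.TimesTenDriver is on the classpath and that the SampleDb_1122 direct-mode DSN used in the article exists; treat the class name and URL format as assumptions to verify against your TimesTen installation.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Companion sketch only: runs the ROLLUP summary from the article against the
    // TimesTen direct-mode DSN. Driver class and URL format are assumptions about
    // the standard TimesTen JDBC driver, not something shown in the original post.
    public class IrisSummary {
        public static void main(String[] args) throws Exception {
            Class.forName("com.timesten.jdbc.TimesTenDriver");
            String url = "jdbc:timesten:direct:SampleDb_1122";   // direct connection, as in the R session
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT SPECIES, AVG(\"SEPAL.LENGTH\") AS AVG_SLENGTH " +
                         "FROM IRIS GROUP BY ROLLUP (SPECIES)")) {
                while (rs.next()) {
                    // The rollup row returns a null SPECIES, mirroring the <NA> row in R.
                    System.out.println(rs.getString("SPECIES") + " " + rs.getDouble("AVG_SLENGTH"));
                }
            }
        }
    }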

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug. Problem 1: Debugging users' bug reports When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to be able to debug customers' issues and sort out what strange schema data Oracle was returning. Problem 2: Test execution time Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless. The solution To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time. 
They also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that could be sent along with a bug report that we can immediately replay on our machines to let us debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data. Query snapshots implementation However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just populating a single database. Furthermore, although the code population queries (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being a deterministic algorithm - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot, and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly. Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team to help us fix bugs in the product much faster than we otherwise would be able to.
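Schema Compare itself is a .NET tool built around IDataReader and BinaryWriter, but the record-and-replay idea is easy to prototype in any environment. The sketch below is my own illustration (JDBC stand-ins, invented class and method names): it records the raw string results of one dictionary query to a binary file and replays them later without touching the database.

    import java.io.*;
    import java.sql.*;
    import java.util.*;

    // Illustrative only: record the raw rows of one query to a binary file, then
    // replay them with no database connection - the same shape as the
    // IDataReader/BinaryWriter approach described above, recast in JDBC.
    public class QuerySnapshot {

        // Run the query once against the live database and write every row to disk.
        static void record(Connection conn, String sql, File out) throws Exception {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql);
                 DataOutputStream dos = new DataOutputStream(
                         new BufferedOutputStream(new FileOutputStream(out)))) {
                int cols = rs.getMetaData().getColumnCount();
                dos.writeInt(cols);
                while (rs.next()) {
                    dos.writeBoolean(true);              // row marker
                    for (int i = 1; i <= cols; i++) {
                        String v = rs.getString(i);      // dictionary data is strings and numbers
                        dos.writeBoolean(v != null);
                        if (v != null) dos.writeUTF(v);
                    }
                }
                dos.writeBoolean(false);                 // end-of-rows marker
            }
        }

        // Replay the stored rows; population code can consume this instead of a live ResultSet.
        static List<String[]> replay(File in) throws Exception {
            try (DataInputStream dis = new DataInputStream(
                    new BufferedInputStream(new FileInputStream(in)))) {
                int cols = dis.readInt();
                List<String[]> rows = new ArrayList<>();
                while (dis.readBoolean()) {
                    String[] row = new String[cols];
                    for (int i = 0; i < cols; i++) {
                        row[i] = dis.readBoolean() ? dis.readUTF() : null;
                    }
                    rows.add(row);
                }
                return rows;
            }
        }
    }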

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    Data management in unexpected places When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen. How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the “owner” of the packet etc. It looks up the internal database of “rules” of how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally · Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability etc. · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp) · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router, in order to make this work. · If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop. · Update various statistics about the packet. In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios. 
Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or, you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
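The post describes the C edition of Berkeley DB embedded in network gear; the programming model is easiest to show with the Berkeley DB Java Edition API (com.sleepycat.je), which mirrors it closely. The following is a minimal transactional put/get sketch of my own - the environment path, key, and value are made up, and JE rather than core BDB is an assumption made purely for illustration.

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.LockMode;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.Transaction;

    import java.io.File;
    import java.nio.charset.StandardCharsets;

    // Minimal Berkeley DB Java Edition sketch: store and look up a "next hop" for a
    // route prefix inside a transaction, the way the router example pairs the
    // lookup and the send. Paths and data are illustrative only.
    public class RouteStore {
        public static void main(String[] args) throws Exception {
            File home = new File("/tmp/route-env");
            home.mkdirs();                                   // JE requires an existing directory

            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            Environment env = new Environment(home, envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            Database routes = env.openDatabase(null, "routes", dbConfig);

            // Write the forwarding rule transactionally.
            Transaction txn = env.beginTransaction(null, null);
            DatabaseEntry key = new DatabaseEntry("10.20.0.0/16".getBytes(StandardCharsets.UTF_8));
            DatabaseEntry nextHop = new DatabaseEntry("192.0.2.1".getBytes(StandardCharsets.UTF_8));
            routes.put(txn, key, nextHop);
            txn.commit();

            // Read it back; in the router scenario this lookup and the packet send
            // would share one transaction so the route cannot change mid-flight.
            DatabaseEntry found = new DatabaseEntry();
            OperationStatus status = routes.get(null, key, found, LockMode.DEFAULT);
            if (status == OperationStatus.SUCCESS) {
                System.out.println("next hop: " + new String(found.getData(), StandardCharsets.UTF_8));
            }

            routes.close();
            env.close();
        }
    }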

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time. When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required. As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 108K| 939 || 1 | SORT ORDER BY | | 108K| 939 || 2 | NESTED LOOPS OUTER | | 108K| 938 ||* 3 | HASH JOIN RIGHT OUTER | | 103K| 762 || 4 | VIEW | ALL_EXTERNAL_LOCATIONS | 2058 | 3 ||* 20 | HASH JOIN RIGHT OUTER | | 73472 | 759 || 21 | VIEW | ALL_EXTERNAL_TABLES | 2097 | 3 ||* 34 | HASH JOIN RIGHT OUTER | | 39920 | 755 || 35 | VIEW | ALL_MVIEWS | 51 | 7 || 58 | NESTED LOOPS OUTER | | 39104 | 748 || 59 | VIEW | ALL_TABLES | 6704 | 668 || 89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS | 2025 | 5 || 106 | VIEW | ALL_PART_TABLES | 277 | 11 |------------------------------------------------------------------------------- And the same query on 9i: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 16P| 55G|| 1 | SORT ORDER BY | | 16P| 55G|| 2 | NESTED LOOPS OUTER | | 16P| 862M|| 3 | NESTED LOOPS OUTER | | 5251G| 992K|| 4 | NESTED LOOPS OUTER | | 4243M| 2578 || 5 | NESTED LOOPS OUTER | | 2669K| 1440 ||* 6 | HASH JOIN OUTER | | 398K| 302 || 7 | VIEW | ALL_TABLES | 342K| 276 || 29 | VIEW | ALL_MVIEWS | 51 | 20 ||* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS | 2043 | ||* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES | 1777K| ||* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K| ||* 96 | VIEW | ALL_PART_TABLES | 852K| |------------------------------------------------------------------------------- Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan. 
On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan: -------------------------------------------------------------------------------| Id | Operation | Name | Bytes | Cost |-------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 7587K| 3704 || 1 | SORT ORDER BY | | 7587K| 3704 ||* 2 | HASH JOIN OUTER | | 7587K| 822 ||* 3 | HASH JOIN OUTER | | 5262K| 616 ||* 4 | HASH JOIN OUTER | | 2980K| 465 ||* 5 | HASH JOIN OUTER | | 710K| 432 ||* 6 | HASH JOIN OUTER | | 398K| 302 || 7 | VIEW | ALL_TABLES | 342K| 276 || 29 | VIEW | ALL_MVIEWS | 51 | 20 || 50 | VIEW | ALL_PART_TABLES | 852K| 104 || 78 | VIEW | ALL_TAB_COMMENTS | 2043 | 14 || 93 | VIEW | ALL_EXTERNAL_LOCATIONS | 1744K| 31 || 106 | VIEW | ALL_EXTERNAL_TABLES | 1777K| 28 |------------------------------------------------------------------------------- That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed. All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need. 
This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, and a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables, that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
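The engine itself is C#, but the client-side join described above is a plain hash join keyed on (owner, name). Here is a stripped-down Java illustration of the idea - hash the subsidiary rows once, then probe them while streaming the base table - with class and field names invented for the example.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustration of the client-side left join: build the hash table over the
    // subsidiary result set once, keyed on (owner, name), then probe it for every
    // base row instead of re-hashing on the server for each LEFT JOIN.
    public class ClientHashJoin {

        record Key(String owner, String name) {}

        static List<String[]> leftJoin(List<String[]> baseRows,        // e.g. ALL_TABLES rows: owner, name, ...
                                       List<String[]> subsidiaryRows)  // e.g. ALL_PART_TABLES rows: owner, name, detail
        {
            // Build side: one pass over the subsidiary rows.
            Map<Key, String[]> byKey = new HashMap<>();
            for (String[] row : subsidiaryRows) {
                byKey.put(new Key(row[0], row[1]), row);
            }

            // Probe side: stream the base rows and attach the match (or null) - a left join.
            List<String[]> joined = new ArrayList<>();
            for (String[] base : baseRows) {
                String[] extra = byKey.get(new Key(base[0], base[1]));
                joined.add(new String[] {
                    base[0], base[1],
                    extra == null ? null : extra[2]   // e.g. partitioning detail when present
                });
            }
            return joined;
        }
    }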

    Read the article

  • jQuery Globalization Plugin from Microsoft

    - by ScottGu
    Last month I blogged about how Microsoft is starting to make code contributions to jQuery, and about some of the first code contributions we were working on: jQuery Templates and Data Linking support. Today, we released a prototype of a new jQuery Globalization Plugin that enables you to add globalization support to your JavaScript applications. This plugin includes globalization information for over 350 cultures ranging from Scottish Gaelic, Frisian, Hungarian, Japanese, to Canadian English.  We will be releasing this plugin to the community as open-source. You can download our prototype for the jQuery Globalization plugin from our Github repository: http://github.com/nje/jquery-glob You can also download a set of samples that demonstrate some simple use-cases with it here. Understanding Globalization The jQuery Globalization plugin enables you to easily parse and format numbers, currencies, and dates for different cultures in JavaScript. For example, you can use the Globalization plugin to display the proper currency symbol for a culture: You also can use the Globalization plugin to format dates so that the day and month appear in the right order and the day and month names are correctly translated: Notice above how the Arabic year is displayed as 1431. This is because the year has been converted to use the Arabic calendar. Some cultural differences, such as different currency or different month names, are obvious. Other cultural differences are surprising and subtle. For example, in some cultures, the grouping of numbers is done unevenly. In the "te-IN" culture (Telugu in India), groups have 3 digits and then 2 digits. The number 1000000 (one million) is written as "10,00,000". Some cultures do not group numbers at all. All of these subtle cultural differences are handled by the jQuery Globalization plugin automatically. Getting dates right can be especially tricky. Different cultures have different calendars such as the Gregorian and UmAlQura calendars. A single culture can even have multiple calendars. For example, the Japanese culture uses both the Gregorian calendar and a Japanese calendar that has eras named after Japanese emperors. The Globalization Plugin includes methods for converting dates between all of these different calendars. Using Language Tags The jQuery Globalization plugin uses the language tags defined in the RFC 4646 and RFC 5646 standards to identity cultures (see http://tools.ietf.org/html/rfc5646). A language tag is composed out of one or more subtags separated by hyphens. For example: Language Tag Language Name (in English) en-AU English (Australia) en-BZ English (Belize) en-CA English (Canada) Id Indonesian zh-CHS Chinese (Simplified) Legacy Zu isiZulu Notice that a single language, such as English, can have several language tags. Speakers of English in Canada format numbers, currencies, and dates using different conventions than speakers of English in Australia or the United States. You can find the language tag for a particular culture by using the Language Subtag Lookup tool located here:  http://rishida.net/utils/subtags/ The jQuery Globalization plugin download includes a folder named globinfo that contains the information for each of the 350 cultures. Actually, this folder contains more than 700 files because the folder includes both minified and un-minified versions of each file. 
For example, the globinfo folder includes JavaScript files named jQuery.glob.en-AU.js for English Australia, jQuery.glob.id.js for Indonesia, and jQuery.glob.zh-CHS for Chinese (Simplified) Legacy. Example: Setting a Particular Culture Imagine that you have been asked to create a German website and want to format all of the dates, currencies, and numbers using German formatting conventions correctly in JavaScript on the client. The HTML for the page might look like this: Notice the span tags above. They mark the areas of the page that we want to format with the Globalization plugin. We want to format the product price, the date the product is available, and the units of the product in stock. To use the jQuery Globalization plugin, we’ll add three JavaScript files to the page: the jQuery library, the jQuery Globalization plugin, and the culture information for a particular language: In this case, I’ve statically added the jQuery.glob.de-DE.js JavaScript file that contains the culture information for German. The language tag “de-DE” is used for German as spoken in Germany. Now that I have all of the necessary scripts, I can use the Globalization plugin to format the product price, date available, and units in stock values using the following client-side JavaScript: The jQuery Globalization plugin extends the jQuery library with new methods - including new methods named preferCulture() and format(). The preferCulture() method enables you to set the default culture used by the jQuery Globalization plugin methods. Notice that the preferCulture() method accepts a language tag. The method will find the closest culture that matches the language tag. The $.format() method is used to actually format the currencies, dates, and numbers. The second parameter passed to the $.format() method is a format specifier. For example, passing “c” causes the value to be formatted as a currency. The ReadMe file at github details the meaning of all of the various format specifiers: http://github.com/nje/jquery-glob When we open the page in a browser, everything is formatted correctly according to German language conventions. A euro symbol is used for the currency symbol. The date is formatted using German day and month names. Finally, a period instead of a comma is used a number separator: You can see a running example of the above approach with the 3_GermanSite.htm file in this samples download. Example: Enabling a User to Dynamically Select a Culture In the previous example we explicitly said that we wanted to globalize in German (by referencing the jQuery.glob.de-DE.js file). Let’s now look at the first of a few examples that demonstrate how to dynamically set the globalization culture to use. Imagine that you want to display a dropdown list of all of the 350 cultures in a page. When someone selects a culture from the dropdown list, you want all of the dates in the page to be formatted using the selected culture. Here’s the HTML for the page: Notice that all of the dates are contained in a <span> tag with a data-date attribute (data-* attributes are a new feature of HTML 5 that conveniently also still work with older browsers). We’ll format the date represented by the data-date attribute when a user selects a culture from the dropdown list. In order to display dates for any possible culture, we’ll include the jQuery.glob.all.js file like this: The jQuery Globalization plugin includes a JavaScript file named jQuery.glob.all.js. 
This file contains globalization information for all of the more than 350 cultures supported by the Globalization plugin.  At 367KB minified, this file is not small. Because of the size of this file, unless you really need to use all of these cultures at the same time, we recommend that you add the individual JavaScript files for particular cultures that you intend to support instead of the combined jQuery.glob.all.js to a page. In the next sample I’ll show how to dynamically load just the language files you need. Next, we’ll populate the dropdown list with all of the available cultures. We can use the $.cultures property to get all of the loaded cultures: Finally, we’ll write jQuery code that grabs every span element with a data-date attribute and format the date: The jQuery Globalization plugin’s parseDate() method is used to convert a string representation of a date into a JavaScript date. The plugin’s format() method is used to format the date. The “D” format specifier causes the date to be formatted using the long date format. And now the content will be globalized correctly regardless of which of the 350 languages a user visiting the page selects.  You can see a running example of the above approach with the 4_SelectCulture.htm file in this samples download. Example: Loading Globalization Files Dynamically As mentioned in the previous section, you should avoid adding the jQuery.glob.all.js file to a page whenever possible because the file is so large. A better alternative is to load the globalization information that you need dynamically. For example, imagine that you have created a dropdown list that displays a list of languages: The following jQuery code executes whenever a user selects a new language from the dropdown list. The code checks whether the globalization file associated with the selected language has already been loaded. If the globalization file has not been loaded then the globalization file is loaded dynamically by taking advantage of the jQuery $.getScript() method. The globalizePage() method is called after the requested globalization file has been loaded, and contains the client-side code to perform the globalization. The advantage of this approach is that it enables you to avoid loading the entire jQuery.glob.all.js file. Instead you only need to load the files that you need and you don’t need to load the files more than once. The 5_Dynamic.htm file in this samples download demonstrates how to implement this approach. Example: Setting the User Preferred Language Automatically Many websites detect a user’s preferred language from their browser settings and automatically use it when globalizing content. A user can set a preferred language for their browser. Then, whenever the user requests a page, this language preference is included in the request in the Accept-Language header. When using Microsoft Internet Explorer, you can set your preferred language by following these steps: Select the menu option Tools, Internet Options. Select the General tab. Click the Languages button in the Appearance section. Click the Add button to add a new language to the list of languages. Move your preferred language to the top of the list. Notice that you can list multiple languages in the Language Preference dialog. All of these languages are sent in the order that you listed them in the Accept-Language header: Accept-Language: fr-FR,id-ID;q=0.7,en-US;q=0.3 Strangely, you cannot retrieve the value of the Accept-Language header from client JavaScript. 
Microsoft Internet Explorer and Mozilla Firefox support a bevy of language related properties exposed by the window.navigator object, such as windows.navigator.browserLanguage and window.navigator.language, but these properties represent either the language set for the operating system or the language edition of the browser. These properties don’t enable you to retrieve the language that the user set as his or her preferred language. The only reliable way to get a user’s preferred language (the value of the Accept-Language header) is to write server code. For example, the following ASP.NET page takes advantage of the server Request.UserLanguages property to assign the user’s preferred language to a client JavaScript variable named acceptLanguage (which then allows you to access the value using client-side JavaScript): In order for this code to work, the culture information associated with the value of acceptLanguage must be included in the page. For example, if someone’s preferred culture is fr-FR (French in France) then you need to include either the jQuery.glob.fr-FR.js or the jQuery.glob.all.js JavaScript file in the page or the culture information won’t be available.  The “6_AcceptLanguages.aspx” sample in this samples download demonstrates how to implement this approach. If the culture information for the user’s preferred language is not included in the page then the $.preferCulture() method will fall back to using the neutral culture (for example, using jQuery.glob.fr.js instead of jQuery.glob.fr-FR.js). If the neutral culture information is not available then the $.preferCulture() method falls back to the default culture (English). Example: Using the Globalization Plugin with the jQuery UI DatePicker One of the goals of the Globalization plugin is to make it easier to build jQuery widgets that can be used with different cultures. We wanted to make sure that the jQuery Globalization plugin could work with existing jQuery UI plugins such as the DatePicker plugin. To that end, we created a patched version of the DatePicker plugin that can take advantage of the Globalization plugin when rendering a calendar. For example, the following figure illustrates what happens when you add the jQuery Globalization and the patched jQuery UI DatePicker plugin to a page and select Indonesian as the preferred culture: Notice that the headers for the days of the week are displayed using Indonesian day name abbreviations. Furthermore, the month names are displayed in Indonesian. You can download the patched version of the jQuery UI DatePicker from our github website. Or you can use the version included in this samples download and used by the 7_DatePicker.htm sample file. Summary I’m excited about our continuing participation in the jQuery community. This Globalization plugin is the third jQuery plugin that we’ve released. We’ve really appreciated all of the great feedback and design suggestions on the jQuery templating and data-linking prototypes that we released earlier this year.  We also want to thank the jQuery and jQuery UI teams for working with us to create these plugins. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. You can follow me at: twitter.com/scottgu
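A footnote on the Accept-Language technique above: the ASP.NET snippet is not reproduced in this excerpt, but the same server-side trick works in any stack. The following is a hypothetical Java servlet equivalent of my own (class and variable names are mine, and the servlet API jar is assumed to be on the classpath); it simply echoes the header into a client-side variable before the globalization script runs.

    import java.io.IOException;
    import java.io.PrintWriter;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical analog of the ASP.NET sample: expose the Accept-Language header
    // to client JavaScript so $.preferCulture() can be called with it.
    public class AcceptLanguageServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String acceptLanguage = req.getHeader("Accept-Language");   // e.g. "fr-FR,id-ID;q=0.7,en-US;q=0.3"
            if (acceptLanguage == null) {
                acceptLanguage = "en-US";                               // fall back to the plugin's default culture
            }
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            out.println("<script>");
            // Take the first (most preferred) language tag for preferCulture().
            out.println("var acceptLanguage = '" + acceptLanguage.split(",")[0].split(";")[0].trim() + "';");
            out.println("</script>");
        }
    }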

    Read the article

  • JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes

    - by John-Brown.Evans
    JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were: JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue This example leads you through the creation of an Oracle database Advanced Queue and the related WebLogic server objects in order to use AQ JMS in connection with a SOA composite. If you have not already done so, I recommend you look at the previous posts in this series, as they include steps which this example builds upon. The following examples will demonstrate how to write to and read from the queue from a SOA process. 1. Recap and Prerequisites In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we wrote and deployed BPEL composites, which enqueued and dequeued a simple XML payload. 
AQ JMS allows you to interoperate with database Advanced Queueing via JMS in WebLogic server and therefore take advantage of database features, while maintaining compliance with the JMS architecture. AQ JMS uses the WebLogic JMS Foreign Server framework. A full description of this functionality can be found in the following Oracle documentation Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server 11g Release 1 (10.3.6) Part Number E13738-06 7. Interoperating with Oracle AQ JMS http://docs.oracle.com/cd/E23943_01/web.1111/e13738/aq_jms.htm#CJACBCEJ For easier reference, this sample will use the same names for the objects as in the above document, except for the name of the database user, as it is possible that this user already exists in your database. We will create the following objects Database Objects Name Type AQJMSUSER Database User MyQueueTable Advanced Queue (AQ) Table UserQueue Advanced Queue WebLogic Server Objects Object Name Type JNDI Name aqjmsuserDataSource Data Source jdbc/aqjmsuserDataSource AqJmsModule JMS System Module AqJmsForeignServer JMS Foreign Server AqJmsForeignServerConnectionFactory JMS Foreign Server Connection Factory AqJmsForeignServerConnectionFactory AqJmsForeignDestination AQ JMS Foreign Destination queue/USERQUEUE eis/aqjms/UserQueue Connection Pool eis/aqjms/UserQueue 2. Create a Database User and Advanced Queue The following steps can be executed in the database client of your choice, e.g. JDeveloper or SQL Developer. The examples below use SQL*Plus. Log in to the database as a DBA user, for example SYSTEM or SYS. Create the AQJMSUSER user and grant privileges to enable the user to create AQ objects. Create Database User and Grant AQ Privileges sqlplus system/password as SYSDBA GRANT connect, resource TO aqjmsuser IDENTIFIED BY aqjmsuser; GRANT aq_user_role TO aqjmsuser; GRANT execute ON sys.dbms_aqadm TO aqjmsuser; GRANT execute ON sys.dbms_aq TO aqjmsuser; GRANT execute ON sys.dbms_aqin TO aqjmsuser; GRANT execute ON sys.dbms_aqjms TO aqjmsuser; Create the Queue Table and Advanced Queue and Start the AQ The following commands are executed as the aqjmsuser database user. Create the Queue Table connect aqjmsuser/aqjmsuser; BEGIN dbms_aqadm.create_queue_table ( queue_table => 'myQueueTable', queue_payload_type => 'sys.aq$_jms_text_message', multiple_consumers => false ); END; / Create the AQ BEGIN dbms_aqadm.create_queue ( queue_name => 'userQueue', queue_table => 'myQueueTable' ); END; / Start the AQ BEGIN dbms_aqadm.start_queue ( queue_name => 'userQueue'); END; / The above commands can be executed in a single PL/SQL block, but are shown as separate blocks in this example for ease of reference. You can verify the queue by executing the SQL command SELECT object_name, object_type FROM user_objects; which should display the following objects: OBJECT_NAME OBJECT_TYPE ------------------------------ ------------------- SYS_C0056513 INDEX SYS_LOB0000170822C00041$$ LOB SYS_LOB0000170822C00040$$ LOB SYS_LOB0000170822C00037$$ LOB AQ$_MYQUEUETABLE_T INDEX AQ$_MYQUEUETABLE_I INDEX AQ$_MYQUEUETABLE_E QUEUE AQ$_MYQUEUETABLE_F VIEW AQ$MYQUEUETABLE VIEW MYQUEUETABLE TABLE USERQUEUE QUEUE Similarly, you can view the objects in JDeveloper via a Database Connection to the AQJMSUSER. 3. Configure WebLogic Server and Add JMS Objects All these steps are executed from the WebLogic Server Administration Console. Log in as the webLogic user. Configure a WebLogic Data Source The data source is required for the database connection to the AQ created above. 
Navigate to domain > Services > Data Sources and press New then Generic Data Source. Use the values:Name: aqjmsuserDataSource JNDI Name: jdbc/aqjmsuserDataSource Database type: Oracle Database Driver: *Oracle’ Driver (Thin XA) for Instance connections; Versions:9.0.1 and later Connection Properties: Enter the connection information to the database containing the AQ created above and enter aqjmsuser for the User Name and Password. Press Test Configuration to verify the connection details and press Next. Target the data source to the soa server. The data source will be displayed in the list. It is a good idea to test the data source at this stage. Click on aqjmsuserDataSource, select Monitoring > Testing > soa_server1 and press Test Data Source. The result is displayed at the top of the page. Configure a JMS System Module The JMS system module is required to host the JMS foreign server for AQ resources. Navigate to Services > Messaging > JMS Modules and select New. Use the values: Name: AqJmsModule (Leave Descriptor File Name and Location in Domain empty.) Target: soa_server1 Click Finish. The other resources will be created in separate steps. The module will be displayed in the list.   Configure a JMS Foreign Server A foreign server is required in order to reference a 3rd-party JMS provider, in this case the database AQ, within a local WebLogic server JNDI tree. Navigate to Services > Messaging > JMS Modules and select (click on) AqJmsModule to configure it. Under Summary of Resources, select New then Foreign Server. Name: AqJmsForeignServer Targets: The foreign server is targeted automatically to soa_server1, based on the JMS module’s target. Press Finish to create the foreign server. The foreign server resource will be listed in the Summary of Resources for the AqJmsModule, but needs additional configuration steps. Click on AqJmsForeignServer and select Configuration > General to complete the configuration: JNDI Initial Context Factory: oracle.jms.AQjmsInitialContextFactory JNDI Connection URL: <empty> JNDI Properties Credential:<empty> Confirm JNDI Properties Credential: <empty> JNDI Properties: datasource=jdbc/aqjmsuserDataSource This is an important property. It is the JNDI name of the data source created above, which points to the AQ schema in the database and must be entered as a name=value pair, as in this example, e.g. datasource=jdbc/aqjmsuserDataSource, including the “datasource=” property name. Default Targeting Enabled: Leave this value checked. Press Save to save the configuration. At this point it is a good idea to verify that the data source was written correctly to the config file. In a terminal window, navigate to $MIDDLEWARE_HOME/user_projects/domains/soa_domain/config/jms  and open the file aqjmsmodule-jms.xml . The foreign server configuration should contain the datasource name-value pair, as follows:   <foreign-server name="AqJmsForeignServer">         <default-targeting-enabled>true</default-targeting-enabled>         <initial-context-factory>oracle.jms.AQjmsInitialContextFactory</initial-context-factory>         <jndi-property>           <key> datasource </key>           <value> jdbc/aqjmsuserDataSource </value>         </jndi-property>   </foreign-server> </weblogic-jms> Configure a JMS Foreign Server Connection Factory When creating the foreign server connection factory, you enter local and remote JNDI names. 
The name of the connection factory itself and the local JNDI name are arbitrary, but the remote JNDI name must match a specific format, depending on the type of queue or topic to be accessed in the database. This is very important and if the incorrect value is used, the connection to the queue will not be established and the error messages you get will not immediately reflect the cause of the error. The formats required (Remote JNDI names for AQ JMS Connection Factories) are described in the section Configure AQ Destinations  of the Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server document mentioned earlier. In this example, the remote JNDI name used is   XAQueueConnectionFactory  because it matches the AQ and data source created earlier, i.e. thin with AQ. Navigate to JMS Modules > AqJmsModule > AqJmsForeignServer > Connection Factories then New.Name: AqJmsForeignServerConnectionFactory Local JNDI Name: AqJmsForeignServerConnectionFactory Note: this local JNDI name is the JNDI name which your client application, e.g. a later BPEL process, will use to access this connection factory. Remote JNDI Name: XAQueueConnectionFactory Press OK to save the configuration. Configure an AQ JMS Foreign Server Destination A foreign server destination maps the JNDI name on the foreign JNDI provider to the respective local JNDI name, allowing the foreign JNDI name to be accessed via the local server. As with the foreign server connection factory, the local JNDI name is arbitrary (but must be unique), but the remote JNDI name must conform to a specific format defined in the section Configure AQ Destinations  of the Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server document mentioned earlier. In our example, the remote JNDI name is Queues/USERQUEUE , because it references a queue (as opposed to a topic) with the name USERQUEUE. We will name the local JNDI name queue/USERQUEUE, which is a little confusing (note the missing “s” in “queue), but conforms better to the JNDI nomenclature in our SOA server and also allows us to differentiate between the local and remote names for demonstration purposes. Navigate to JMS Modules > AqJmsModule > AqJmsForeignServer > Destinations and select New.Name: AqJmsForeignDestination Local JNDI Name: queue/USERQUEUE Remote JNDI Name:Queues/USERQUEUE After saving the foreign destination configuration, this completes the JMS part of the configuration. We still need to configure the JMS adapter in order to be able to access the queue from a BPEL processt. 4. Create a JMS Adapter Connection Pool in Weblogic Server Create the Connection Pool Access to the AQ JMS queue from a BPEL or other SOA process in our example is done via a JMS adapter. To enable this, the JmsAdapter in WebLogic server needs to be configured to have a connection pool which points to the local connection factory JNDI name which was created earlier. Navigate to Deployments > Next and select (click on) the JmsAdapter. Select Configuration > Outbound Connection Pools and New. Check the radio button for oracle.tip.adapter.jms.IJmsConnectionFactory and press Next. JNDI Name: eis/aqjms/UserQueue Press Finish Expand oracle.tip.adapter.jms.IJmsConnectionFactory and click on eis/aqjms/UserQueue to configure it. The ConnectionFactoryLocation must point to the foreign server’s local connection factory name created earlier. In our example, this is AqJmsForeignServerConnectionFactory . 
As a reminder, this connection factory is located under JMS Modules > AqJmsModule > AqJmsForeignServer > Connection Factories and the value needed here is under Local JNDI Name. Enter AqJmsForeignServerConnectionFactory  into the Property Value field for ConnectionFactoryLocation. You must then press Return/Enter then Save for the value to be accepted. If your WebLogic server is running in Development mode, you should see the message that the changes have been activated and the deployment plan successfully updated. If not, then you will manually need to activate the changes in the WebLogic server console.Although the changes have been activated, the JmsAdapter needs to be redeployed in order for the changes to become effective. This should be confirmed by the message Remember to update your deployment to reflect the new plan when you are finished with your changes. Redeploy the JmsAdapter Navigate back to the Deployments screen, either by selecting it in the left-hand navigation tree or by selecting the “Summary of Deployments” link in the breadcrumbs list at the top of the screen. Then select the checkbox next to JmsAdapter and press the Update button. On the Update Application Assistant page, select “Redeploy this application using the following deployment files” and press Finish. After a few seconds you should get the message that the selected deployments were updated. The JMS adapter configuration is complete and it can now be used to access the AQ JMS queue. You can verify that the JNDI name was created correctly, by navigating to Environment > Servers > soa_server1 and View JNDI Tree. Then scroll down in the JNDI Tree Structure to eis and select aqjms. This concludes the sample. In the following post, I will show you how to create a BPEL process which sends a message to this advanced queue via JMS. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery
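Before wiring the queue into a BPEL composite (the subject of the next post in the series), it can help to smoke-test the configuration from server-side Java, for example from a small servlet or utility class deployed to the SOA server so that the JNDI lookups resolve locally. The following is a minimal sketch of my own using only the local JNDI names configured above; everything else (class name, message text, deployment mechanism) is an assumption, not part of the article.

    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    // Server-side smoke test: look up the foreign server objects by their *local*
    // JNDI names and enqueue one text message into the AQ via JMS.
    public class AqJmsSmokeTest {
        public static void sendOne() throws Exception {
            InitialContext ctx = new InitialContext();   // running on the SOA server, so no provider URL is needed
            QueueConnectionFactory cf =
                    (QueueConnectionFactory) ctx.lookup("AqJmsForeignServerConnectionFactory");
            Queue queue = (Queue) ctx.lookup("queue/USERQUEUE");

            QueueConnection connection = cf.createQueueConnection();
            try {
                QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueSender sender = session.createSender(queue);
                TextMessage message = session.createTextMessage("Hello from AQ JMS");
                sender.send(message);   // should land in AQJMSUSER.USERQUEUE in the database
            } finally {
                connection.close();
            }
        }
    }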

    Read the article

  • Ops Center 12c - Provisioning Solaris Using a Card-Based NIC

    - by scottdickson
    It's been a long time since last I added something here, but having some conversations this last week, I got inspired to update things. I've been spending a lot of time with Ops Center for managing and installing systems these days.  So, I suspect a number of my upcoming posts will be in that area. Today, I want to look at how to provision Solaris using Ops Center when your network is not connected to one of the built-in NICs.  We'll talk about how this can work for both Solaris 10 and Solaris 11, since they are pretty similar.  In both cases, WANboot is a key piece of the story. Here's what I want to do:  I have a Sun Fire T2000 server with a Quad-GbE nxge card installed.  The only network is connected to port 2 on that card rather than the built-in network interfaces.  I want to install Solaris on it across the network, either Solaris 10 or Solaris 11.  I have met with a lot of customers lately who have a similar architecture.  Usually, they have T4-4 servers with the network connected via 10GbE connections. Add to this mix the fact that I use Ops Center to manage the systems in my lab, so I really would like to add this to Ops Center.  If possible, I would like this to be completely hands free.  I can't quite do that yet. Close, but not quite. WANBoot or Old-Style NetBoot? When a system is installed from the network, it needs some help getting the process rolling.  It has to figure out what its network configuration (IP address, gateway, etc.) ought to be.  It needs to figure out what server is going to help it boot and install, and it needs the instructions for the installation.  There are two different ways to bootstrap an installation of Solaris on SPARC across the network.   The old way uses a broadcast of RARP or more recently DHCP to obtain the IP configuration and the rest of the information needed.  The second is to explicitly configure this information in the OBP and use WANBoot for installation WANBoot has a number of benefits over broadcast-based installation: it is not restricted to a single subnet; it does not require special DHCP configuration or DHCP helpers; it uses standard HTTP and HTTPS protocols which traverse firewalls much more easily than NFS-based package installation.  But, WANBoot is not available on really old hardware and WANBoot requires the use o Flash Archives in Solaris 10.  Still, for many people, this is a great approach. As it turns out, WANBoot is necessary if you plan to install using a NIC on a card rather than a built-in NIC. Identifying Which Network Interface to Use One of the trickiest aspects to this process, and the one that actually requires manual intervention to set up, is identifying how the OBP and Solaris refer to the NIC that we want to use to boot.  The OBP already has device aliases configured for the built-in NICs called net, net0, net1, net2, net3.  The device alias net typically points to net0 so that when you issue the command  "boot net -v install", it uses net0 for the boot.  Our task is to figure out the network instance for the NIC we want to use.  We will need to get to the OBP console of the system we want to install in order to figure out what the network should be called.  I will presume you know how to get to the ok prompt.  Once there, we have to see what networks the OBP sees and identify which one is associated with our NIC using the OBP command show-nets. SunOS Release 5.11 Version 11.0 64-bit Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved. 
{4} ok banner
Sun Fire T200, No Keyboard
Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548.
Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c.
{4} ok show-nets
a) /pci@7c0/pci@0/pci@2/network@0,1
b) /pci@7c0/pci@0/pci@2/network@0
c) /pci@780/pci@0/pci@8/network@0,3
d) /pci@780/pci@0/pci@8/network@0,2
e) /pci@780/pci@0/pci@8/network@0,1
f) /pci@780/pci@0/pci@8/network@0
g) /pci@780/pci@0/pci@1/network@0,1
h) /pci@780/pci@0/pci@1/network@0
q) NO SELECTION
Enter Selection, q to quit: d
/pci@780/pci@0/pci@8/network@0,2 has been selected.
Type ^Y ( Control-Y ) to insert it in the command line.
e.g. ok nvalias mydev ^Y
for creating devalias mydev for /pci@780/pci@0/pci@8/network@0,2
{4} ok devalias
...
net3    /pci@7c0/pci@0/pci@2/network@0,1
net2    /pci@7c0/pci@0/pci@2/network@0
net1    /pci@780/pci@0/pci@1/network@0,1
net0    /pci@780/pci@0/pci@1/network@0
net     /pci@780/pci@0/pci@1/network@0
...
name    aliases
By looking at the devalias and the show-nets output, we can see that our Quad-GbE card must be the device nodes starting with /pci@780/pci@0/pci@8/network@0. The cable for our network is plugged into the 3rd slot, so the device address for our network must be /pci@780/pci@0/pci@8/network@0,2. With that, we can create a device alias for our network interface. Naming the device alias may take a little bit of trial and error, especially in Solaris 11 where the device alias seems to matter more with the new virtualized network stack. So far in my testing, since this is the "next" network interface to be used, I have found success in naming it net4, even though it's a NIC in the middle of a card that might, by rights, be called net6 (assuming the 0th interface on the card is the next interface identified by Solaris and this is the 3rd interface on the card). So, we will call it net4. We need to assign a device alias to it:
{4} ok nvalias net4 /pci@780/pci@0/pci@8/network@0,2
{4} ok devalias
net4    /pci@780/pci@0/pci@8/network@0,2
...
We also may need the MAC for this particular interface, so let's get it, too. To do this, we go to the device and interrogate its properties.
{4} ok cd /pci@780/pci@0/pci@8/network@0,2
{4} ok .properties
assigned-addresses       82060210 00000000 03000000 00000000 01000000
                         82060218 00000000 00320000 00000000 00008000
                         82060220 00000000 00328000 00000000 00008000
                         82060230 00000000 00600000 00000000 00100000
local-mac-address        00 21 28 20 42 92
phy-type                 mif
...
From this, we can see that the MAC for this interface is 00:21:28:20:42:92. We will need this later. This is all we need to do at the OBP. Now, we can configure Ops Center to use this interface.
Network Boot in Solaris 10
Solaris 10 turns out to be a little simpler than Solaris 11 for this sort of a network boot, since WANBoot in Solaris 10 simply fetches a specified Flash Archive. In order to install the system using Ops Center, it is necessary to create an OS Provisioning profile and its corresponding plan. I am going to presume that you already know how to do this within Ops Center 12c and I will just cover the differences between a regular profile and a profile that can use an alternate interface. Create an OS Provisioning profile for Solaris 10 as usual. However, when you specify the network resources for the primary network, click on the name of the NIC, probably GB_0, and rename it to GB_N/netN, where N is the instance number you used previously in creating the device alias. This is where the trial and error may come into play.
You may need to try a few instance numbers before you, the OBP, and Solaris all agree on the instance number.  Mark this as the boot network. For Solaris 10, you ought to be able to then apply the OS Provisioning profile to the server and it should install using that interface.  And if you put your cards in the same slots and plug the networks into the same NICs, this profile is reusable across multiple servers. Why This Works If you watch the console as Solaris boots during the OSP process, Ops Center is going to look for the device alias netN.  Since WANBoot requires a device alias called just net, Ops Center uses the value of your netN device alias and assigns that device to the net alias.  That means that boot net will automatically use this device.  Very cool!  Here's a trace from the console as Ops Center provisions a server: Sun Sun Fire T200, No KeyboardCopyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548.Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c.auto-boot? =            false{0} ok  {0} ok printenv network-boot-argumentsnetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok devalias net net                      /pci@780/pci@0/pci@1/network@0{0} ok devalias net4 net4                     /pci@780/pci@0/pci@8/network@0,2{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cginetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cgi{0} ok {0} ok boot net - installBoot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up<time unavailable> wanboot info: WAN boot messages->console<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2 See what happened?  Ops Center looked for the network device alias called net4 that we specified in the profile, took the value from it, and made it the net device alias for the boot.  Pretty cool! WANBoot and Solaris 11 Solaris 11 requires an additional step since the Automated Installer in Solaris 11 uses the MAC address of the network to figure out which manifest to use for system installation.  In order to make sure this is available, we have to take an extra step to associate the MAC of the NIC on the card with the host.  So, in addition to creating the device alias like we did above, we also have to declare to Ops Center that the host has this new MAC. Declaring the NIC Start out by discovering the hardware as usual.  Once you have discovered it, take a look under the Connectivity tab to see what networks it has discovered.  In the case of this system, it shows the 4 built-in networks, but not the networks on the additional cards.  These are not directly visible to the system controller.  In order to add the additional network interface to the hardware asset, it is necessary to Declare it.  We will declare that we have a server with this additional NIC, but we will also  specify the existing GB_0 network so that Ops Center can associate the right resources together.  
The GB_0 acts as sort of a key to tie our new declaration to the old system already discovered.  Go to the Assets tab, select All Assets, and then in the Actions tab, select Add Asset.  Rather than going through a discovery this time, we will manually declare a new asset. When we declare it, we will give the hostname, IP address, system model that match those that have already been discovered.  Then, we will declare both GB_0 with its existing MAC and the new GB_4 with its MAC.  Remember that we collected the MAC for GB_4 when we created its device alias. After you declare the asset, you will see the new NIC in the connectivity tab for the asset.  You will notice that only the NICs you listed when you declared it are seen now.  If you want Ops Center to see all of the existing NICs as well as the additional one, declare them as well.  Add the other GB_1, GB_2, GB_3 links and their MACs just as you did GB_0 and GB_4.  Installing the OS  Once you have declared the asset, you can create an OS Provisioning profile for Solaris 11 in the same way that you did for Solaris 10.  The only difference from any other provisioning profile you might have created already is the network to use for installation.  Again, use GB_N/netN where N is the interface number you used for your device alias and in your declaration.  And away you go.  When the system boots from the network, the automated installer (AI) is able to see which system manifest to use, based on the new MAC that was associated, and the system gets installed. {0} ok {0} ok printenv network-boot-argumentsnetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok devalias net net                      /pci@780/pci@0/pci@1/network@0{0} ok devalias net4 net4                     /pci@780/pci@0/pci@8/network@0,2{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cginetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok boot net - installBoot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up<time unavailable> wanboot info: WAN boot messages->console<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2...SunOS Release 5.11 Version 11.0 64-bitCopyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.Remounting root read/writeProbing for device nodes ...Preparing network image for useDownloading solaris.zlib--2012-02-17 15:10:17--  http://10.140.204.22:5555/var/js/AI/sparc//solaris.zlibConnecting to 10.140.204.22:5555... connected.HTTP request sent, awaiting response... 200 OKLength: 126752256 (121M) [text/plain]Saving to: `/tmp/solaris.zlib'100%[======================================>] 126,752,256 28.6M/s   in 4.4s    2012-02-17 15:10:21 (27.3 MB/s) - `/tmp/solaris.zlib' saved [126752256/126752256] Conclusion So, why go to all of this trouble?  More and more, I find that customers are wiring their data center to only use higher speed networks - 10GbE only to the hosts.  
Some customers are moving aggressively toward consolidated networks combining storage and network on CNA NICs.  All of this means that network-based provisioning cannot rely exclusively on the built-in network interfaces.  So, it's important to be able to provision a system using other than the built-in networks.  Turns out, that this is pretty straight-forward for both Solaris 10 and Solaris 11 and fits into the Ops Center deployment process quite nicely. Hopefully, you will be able to use this as you build out your own private cloud solutions with Ops Center.
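One small detail worth calling out from the two console traces above: the client-id in the network-boot-arguments appears to be nothing more than "01" followed by the MAC address with the separators stripped (01002128204292 for the card's MAC 00:21:28:20:42:92; similarly, 0100144F1DBC0C corresponds to the zero-padded built-in MAC 00:14:4f:1d:bc:0c). The little C# sketch below only illustrates that observation and the general shape of the network-boot-arguments string; the class and method names are made up for this example and are not part of any Ops Center or Solaris API.

using System;

// Illustrative helper only - it mirrors the format seen in the WANBoot traces
// above (client-id = "01" + zero-padded MAC without separators); it is not an
// official API of Ops Center or Solaris.
public static class WanBootArguments
{
    public static string ClientIdFromMac(string mac)
    {
        // "00:21:28:20:42:92" -> "01002128204292"
        return "01" + mac.Replace(":", string.Empty).ToUpperInvariant();
    }

    public static string Build(string hostIp, string routerIp, string subnetMask,
                               string hostname, string mac, string wanbootCgiUrl)
    {
        return string.Format(
            "host-ip={0},router-ip={1},subnet-mask={2},hostname={3},client-id={4},file={5}",
            hostIp, routerIp, subnetMask, hostname, ClientIdFromMac(mac), wanbootCgiUrl);
    }

    public static void Main()
    {
        // Reproduces the Solaris 11 trace values shown in this post.
        Console.WriteLine(Build("10.140.204.234", "10.140.204.1", "255.255.254.0",
            "atl-sewr-52", "00:21:28:20:42:92",
            "http://10.140.204.22:5555/cgi-bin/wanboot-cgi"));
    }
}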

    Read the article

  • The Product Owner

    - by Robert May
    In a previous post, I outlined the rules of Scrum.  This post details one of those rules. Picking a most important part of Scrum is difficult.  All of the rules are required, but if there were one rule that is “more” required that every other rule, its having a good Product Owner.  Simply put, the Product Owner can make or break the project. Duties of the Product Owner A Product Owner has many duties and responsibilities.  I’ll talk about each of these duties in detail below. A Product Owner: Discovers and records stories for the backlog. Prioritizes stories in the Product Backlog, Release Backlog and Iteration Backlog. Determines Release dates and Iteration Dates. Develops story details and helps the team understand those details. Helps QA to develop acceptance tests. Interact with the Customer to make sure that the product is meeting the customer’s needs. Discovers and Records Stories for the Backlog When I do Scrum, I always use User Stories as the means for capturing functionality that’s required in the system.  Some people will use Use Cases, but the same rule applies.  The Product Owner has the ultimate responsibility for figuring out what functionality will be in the system.  Many different mechanisms for capturing this input can be used.  User interviews are great, but all sources should be considered, including talking with Customer Support types.  Often, they hear what users are struggling with the most and are a great source for stories that can make the application easier to use. Care should be taken when soliciting user stories from technical types such as programmers and the people that manage them.  They will almost always give stories that are very technical in nature and may not have a direct benefit for the end user.  Stories are about adding value to the company.  If the stories don’t have direct benefit to the end user, the Product Owner should question whether or not the story should be implemented.  In general, technical stories should be included as tasks in User Stories.  Technical stories are often needed, but the ultimate value to the user is in user based functionality, so technical stories should be considered nothing more than overhead in providing that user functionality. Until the iteration prior to development, stories should be nothing more than short, one line placeholders. An exercise called Story Planning can be used to brainstorm and come up with stories.  I’ll save the description of this activity for another blog post. For more information on User Stories, please read the book User Stories Applied by Mike Cohn. Prioritizes Stories in the Product Backlog, Release Backlog and Iteration Backlog Prioritization of stories is one of the most difficult tasks that a Product Owner must do.  A key concept of Scrum done right is the need to have the team working from a single set of prioritized stories.  If the team does not have a single set of prioritized stories, Scrum will likely fail at your organization.  The Product Owner is the ONLY person who has the responsibility to prioritize that list.  The Product Owner must be very diplomatic and sincerely listen to the people around him so that he can get the priorities correct. Just listening will still not yield the proper priorities.  Care must also be taken to ensure that Return on Investment is also considered.  Ultimately, determining which stories give the most value to the company for the least cost is the most important factor in determining priorities.  
Product Owners should be willing to look at cold, hard numbers to determine the order for stories.  Even when many people want a feature, if that features is costly to develop, it may not have as high of a return on investment as features that are cheaper, but not as popular. The act of prioritization often causes conflict in an environment.  Customer Service thinks that feature X is the most important, because it will stop people from calling.  Operations thinks that feature Y is the most important, because it will stop servers from crashing.  Developers think that feature Z is most important because it will make writing software much easier for them.  All of these are useful goals, but the team can have only one list of items, and each item must have a priority that is different from all other stories.  The Product Owner will determine which feature gives the best return on investment and the other features will have to wait their turn, which means that someone will not have their top priority feature implemented first. A weak Product Owner will refuse to do prioritization.  I’ve heard from multiple Product Owners the following phrase, “Well, it’s all got to be done, so what does it matter what order we do it in?”  If your product owner is using this phrase, you need a new Product Owner.  Order is VERY important.  In Scrum, every release is potentially shippable.  If the wrong priority items are developed, then the value added in each release isn’t what it should be.  Additionally, the Product Owner with this mindset doesn’t understand Agile.  A product is NEVER finished, until the company has decided that it is no longer a going concern and they are no longer going to sell the product.  Therefore, prioritization isn’t an event, its something that continues every day.  The logical extension of the phrase “It’s all got to be done” is that you will never ship your product, since a product is never “done.”  Once stories have been prioritized, assigning them to the Release Backlog and the Iteration Backlog becomes relatively simple.  The top priority items are copied into the respective backlogs in order and the task is complete.  The team does have the right to shuffle things around a little in the iteration backlog.  For example, they may determine that working on story C with story A is appropriate because they’re related, even though story B is technically a higher priority than story C.  Or they may decide that story B is too big to complete in the time available after Story A has tasks created, so they’ll work on Story C since it’s smaller.  They can’t, however, go deep into the backlog to pick stories to implement.  The team and the Product Owner should work together to determine what’s best for the company. Prioritization is time consuming, but its one of the most important things a Product Owner does. Determines Release Dates and Iteration Dates Product owners are responsible for determining release dates for a product.  A common misconception that Product Owners have is that every “release” needs to correspond with an actual release to customers.  This is not the case.  In general, releases should be no more than 3 months long.  You  may decide to release the product to the customers, and many companies do release the product to customers, but it may also be an internal release. If a release date is too far away, developers will fall into the trap of not feeling a sense of urgency.  The date is far enough away that they don’t need to give the release their full attention.  
Additionally, important tasks, such as performance tuning, regression testing, user documentation, and release preparation, will not happen regularly, making them much more difficult and time consuming to do.  The more frequently you do these tasks, the easier they are to accomplish. The Product Owner will be a key participant in determining whether or not a release should be sent out to the customers.  The determination should be made on whether or not the features contained in the release are valuable enough  and complete enough that the customers will see real value in the release.  Often, some features will take more than three months to get them to a state where they qualify for a release or need additional supporting features to be released.  The product owner has the right to make this determination. In addition to release dates, the Product Owner also will help determine iteration dates.  In general, an iteration length should be chosen and the team should follow that iteration length for an extended period of time.  If the iteration length is changed every iteration, you’re not doing Scrum.  Iteration lengths help the team and company get into a rhythm of developing quality software.  Iterations should be somewhere between 2 and 4 weeks in length.  Any shorter, and significant software will likely not be developed.  Any longer, and the team won’t feel urgency and planning will become very difficult. Iterations may not be extended during the iteration.  Companies where Scrum isn’t really followed will often use this as a strategy to complete all stories.  They don’t want to face the harsh reality of what their true performance is, and looking good is more important than seeking visibility and improving the process and team.  Companies like this typically don’t allow failure.  This is unhealthy.  Failure is part of life and unless we learn from it, we can’t improve.  I would much rather see a team push out stories to the next iteration and then have healthy discussions about why they failed rather than extend the iteration and not deal with the core problems. If iteration length varies, retrospectives become more difficult.  For example, evaluating the performance of the team’s estimation efforts becomes much more difficult if the iteration length varies.  Also, the team must have a velocity measurement.  If the iteration length varies, measuring velocity becomes impossible and upper management no longer will have the ability to evaluate the teams performance.  People external to the team will no longer have the ability to determine when key features are likely to be developed.  Variable iterations cause the entire company to fail and likely cause Scrum to fail at an organization. Develops Story Details and Helps the Team Understand Those Details A key concept in Scrum is that the stories are nothing more than a placeholder for a conversation.  Stories should be nothing more than short, one line statements about the functionality.  The team will then converse with the Product Owner about the details about that story.  The product owner needs to have a very good idea about what the details of the story are and needs to be able to help the team understand those details. Too often, we see this requirement as being translated into the need for comprehensive documentation about the story, including old fashioned requirements documentation.  
The team should only develop the documentation that is required and should not develop documentation that is only created because their is a process to do so. In general, what we see that works best is the iteration before a team starts development work on a story, the Product Owner, with other appropriate business analysts, will develop the details of that story.  They’ll figure out what business rules are required, potentially make paper prototypes or other light weight mock-ups, and they seek to understand the story and what is implied.  Note that the time allowed for this task is deliberately short.  The Product Owner only has a single iteration to develop all of the stories for the next iteration. If more than one iteration is used, I’ve found that teams will end up with Big Design Up Front and traditional requirements documents.  This is a waste of time, since the team will need to then have discussions with the Product Owner to figure out what the requirements document says.  Instead of this, skip making the pretty pictures and detailing the nuances of the requirements and build only what is minimally needed by the team to do development.  If something comes up during development, you can address it at that time and figure out what you want to do.  The goal is to keep things as light weight as possible so that everyone can move as quickly as possible. Helps QA to Develop Acceptance Tests In Scrum, no story can be counted until it is accepted by QA.  Because of this, acceptance tests are very important to the team.  In general, acceptance tests need to be developed prior to the iteration or at the very beginning of the iteration so that the team can make sure that the tasks that they develop will fulfill the acceptance criteria. The Product Owner will help the team, including QA, understand what will make the story acceptable.  Note that the Product Owner needs to be careful about specifying that the feature will work “Perfectly” at the end of the iteration.  In general, features are developed a little bit at a time, so only the bit that is being developed should be considered as necessary for acceptance. A weak Product Owner will make statements like “Do it right the first time.”  Not only are these statements damaging to the team (like they would try to do it WRONG the first time . . .), they’re also ignoring the iterative nature of Scrum.  Additionally, a weak product owner will seek to add scope in the acceptance testing.  For example, they will refuse to determine acceptance at the beginning of the iteration, and then, after the team has planned and committed to the iteration, they will expand scope by defining acceptance.  This often causes the team to miss the iteration because scope that wasn’t planned on is included.  There are ways that the team can mitigate this problem.  For example, include extra “Product Owner” time to deal with the uncertainty that you know will be introduced by the Product Owner.  This will slow the perceived velocity of the team and is not ideal, since they’ll be doing more work than they get credit for. Interact with the Customer to Make Sure that the Product is Meeting the Customer’s Needs Once development is complete, what the team has worked on should be put in front of real live people to see if it meets the needs of the customer.  One of the great things about Agile is that if something doesn’t work, we can revisit it in a future iteration!  
This frees up the team to make the best decision now and know that if that decision proves to be incorrect, the team can revisit it and change that decision. Features are about adding value to the customer, so if the customer doesn’t find them useful, then having the team make tweaks is valuable.  In general, most software will be 80 to 90 percent “right” after the initial round and only minor tweaks are required.  If proper coding standards are followed, these tweaks are usually minor and easy to accomplish.  Product Owners that are doing a good job will encourage real users to see and use the software, since they know that they are trying to add value to the customer. Poor product owners will think that they know the answers already, that their customers are silly and do stupid things and that they don’t need customer input.  If you have a product owner that is afraid to show the team’s work to real customers, you probably need a different product owner. Up Next, “Who Makes a Good Product Owner.” Followed by, “Messing with the Team.” Technorati Tags: Scrum,Product Owner

    Read the article

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the “Windows Azure Service Bus Developer Guide”, along with some other patterns I am working on. I’ve taken the pattern descriptions from the book “Enterprise Integration Patterns” by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, “Enterprise Integration Patterns: Past, Present and Future”, which is well worth a look. I’ll be covering more patterns in the coming weeks; I’m currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my “SOA, Connectivity and Integration using the Windows Azure Service Bus” course. There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages.
Splitter
A message splitter takes a message and splits it into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splitter pattern as follows: How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item. The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of the sub message, or the trading partner that the sub message is to be sent to.
Aggregator
An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows: How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages. The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application.
The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration. Scenario The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus currently supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One of the ways of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations. Implementation The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below. Implementing the Splitter The splitter will take a large brokered message, and split the messages into a sequence of smaller sub-messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation. public class LargeMessageSender {     private static int SubMessageBodySize = 192 * 1024;     private QueueClient m_QueueClient;       public LargeMessageSender(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public void Send(BrokeredMessage message)     {         // Calculate the number of sub messages required.         long messageBodySize = message.Size;         int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);         if (messageBodySize % SubMessageBodySize != 0)         {             nrSubMessages++;         }           // Create a unique session Id.         string sessionId = Guid.NewGuid().ToString();         Console.WriteLine("Message session Id: " + sessionId);         Console.Write("Sending {0} sub-messages", nrSubMessages);           Stream bodyStream = message.GetBody<Stream>();         for (int streamOffest = 0; streamOffest < messageBodySize;             streamOffest += SubMessageBodySize)         {                                     // Get the stream chunk from the large message             long arraySize = (messageBodySize - streamOffest) > SubMessageBodySize                 ? 
SubMessageBodySize : messageBodySize - streamOffest;             byte[] subMessageBytes = new byte[arraySize];             int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);             MemoryStream subMessageStream = new MemoryStream(subMessageBytes);               // Create a new message             BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);             subMessage.SessionId = sessionId;               // Send the message             m_QueueClient.Send(subMessage);             Console.Write(".");         }         Console.WriteLine("Done!");     }} The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session, this session Id will be used for correlation in the aggregator. A for loop in then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true. Implementing the Aggregator The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.   public class LargeMessageReceiver {     private QueueClient m_QueueClient;       public LargeMessageReceiver(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public BrokeredMessage Receive()     {         // Create a memory stream to store the large message body.         MemoryStream largeMessageStream = new MemoryStream();           // Accept a message session from the queue.         MessageSession session = m_QueueClient.AcceptMessageSession();         Console.WriteLine("Message session Id: " + session.SessionId);         Console.Write("Receiving sub messages");           while (true)         {             // Receive a sub message             BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));               if (subMessage != null)             {                 // Copy the sub message body to the large message stream.                 Stream subMessageStream = subMessage.GetBody<Stream>();                 subMessageStream.CopyTo(largeMessageStream);                   // Mark the message as complete.                 subMessage.Complete();                 Console.Write(".");             }             else             {                 // The last message in the sequence is our completeness criteria.                 Console.WriteLine("Done!");                 break;             }         }                     // Create an aggregated message from the large message stream.         BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);         return largeMessage;     } }   The LargeMessageReceiver initialized using a QueueClient that is created by the receiving application. The receive method creates a memory stream that will be used to aggregate the large message body. 
The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As the AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session as accepted, the sub messages in the session are received, and their message body streams copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message, that is then returned to the receiving application. Testing the Implementation The splitter and aggregator are tested by creating a message sender and message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/, the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver console is fairly basic. The implementation of the main method of the sending application is shown below.   static void Main(string[] args) {     // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Open the input file.     FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);       // Create a BrokeredMessage for the file.     BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);       Console.WriteLine("Sending: " + AccountDetails.TestFile);     Console.WriteLine("Message body size: " + largeMessage.Size);     Console.WriteLine();         // Send the message with a LargeMessageSender     LargeMessageSender sender = new LargeMessageSender(queueClient);     sender.Send(largeMessage);       // Close the messaging facory.     factory.Close();  } The implementation of the main method of the receiving application is shown below. static void Main(string[] args) {       // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Create a LargeMessageReceiver and receive the message.     
LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);     BrokeredMessage largeMessage = receiver.Receive();       Console.WriteLine("Received message");     Console.WriteLine("Message body size: " + largeMessage.Size);       string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");     Console.WriteLine("Saving file: " + testFile);       // Save the message body as a file.     Stream largeMessageStream = largeMessage.GetBody<Stream>();     largeMessageStream.Seek(0, SeekOrigin.Begin);     FileStream fileOut = new FileStream(testFile, FileMode.Create);     largeMessageStream.CopyTo(fileOut);     fileOut.Close();       Console.WriteLine("Done!"); } In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The output of the sender console is shown below. The console shows that the body size of the large message was 9,929,365 bytes, and the message was sent as a sequence of 51 sub messages. When the receiving application is executed the results are shown below. The console application shows that the aggregator has received the 51 messages from the message sequence that was creating in the sending application. The messages have been aggregated to form a massage with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file. Improvements to the Implementation The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario there are a number of improvements that could be made to the design. Copying Message Header Properties When sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then used the values from this first sub message to set the properties in the message header of the large message during the aggregation process. Using Asynchronous Methods The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods, however doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence. Handling Exceptions In order to keep the code readable no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly.
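To make the first improvement above a bit more concrete, here is a minimal sketch of how the user-defined header properties of the original large message could be carried over on the first sub message and restored during aggregation. It assumes the LargeMessageSender and LargeMessageReceiver classes shown earlier; the helper class name is invented for this example.

using System.Collections.Generic;
using Microsoft.ServiceBus.Messaging;

// Sketch only: copies the user-defined properties of one BrokeredMessage to another.
public static class MessagePropertyCopier
{
    public static void CopyUserProperties(BrokeredMessage source, BrokeredMessage target)
    {
        foreach (KeyValuePair<string, object> property in source.Properties)
        {
            target.Properties[property.Key] = property.Value;
        }
    }
}

// In LargeMessageSender.Send, the properties would be copied onto the first
// sub message only, e.g. inside the for loop:
//     if (streamOffest == 0)
//     {
//         MessagePropertyCopier.CopyUserProperties(message, subMessage);
//     }
// In LargeMessageReceiver.Receive, the properties of the first sub message
// would be copied onto the aggregated message before it is returned (and
// before Complete() is called on that sub message).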
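For the asynchronous variant, a very simple resequencer could be built by stamping each sub message with its position in the sequence and buffering out-of-order chunks in the aggregator. The sketch below is one possible approach under those assumptions; the property name SubMessageIndex and the class itself are invented for this example.

using System.Collections.Generic;
using System.IO;
using Microsoft.ServiceBus.Messaging;

// Sketch only: restores the original chunk order when sub messages may be
// enqueued out of sequence (e.g. when using asynchronous sends).
public class SubMessageResequencer
{
    private readonly SortedDictionary<int, byte[]> m_Buffer = new SortedDictionary<int, byte[]>();
    private int m_NextIndex;

    // Splitter side: stamp the chunk index on the sub message before sending.
    public static void StampIndex(BrokeredMessage subMessage, int index)
    {
        subMessage.Properties["SubMessageIndex"] = index;
    }

    // Aggregator side: buffer the chunk, then flush any contiguous run of
    // chunks to the large message stream in the correct order.
    public void Add(BrokeredMessage subMessage, Stream largeMessageStream)
    {
        int index = (int)subMessage.Properties["SubMessageIndex"];

        using (MemoryStream chunk = new MemoryStream())
        {
            subMessage.GetBody<Stream>().CopyTo(chunk);
            m_Buffer[index] = chunk.ToArray();
        }

        while (m_Buffer.ContainsKey(m_NextIndex))
        {
            byte[] bytes = m_Buffer[m_NextIndex];
            largeMessageStream.Write(bytes, 0, bytes.Length);
            m_Buffer.Remove(m_NextIndex);
            m_NextIndex++;
        }
    }
}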

    Read the article

  • CodePlex Daily Summary for Saturday, November 12, 2011

    CodePlex Daily Summary for Saturday, November 12, 2011Popular ReleasesDynamic PagedCollection (Silverlight / WPF Pagination): PagedCollection: All classes which facilitate your dynamic pagination in Silverlight or WPF !Media Companion: MC 3.422b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Made the TV Shows folder list sorted. Re-visibled 'Manually Add Path' in Root Folders. Sorted list to process during new tv episode search Rebuild Movies now processes thru folders alphabetically Fix for issue #208 - Display Missing Episodes is not popu...DotSpatial: DotSpatial Release Candidate 1 (1.0.823): Supports loading extensions using System.ComponentModel.Composition. DemoMap compiled as x86 so that GDAL runs on x64 machines. How to: Use an Assembly from the WebBe aware that your browser may add an identifier to downloaded files which results in "blocked" dll files. You can follow the following link to learn how to "Unblock" files. Right click on the zip file before unzipping, choose properties, go to the general tab and click the unblock button. http://msdn.microsoft.com/en-us/library...XPath Visualizer: XPathVisualizer v1.3 Latest: This is v1.3.0.6 of XpathVisualizer. This is an update release for v1.3. These workitems have been fixed since v1.3.0.5: 7429 7432 7427MSBuild Extension Pack: November 2011: Release Blog Post The MSBuild Extension Pack November 2011 release provides a collection of over 415 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...Quick Performance Monitor: Version 1.8.0: Now you can add multiple counters/instances at once! Yes, I know its overdue...CODE Framework: 4.0.11110.0: Various minor fixes and tweaks.Extensions for Reactive Extensions (Rxx): Rxx 1.2: What's NewRelated Work Items Please read the latest release notes for details about what's new. Content SummaryRxx provides the following features. See the Documentation for details. Many IObservable<T> extension methods and IEnumerable<T> extension methods. Many useful types such as ViewModel, CommandSubject, ListSubject, DictionarySubject, ObservableDynamicObject, Either<TLeft, TRight>, Maybe<T> and others. Various interactive labs that illustrate the runtime behavior of the extensio...Player Framework by Microsoft: HTML5 Player Framework 1.0: Additional DownloadsHTML5 Player Framework Examples - This is a set of examples showing how to setup and initialize the HTML5 Player Framework. This includes examples of how to use the Player Framework with both the HTML5 video tag and Silverlight player. Note: Be sure to unblock the zip file before using. Note: In order to test Silverlight fallback in the included sample app, you need to run the html and xap files over http (e.g. over localhost). Silverlight Players - Visit the Silverlig...MapWindow 4: MapWindow GIS v4.8.6 - Final release - 64Bit: What’s New in 4.8.6 (Final release)A few minor issues have been fixed What’s New in 4.8.5 (Beta release)Assign projection tool. (Sergei Leschinsky) Projection dialects. 
(Sergei Leschinsky) Projections database converted to SQLite format. (Sergei Leschinsky) Basic code for database support - will be developed further (ShapefileDataClient class, IDataProvider interface). (Sergei Leschinsky) 'Export shapefile to database' tool. (Sergei Leschinsky) Made the GEOS library static. geos.dl...NewLife XCode ??????: XCode v8.2.2011.1107、XCoder v4.5.2011.1108: v8.2.2011.1107 ?IEntityOperate.Create?Entity.CreateInstance??????forEdit,????????(FindByKeyForEdit)???,???false ??????Entity.CreateInstance,????forEdit,???????????????????? v8.2.2011.1103 ??MS????,??MaxMin??(????????)、NotIn??(????)、?Top??(??NotIn)、RowNumber??(?????) v8.2.2011.1101 SqlServer?????????DataPath,?????????????????????? Oracle?????????DllPath,????OCI??,???????????ORACLE_HOME?? Oracle?????XCode.Oracle.IsUseOwner,???????????Ow...Facebook C# SDK: v5.3.2: This is a RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods uses graph api instead of legacy rest api. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ (experimental) added support for early preview for .NET 4.5 (binaries not distributed in codeplex nor nuget.org, will need to manually build from Facebook-Net45.sln) added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added ne...Delete Inactive TS Ports: List and delete the Inactive TS Ports: UPDATEAdded support for windows 2003 servers and removed some null reference errors when the registry key was not present List and delete the Inactive TS Ports - The InactiveTSPortList.EXE accepts command line arguments The InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.Ribbon Editor for Microsoft Dynamics CRM 2011: Ribbon Editor (0.1.2207.267): BUG FIXES: - Cannot add multiple JavaScript and Url under Actions - Cannot add <Or> node under <OrGroup> - Adding a rule under <Or> node put the new rule node at the wrong placeWF Designer Express: WF Designer Express v0.6: ?????(???????????) Multi Language(Now, Japanese and English)ClosedXML - The easy way to OpenXML: ClosedXML 0.60.0: Added almost full support for auto filters (missing custom date filters). See examples Filter Values, Custom Filters Fixed issues 7016, 7391, 7388, 7389, 7198, 7196, 7194, 7186, 7067, 7115, 7144Microsoft Research Boogie: Nightly builds: This download category contains automatically released nightly builds, reflecting the current state of Boogie's development. We try to make sure each nightly build passes the test suite. If you suspect that was not the case, please try the previous nightly build to see if that really is the problem. Also, please see the installation instructions.GoogleMap Control: GoogleMap Control 6.0: Major design changes to the control in order to achieve better scalability and extensibility for the new features comming with GoogleMaps API. GoogleMap control switched to GoogleMaps API v3 and .NET 4.0. GoogleMap control is 100% ScriptControl now, it requires ScriptManager to be registered on the pages where and before it is used. Markers, polylines, polygons and directions were implemented as ExtenderControl, instead of being inner properties of GoogleMap control. Better perfomance. 
Better...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: V2.1: Version 2.1 (click on the right) this uses V4.0 of .net Version 2.1 adds the following features: (apologize if I forget some, added a lot of little things) Manual Lookup with TV or Movie (finally huh!), you can look up a movie or TV episode directly, you can right click on anythign, and choose manual lookup, then will allow you to type anything you want to look up and it will assign it to the file you right clicked. No Rename: a very popular request, this is an option you can set so that t...SubExtractor: Release 1020: Feature: added "baseline double quotes" character to selector box Feature: added option to save SRT files as ANSI (instead of previous UTF-8 only) Feature: made "Save Sup files to Source directory" apply to both Sup and Idx source files. Fix: removed SDH text (...) or [...] that is split over 2 lines Fix: better decision-making in when to prefix a line with a '-' because SDH was removedNew Projects- Gemini - code branch: the branch for all Hzj_jie's code(MPLF) MediaPortal LoveFilm Plugin: MediaPortal LoveFilm (Love Film) Plugin. Browse films, add/remove from lists, streams etc._: _Batalha Naval em C#: Jogo de batalha naval para a disciplina de C#BusyHoliday - Making Outlook's Calendar Busy for Holidays: When users add holidays to their Outlook calendar these are added as an all day event. However, these are added as free time. With users in other countries, if they were to invite a user in a country with a public holiday, they won't be able to see that the user is on a public holiday or busy. BusyHoliday will update existing holidays as busy time. This may improve users ability to collaborate efficiently across geographical boundries. C# and MCU: C# and MCU and webCampo Minado C Sharp: C sharp source code of the game minefield.CP2011test: CP2011 TestDynamic PagedCollection (Silverlight / WPF Pagination): PagedCollection helps a developer to easily control the pagination of their collection. This collection doesnt need all datas in memory, it can be attached to RIA/SQL/XML data ... and load needed data one after the other. The PagedCollection can be used with Silverlight or WPF.Fast MVVM: The faster way to build your MVVM (Model, View, ViewModel) applications. It's a basic and educational way to learn and work with MVVM. We have more than just a pack of classes or templates. We have a new concept that turn it a little easier.Habitat: Core game services, frameworks and templates for desktop, mobile and web game developers.Historical Data Repository: Repository stores serializable timestamped data items in compressed files structured in a way that allows to quickly find data by time, quickly read and write data items in chronological order. Key features: - all data is stored under a single root folder; like file system, repository can contain a tree of folders each of which contains its own data stream; when reading, multiple streams can be combined into a single output stream maintaining chronological order; the tree structure can b...Immunity Buster: This game is for HIV. 
We will be making a virus which will help to spread the knowledge about the virus and have Fun among the peers.JoeCo Blog Application: JoeCo Blog Application for ECE 574jweis sso user center: jweis is a user center,distination is to use in a contry ,ssoKinect Shape Game Connect with Windows Phone: The classic Shape Game for Microsoft Kinect extended to allow players to join in the game and deploy shapes from their windows phone. Uses TCP sockets. For educational purposes. Free to download and distribute.Liekhus Behavior Driven DevExpress XAF: Using the BDD (Behavior Driven Development) techniques of SpecFlow and the EasyTest scriptability of DevExpress XAF (eXpress Application Framework), you can leverage test first style of application development in a speed never though imaginable.Linq to BBYOpen: C# Linq Provider to access the REST based BBYOpen services from BestBuy.Mi clima: This is the source code for the application "Mi clima" published in the Windows Phone Marketplace. MySqlQueryToolkit: This project bundles all of the common performance tests into a single handy user interface. Simply drop your query in and use the tools to make your query faster. Includes index suggestion, column data type and length suggestion and common profiling options.OData Validation Framework: Project to show how to validate data using client and server side OData/WCF Data ServicesOMR Table Data Syncronizer - Only Data: This tools compare and syncronize table data. No schema syncronizing included on this project. Data comparition is fastest! 1,000,000 data compare time is: ~200 ms.OrchardPo: This is the po file management module that is used on http://orchardproject.netResistance Calculator: A calculator for parallel and/or series resistanceSMS Team 5: h? th?ng g?i SMS t? d?ngSportsStore: Prueba en tfs de sports storeUDP Transport: This is an attempt at implementation of WCF UDP transport compliant with soa.udp OASIS Standard WebUtils: Biblioteca com funções genéricas úteis para uso com ASP NET MVC??a??e?a - Greek Government Open Data Program - Windows Mobile Client: ? efa?µ??? Open Government sa? d??e? ?µes? p??sßas? ap? t? ????t? sa? se ????? t??? ??µ???, ??e? t?? ap?f?se?? ?a? ??e? t?? dap??e? t?? ????????? ???t??? ?a? ???? t?? ?p??es??? t?? ????????? ??µ?s??? µ?s? t?? ??ße???t???? p?????µµat?? ??a??e?a.

    Read the article

  • Adjust timezone of an AVM Fritz!Box 7390

    It's been a while since I purchased an AVM Fritz!Box 7390, but since I'm using this 'PABX' here in Mauritius, I'm not really happy about the wrong time in the logs or on the connected handsets. Lately, I had some spare time to address this issue, and the following article describes how to adjust the timezone settings in general. The original idea came from an FAQ found in c't 21/11 (for a 7270, written in German), but I added a couple of things based on other resources online. The following tutorial may be valid for other models, too. Use your common sense and think before you act.

Brief introduction to AVM Fritz!Box devices

The Fritz!Box series from AVM has been around for more than a decade, and those little 'red boxes' offer a high level of versatility for your small office or home. High-speed connections, secure WLAN and convenient telephony make a home network out of any network. Whether it's a computer, tablet or smartphone, any device can be connected to the FRITZ!Box. And best of all, installation is so simple that users will be online in a matter of minutes. If you want peace of mind in your small network, a Fritz!Box is the easiest way to achieve that. I'm using my box primarily as a WiFi access point, VoIP gateway and media server, but only because it came in second after my Linux system.

Limitations in the administrative Web UI

Unfortunately, there is no way to adjust the timezone settings in the Web UI at all - not even in Expert mode. I assume that this is part of the 'simplification' provided by AVM's design team. That's okay as long as you reside in Central Europe and the implicit time handling is correct for your location.

Adjusting the timezone

I got my device through an order at Amazon Germany some time ago, and honestly I wasn't bothered too much about the pre-configured (fixed) timezone setting - CET or CEST depending on daylight saving. But you know, it's that kind of splinter at the back of your head that keeps nagging and bothering you indirectly. So I finally sat down yesterday evening and did some quick research on how to change the timezone. Even though there are a number of results, I read the FAQ from the c't magazine first, as I consider it a trusted and safe source of information. Of course, it is most important to avoid 'bricking' your device.

You've been warned - No support

Tinkering with the configuration of any AVM device seems to be a violation of their official support channels. So be warned, and continue only if you're sure about what you are going to do. The following solutions are 'as-is'; they worked flawlessly for my box but may cause issues in your case. Don't blame me...

Solution 1 - Backup, modify and restore

That's the approach described in the c't article and in a couple of other forum postings I found online, mainly from Australia. Log in to the administrative Web UI, navigate to 'System => Einstellungen sichern' (System => Backup configuration) and store your current configuration in a local file on your machine. Despite some online postings, it is not necessary to specify a password in order to secure or encrypt your backup; IMHO, this only adds another unnecessary layer of complexity to the process. Next, create another copy of your settings and keep it unmodified. That's our safety net for restoring the current settings in case we have to issue a factory reset on the box.
Now, open the configuration file with an advanced text editor that handles Unix line endings properly - Windows Notepad doesn't do the job, but WordPad or Notepad++ do. Personally, I don't care and simply use geany, gedit or nano on Linux. In total there are three modifications we have to apply to the configuration file - one new line and two adjustments.

First, we have to add an instruction near the top of the file that overrides the device's internal checksum validation. Without this line, your settings won't be accepted. Caution: the directives are case-sensitive, and the outcome should read something like this:

**** FRITZ!Box Fon WLAN 7390 CONFIGURATION EXPORT
Password=$$$$<ignore>
FirmwareVersion=84.05.52
CONFIG_INSTALL_TYPE=iks_16MB_xilinx_4eth_2ab_isdn_nt_te_pots_wlan_usb_host_dect_64415
OEM=avm
Country=049
Language=de
NoChecks=yes
**** CFGFILE:ar7.cfg
/*
 * /var/flash/ar7.cfg
 * Mon Jul 29 10:49:18 2013
 */
ar7cfg {
...

Then search for the expression 'timezone' and you should find a section like this one (around line 1113):

timezone_manual {
        enabled = no;
        offset = 0;
        dst_enabled = no;
        TZ_string = "";
        name = "";
}

We would like to handle the timezone setting manually on our device, so we have to enable it and set the proper value for Mauritius. The configuration block should look like this afterwards:

timezone_manual {
        enabled = yes;
        offset = 0;
        dst_enabled = no;
        TZ_string = "MUT-4";
        name = "";
}

We specify the designation and the offset in hours of the timezone we would like to have. Caution: the offset indicates the value one has to add to the local time to arrive at UTC. More details are described in the Explanation of TZ strings. Mauritius is at GMT+4, which means we have to subtract 4 hours from the local time to get UTC - hence "MUT-4".

Finally, we restore the modified configuration file via the administrative Web UI under 'System => Einstellungen sichern => Wiederherstellen' (System => Backup configuration => Restore). This triggers a reboot of the device, so please be patient and wait until the Web UI displays the login dialog again. Good luck!
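As a side note, the sign convention for TZ_string trips people up, so here is a tiny illustrative Python sketch - not something the box itself runs, just a way to double-check the value you are about to write into the backup file (it only covers the simple case without a DST rule):

    def posix_tz_string(abbrev: str, utc_offset_hours: int) -> str:
        """Build a simple POSIX TZ string (no DST rule).

        POSIX inverts the sign: the number is what you ADD to local time
        to reach UTC. Mauritius is UTC+4, so local time minus 4 hours
        gives UTC and the string becomes 'MUT-4'.
        """
        return f"{abbrev}{-utc_offset_hours}"

    print(posix_tz_string("MUT", 4))   # -> MUT-4  (the value used above)
    print(posix_tz_string("CET", 1))   # -> CET-1  (the box's default zone)
    print(posix_tz_string("EST", -5))  # -> EST5   (west of Greenwich: positive number)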
Solution 2 - Telnet

A more elegant - read: technically more interesting - way to adjust configuration settings on your Fritz!Box is to access it directly through telnet. By default AVM disables that protocol, and you have to enable it with a connected telephone. In order to activate the telnet service, dial the following combination:

#96*7*
#96*8* (to disable telnet again after the work has been completed)

If you're using an AVM handset like the Fritz!Fon, you will receive a confirmation message on the display like this:

telnetd ein

Next, depending on your favourite operating system, launch a command prompt on Windows or a terminal on Linux, have your admin password ready, and connect to your box like so:

$ telnet fritz.box
Trying 192.168.1.1...
Connected to fritz.box.
Escape character is '^]'.
password:

BusyBox v1.19.3 (2012-10-12 14:52:09 CEST) built-in shell (ash)
Enter 'help' for a list of built-in commands.
ermittle die aktuelle TTY
tty is "/dev/pts/0"
Console Ausgaben auf dieses Terminal umgelenkt
#

That's it, you are connected and we can continue to change the configuration manually. In order to adjust the timezone setting we have to open the ar7.cfg file. As we are now operating in a specialised environment, we only have limited capabilities at hand. One of them is a reduced version of vi - nvi. Let's open a second browser window with the fine manual page of nvi and start editing our configuration file:

# nvi /var/flash/ar7.cfg

In the configuration file, we have to navigate to the timezone directives. The easiest way is to search for the expression 'timezone' by typing the following:

/timezone    (press Enter/Return)

Now we should see the exact lines of code as in the backed-up version:

timezone_manual {
        enabled = no;
        offset = 0;
        dst_enabled = no;
        TZ_string = "";
        name = "";
}

And of course we apply the same changes as described in the previous section:

timezone_manual {
        enabled = yes;
        offset = 0;
        dst_enabled = no;
        TZ_string = "MUT-4";
        name = "";
}

Finally, we have to write our changes back to the file and apply the new settings:

:wq    (press Enter/Return)
# ar7cfgchanged

That's it! Close the telnet session by pressing Ctrl+] and entering 'quit'.

Additional ideas...

There are a couple more possibilities to enhance and extend the usability of a Fritz!Box, and there are lots of resources available on the net, but I'd like to name a few here. Especially for Linux users it is essential to be able to connect to any device remotely in a safe and secure way, and installing an SSH server on the box would be a first step to improve this situation - it also avoids having to run telnet at all. Sometimes there might be problems with your VoIP connections; feel free to adjust the settings for codecs and connection handling, too. I guess you'll get the idea... The only frontiers are in your mind.
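Purely as an illustrative sketch (not part of AVM's tooling), the following Python snippet checks whether the box's telnet daemon is actually reachable, which is handy for confirming that the #96*7* dial code took effect and that #96*8* switched it off again afterwards. The host name fritz.box is the default used above; adjust it if your setup differs.

    import socket

    def telnet_enabled(host: str = "fritz.box", port: int = 23, timeout: float = 3.0) -> bool:
        """Return True if something is listening on the box's telnet port."""
        try:
            # create_connection resolves the name and opens a TCP socket
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            # covers timeouts, refused connections and DNS failures
            return False

    if __name__ == "__main__":
        print("telnetd reachable:", telnet_enabled())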

    Read the article

  • FreeNAS on Dell PowerVault 745N: 2TB Limit?

    - by willoller
    I want to purchase 2 x 2TB drives and install FreeNAS on my Dell PowerVault 745N. People on the internets seem to be having trouble with the MD3000 firmware, and I want to make sure I can solve any issues before buying the drives. Before I invest, I have 3 questions:
    1. Is there a partition size limit determined by the RAID controller? That is, could I have a striped 4TB partition?
    2. The spec sheets make me wonder if the RAID controller needs all 4 drives in order to work. Is there any reason this would have to run in RAID 5?
    3. If I buy 4 matching drives, would the controller support a RAID 6 configuration?
    I'm basically new to all this RAID stuff - sorry for any noob questions.
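    For a rough sense of the capacity trade-off between the RAID levels mentioned above, here is a small illustrative Python sketch. It is generic RAID arithmetic only and says nothing about what the PowerVault's controller actually supports.

        def usable_tb(level: str, drives: int, size_tb: float) -> float:
            """Rough usable capacity per RAID level (ignores formatting overhead)."""
            if level == "raid0":
                return drives * size_tb
            if level == "raid5":
                if drives < 3:
                    raise ValueError("RAID 5 needs at least 3 drives")
                return (drives - 1) * size_tb          # one drive's worth of parity
            if level == "raid6":
                if drives < 4:
                    raise ValueError("RAID 6 needs at least 4 drives")
                return (drives - 2) * size_tb          # two drives' worth of parity
            if level == "raid10":
                if drives < 4 or drives % 2:
                    raise ValueError("RAID 10 needs an even number of drives, at least 4")
                return drives / 2 * size_tb            # everything is mirrored
            raise ValueError(f"unknown level: {level}")

        print("raid0 ", usable_tb("raid0", 2, 2.0), "TB usable with 2 x 2 TB drives")
        for lvl in ("raid0", "raid5", "raid6", "raid10"):
            print(lvl, usable_tb(lvl, 4, 2.0), "TB usable with 4 x 2 TB drives")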

    Read the article

  • Any problems using a GoDaddy SSL certificate on a Cisco ASA firewall?

    - by Richard West
    I need to purchase and install an SSL certificate on my Cisco ASA firewall. This will allow my VPN users to connect to the ASA without receiving the certificate error caused by the untrusted self-signed SSL certificate that is currently on it. I have had good experiences with the SSL certificates that GoDaddy sells. However, I'm concerned about using them: on my web servers I also have to install GoDaddy's "intermediate certificate bundle", and on the ASA I don't think I will be able to perform anything like that. I do not fully understand what the "intermediate certificate bundle" does, but obviously it's important. So my question is: can I use a GoDaddy SSL certificate on an ASA without my users getting any type of warning or error about connecting to a site that is using an untrusted SSL certificate? I need this to be as simple as possible for my end users, and warning messages are always scary :) Thanks!
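    One practical way to see whether end users would get a warning is to attempt a fully verified TLS handshake against the VPN endpoint from a client that only trusts the public CA roots. Below is a minimal stdlib Python sketch of that check; the hostname vpn.example.com is a placeholder for your ASA's address. If the device serves the leaf certificate without the required intermediate, verification typically fails with a CERTIFICATE_VERIFY_FAILED error, which is roughly the condition that makes browsers and VPN clients complain.

        import socket
        import ssl

        def check_tls_chain(host: str, port: int = 443) -> None:
            """Attempt a fully verified TLS handshake against host:port."""
            context = ssl.create_default_context()  # uses the system trust store
            with socket.create_connection((host, port), timeout=10) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()
                    print("Handshake OK, protocol:", tls.version())
                    print("Subject:", dict(x[0] for x in cert["subject"]))
                    print("Issuer: ", dict(x[0] for x in cert["issuer"]))

        if __name__ == "__main__":
            try:
                check_tls_chain("vpn.example.com")  # hypothetical hostname - replace with your ASA's address
            except ssl.SSLCertVerificationError as err:
                print("Verification failed - possibly a missing intermediate:", err)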

    Read the article

< Previous Page | 143 144 145 146 147 148 149 150 151 152 153 154  | Next Page >