Search Results

Search found 59230 results on 2370 pages for 'character set'.


  • Migrating SQL Server Databases – The DBA’s Checklist (Part 2)

    - by Sadequl Hussain
    Continuing from Part 1, our Migration Checklist continues:
    Step 5: Update statistics
    It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database: sp_updatestats. The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database. However, a word of caution: running sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistic of every table in the database. You may therefore want to change the compatibility mode before you run the command. Another thing you should remember to do is to ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from SSMS.
    Step 6: Set database options
    You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or Management Studio, since you can change the properties as you go. However, it is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after the restore. You may want to check the database's status programmatically in such cases. Another important option you may want to set for the newly restored / attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (the default for SQL Server 2005 and later databases), TORN_PAGE_DETECTION (the default when restoring a pre-SQL Server 2005 database) or NONE. Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates the checksum for a page's contents and writes it to the page header before storing the page on disk. When the page is read from disk, the checksum is computed again and compared with the checksum stored in the header. Torn page detection works in much the same way, in that it stores a bit in the page header for every 512-byte sector. When data is read from the page, the torn page bits stored in the header are compared with the respective sector contents. When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header. This is probably not something you would want to set unless there is a very specific reason. Microsoft suggests using the CHECKSUM page verify option as it offers more protection.
    Step 7: Map database users to logins
    A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters.
    Depending on what you want to do, you may use fewer than four, though. The first parameter, @Action, can take three values. When you specify @Action = 'Report', the system will provide you with a list of database users which are not mapped to any login. If you want to map a database user to an existing SQL Server login, the value for @Action will be 'Update_One'. In this case, you only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called "bob", there is already a SQL Server login with the same name, and you want to map the user to the login, you will execute a query like the following: sp_change_users_login @Action = 'Update_One', @UserNamePattern = 'bob', @LoginName = 'bob'. If the login does not exist, you can instruct SQL Server to create the login with the same name. In this case you will need to provide a password for the login, and the value of the @Action parameter will be 'Auto_Fix'. If the login already exists, it will be automatically mapped to the user account. Unfortunately, the sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server. You will need to follow a manual process to re-map those database user accounts. Continues…
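    To put Steps 5–7 together, a minimal T-SQL sketch along the lines described above might look like the following (the database name, user name and password are illustrative, not from the original article):

        -- Step 5: refresh statistics and make sure the auto-stats options are on
        USE TargetDB;
        EXEC sp_updatestats;
        ALTER DATABASE TargetDB SET AUTO_CREATE_STATISTICS ON;
        ALTER DATABASE TargetDB SET AUTO_UPDATE_STATISTICS ON;

        -- Step 6: make the restored database writable, multi-user, and checksum-verified
        ALTER DATABASE TargetDB SET READ_WRITE;
        ALTER DATABASE TargetDB SET MULTI_USER;
        ALTER DATABASE TargetDB SET PAGE_VERIFY CHECKSUM;

        -- Step 7: report orphaned users, then map/create the login for one of them
        EXEC sp_change_users_login @Action = 'Report';
        EXEC sp_change_users_login @Action = 'Auto_Fix', @UserNamePattern = 'bob', @Password = 'StrongP@ssw0rd1';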

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility with computing resources: we can provision and deploy an application or service with multiple instances over multiple machines. As the number of service instances grows, how to balance the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass incoming messages to the service instances; I would like to call them dispatcher mode and pulling mode.
    Dispatcher Mode
    The dispatcher mode introduces a role that takes responsibility for finding the best service instance to process each request. The image below describes the shape of this mode. There are four clients communicating with the service through the underlying transport. For example, if we are using HTTP, the clients might all be connecting to the same service URL. On the server side there is a dispatcher listening on this URL that retrieves all incoming messages. When a message comes in, the dispatcher finds a proper service instance to process it. There are three mechanisms for finding the instance: Round-robin: the dispatcher always sends the message to the next instance. For example, if the dispatcher sent the last message to instance 2, the next message will be sent to instance 3, regardless of whether instance 3 is busy at that moment. Random: the dispatcher picks a service instance randomly and, as with round-robin, it does not care whether the instance is busy. Sticky: the dispatcher sends all related messages to the same service instance. This approach is typically used if the service methods are stateful or session-based. But as you can see, none of these approaches is really load balanced. The clients send messages at any time, and each message might take a different processing duration on the server side. This means that in some cases some of the service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could happen that most of the simple task messages were passed to instance 1 while the complex ones were sent to instance 3, even though instance 1 should be idle. This brings some problems to our architecture. The first one is that the response to the clients might take longer than it should. As shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the round-robin dispatching. Secondly, if many requests arrive from the clients in a very short period, the service instances might be filled with tons of pending tasks and some instances might crash. Third, if we are using a cloud platform to host our service instances, for example Windows Azure, the computing resource is billed by service deployment period instead of actual CPU usage. This means that if any service instance is idle, it is wasting our money! Lastly, the dispatcher would be the bottleneck of our system, since all incoming messages must be routed through the dispatcher. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer. If we want more capacity, we have to scale it up, or buy a hardware load balancer which is very expensive, as well as scaling out the service instances.
    Pulling Mode
    Pulling mode doesn't need a dispatcher to route the messages. All service instances listen to the same transport and, whenever they are idle, try to retrieve the next proper message to process.
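    As a rough illustration of the pulling idea (my own sketch, not code from the article; the queue type and message shape are assumptions), every instance runs the same loop and simply takes the next pending message whenever it is free, so busy instances never accumulate a backlog:

        // Minimal pulling-mode worker: all instances share one queue and
        // each idle instance blocks until a message is available for it.
        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        public static class PullingWorker
        {
            // Stands in for the message bus / queue discussed in the article.
            private static readonly BlockingCollection<string> Queue = new BlockingCollection<string>();

            public static void Send(string message)
            {
                Queue.Add(message);
            }

            public static void Run(string instanceName, CancellationToken token)
            {
                try
                {
                    while (!token.IsCancellationRequested)
                    {
                        // Blocks until work is available; a busy instance simply isn't here asking for more.
                        string message = Queue.Take(token);
                        Console.WriteLine("{0} processing: {1}", instanceName, message);
                    }
                }
                catch (OperationCanceledException)
                {
                    // Shutting down: stop pulling.
                }
            }
        }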
    Since there is no dispatcher in pulling mode, it requires some features from the transport. The transport must support multiple clients connecting and multiple servers listening: HTTP and TCP don't allow multiple listeners on the same address and port, so they cannot be used for pulling mode directly. All messages in the transport must be FIFO, which means the older message must be received before the newer one. Message selection would be a plus on the transport. This means both service and client can specify selection criteria and receive only certain kinds of messages. This feature is not mandatory, but it is very useful when implementing the request-reply and duplex WCF channel modes; otherwise we must keep an in-memory dictionary to store the reply messages. I will explain more about this in the following articles. A message bus, or message queue, is the best candidate transport for the pulling mode. First, it allows multiple applications to listen on the same queue, and it's FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ. Others provide an in-memory dictionary which can store the reply messages, for example Redis. The principle of pulling mode is to let the service instances be self-managed: each instance will try to retrieve the next pending incoming message as soon as it has finished its current task. This gives us more benefits and solves the problems we met in the dispatcher mode. An incoming message will be received by the best available instance, which means the load will be well balanced; it will not happen that some instances are busy while others are idle, since an idle instance will retrieve more tasks to keep itself busy. Since all instances try their best to stay busy, we can use fewer instances than in dispatcher mode, which is more cost effective. Since there is no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we have to change something so that the dispatcher knows about the new instances; in pulling mode, since all service instances are self-managed, no extra change is needed at all. If there are many incoming messages, the message bus can queue them in the transport, so the service instances will not crash. All of the above are the benefits of using the pulling mode, but it introduces some problems as well. Process tracking and debugging become more difficult: since the service instances are self-managed, we cannot know in advance which instance will process a message, so we need more information to support debugging and tracing. Real-time response may not be supported: each service instance will only process the next message after the current one is done, so if we have real-time requirements this may not be a good solution. Comparing the pros and cons above, the pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost-effectiveness and self-management.
    WCF and WCF Transport Extensibility
    Windows Communication Foundation (WCF) is a framework for building service-oriented applications. In the .NET world WCF is the best way to implement a service. In this series I'm going to demonstrate how to implement the pulling mode on top of a message bus by extending WCF. I don't want to go deep into every related area of WCF, but I will highlight its transport extensibility.
    When we implement an RPC foundation there are many aspects we need to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message is passed through all necessary channels and finally sent to the underlying transport, and on the other side the message is received from the transport and passed up through the same channels until it reaches the business logic. This model is called the "Channel Stack" in WCF, and the last channel in the channel stack must always be a transport channel, which is responsible for sending and receiving the messages. As we are going to implement WCF over a message bus and build the pulling-mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus. Before we go deep into the transport channel, let's have a look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transport. WCF defines 3 basic MEPs: Datagram, Request-Reply and Duplex. Datagram: also known as one-way, or fire-and-forget mode. The message is sent from the client to the service, and no reply from the service is needed. The client doesn't care about the result of the message at all. Request-Reply: a very commonly used pattern. The client sends the request message to the service and waits until the reply message comes back from the service. Duplex: the client sends a message to the service, and while the service is processing the message it can call back to the client. During the callback the service acts like a client while the client acts like a service. In WCF, each MEP has certain channels associated with it: Datagram uses IInputChannel and IOutputChannel, Request-Reply uses IRequestChannel and IReplyChannel, and Duplex uses IDuplexChannel. The channels are created by a ChannelListener on the server side and a ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement. The TransportBindingElement is created by the Binding, which can be defined as a new binding or from a custom binding. For more information about the transport channel model, please refer to the MSDN documentation. The figure below shows the transport channel objects when using the request-reply MEP. And this is the datagram MEP. And this is the duplex MEP. After investigating the WCF transport architecture, channel model and MEPs, we can finally identify what we need to do to build our message bus based transport layer: Binding: (optional) defines the channel elements in the channel stack and adds our transport binding element at the bottom of the stack. We can use the built-in CustomBinding as well. TransportBindingElement: defines which MEPs are supported by our transport and creates the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoint when using this transport. ChannelListener: creates the server-side channel based on the MEP it supports. We can have one ChannelListener create channels for all supported MEPs, or one ChannelListener for each MEP. In this series I will use the second approach. ChannelFactory: creates the client-side channel based on the MEP it supports. We can have one ChannelFactory create channels for all supported MEPs, or one ChannelFactory for each MEP. In this series I will use the second approach.
    Channels: based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport to support Request-Reply mode, we should implement IRequestChannel and IReplyChannel. In this series I will implement all 3 MEPs listed above, one by one. Scaffold: in order to make our transport extension work we also need to implement some scaffolding. For example, we need some classes to send and receive messages through our message bus, and some code to read and write the WCF message, etc. These are not strictly necessary but will be very useful in our example.
    Message Bus
    There is only one thing remaining before we can begin to implement our scaling-out WCF transport, which is the message bus itself. As I mentioned above, the message bus must have certain features to fulfil all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product, but as I said before we can use any message bus product that satisfies our requirements. Here I would like to introduce an interface to separate the message bus from WCF. This allows us to implement the bus operations with whatever kind of bus we end up using. The interface looks like this.
    public interface IBus : IDisposable
    {
        string SendRequest(string message, bool fromClient, string from, string to = null);

        void SendReply(string message, bool fromClient, string replyTo);

        BusMessage Receive(bool fromClient, string replyTo);
    }
    There are only three methods in the bus interface. Let me explain them one by one. The SendRequest method is responsible for sending the request message onto the bus. The parameters are: message: the WCF message content. fromClient: indicates whether this message came from the client. from: the channel ID that this message was sent from. The channel ID is generated when any kind of channel is created, which will be explained in the following articles. to: the channel ID that should receive this message. In the Request-Reply and Duplex MEPs this is necessary, since the reply message must be received by the channel which sent the related request message. The SendReply method is responsible for sending the reply message. It is very similar to the previous one but has no "from" parameter, because there is no need to reply to a reply message in any MEP. The Receive method is responsible for waiting for an incoming message, including request messages and specific reply messages. It returns a BusMessage object, which contains some information about the channel. The code of the BusMessage class is:
    public class BusMessage
    {
        public string MessageID { get; private set; }
        public string From { get; private set; }
        public string ReplyTo { get; private set; }
        public string Content { get; private set; }

        public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content)
        {
            MessageID = messageId;
            From = fromChannelId;
            ReplyTo = replyToChannelId;
            Content = content;
        }
    }
    Now let's implement a message bus based on the IBus interface. Since I don't want you to have to buy and install TIBCO EMS or any other message bus product, I will implement an in-process memory bus. This bus is only for test and sample purposes; it can only be used if the service and client are in the same process. Very straightforward.
    public class InProcMessageBus : IBus
    {
        private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue;
        private readonly object _lock;

        public InProcMessageBus()
        {
            _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>();
            _lock = new object();
        }

        public string SendRequest(string message, bool fromClient, string from, string to = null)
        {
            var entity = new InProcMessageEntity(message, fromClient, from, to);
            _queue.TryAdd(entity.ID, entity);
            return entity.ID.ToString();
        }

        public void SendReply(string message, bool fromClient, string replyTo)
        {
            var entity = new InProcMessageEntity(message, fromClient, null, replyTo);
            _queue.TryAdd(entity.ID, entity);
        }

        public BusMessage Receive(bool fromClient, string replyTo)
        {
            InProcMessageEntity e = null;
            while (true)
            {
                lock (_lock)
                {
                    var entity = _queue
                        .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To)))
                        .FirstOrDefault();
                    if (entity.Key != Guid.Empty && entity.Value != null)
                    {
                        _queue.TryRemove(entity.Key, out e);
                    }
                }
                if (e == null)
                {
                    Thread.Sleep(100);
                }
                else
                {
                    return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content);
                }
            }
        }

        public void Dispose()
        {
        }
    }
    The InProcMessageBus stores the messages in InProcMessageEntity objects, which can carry some extra information besides the WCF message itself.
    public class InProcMessageEntity
    {
        public Guid ID { get; set; }
        public string Content { get; set; }
        public bool FromClient { get; set; }
        public string From { get; set; }
        public string To { get; set; }

        public InProcMessageEntity()
            : this(string.Empty, false, string.Empty, string.Empty)
        {
        }

        public InProcMessageEntity(string content, bool fromClient, string from, string to)
        {
            ID = Guid.NewGuid();
            Content = content;
            FromClient = fromClient;
            From = from;
            To = to;
        }
    }
    Summary
    OK, now I have all the necessary pieces ready. The next step will be implementing our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side, especially relevant when we are using a cloud platform: dispatcher mode and pulling mode, and I compared their pros and cons. Then I introduced the WCF channel stack, channel model and the transport extension points, and identified what we need to do to create our own WCF transport extension so that our WCF services can use pulling mode on top of a message bus. Finally I provided some classes that will be used in future posts, working against an in-process memory message bus for demonstration purposes only. In the next post I will begin to implement the transport extension step by step.
    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Task-It Webinar - Source Code

    Last week I presented a webinar called "Building a real-world application with RadControls for Silverlight 4". For those that didn't get to see the webinar, you can view it here: Building a real-world application with RadControls for Silverlight 4. Since the webinar I've received several requests asking if I could post the source code for the simple application I showed demonstrating some of the techniques used in the development of Task-It, such as MVVM, Commands and Internationalization. This source code is now available for download here. After downloading the source: extract it to the location of your choice on your hard drive; open the solution; right-click ModuleProject.Web and select 'Set as StartUp Project'; right-click ProjectTestPage.aspx and select 'Set as Start Page'; create a database in SQL Server called WebinarProject; navigate to the Database folder under the WebinarProject directory and run the .sql script against your WebinarProject database. The last two steps are necessary only for the Tasks page to work properly (using WCF RIA Services). Now some notes about each page: Code-behind: This is not the way I recommend coding a line-of-business application in Silverlight, but I simply wanted to show how the code-behind approach would look. Command: This page introduces MVVM and Commands. You'll notice in the XAML that the Command property of the RadMenuItem and the Button are both bound to a SaveCommand. That comes from the view model. If you look in the code-behind of the user control you'll see that an instance of a CommandViewModel is instantiated and set as the DataContext of the UserControl. There is also a listener for the view model's SaveCompleted event. When this is fired, it tells the view (UserControl) to display the MessageBox. (A minimal sketch of this view-model-plus-command pattern appears after these notes.) Internationalization: This sample is similar to the previous one, but instead of using hard-coded strings in the UI, the strings are obtained via binding to view model properties. The view model gets the strings from the .resx files (Strings.resx or Strings.de.resx) under Assets/Resources. If you uncomment the call to ShowGerman() in App.xaml.cs's Application_Startup method and re-run the application, you will see the UI in German. Note that this code, which sets the CurrentCulture and CurrentUICulture on the current thread to "de" (German), is for testing purposes only. RadWindow: Once again, very similar to the previous example. The difference is that we are now using a RadWindow to display the 'Saved' message instead of a MessageBox. The advantage here is that we do not have to hold on to a reference to the view model in our code-behind in order to get the 'Saved' message from it. The RadWindow's DataContext is now also bound to the view model, so within its XAML we can bind directly to properties in the view model. Much nicer, and cleaner. One other thing I introduced in this example is the use of spacer Rectangles. Rather than setting a width and/or height on the rectangles for spacing, I am now referencing a style in my ResourceDictionary called StandardSpacerStyle. I like this better than using margins or padding because I now have a reusable way to create space between elements, the Rectangle does not show (because I have not set its Fill color), and I can change my spacing throughout the user interface in one place if I'd like. Tasks: This page is quite a bit different from the other four. It is a very simple, stripped-down version of the Tasks page in the Task-It application.
    The Tasks.xaml UserControl has a ContentControl, and the Content of that control is set based on whether we are looking at the list of tasks or editing a task. So it displays one of two child UserControls, which are called List and Details. List has the RadGridView, Details has the form. In the code-behind of the Tasks UserControl I am once again setting its DataContext to a view model class. The nice thing is, whichever child UserControl is being displayed (List or Details) inherits its DataContext from its parent control (Tasks), so I do not have to explicitly set it. The List UserControl simply displays a RadGridView whose ItemsSource is bound to a property in the view model called Tasks, and its SelectedItem property is bound to a property in the view model called SelectedItem. The SelectedItem binding must be TwoWay so that the view is notified when the SelectedItem changes in the view model, and the view model is notified when something changes in the view (like when a user changes the Name and/or DueDate in the form). You'll also notice that the form's TextBox and RadDatePicker are TwoWay bound to the SelectedItem property in the view model. You can experiment with the binding by removing TwoWay and seeing how changes in the form no longer show up in the RadGridView. So here we have an example of two different views (List and Details) that are both bound to the same view model... and actually, so is the Tasks UserControl, so it is really three views. WCF RIA Services: By the way, I am using WCF RIA Services to retrieve data for the RadGridView and save the data when the user clicks the Save button in the form. I created a really simple ADO.NET Entity Data Model in WebinarProject.Web called DataModel.edmx. I also created a simple Domain Data Service called DataService that has methods for retrieving, inserting, updating and deleting data; however, I am only using the retrieval and update methods in this sample. Note that I do not currently have any validation in place on the form, as I wanted to keep the sample as simple as possible. Wrap up: Technically, I should move the calls to WCF RIA Services out of the view model and put them into a separate layer, but this works for now, and that is a topic for another day!
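    To make the Command and binding discussion above concrete, here is a minimal sketch of the kind of view model used in these samples. The class, property and helper names are illustrative and not taken from the webinar source:

        using System;
        using System.ComponentModel;
        using System.Windows.Input;

        // Simple view model exposing a bindable property and a command,
        // in the spirit of the CommandViewModel described above.
        public class SampleViewModel : INotifyPropertyChanged
        {
            private string _name;

            public event PropertyChangedEventHandler PropertyChanged;
            public event EventHandler SaveCompleted;

            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name != value)
                    {
                        _name = value;
                        OnPropertyChanged("Name");
                    }
                }
            }

            // Bound to Button.Command / RadMenuItem.Command in the XAML.
            public ICommand SaveCommand
            {
                get
                {
                    return new DelegateCommand(() =>
                    {
                        // ...persist changes here, then tell the view we are done...
                        var handler = SaveCompleted;
                        if (handler != null) handler(this, EventArgs.Empty);
                    });
                }
            }

            private void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        // Minimal ICommand implementation; MVVM libraries ship their own equivalents.
        public class DelegateCommand : ICommand
        {
            private readonly Action _execute;
            public DelegateCommand(Action execute) { _execute = execute; }
            public event EventHandler CanExecuteChanged;
            public bool CanExecute(object parameter) { return true; }
            public void Execute(object parameter) { _execute(); }
        }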

    Read the article

  • Item 2, Scott Meyers Effective C++ question

    - by user619818
    In Item 2 on page 16 (Prefer consts, enums, and inlines to #defines), Scott says: 'Also, though good compilers won't set aside storage for const objects of integer types'. I don't understand this. If I define a const object, e.g. const int myval = 5; then surely the compiler must set aside some memory (of int size) to store the value 5? Or is const data stored in some special way? This is more a question of computer storage, I suppose. Basically, how does the computer store const objects so that no storage is set aside?
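    A small illustration of the point being asked about (my own example, not from the book): when an integral const is only ever used as a compile-time value, the compiler can substitute the value directly and never allocate an object for it; taking its address is what forces storage to exist.

        #include <iostream>

        const int myval = 5;   // integral constant; internal linkage at namespace scope

        int square_of_myval()
        {
            // The compiler can fold this to "return 25;" at compile time --
            // no load from memory, so no storage for myval is required here.
            return myval * myval;
        }

        const int* address_of_myval()
        {
            // Taking the address means a real int object has to exist somewhere,
            // so now the compiler does have to set aside storage for myval.
            return &myval;
        }

        int main()
        {
            std::cout << square_of_myval() << " " << address_of_myval() << std::endl;
            return 0;
        }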

    Read the article

  • wcf web service in post method, object properties are null, although the object is not null

    - by Abdalhadi Kolayb
    I have this problem with a POST method: when I send an object parameter to the method, the object itself is not null, but all its properties have their default values. Here is the data contract: [DataContract] public class Products { [DataMember(Order = 1)] public int ProdID { get; set; } [DataMember(Order = 2)] public string ProdName { get; set; } [DataMember(Order = 3)] public float PrpdPrice { get; set; } } and here is the interface: [OperationContract] [WebInvoke( Method = "POST", UriTemplate = "AddProduct", ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.WrappedRequest, RequestFormat = WebMessageFormat.Json)] string AddProduct([MessageParameter(Name = "prod")]Products prod); public string AddProduct(Products prod) { ProductsList.Add(prod); return "return string"; } Here is the JSON request: Content-type:application/json {"prod":[{"ProdID": 111,"ProdName": "P111","PrpdPrice": 111}]} but on the server the object is received as: {"prod":[{"ProdID": 0,"ProdName": NULL,"PrpdPrice": 0}]}
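    One thing worth noting (an observation about the request shown above, not a confirmed fix from the thread): with BodyStyle = WrappedRequest and a single Products parameter, the serializer expects the "prod" member to be a JSON object rather than a one-element array, so a body shaped like the following would match the operation signature:

        Content-Type: application/json

        {"prod": {"ProdID": 111, "ProdName": "P111", "PrpdPrice": 111}}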

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
      The most common performance mistake SQL Server developers make: SQL Server estimates memory requirement for queries at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different set of parameters values / predicates and hence different amount of memory can be estimated based on different set of parameter values / predicates. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article which covers Hash Match operations.   When the plan is cached by using stored procedure or other plan caching mechanisms like sp_executesql or prepared statement, SQL Server estimates memory requirement based on first set of execution parameters. Later when the same stored procedure is called with different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse, overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to spill over tempdb resulting in poor performance.   This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operation. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   To read additional articles I wrote click here.   In most cases it is cheaper to pay for the compilation cost of dynamic queries than huge cost for spill over tempdb, unless memory requirement for a stored procedure does not change significantly based on predicates.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts   Enough theory, let’s see an example where we sort initially 1 month of data and then use the stored procedure to sort 6 months of data.   Let’s create a stored procedure that sorts customers by name within certain date range.   --Example provided by www.sqlworkshops.com create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1)       end go Let’s execute the stored procedure initially with 1 month date range.   set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 48 ms to complete.     The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.       
The estimated number of rows, 43199.9 is similar to actual number of rows 43200 and hence the memory estimation should be ok.       There was no Sort Warnings in SQL Profiler.      Now let’s execute the stored procedure with 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 679 ms to complete.      The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.      The estimated number of rows, 43199.9 is way different from the actual number of rows 259200 because the estimation is based on the first set of parameter value supplied to the stored procedure which is 1 month in our case. This underestimation will lead to sort spill over tempdb, resulting in poor performance.      There was Sort Warnings in SQL Profiler.    To monitor the amount of data written and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution, for additional information refer to the webcast: www.sqlworkshops.com/webcasts.     Let’s recompile the stored procedure and then let’s first execute the stored procedure with 6 month date range.  In a production instance it is not advisable to use sp_recompile instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile, refer to our webcasts for further details.   exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go Now the stored procedure took only 294 ms instead of 679 ms.    The stored procedure was granted 26832 KB of memory.      The estimated number of rows, 259200 is similar to actual number of rows of 259200. Better performance of this stored procedure is due to better estimation of memory and avoiding sort spill over tempdb.      There was no Sort Warnings in SQL Profiler.       Now let’s execute the stored procedure with 1 month date range.   --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 49 ms to complete, similar to our very first stored procedure execution.     This stored procedure was granted more memory (26832 KB) than necessary memory (6656 KB) based on 6 months of data estimation (259200 rows) instead of 1 month of data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter value supplied to the stored procedure which is 6 months in this case. This overestimation did not affect performance, but it might affect performance of other concurrent queries requiring memory and hence overestimation is not recommended. This overestimation might affect performance Hash Match operations, refer to article Plan Caching and Query Memory Part II for further details.    Let’s recompile the stored procedure and then let’s first execute the stored procedure with 2 day date range. exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-02' go The stored procedure took 1 ms.      The stored procedure was granted 1024 KB based on 1440 rows being estimated.      There was no Sort Warnings in SQL Profiler.      Now let’s execute the stored procedure with 6 month date range. 
--Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go   The stored procedure took 955 ms to complete, way higher than 679 ms or 294ms we noticed before.      The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed in the past this stored procedure with 6 month date range needed 26832 KB of memory to execute optimally without spill over tempdb. This is clear underestimation of memory and the reason for the very poor performance.      There was Sort Warnings in SQL Profiler. Unlike before this was a Multiple pass sort instead of Single pass sort. This occurs when granted memory is too low.      Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. Other possibility is to use recompile hint or optimize for hint to allocate memory for predefined date range.   Let’s recreate the stored procedure with recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1, recompile)       end go Let’s execute the stored procedure initially with 1 month date range and then with 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48ms and 291 ms in line with previous optimal execution times.      The stored procedure with 1 month date range has good estimation like before.      The stored procedure with 6 month date range also has good estimation and memory grant like before because the query was recompiled with current set of parameter values.      The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.     Let’s recreate the stored procedure with optimize for hint of 6 month date range.   --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo ='2001-06-30'))       end go Let’s execute the stored procedure initially with 1 month date range and then with 6 month date range.   --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48ms and 291 ms in line with previous optimal execution times.    The stored procedure with 1 month date range has overestimation of rows and memory. This is because we provided hint to optimize for 6 months of data.      The stored procedure with 6 month date range has good estimation and memory grant because we provided hint to optimize for 6 months of data.       
Let’s execute the stored procedure with 12 month date range using the currently cashed plan for 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-12-31' go The stored procedure took 1138 ms to complete.      2592000 rows were estimated based on optimize for hint value for 6 month date range. Actual number of rows is 524160 due to 12 month date range.      The stored procedure was granted enough memory to sort 6 month date range and not 12 month date range, so there will be spill over tempdb.      There was Sort Warnings in SQL Profiler.      As we see above, optimize for hint cannot guarantee enough memory and optimal performance compared to recompile hint.   This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operation. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   Summary: Cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb which could be very expensive compared to compilation cost. The other possibility is to use optimize for hint, but in case one sorts more data than hinted by optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in optimize for hint are archived from the database, the estimation will be wrong leading to worst performance, so one has to exercise caution before using optimize for hint, recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts), I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.     Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet.These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.     Disclaimer and copyright information:This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. 
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • International Radio Operators Alphabet in F# & Silverlight – Part 2

    - by MarkPearl
    So the brunt of my my very complex F# code has been done. Now it’s just putting the Silverlight stuff in. The first thing I did was add a new project to my solution. I gave it a name and VS2010 did the rest of the magic in creating the .Web project etc. In this instance because I want to take the MVVM approach and make use of commanding I have decided to make the frontend a Silverlight4 project. I now need move my F# code into a proper Silverlight Library. Warning – when you create the Silverlight Library VS2010 will ask you whether you want it to be based on Silverlight3 or Silverlight4. I originally went for Silverlight4 only to discover when I tried to compile my solution that I was given an error… Error 12 F# runtime for Silverlight version v4.0 is not installed. Please go to http://go.microsoft.com/fwlink/?LinkId=177463 to download and install matching.. After asking around I discovered that the Silverlight4 F# runtime is not available yet. No problem, the suggestion was to change the F# Silverlight Library to a Silverlight3 project however when going to the properties of the project file – even though I changed it to Silverlight3, VS2010 did not like it and kept reverting it to a Silverlight4 project. After a few minutes of scratching my head I simply deleted Silverlight4 F# Library project and created a new F# Silverlight Library project in Silverlight3 and VS2010 was happy. Now that the project structure is set up, rest is fairly simple. You need to add the Silverlight Library as a reference to the C# Silverlight Front End. Then setup your views, since I was following the MVVM pattern I made a Views & ViewModel folder and set up the relevant View and ViewModels. The MainPageViewModel file looks as follows using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using System.Collections.ObjectModel; namespace IROAFrontEnd.ViewModels { public class MainPageViewModel : ViewModelBase { private string _iroaString; private string _inputCharacters; public string InputCharacters { get { return _inputCharacters; } set { if (_inputCharacters != value) { _inputCharacters = value; OnPropertyChanged("InputCharacters"); } } } public string IROAString { get { return _iroaString; } set { if (_iroaString != value) { _iroaString = value; OnPropertyChanged("IROAString"); } } } public ICommand MySpecialCommand { get { return new MyCommand(this); } } public class MyCommand : ICommand { readonly MainPageViewModel _myViewModel; public MyCommand(MainPageViewModel myViewModel) { _myViewModel = myViewModel; } public event EventHandler CanExecuteChanged; public bool CanExecute(object parameter) { return true; } public void Execute(object parameter) { var result = ModuleMain.ConvertCharsToStrings(_myViewModel.InputCharacters); var newString = ""; foreach (var Item in result) { newString += Item + " "; } _myViewModel.IROAString = newString.Trim(); } } } } One of the features I like in Silverlight4 is the new commanding. You will notice in my I have put the code under the command execute to reference to my F# module. At the moment this could be cleaned up even more, but will suffice for now.. 
public void Execute(object parameter) { var result = ModuleMain.ConvertCharsToStrings(_myViewModel.InputCharacters); var newString = ""; foreach (var Item in result) { newString += Item + " "; } _myViewModel.IROAString = newString.Trim(); } I then needed to set the view up. If we have a look at the MainPageView.xaml the xaml code will look like the following…. Nothing to fancy, but battleship grey for now… take careful note of the binding of the command in the button to MySpecialCommand which was created in the ViewModel. <UserControl x:Class="IROAFrontEnd.Views.MainPageView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400"> <Grid x:Name="LayoutRoot" Background="White"> <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <TextBox Grid.Row="0" Text="{Binding InputCharacters, Mode=TwoWay}"/> <Button Grid.Row="1" Command="{Binding MySpecialCommand}"> <TextBlock Text="Generate"/> </Button> <TextBlock Grid.Row="2" Text="{Binding IROAString}"/> </Grid> </UserControl> Finally in the App.xaml.cs file we need to set the View and link it to the ViewModel. private void Application_Startup(object sender, StartupEventArgs e) { var myView = new MainPageView(); var myViewModel = new MainPageViewModel(); myView.DataContext = myViewModel; this.RootVisual = myView; }   Once this is done – hey presto – it worked. I typed in some “Test Input” and clicked the generate button and the correct Radio Operators Alphabet was generated. And that’s the end of my first very basic F# Silverlight application.

    Read the article

  • SQL SERVER – SmallDateTime and Precision – A Continuous Confusion

    - by pinaldave
    Some kinds of confusion never go away. Here is one of the oldest confusing things in SQL. The precision of SmallDateTime is one concept that confuses a lot of people, proven by the many messages I receive every day relating to this subject. Let me start with the question: What is the precision of the SMALLDATETIME datatype? What is your answer? Write it down on your notepad. Now if you do not want to continue reading the blog post, head to my previous blog post over here: SQL SERVER – Precision of SMALLDATETIME. A Social Media Question Since the rise of social media conversations, I have noticed that the number of comments I receive on this blog is a bit staggering. I receive lots of questions on Facebook, Twitter or Google+. One of the very interesting questions yesterday was asked on Facebook by Raghavendra. I am re-organizing his script and restating the questions he asked me. Let us see if we can help him with his question: CREATE TABLE #temp (name VARCHAR(100),registered smalldatetime) GO DECLARE @test smalldatetime SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT * FROM #temp ORDER BY registered DESC GO DROP TABLE #temp GO Now when the above script is run, we get the following result: Well, the expectation was that the query would return the row which was inserted last as the first row in the result set, since the ORDER BY is descending. Side note: Because the requirement is to get the latest data, we can't use any column other than the smalldatetime column in the ORDER BY. If we use the name column in the ORDER BY, we will get an incorrect result, as it can be any name. My Initial Reaction My initial reaction was as follows: 1) Datatype DateTime2: If finer precision is expected from the column which stores date and time, it should not be smalldatetime. The precision of the smalldatetime column is one minute (Read Here); for finer precision use the DateTime or DateTime2 data type. Here is the code which includes the above suggestion: CREATE TABLE #temp (name VARCHAR(100), registered datetime2) GO DECLARE @test datetime2 SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT * FROM #temp ORDER BY registered DESC GO DROP TABLE #temp GO 2) Tie Breaker Identity: There is always the possibility that two rows were inserted at the same time. In that case, you may need a tie breaker. If you have an increasing identity column, you can use that as a tie breaker as well. CREATE TABLE #temp (ID INT IDENTITY(1,1), name VARCHAR(100),registered datetime2) GO DECLARE @test datetime2 SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT * FROM #temp ORDER BY ID DESC GO DROP TABLE #temp GO Those two were the quick suggestions I provided. It is not necessary to use both pieces of advice. It is possible to use only the DATETIME datatype, or the identity column can have a datatype of BIGINT, or there can be another tie breaker.
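    As a quick aside (my own illustration, not part of the original post), the precision difference is easy to see by casting the same instant to both types: smalldatetime rounds to the nearest minute, while datetime2 keeps fractional seconds.

        DECLARE @now datetime2 = SYSDATETIME();
        SELECT @now                          AS [datetime2 value],
               CAST(@now AS smalldatetime)   AS [smalldatetime value],  -- rounded to the minute
               CAST(@now AS datetime)        AS [datetime value];       -- ~3.33 ms precision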
An Alternate NO Solution In the facebook thread this was also discussed as one of the solutions: CREATE TABLE #temp (name VARCHAR(100),registered smalldatetime) GO DECLARE @test smalldatetime SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number" FROM #temp ORDER BY 3 DESC GO DROP TABLE #temp GO However, I believe it is not the solution and can be further misleading if used in a production server. Here is the example of why it is not a good solution: CREATE TABLE #temp (name VARCHAR(100) NOT NULL,registered smalldatetime) GO DECLARE @test smalldatetime SET @test=GETDATE() INSERT INTO #temp VALUES ('Value1',@test) INSERT INTO #temp VALUES ('Value2',@test) GO -- Before Index SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number" FROM #temp ORDER BY 3 DESC GO -- Create Index ALTER TABLE #temp ADD CONSTRAINT [PK_#temp] PRIMARY KEY CLUSTERED (name DESC) GO -- After Index SELECT name, registered, ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number" FROM #temp ORDER BY 3 DESC GO DROP TABLE #temp GO Now let us examine the resultset. You will notice that an index which is created on the base table which is (indeed) schema change the table but can affect the resultset. As you can see, an index can change the resultset, so this method is not yet perfect to get the latest inserted resultset. No Schema Change Requirement After giving these two suggestions, I was waiting for the feedback of the asker. However, the requirement of the asker was there can’t be any schema change because the application was used by many other applications. I validated again, and of course, the requirement is no schema change at all. No addition of the column of change of datatypes of any other columns. There is no further help as well. This is indeed an interesting question. I personally can’t think of any solution which I could provide him given the requirement of no schema change. Can you think of any other solution to this? Need of Database Designer This question once again brings up another ancient question:  “Do we need a database designer?” I often come across databases which are facing major performance problems or have redundant data. Normalization is often ignored when a database is built fast under a very tight deadline. Often I come across a database which has table with unnecessary columns and performance problems. While working as Developer Lead in my earlier jobs, I have seen developers adding columns to tables without anybody’s consent and retrieving them as SELECT *.  There is a lot to discuss on this subject in detail, but for now, let’s discuss the question first. Do you have any suggestions for the above question? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: CodeProject, Developer Training, PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • How to make my screen brightness not change when I plug my laptop in

    - by user63985
    I do not want my laptop to change brightness when my laptop power is plugged in or unplugged. I set my brightness based on how bright my surroundings are. If I am in a dark room, I set my brightness very low, and when I plug my laptop in the brightness gets set to maximum, which feels like sticking my eyes in boiling lava. In System Settings → Brightness and Lock, the "Dim screen to save power" checkbox is unchecked. My laptop is an HP Mini 110. In case it is an acpi issue, I have put my acpi-support file here: http://paste.ubuntu.com/1008244/

    Read the article

  • Why am I getting cgi-sys/defaultwebpage.cgi coming up when I browse my webpage?

    - by CraigJ
    I've recently set up a website with a smaller hosting company. The plan has a dedicated IP. They sent me emails to say it's all set up, but now their support channels are all unresponsive even though they say they're open 24 hours. In the File Manager in cPanel I've put an index.html file in the public_html directory. But when I point my browser to the IP address given to me, it comes up with the cgi-sys/defaultwebpage.cgi page. What is the problem? I haven't set the nameservers for my domain yet, but that shouldn't be a problem because I am using the IP address in the browser. Note: I don't think I have access to SSH.

    Read the article

  • Driver error when using multiple shaders

    - by Jinxi
    I'm using 3 different shaders: a tessellation shader to use the tessellation feature of DirectX11 :), a regular shader to show how it would look without tessellation, and a text shader to display debug info such as FPS, model count etc. All of these shaders are initialized at the beginning. Using the keyboard, I can switch between the tessellation shader and the regular shader to render the scene. Additionally, I also want to be able to toggle the display of debug info using the text shader. Since implementing the tessellation shader, the text shader doesn't work anymore. When I activate the DebugText (rendered using the text shader) my screens go black for a while, and Windows displays the following message: Display Driver stopped responding and has recovered. This happens with either of the two shaders used to render the scene. Additionally: I can start the application using the regular shader to render the scene and then switch to the tessellation shader. If I try to switch back to the regular shader I get the same error as with the text shader. What am I doing wrong when switching between shaders? What am I doing wrong when displaying text at the same time? What file can I post to help you help me? :) thx P.S. I already checked if my key inputs interrupt at the wrong time (during render or so..), but that seems to be ok.
    Testing Procedure
    1. Regular Shader without text shader
    2. Add text shader to Regular Shader by key input (works now, I built the text shader back to only vertex and pixel shader) (something with the z buffer is still wrong...)
    3. Remove text shader, then change shader to Tessellation Shader by key input
    4. Then if I add the Text Shader or switch back to the Regular Shader, the error occurs
    Switching/Render Shader
    Here is the code snippet from the Renderer.cpp where I choose the shader according to the boolean "m_useTessellationShader":
    if(m_useTessellationShader)
    {
        // Render the model using the tessellation shader
        ecResult = m_ShaderManager->renderTessellationShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(),
            worldMatrix, viewMatrix, projectionMatrix, textures, texturecount,
            m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(),
            (D3DXVECTOR3)m_Camera->getPosition(), TESSELLATION_AMOUNT);
    }
    else
    {
        // todo: loaded model depends on distance to camera
        // Render the model using the light shader.
        ecResult = m_ShaderManager->renderShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), lod_level,
            textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(),
            worldMatrix, viewMatrix, projectionMatrix);
    }
    And here is the code snippet from the Mesh.cpp where I choose the topology according to the boolean "useTessellationShader":
    // RenderBuffers is called from the Render function. The purpose of this function is to set the vertex buffer and index buffer as active on the input assembler in the GPU. Once the GPU has an active vertex buffer it can then use the shader to render that buffer.
    void Mesh::renderBuffers(ID3D11DeviceContext* deviceContext, bool useTessellationShader)
    {
        unsigned int stride;
        unsigned int offset;
        // Set vertex buffer stride and offset.
        stride = sizeof(VertexType);
        offset = 0;
        // Set the vertex buffer to active in the input assembler so it can be rendered.
        deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);
        // Set the index buffer to active in the input assembler so it can be rendered.
        deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0);
        // Check which shader is used to set the appropriate topology.
        // Set the type of primitive that should be rendered from this vertex buffer, in this case triangles.
        if(useTessellationShader)
        {
            deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
        }
        else
        {
            deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        }
        return;
    }
    RenderShader
    Could there be a problem using sometimes only vertex and pixel shader and, after switching, using vertex, hull, domain and pixel shader? Here is a little overview of my architecture:
    TextClass: uses font.vs and font.ps
    deviceContext->VSSetShader(m_vertexShader, NULL, 0);
    deviceContext->PSSetShader(m_pixelShader, NULL, 0);
    deviceContext->PSSetSamplers(0, 1, &m_sampleState);
    RegularShader: uses vertex.vs and pixel.ps
    deviceContext->VSSetShader(m_vertexShader, NULL, 0);
    deviceContext->PSSetShader(m_pixelShader, NULL, 0);
    deviceContext->PSSetSamplers(0, 1, &m_sampleState);
    TessellationShader: uses tessellation.vs, tessellation.hs, tessellation.ds, tessellation.ps
    deviceContext->VSSetShader(m_vertexShader, NULL, 0);
    deviceContext->HSSetShader(m_hullShader, NULL, 0);
    deviceContext->DSSetShader(m_domainShader, NULL, 0);
    deviceContext->PSSetShader(m_pixelShader, NULL, 0);
    deviceContext->PSSetSamplers(0, 1, &m_sampleState);
    ClearState
    I'd like to switch between 2 shaders and it seems they have different context parameters, right? In the ClearState method it says it resets the following params to NULL:
    I found the following in my Direct3D class: depth-stencil state - m_deviceContext->OMSetDepthStencilState, rasterizer state - m_deviceContext->RSSetState(m_rasterState), blend state - m_device->CreateBlendState, viewports - m_deviceContext->RSSetViewports(1, &viewport)
    I found the following in every shader class: input/output resource slots - deviceContext->PSSetShaderResources, shaders - deviceContext->VSSetShader to deviceContext->PSSetShader, input layouts - device->CreateInputLayout, sampler state - device->CreateSamplerState
    These two I didn't understand, where can I find them? predications - ? scissor rectangles - ?
    Do I need to store them all locally so I can switch between them, because it doesn't feel right to reinitialize the Direct3D and the shaders on every switch (key input)?!
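    A guess at the likely culprit, based only on the snippets above (so treat it as a sketch, not a confirmed fix): switching from the tessellation shader back to a VS/PS-only shader leaves the hull and domain shaders bound unless they are explicitly cleared, and issuing a TRIANGLELIST draw with stale HS/DS stages still active is a classic way to make the driver time out and reset. Clearing the extra stages in the non-tessellation render paths would look roughly like this:

    // in the regular (and text) shader's render path, before the draw call:
    deviceContext->HSSetShader(NULL, NULL, 0);   // unbind the hull shader left over from the tessellation pass
    deviceContext->DSSetShader(NULL, NULL, 0);   // unbind the domain shader
    deviceContext->GSSetShader(NULL, NULL, 0);   // harmless even if no geometry shader is ever used

    Creating the device with the D3D11_CREATE_DEVICE_DEBUG flag is also worth doing while chasing this, since the debug layer usually names the exact state that made a draw invalid instead of letting the GPU hang.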

    Read the article

  • Calculated Columns in Entity Framework Code First Migrations

    - by David Paquette
    I had a couple people ask me about calculated properties / columns in Entity Framework this week.  The question was, is there a way to specify a property in my C# class that is the result of some calculation involving 2 properties of the same class.  For example, in my database, I store a FirstName and a LastName column and I would like a FullName property that is computed from the FirstName and LastName columns.  My initial answer was: 1: public string FullName 2: { 3: get { return string.Format("{0} {1}", FirstName, LastName); } 4: } Of course, this works fine, but this does not give us the ability to write queries using the FullName property.  For example, this query: 1: var users = context.Users.Where(u => u.FullName.Contains("anan")); Would result in the following NotSupportedException: The specified type member 'FullName' is not supported in LINQ to Entities. Only initializers, entity members, and entity navigation properties are supported. It turns out there is a way to support this type of behavior with Entity Framework Code First Migrations by making use of Computed Columns in SQL Server.  While there is no native support for computed columns in Code First Migrations, we can manually configure our migration to use computed columns. Let’s start by defining our C# classes and DbContext: 1: public class UserProfile 2: { 3: public int Id { get; set; } 4: 5: public string FirstName { get; set; } 6: public string LastName { get; set; } 7: 8: [DatabaseGenerated(DatabaseGeneratedOption.Computed)] 9: public string FullName { get; private set; } 10: } 11: 12: public class UserContext : DbContext 13: { 14: public DbSet<UserProfile> Users { get; set; } 15: } The DatabaseGenerated attribute is needed on our FullName property.  This is a hint to let Entity Framework Code First know that the database will be computing this property for us. Next, we need to run 2 commands in the Package Manager Console.  First, run Enable-Migrations to enable Code First Migrations for the UserContext.  Next, run Add-Migration Initial to create an initial migration.  This will create a migration that creates the UserProfile table with 3 columns: FirstName, LastName, and FullName.  This is where we need to make a small change.  Instead of allowing Code First Migrations to create the FullName property, we will manually add that column as a computed column. 1: public partial class Initial : DbMigration 2: { 3: public override void Up() 4: { 5: CreateTable( 6: "dbo.UserProfiles", 7: c => new 8: { 9: Id = c.Int(nullable: false, identity: true), 10: FirstName = c.String(), 11: LastName = c.String(), 12: //FullName = c.String(), 13: }) 14: .PrimaryKey(t => t.Id); 15: Sql("ALTER TABLE dbo.UserProfiles ADD FullName AS FirstName + ' ' + LastName"); 16: } 17: 18: 19: public override void Down() 20: { 21: DropTable("dbo.UserProfiles"); 22: } 23: } Finally, run the Update-Database command.  Now we can query for Users using the FullName property and that query will be executed on the database server.  However, we encounter another potential problem. Since the FullName property is calculated by the database, it will get out of sync on the object side as soon as we make a change to the FirstName or LastName property.  
Luckily, we can have the best of both worlds here by also adding the calculation back to the getter on the FullName property: 1: [DatabaseGenerated(DatabaseGeneratedOption.Computed)] 2: public string FullName 3: { 4: get { return FirstName + " " + LastName; } 5: private set 6: { 7: //Just need this here to trick EF 8: } 9: } Now we can both query for Users using the FullName property and we also won’t need to worry about the FullName property being out of sync with the FirstName and LastName properties.  When we run this code: 1: using(UserContext context = new UserContext()) 2: { 3: UserProfile userProfile = new UserProfile {FirstName = "Chanandler", LastName = "Bong"}; 4: 5: Console.WriteLine("Before saving: " + userProfile.FullName); 6: 7: context.Users.Add(userProfile); 8: context.SaveChanges(); 9:  10: Console.WriteLine("After saving: " + userProfile.FullName); 11:  12: UserProfile chanandler = context.Users.First(u => u.FullName == "Chanandler Bong"); 13: Console.WriteLine("After reading: " + chanandler.FullName); 14:  15: chanandler.FirstName = "Chandler"; 16: chanandler.LastName = "Bing"; 17:  18: Console.WriteLine("After changing: " + chanandler.FullName); 19:  20: } We get this output: It took a bit of work, but finally Chandler’s TV Guide can be delivered to the right person. The obvious downside to this implementation is that the FullName calculation is duplicated in the database and in the UserProfile class. This sample was written using Visual Studio 2012 and Entity Framework 5. Download the source code here.
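    One small, optional variation on the migration above (an extra assumption about requirements, not something the original post calls for): if FullName ends up in a lot of WHERE clauses, SQL Server also allows the computed column to be persisted, which stores the value on disk and makes it indexable. Only the hand-written Sql() call changes:

    Sql("ALTER TABLE dbo.UserProfiles ADD FullName AS FirstName + ' ' + LastName PERSISTED");

    Everything on the Entity Framework side, including the DatabaseGeneratedOption.Computed attribute, stays exactly the same.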

    Read the article

  • Bridging: Losing WLAN network connection with 4addr on option - Why?

    - by WitchCraft
    Question: For use with my Xen VM, I need to create a virtual network interface (vif) that is bridged to wlan0. If in /etc/network/interfaces I add auto xenbr0 iface xenbr0 inet dhcp and then later do brctl addif xenbr0 wlan0, I get this error message: can't add wlan0 to bridge xenbr0: Operation not supported. I found out that Linux won't let you bridge a wireless interface in managed mode at all unless you enable the 4addr option (I needed to recompile iw): iw dev wlan0 set 4addr on. Afterwards brctl addif xenbr0 wlan0 works, and brctl show shows xenbr0 as bridged to wlan0. Unfortunately, as soon as I execute iw dev wlan0 set 4addr on my entire network connection is gone (no connection). As soon as I then execute iw dev wlan0 set 4addr off I reconnect and it works again. If I re-execute 4addr on, it breaks again; if I execute 4addr off, it works again. Unfortunately, I can't just turn 4addr on, activate the bridge and then turn it back off (error: device not ready). Does anybody know why I lose my connection?
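    A hedged explanation rather than a certain one, since it depends on the access point: with 4addr on, the card starts sending 4-address (WDS-style) frames, and most consumer APs silently drop those from ordinary clients, which would explain why the association survives with 4addr off and dies the moment it is turned on. If the AP can't be told to accept WDS frames, a common fallback is to skip bridging over wlan0 entirely and give the Xen guest a routed/NAT path instead, along these lines:

    # rough sketch only - interface names and addressing are assumptions
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
    # keep xenbr0 as a private bridge (e.g. 192.168.100.1/24) with no physical port,
    # give the guest an address in that subnet and default-route it through dom0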

    Read the article

  • Shadow mapping with deferred shading for directional lights - shadow map projection problem

    - by Harry
    I'm trying to implement shadow mapping in my engine. I started with directional lights because they seemed to be the easiest ones, but I was wrong :) I have implemented deferred shading and I retrieve position from depth. I think that is where the biggest problem is, but the code looks OK to me. Now more about the problem: the shadow map projected onto the meshes looks badly scaled and translated, and some information from the shadow map texture isn't visible at all. You can see it on this screen: http://img5.imageshack.us/img5/2254/93dn.png The yellow frustum is the light frustum, and I have mixed the shadow map preview and the actual scene. As you can see the shadows are in the wrong place and the shadows of the cone and sphere aren't visible. Could you look at my code and tell me where I have a mistake?
    // create shadow map
    if(!_shd)glGenTextures(1, &_shd);
    glBindTexture(GL_TEXTURE_2D, _shd);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); // shadow map size
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _shd, 0);
    glDrawBuffer(GL_NONE);
    // setting camera
    Vector dire=Vector(0,0,1);
    ACamera.setLookAt(dire,Vector(0));
    ACamera.setPerspectiveView(60.0f,1,0.1f,10.0f); // currently needed for proper frustum corners calculation
    Vector min(ACamera._point[0]),max(ACamera._point[0]);
    for(int i=0;i<8;i++){ max=Max(max,ACamera._point[i]); min=Min(min,ACamera._point[i]); }
    ACamera.setOrthogonalView(min.x,max.x,min.y,max.y,-max.z,-min.z);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _s_buffer); // framebuffer for shadow map
    // rendering to depth buffer
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _g_buffer);
    Shaders["DirLight"].set(true);
    Matrix4 bias;
    bias.x.set(0.5,0.0,0.0,0.0);
    bias.y.set(0.0,0.5,0.0,0.0);
    bias.z.set(0.0,0.0,0.5,0.0);
    bias.w.set(0.5,0.5,0.5,1.0);
    Shaders["DirLight"].set("textureMatrix",ACamera.matrix*Projection3D*bias); // order of multiplications is 100% correct, everything gives me the same result as using glm
    glActiveTexture(GL_TEXTURE5);
    glBindTexture(GL_TEXTURE_2D,_shd);
    lightDir(dir); // light calculations
    The vertex shader does nothing related to the shadow calculations. The pixel shader function which calculates whether a pixel is in shadow or not:
    float readShadowMap(vec3 eyeDir)
    {
        // retrieve depth of pixel
        float z = texture2D(depth, gl_FragCoord.xy/screen).z;
        vec3 pos = vec3(gl_FragCoord.xy/screen, z);
        // transform by the projection and view inverse
        vec4 worldSpace = inverse(View)*inverse(ProjectionMatrix)*vec4(pos*2-1,1);
        worldSpace /= worldSpace.w;
        vec4 coord=textureMatrix*worldSpace;
        float vis=1.0f;
        if(texture2D(shadow, coord.xy).z < coord.z-0.001)vis=0.2f;
        return vis;
    }
    I also have a question about shadows specifically for directional lights. Currently I always look at the 0,0,0 position, and in a further implementation I will have to move the light frustum along with the camera frustum. I've found how to do this here: http://www.gamedev.net/topic/505893-orthographic-projection-for-shadow-mapping/ but it doesn't give me what I want. Maybe that is because of the problems mentioned above, but I want to know your opinion.
    EDIT: vec4 worldSpace is the position read from the depth of the scene (not the shadow map). Maybe I wasn't precise, so I'll quickly explain what is what: View is the camera view matrix, ProjectionMatrix is the camera projection. First I try to get the world space position from the depth buffer and then multiply it by textureMatrix, which is light view * light projection * bias. The rest of the code is the same as in many tutorials. I can't use the vertex shader to compute something like gl_Position = textureMatrix * gl_Vertex and get it interpolated in the fragment shader because of the deferred rendering, so I want to get it from the depth buffer.
    EDIT2: I also tried making it as in the Coding Labs tutorial about Shadow Mapping with Deferred Rendering, but unfortunately that doesn't work either.
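    For comparison, here is a hedged reference version of the lookup for a directional light. It assumes the shadow map really was rendered with the light's view and orthographic projection, and that textureMatrix already contains bias * lightProjection * lightView in your matrix order; the explicit [0,1] range check also makes pixels outside the light frustum (possibly the cone and sphere) come out lit instead of sampling garbage:

    float readShadowMap(vec3 eyeDir)
    {
        // reconstruct world-space position from the scene depth, exactly as before
        float z = texture2D(depth, gl_FragCoord.xy / screen).r;
        vec4 clip = vec4(vec3(gl_FragCoord.xy / screen, z) * 2.0 - 1.0, 1.0);
        vec4 worldSpace = inverse(View) * inverse(ProjectionMatrix) * clip;
        worldSpace /= worldSpace.w;

        // into the light's [0,1] shadow-map space
        vec4 coord = textureMatrix * worldSpace;
        coord.xyz /= coord.w;                         // a no-op for a pure orthographic light, but harmless

        if (any(lessThan(coord.xy, vec2(0.0))) || any(greaterThan(coord.xy, vec2(1.0))))
            return 1.0;                               // outside the light frustum: treat as lit

        float stored = texture2D(shadow, coord.xy).r; // depth textures return the depth in every channel
        return (stored < coord.z - 0.001) ? 0.2 : 1.0;
    }

    If the shadows still land in the wrong place with this version, the remaining suspects are the matrices themselves, in particular whether the Projection3D fed into textureMatrix really is the light's orthographic projection rather than the camera's.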

    Read the article

  • Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error on localhost

    - by Ne0
    Background: I set up a cloud server and have a website running SSL; it was all pretty straightforward following these instructions and the instructions given by the SSL certificate issuer. I then went to set up a development site on my local machine the same way, but using self-signed certs, using these instructions. I have checked that port 443 is open, and this post suggests it is a bad configuration on the server. I have gone through the setup process twice, yet I have been unable to find out what I have done wrong or missed. Does anyone else know what I may have missed to get this error? Note: As the links suggest, this is on 12.04.
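    A hedged checklist rather than a diagnosis: ERR_SSL_PROTOCOL_ERROR on a local setup usually means the server answered port 443 with plain HTTP instead of TLS, typically because the SSL module or the SSL virtual host isn't actually enabled even though the port is open. For an Apache-style setup on 12.04 (file names and paths are assumptions; adjust to the guide you followed):

    # is anything actually speaking TLS on 443?
    openssl s_client -connect localhost:443

    # enable the SSL module and an SSL-enabled site, then restart
    sudo a2enmod ssl
    sudo a2ensite default-ssl
    sudo service apache2 restart

    # the 443 vhost itself needs at least:
    #   SSLEngine on
    #   SSLCertificateFile    /etc/apache2/ssl/server.crt
    #   SSLCertificateKeyFile /etc/apache2/ssl/server.key

    If openssl reports a certificate (even an untrusted self-signed one), the TLS side is fine and the problem is elsewhere; if it errors out immediately, the vhost configuration is the place to look.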

    Read the article

  • MySQL for Excel 1.1.0 GA has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.0 GA, one of our newest products contained in the MySQL Installer suite. You can download it from our official Downloads page at http://dev.mysql.com/downloads/installer/. The 1.1.0 release of MySQL for Excel introduces the following features: Edit MySQL Data. Edit MySQL Data This may be the coolest feature so far; users will be able to edit the data in a MySQL table using MS Excel in a very friendly and intuitive way.  Edit Data supports inserting new rows, deleting existing rows and updating existing data as easy as playing with data in an Excel’s spreadsheet and pushing changes back to the server.  Also this version contains the following bug fixes: Enabled the following checkboxes in the Append Data's Advanced Options dialog and added code in the Append Data dialog to use the checkboxes as follows: Automatically store the column mapping for the given table     If checked the current mapping will be stored automatically after clicking the Append button if the append operation is successful and there is no mapping for the current connection.schema.table already; the new mapping is stored with a proposed name of Mapping. Reload stored column mapping for the selected table automatically     If checked the first Stored Mapping found where all column names in the source grid match all column names in the target grid is automatically selected and applied when the Append Data dialog is loaded. Fixed code in Append Data that applies a stored column mapping to skip target columns where the associated mapping is empty (saved as a -1). Enclosed the Add-In's startup code in a try-catch block in order to log any possible error thrown during startup; and added information messages to the log at the beginning of the Add-In's startup code and at the end of the shutdown code.  Also changed the wrapper method that calls the MySQLUtility to write messages to the log to make logging easier, thus changed the log call throughout all the code that contains a try-catch block. Added code to the main wix configuration file to check if a newer version is already installed and if so abort the installation Fixed code to refresh the Import Procedure Form's preview grid's data source to repaint its contents every time the Call button is pressed. Added code to re-pull connections after connections are migrated from Excel to Workbench. Fixed code so when the Append Data's Automatic Mapping is performed any subsequent change on a mapping resets the mapping to a Manual Mapping. Added code to the InfoDialog class to set the button text to "Show Details" or "Hide Details" depending on the status of the Details text container. Fixed a GUID in the main wix configuration file so now previous versions are uninstalled during a new installation. Added an option to the Export Data's Advanced Options dialog to remove columns with no data, by default the Export Dialog will only flag those columns as Excluded. Added code to display a warning and paint a column red if the column name in the Export Data dialog is not set, display a warning if the table name is not set, and stack warnings but not display them if a column is Excluded, warnings are displayed normally for columns if they are not Excluded anymore.  Added code to prevent the Append and Export of Data if more than 1 selection is made (selecting more than 1 area holding the Ctrl key while selecting Excel cells). 
Fixed problem that prevented MySQL for Excel from loading when Display settings in Windows 7 is set to Adjust to Best Performance (Oracle bug 14521405 - UNHANDLED EXCEPTION IS THROWN WHEN LOADING MYSQL FOR EXCEL). Fixed code that renames the auto-generated Primary Key column when the Table name changes since it was not detecting if a column with the same name already existed in the table. The column duplication was not actually happening, it looked that way because the automatically generated PK column was not detecting a column had that same name. Fixed code in Export Data dialog to always set an empty string instead of null to the MySQLDataColumn properties that stores MySQL data types (MySQLDataType, RowsFrom1stDataType and RowsFrom2ndDataType). Added code to display a warning and color red a column which Data Type has not been set by the user or has been manually cleared. Added code to output to the application log exception messages consistently in all places where exceptions are catched. A series of blog posts explaining the new Edit MySQL Data feature and the other existing features are coming in this blog. You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.5/en/mysql-for-excel.html You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • Returning a mock object from a mock object

    - by Songo
    I'm trying to return an object when mocking a parser class. This is the test code using PHPUnit 3.7:
    //set up the result object that I want to be returned from the call to parse method
    $parserResult= new ParserResult();
    $parserResult->setSegment('some string');
    //set up the stub Parser object
    $stubParser=$this->getMock('Parser');
    $stubParser->expects($this->any())
        ->method('parse')
        ->will($this->returnValue($parserResult));
    //injecting the stub to my client class
    $fileHeaderParser= new FileWriter($stubParser);
    $output=$fileHeaderParser->writeStringToFile();
    Inside my writeStringToFile() method I'm using $parserResult like this:
    function writeStringToFile(){
        //Some code...
        $parserResult=$this->parser->parse();
        $segment=$parserResult->getSegment();//that's why I set the segment in the test.
    }
    Should I mock ParserResult in the first place, so that the mock returns a mock? Is it good design for mocks to return mocks? Is there a better approach to do this all?!

    Read the article

  • Direct3D9 application won't write to depth buffer

    - by DeadMG
    I've got an application written in D3D9 which will not write any values to the depth buffer, resulting in incorrect values for the depth test. Things I've checked so far: D3DRS_ZENABLE, set to TRUE D3DRS_ZWRITEENABLE, set to TRUE D3DRS_ZFUNC, set to D3DCMP_LESSEQUAL The depth buffer is definitely bound to the pipeline at the relevant time The depth buffer was correctly cleared before use. I've used PIX to confirm that all of these things occurred as expected. For example, if I clear the depth buffer to 0 instead of 1, then correctly nothing is drawn, and PIX confirms that all the pixels failed the depth test. But I've also used PIX to confirm that my submitted geometry does not write to the depth buffer and so is not correctly rendered. Any other suggestions?
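    Two more things that can silently kill depth writes in D3D9 and are cheap to rule out (hedged suggestions only, since the states already listed look correct): a viewport whose MinZ and MaxZ are equal, and a depth surface whose dimensions or multisample type don't match the render target. A quick check, with assumed variable names:

    D3DVIEWPORT9 vp;
    device->GetViewport(&vp);
    // if MinZ == MaxZ (e.g. both 0), every pixel collapses to the same depth value
    vp.MinZ = 0.0f;
    vp.MaxZ = 1.0f;
    device->SetViewport(&vp);

    IDirect3DSurface9 *rt = NULL, *ds = NULL;
    D3DSURFACE_DESC rtDesc, dsDesc;
    device->GetRenderTarget(0, &rt);       rt->GetDesc(&rtDesc);
    device->GetDepthStencilSurface(&ds);   ds->GetDesc(&dsDesc);
    // the depth surface must be at least as large as the render target,
    // and the MultiSampleType of both must match
    rt->Release();
    ds->Release();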

    Read the article

  • Setting Meta tags for a website

    - by Pankaj Upadhyay
    I have made an ASP.NET MVC website and am not well versed in SEO techniques, so I want a little guidance in setting the appropriate meta tags for the website. My website is dynamic and has two types of pages: Category and Product. There are two tables in the database, for Category and Product. Looking into the future, I added these fields beforehand to both tables: MetaTitle, MetaDescription and MetaKeywords. On both the Category and Product pages, these values are retrieved and set as follows: <meta name="description" content="@ViewBag.MetaDescription" /> <meta name="title" content="@ViewBag.MetaTitle" /> <meta name="keywords" content="@ViewBag.MetaKeywords" /> For better SEO, how would you set up these meta tags? Would you include the site name in all three fields? Right now, the Product page's MetaTitle, MetaDescription and MetaKeywords don't include the website name. If possible, can you provide sample values that should be set for better SEO performance, keeping the business name in mind.
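    Not an authoritative SEO answer, just a sketch of how the fallback-plus-brand pattern often looks in a Razor layout (the brand string is a placeholder): the <title> element is what search engines actually display, so it matters more than a meta "title" tag; keep it unique per page with the business name appended, keep the description a unique sentence of roughly 150-160 characters, and don't invest much in the keywords tag, which the major engines largely ignore.

    @{
        var brand = "My Business";   // placeholder for the real business name
        var title = string.IsNullOrWhiteSpace((string)ViewBag.MetaTitle)
            ? brand
            : ViewBag.MetaTitle + " | " + brand;
    }
    <title>@title</title>
    <meta name="description" content="@ViewBag.MetaDescription" />
    <meta name="keywords" content="@ViewBag.MetaKeywords" />

    The same fallback idea works for the description: if a Product row has no MetaDescription, fall back to its Category's, and only then to a site-wide default.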

    Read the article

  • Why won't cron run my sh script?

    - by Dmitry Narkevich
    I used gnome-schedule to create a script to set my headset as the fallback audio device, because it keeps getting unset when the headset gets disconnected or the PC goes into sleep mode. Anyway, the crontab is this:
    SHELL=/bin/sh
    PATH=/bin:/sbin:/usr/bin:/usr/sbin:/home/dmitry/bin
    * * * * * headsetfix
    /home/dmitry/bin/headsetfix is:
    #!/bin/sh
    pacmd set-default-sink alsa_output.usb-Logitech_Inc_Logitech_USB_Headset_H540_00000000-00-H540.analog-stereo
    pacmd set-default-source alsa_input.usb-Logitech_Inc_Logitech_USB_Headset_H540_00000000-00-H540.analog-stereo
    It runs fine from the terminal. I've made sure it's chmodded to be executable, and "which headsetfix", run from cron, outputs "/home/dmitry/bin/headsetfix", so I'm not sure what the problem is.
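    A hedged guess at the cause rather than a certain one: pacmd talks to the user's PulseAudio daemon, and the near-empty environment cron provides usually doesn't carry enough session information for it to find that daemon, so the script works in a terminal but fails silently from cron. Capturing the script's output and handing it the session runtime directory is a cheap way to test this (the exact variable and path depend on the Ubuntu release; older PulseAudio setups may want DISPLAY or DBUS_SESSION_BUS_ADDRESS instead):

    # temporary test entry: log everything and provide the runtime dir
    * * * * * export XDG_RUNTIME_DIR=/run/user/$(id -u); headsetfix >> /tmp/headsetfix.log 2>&1

    Whatever pacmd prints into /tmp/headsetfix.log should say exactly why it could not connect.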

    Read the article

  • Create dynamic buffer SharpDX

    - by fedab
    I want to set up a buffer that is updated every frame, but can't figure out what I have to do. The only working thing I have is this:
    mdexcription = new BufferDescription(Matrix.SizeInBytes * Matrices.Length, ResourceUsage.Dynamic, BindFlags.VertexBuffer, CpuAccessFlags.Write, ResourceOptionFlags.None, 0);
    instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdexcription);
    vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0);
    DeviceContext.InputAssembler.SetVertexBuffers(1, vBB);
    Draw:
    //Change Matrices (Matrix[]) every frame...
    instanceBuffer.Dispose();
    instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdexcription);
    vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0);
    DeviceContext.InputAssembler.SetVertexBuffers(1, vBB);
    I guess Dispose() and creating a new buffer is slow and it can be done much faster. I've read about DataStream but I do not know how to set this up properly. What steps do I have to take to set up a DataStream and achieve a fast every-frame update?
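    Since the buffer is already created with ResourceUsage.Dynamic and CpuAccessFlags.Write, the usual pattern is to create it once and map it with WriteDiscard every frame instead of disposing and recreating it. A sketch using the asker's own names (the exact MapSubresource overload differs slightly between SharpDX versions, so treat the signature as an assumption):

    // at startup: create the dynamic buffer once, sized for the largest Matrices array you expect
    instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdexcription);
    vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0);

    // every frame: throw away the old contents and write the new matrices
    DataStream stream;
    DeviceContext.MapSubresource(instanceBuffer, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out stream);
    stream.WriteRange(Matrices);
    DeviceContext.UnmapSubresource(instanceBuffer, 0);
    DeviceContext.InputAssembler.SetVertexBuffers(1, vBB);

    WriteDiscard hands back a fresh region of memory each time, so the GPU can keep using the previous frame's data without stalling the CPU.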

    Read the article

  • ResponseStatusLine protocol violation

    - by Tom Hines
    I parse/scrape a few web page every now and then and recently ran across an error that stated: "The server committed a protocol violation. Section=ResponseStatusLine".   After a few web searches, I found a couple of suggestions – one of which said the problem could be fixed by changing the HttpWebRequest ProtocolVersion to 1.0 with the command: 1: HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(strURI); 2: req.ProtocolVersion = HttpVersion.Version10;   …but that did not work in my particular case.   What DID work was the next suggestion I found that suggested the use of the setting: “useUnsafeHeaderParsing” either in the app.config file or programmatically. If added to the app.config, it would be: 1: <!-- after the applicationSettings --> 2: <system.net> 3: <settings> 4: <httpWebRequest useUnsafeHeaderParsing ="true"/> 5: </settings> 6: </system.net>   If done programmatically, it would look like this: C++: 1: // UUHP_CPP.h 2: #pragma once 3: using namespace System; 4: using namespace System::Reflection; 5:   6: namespace UUHP_CPP 7: { 8: public ref class CUUHP_CPP 9: { 10: public: 11: static bool UseUnsafeHeaderParsing(String^% strError) 12: { 13: Assembly^ assembly = Assembly::GetAssembly(System::Net::Configuration::SettingsSection::typeid); //__typeof 14: if (nullptr==assembly) 15: { 16: strError = "Could not access Assembly"; 17: return false; 18: } 19:   20: Type^ type = assembly->GetType("System.Net.Configuration.SettingsSectionInternal"); 21: if (nullptr==type) 22: { 23: strError = "Could not access internal settings"; 24: return false; 25: } 26:   27: Object^ obj = type->InvokeMember("Section", 28: BindingFlags::Static | BindingFlags::GetProperty | BindingFlags::NonPublic, 29: nullptr, nullptr, gcnew array<Object^,1>(0)); 30:   31: if(nullptr == obj) 32: { 33: strError = "Could not invoke Section member"; 34: return false; 35: } 36:   37: FieldInfo^ fi = type->GetField("useUnsafeHeaderParsing", BindingFlags::NonPublic | BindingFlags::Instance); 38: if(nullptr == fi) 39: { 40: strError = "Could not access useUnsafeHeaderParsing field"; 41: return false; 42: } 43:   44: if (!(bool)fi->GetValue(obj)) 45: { 46: fi->SetValue(obj, true); 47: } 48:   49: return true; 50: } 51: }; 52: } C# (CSharp): 1: using System; 2: using System.Reflection; 3:   4: namespace UUHP_CS 5: { 6: public class CUUHP_CS 7: { 8: public static bool UseUnsafeHeaderParsing(ref string strError) 9: { 10: Assembly assembly = Assembly.GetAssembly(typeof(System.Net.Configuration.SettingsSection)); 11: if (null == assembly) 12: { 13: strError = "Could not access Assembly"; 14: return false; 15: } 16:   17: Type type = assembly.GetType("System.Net.Configuration.SettingsSectionInternal"); 18: if (null == type) 19: { 20: strError = "Could not access internal settings"; 21: return false; 22: } 23:   24: object obj = type.InvokeMember("Section", 25: BindingFlags.Static | BindingFlags.GetProperty | BindingFlags.NonPublic, 26: null, null, new object[] { }); 27:   28: if (null == obj) 29: { 30: strError = "Could not invoke Section member"; 31: return false; 32: } 33:   34: // If it's not already set, set it. 
35: FieldInfo fi = type.GetField("useUnsafeHeaderParsing", BindingFlags.NonPublic | BindingFlags.Instance); 36: if (null == fi) 37: { 38: strError = "Could not access useUnsafeHeaderParsing field"; 39: return false; 40: } 41:   42: if (!Convert.ToBoolean(fi.GetValue(obj))) 43: { 44: fi.SetValue(obj, true); 45: } 46:   47: return true; 48: } 49: } 50: }   F# (FSharp): 1: namespace UUHP_FS 2: open System 3: open System.Reflection 4: module CUUHP_FS = 5: let UseUnsafeHeaderParsing(strError : byref<string>) : bool = 6: // 7: let assembly : Assembly = Assembly.GetAssembly(typeof<System.Net.Configuration.SettingsSection>) 8: if (null = assembly) then 9: strError <- "Could not access Assembly" 10: false 11: else 12: 13: let myType : Type = assembly.GetType("System.Net.Configuration.SettingsSectionInternal") 14: if (null = myType) then 15: strError <- "Could not access internal settings" 16: false 17: else 18: 19: let obj : Object = myType.InvokeMember("Section", BindingFlags.Static ||| BindingFlags.GetProperty ||| BindingFlags.NonPublic, null, null, Array.zeroCreate 0) 20: if (null = obj) then 21: strError <- "Could not invoke Section member" 22: false 23: else 24: 25: // If it's not already set, set it. 26: let fi : FieldInfo = myType.GetField("useUnsafeHeaderParsing", BindingFlags.NonPublic ||| BindingFlags.Instance) 27: if(null = fi) then 28: strError <- "Could not access useUnsafeHeaderParsing field" 29: false 30: else 31: 32: if (not(Convert.ToBoolean(fi.GetValue(obj)))) then 33: fi.SetValue(obj, true) 34: 35: // Now return true 36: true VB (Visual Basic): 1: Option Explicit On 2: Option Strict On 3: Imports System 4: Imports System.Reflection 5:   6: Public Class CUUHP_VB 7: Public Shared Function UseUnsafeHeaderParsing(ByRef strError As String) As Boolean 8:   9: Dim assembly As [Assembly] 10: assembly = [assembly].GetAssembly(GetType(System.Net.Configuration.SettingsSection)) 11:   12: If (assembly Is Nothing) Then 13: strError = "Could not access Assembly" 14: Return False 15: End If 16:   17: Dim type As Type 18: type = [assembly].GetType("System.Net.Configuration.SettingsSectionInternal") 19: If (type Is Nothing) Then 20: strError = "Could not access internal settings" 21: Return False 22: End If 23:   24: Dim obj As Object 25: obj = [type].InvokeMember("Section", _ 26: BindingFlags.Static Or BindingFlags.GetProperty Or BindingFlags.NonPublic, _ 27: Nothing, Nothing, New [Object]() {}) 28:   29: If (obj Is Nothing) Then 30: strError = "Could not invoke Section member" 31: Return False 32: End If 33:   34: ' If it's not already set, set it. 35: Dim fi As FieldInfo 36: fi = [type].GetField("useUnsafeHeaderParsing", BindingFlags.NonPublic Or BindingFlags.Instance) 37: If (fi Is Nothing) Then 38: strError = "Could not access useUnsafeHeaderParsing field" 39: Return False 40: End If 41:   42: If (Not Convert.ToBoolean(fi.GetValue(obj))) Then 43: fi.SetValue(obj, True) 44: End If 45:   46: Return True 47: End Function 48: End Class   Technorati Tags: C++,CPP,VB,Visual Basic,F#,FSharp,C#,CSharp,ResponseStatusLine,protocol violation

    Read the article

  • Modifying Service URLs with LINQ to Twitter

    - by Joe Mayo
    It’s funny that two posts so close together speak about flexibility with the LINQ to Twitter provider.  There are certain things you know from experience on when to make software more flexible and when to save time.  This is another one of those times when I got lucky and made the right choice up front. I’m talking about the ability to switch URLs. It only makes sense that Twitter should begin versioning their API as it matures.  In fact, most of the entire API has moved to the v1 URL at “https://api.twitter.com/1/”, except for search and trends.  Recently, Twitter introduced the available and local trends, but hung them off the new v1, and left the rest of the trends API on the old URL. To implement this, I muscled my way into the expression tree during CreateRequestProcessor to figure out which trend I was dealing with; perhaps not elegant, but the code is in the right place and that’s what factories are for.  Anyway, the point is that I wouldn’t have to do this kind of stuff (as much fun as it is), if Twitter would have more consistency. Having went to Chirp last week and seeing the evolution of the API, it looks like my wish is coming true.  …now if they would just get their stuff together on the mess they made with geo-location and places… but again, that’s all transparent if your using LINQ to Twitter because I pulled all of that together in a consistent way so that you don’t have to. Normally, when Twitter makes a change, code breaks and I have to scramble to get the fixes in-place.  This time, in the case of a URL change, the adjustment is easy and no-one has to wait for me.  Essentially, all you need to do is change the URL passed to the TwitterContext constructor.  Here’s an example of instantiating a TwitterContext now: using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/")) The third parameter constructor is the SearchUrl, which is used for Search and Trend APIs. You probably know what’s coming next; another constructor, but with the SearchUrl parameter set to the new URL as follows: using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/")) One consequence of setting the URL this way is that you set the URL for both Trends and Search.  Since Search is still using the old URL, this is going to break for Search queries. You could always instantiate a special TwitterContext instance for Search queries, with the old URL set. Alternatively, you can use the TwitterContext’s SearchUrl property. Here’s an example: twitterCtx.SearchUrl = "https://api.twitter.com/1/"; var trends = (from trend in twitterCtx.Trends where trend.Type == TrendType.Daily && trend.Date == DateTime.Now.AddDays(-2).Date select trend) .ToList(); Notice how I set the SearchUrl property just-in-time for the query. This allows you to target the URL for each specific query. Whichever way you prefer to configure the URL, it’s your choice. So, now you know how to set the URL to be used for Trend queries and how to prevent whacking your Search queries. I’ll be updating the Trend API to use same URL as all other APIs soon, so the only API left to use the SearchUrl will be Search, but for the short term, it’s Trends and Search. Until I make this change, you’ll have a viable work-around by setting the URL yourself, as explained above. 
These were the Search and Trend URLs, but you might be curious about the second parameter of the TwitterContext constructor; that’s the URL for all other APIs (the BaseUrl), except for Trend and Search. Similarly, you can use the TwitterContext’s BaseUrl property to set the BaseUrl. Setting the BaseUrl can be useful when communicating with other services. In addition to Twitter changing URLs, the Twitter API has been adopted by other companies, such as Identi.ca, Tumblr, and  WordPress.  This capability lets you use LINQ to Twitter with any of these services.  This is a testament to the success of the Twitter API and it’s popularity. No doubt we’ll have hills and valleys to traverse as the Twitter API matures, but hopefully there will be enough flexibility in LINQ to Twitter to make these changes as transparent as possible for you. @JoeMayo

    Read the article

  • Enable DreamScene in Any Version of Vista or Windows 7

    - by DigitalGeekery
    Windows DreamScene was a utility available for Vista Ultimate that allowed users to set video as desktop wallpaper. It was dropped in Windows 7, but we’ll take a look at how to play DreamScenes in all versions of Windows 7 or Vista. Downloading DreamScenes First, you’ll need to find some DreamScenes to download. We’ve found some nice ones at both DreamScene.org and DeviantArt. You can find those download links at the end of the article. They’ll come as compressed files, so you’ll need to extract them after downloading. Windows 7 DreamScene Activator If you are running Windows 7 you can use Windows 7 DreamScene Activator. This free portable utility enables DreamScene in both 32 & 64 bit versions of Windows 7. Users can then set either MPG or WMV files as desktop wallpaper. Download and extract the Windows 7 DreamScene Activator (link below). Once extracted, you’ll need to run the application as administrator. Right-click on the .exe and select Run as administrator. Click on Enable DreamScene. This will also restart Windows Explorer if it is open. To play your DreamScene, browse for the file in Windows Explorer, right-click the file and select Set as Desktop Background. Enjoy your new Windows 7 DreamScene.   Although it says it is for Windows 7 only, we were able to get it to work with no problems on Vista Home Premium x32 as well.   You can Pause the DreamScene at anytime by right-clicking on the desktop and selecting Pause DreamScene.   When you are ready for a change, click Disable DreamScene and switch back to your previous wallpaper. Using VLC Media Player Users of all versions of Windows 7 & Vista can enable a DreamScene using VLC. Recently, we showed you how to set a video as your desktop wallpaper in VLC.  Since DreamScenes are in MPEG or WMV format, we will use the same tactic to display them as desktop wallpaper. We’ll just need to make a few additional tweaks to the VLC settings. You’ll need to download and install VLC media player if you don’t already have it. You can find the download link below. Next, select Tools > Preferences from the Menu. Select the Video button on the left and then choose DirectX video output from the Output dropdown list. Next, select All under Show Settings at the lower left, then select the Video button on the left pane. Uncheck Show media title on video. This will prevent VLC from constantly showing the title of the video on the screen each time the video loops. Click Save and the restart VLC.   Now we will add the video to our playlist and set it to continuously loop. Select View > Playlist from the Menu. Select the Add file button from the bottom of the Playlist window and select Add file.   Browse for your file and click Open.   Click the Loop button at the bottom so the video plays in a continuous loop.   Now, we’re ready to play the video. After the video starts playing, select Video > DirectX Wallpaper from the Menu, then minimize VLC.   If you’re using Aero Themes, you may get a pop-up warning and Windows will switch automatically to a basic theme.   If looping one video gets to be a little repetitive, you can add multiple videos to your playlist in VLC and loop the entire playlist. Just make sure you toggle the Loop button on the playlist window to Loop All. Now you’ve got a nice DreamScene playing on your desktop. Another cool trick you can do with VLC is take snapshots of favorite movie scenes and set them as backgrounds. 
When you’re ready to go back to your old wallpaper, maximize VLC, select Video and click DirectX Wallpaper again to turn it off the video background. Occasionally we were left with a black screen and had to manually change our wallpaper back to normal even after turning off the DirectX Wallpaper. Note: Keep in mind that using the VLC method takes up a lot of resources so if you try to run it on older hardware, or say a netbook, you’re not going to get good results. We also tried to use the VLC method in XP, but couldn’t get it to work. If you have leave a comment and let us know. While the DreamScene feature never really caught on in Vista, we find them to be a cool way to pump a little life into your desktop on any version of Vista or Windows 7. Downloads DreamScenes from Dreamscene.org DreamScenes from DeviantArt Download VLC media player Windows 7 DreamScene Activator Similar Articles Productive Geek Tips Wait, How do I Turn on DreamScene Again?Enable Run Command on Windows 7 or Vista Start MenuEnable or Disable UAC From the Windows 7 / Vista Command LineUnderstanding Windows Vista Aero Glass RequirementsEnable Mapping to \HostnameC$ Share on Windows 7 or Vista TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips HippoRemote Pro 2.2 Xobni Plus for Outlook All My Movies 5.9 CloudBerry Online Backup 1.5 for Windows Home Server Microsoft Office Web Apps Guide Know if Someone Accessed Your Facebook Account Shop for Music with Windows Media Player 12 Access Free Documentaries at BBC Documentaries Rent Cameras In Bulk At CameraRenter Download Songs From MySpace

    Read the article

  • OBIEE 11g 11.1.1.7.1 is Available For BI Enterprise

    - by p.anda
    (in via Ian) The Business Intelligence Enterprise Edition (OBIEE) 11.1.1.7.1 patch set has been released. This patch set is available for all customers who are using Oracle Business Intelligence Enterprise Edition 11.1.1.7.0. It is now available to download from My Oracle Support as Patch 16556157: OBIEE BUNDLE PATCH 11.1.1.7.1. This single OBIEE 11.1.1.7.1 patch set download comprises the following:
    1 of 6 Oracle Business Intelligence Installer (BIINST)
    2 of 6 Oracle Business Intelligence Publisher (BIP)
    3 of 6 EPM Components Installed from BI Installer 11.1.1.7.0 (BIFNDNEPM)
    4 of 6 Oracle Business Intelligence Server (BIS)
    5 of 6 Oracle Business Intelligence Presentation Services (BIPS)
    6 of 6 Oracle Business Intelligence Platform Client Installers and MapViewer
    Ensure you review the readme file on the Installer download for important installation instructions. The following is also required to be downloaded and applied: Patch 16569379: Dynamic Monitoring Service patch. Additional important notes are available in the following document: Document 1566124.1: OBIEE 11g 11.1.1.7.1 is Available for Oracle Business Intelligence Enterprise Edition

    Read the article

< Previous Page | 348 349 350 351 352 353 354 355 356 357 358 359  | Next Page >