Search Results

Search found 10194 results on 408 pages for 'raw types'.

Page 12/408 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • What are the most efficient MySQL column types for this data?

    - by AlabamaKush
    I have several tables with some pretty standard data in each. Can somebody help me optimize them by suggesting the best column type for each kind of data? What's beside each one is what I have currently:

        Number (max length 7)   --> MEDIUMINT(8) UNSIGNED
        Text (max length 30)    --> VARCHAR(30)
        Text (max length 200)   --> VARCHAR(200)
        Number (max length 4)   --> SMALLINT(5) UNSIGNED
        Number (either 0 or 1)  --> TINYINT(1) UNSIGNED
        Text (max length 500)   --> TEXT

    Any suggestions? I'm just guessing with this, so I know some of them are wrong...
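    For reference, a minimal sketch of that mapping as DDL (table and column names are hypothetical, and VARCHAR(500) is an assumption in place of TEXT, valid in MySQL 5.0.3+ where VARCHAR may exceed 255 characters):

        CREATE TABLE example (
          num7   MEDIUMINT UNSIGNED,   -- 7 digits fit: MEDIUMINT UNSIGNED max is 16777215
          txt30  VARCHAR(30),
          txt200 VARCHAR(200),
          num4   SMALLINT UNSIGNED,    -- 4 digits fit: SMALLINT UNSIGNED max is 65535
          flag   TINYINT(1) UNSIGNED,  -- 0 or 1
          txt500 VARCHAR(500)          -- hypothetical swap for TEXT; stays inline with the row
        );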

    Read the article

  • MSSQL: Primary Key Schema Largely Guid but Sometimes Integer Types...

    - by Code Sherpa
    OK, this may be a silly question but... I have inherited a project and am tasked with going over the primary key relationships. The project largely uses Guids. I say "largely" because there are examples where tables use integral types to reflect enumerations. For example, dbo.MessageFolder has MessageFolderId of type int to reflect public enum MessageFolderTypes { inbox = 1, sent = 2, trash = 3, etc... }. This happens a lot. There are tables with primary keys of type int, which is unavoidable because of their reliance on enumerations, and tables with primary keys of type Guid, which reflect the primary key choice of the previous programmer. Should I care that the PK schema is spotty like this? It doesn't feel right, but does it really matter? If this could create a problem, how do I get around it? (I really can't move all PKs to type int without serious legwork, and I have never heard of enumerations with Guid values.) Thanks.

    Read the article

  • Pass Types as arguments to a function in Haskell?

    - by Charles Peng
    The following two functions are extremely similar. Each reads n elements from a [String], producing either [Int] or [Float]. How can I factor out the common code? I don't know of any mechanism in Haskell that supports passing types as arguments.

        readInts n stream = foldl next ([], stream) [1..n]
          where next (lst, x:xs) _ = (lst ++ [v], xs)
                  where v = read x :: Int

        readFloats n stream = foldl next ([], stream) [1..n]
          where next (lst, x:xs) _ = (lst ++ [v], xs)
                  where v = read x :: Float

    I am at a beginner level of Haskell, so any comments on my code are welcome.
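    For what it's worth, the usual route is a type class constraint rather than a type argument; a minimal sketch (the name readVals is made up) where the caller chooses the element type:

        readVals :: Read a => Int -> [String] -> ([a], [String])
        readVals n stream = foldl next ([], stream) [1..n]
          where next (lst, x:xs) _ = (lst ++ [read x], xs)

        -- usage: readVals 3 ws :: ([Int],   [String])
        --        readVals 3 ws :: ([Float], [String])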

    Read the article

  • .NET security mechanism to restrict access between two Types in the same Website project?

    - by jdk
    Question: Is there a mechanism in the .NET Framework to hide one custom Type from another without using separate projects/assemblies? I'm using C# with ASP.NET in a Website project (Note: Not a Web Application). Obviously there's not a way to enforce this restriction using language-specific OO keywords, so I am looking for something else, for example: maybe a permission framework or code access mechanism, maybe something that uses metadata like Attributes. I'm unsure. I don't really care whether the solution actually hides classes from each other or just makes them inaccessible, etc. A runtime or design time answer will suffice. I'm looking for something easy to implement; otherwise it's not worth the effort... Background: I'm working in an ASP.NET Website project and the team has decided not to use separate project assemblies for different software layers. Therefore I'm looking for a way to have, for example, a DataAccess/ folder whose classes are disallowed from accessing other Types in the ASP.NET Website project.

    Read the article

  • VirtualBox doesn't see raw partitions

    - by smbear
    What I want to achieve is to set up a virtual machine with VirtualBox. The host OS is Windows 7 Home Premium; the guest will be (K)Ubuntu 12.04 on a raw partition. The first problem is that when I issue the following command:

        VBoxManage.exe internalcommands listpartitions -rawdisk \\.\PhysicalDrive0

    I get the following result:

        Number  Type   StartCHS  EndCHS       Size (MiB)  Start (Sect)
        1       0xee   0/0/1     1023/254/63  715404      1

    I'm guessing that VirtualBox is unable to see my partitions. If I use the diskpart tool, all partitions are listed correctly (output translated from the Polish language version of Windows):

        DISKPART> select disk 0
        Disk 0 is now the selected disk.

        DISKPART> list partition

        Partition ###  Type       Size     Offset
        -------------  ---------  -------  -------
        Partition 1    System     200 MB   1024 KB
        Partition 2    Reserved   128 MB   201 MB
        Partition 3    Primary    139 GB   329 MB
        Partition 5    Unknown    4883 KB  140 GB
        Partition 6    Primary    50 GB    140 GB
        Partition 7    Primary    484 GB   190 GB
        Partition 4    Recovery   24 GB    674 GB

    Additional note: my PC is using EFI to boot the OS. Based on the results listed above, I believe that either I messed up my partition table, or something is wrong with VirtualBox. Can anyone help with this issue?

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

    - by pinaldave
    CXPACKET has to be the most popular of all wait stats. I have commonly seen this wait stat among the top 5 wait stats in most systems with more than one CPU.

    Books On-Line: Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem.

    CXPACKET Explanation: When a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait. There is an organizer/coordinator thread (thread 0), which waits for all the threads to complete and gathers the results together to present on the client's side. The organizer thread has to wait for all the threads to finish before it can move ahead. The wait by this organizer thread for slow threads to complete is called the CXPACKET wait. Note that not all CXPACKET waits are bad. You might experience a case where it totally makes sense. There might also be cases when this is unavoidable. If you remove this particular wait type for a query, that query may run slower because parallel operations are disabled for it.

    Reducing CXPACKET wait: We cannot discuss reducing the CXPACKET wait without talking about the server's workload type.

    OLTP: On a pure OLTP system, where the transactions are smaller and queries are not long but usually very quick, set the "Maximum Degree of Parallelism" to 1 (one). This makes sure that a query never goes parallel and does not incur extra engine overhead.

        EXEC sys.sp_configure N'max degree of parallelism', N'1'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Data-warehousing / Reporting server: As queries will be running for a long time, it is advised to set the "Maximum Degree of Parallelism" to 0 (zero). This way most of the queries will utilize the parallel processors, and long running queries get a boost in their performance from multiple processors.

        EXEC sys.sp_configure N'max degree of parallelism', N'0'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Mixed System (OLTP & OLAP): Here is the challenge. The right balance has to be found. I have taken a very simple approach: I set the "Maximum Degree of Parallelism" to 2, which means a query still uses parallelism but only on 2 CPUs, and I keep the "Cost Threshold for Parallelism" very high. This way, not all queries qualify for parallelism; only queries with a higher cost do. I have found this to work best for a system that has OLTP queries and also hosts a reporting workload. Here, I am setting 'Cost Threshold for Parallelism' to 25 (which is just for illustration); you can choose any value, and you can find the right one only by experimenting with your system. In the following script, I am setting the 'Max Degree of Parallelism' to 2, which means that a query with a cost higher than 25 will qualify to run as a parallel query on 2 CPUs. Regardless of the number of CPUs in the server, such a query will use at most two of them.

        EXEC sys.sp_configure N'cost threshold for parallelism', N'25'
        GO
        EXEC sys.sp_configure N'max degree of parallelism', N'2'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Read all the posts in the Wait Types and Queue series. Additionally, a must-read is the comment by Jonathan Kehayias.
    Note: The information presented here is from my experience and I in no way claim it to be accurate. I suggest you read Books On-Line for further clarification. All of the discussion of Wait Stats here is generic and it varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • How to parse a raw HTTP response?

    - by Ed
    If I have a raw HTTP response as a string:

        HTTP/1.1 200 OK
        Date: Tue, 11 May 2010 07:28:30 GMT
        Expires: -1
        Cache-Control: private, max-age=0
        Content-Type: text/html; charset=UTF-8
        Server: gws
        X-XSS-Protection: 1; mode=block
        Connection: close

        <!doctype html><html>...</html>

    Is there an easy way I can parse it into an HttpListenerResponse object? Or at least some kind of .NET object so I don't have to work with raw responses. What I'm doing currently is extracting the header key/value pairs and setting them on the HttpListenerResponse. But some headers can't be set, and then I have to cut out the body of the response and write it to the OutputStream. But the body could be gzipped, or it could be an image, which I can't get to work yet. And some responses contain random characters everywhere, which looks like an encoding problem. It's a lot of trouble. I'm getting a raw response because I'm using SOCKS to send an HTTP request. The program I'm working on is basically an HTTP proxy that can route requests through a SOCKS proxy, like Privoxy does.
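    For what it's worth, a minimal sketch of the header/body split being described (assumes CRLF-delimited headers and a text body; a gzipped or binary body would need the split done on raw bytes before any string decoding; all names here are made up):

        using System;
        using System.Collections.Generic;

        static class RawHttp
        {
            // Splits "status line + headers" from the body at the first blank line
            // and returns the body; headers land in the supplied dictionary.
            public static string Parse(string raw, IDictionary<string, string> headers)
            {
                int split = raw.IndexOf("\r\n\r\n", StringComparison.Ordinal);
                string head = split >= 0 ? raw.Substring(0, split) : raw;
                string body = split >= 0 ? raw.Substring(split + 4) : "";

                string[] lines = head.Split(new[] { "\r\n" }, StringSplitOptions.None);
                for (int i = 1; i < lines.Length; i++)   // lines[0] is the status line
                {
                    int colon = lines[i].IndexOf(':');
                    if (colon > 0)
                        headers[lines[i].Substring(0, colon)] =
                            lines[i].Substring(colon + 1).Trim();
                }
                return body;
            }
        }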

    Read the article

  • How to specialize a type parameterized argument to multiple different types in Scala?

    - by jmount
    I need a back-check (please). In an article ( http://www.win-vector.com/blog/2010/06/automatic-differentiation-with-scala/ ) I just wrote, I stated my belief that in Scala you cannot specify a function that takes an argument that is itself a function with an unbound type parameter. What I mean is that you can write:

        def g(f: Array[Double] => Double, Array[Double]): Double

    but you cannot write something like:

        def g(f[Y]: Array[Y] => Double, Array[Double]): Double

    because Y is not known. The intended use is that inside g() I will specialize f to multiple different types at different times. You can write:

        def g[Y](f: Array[Y] => Double, Array[Double]): Double

    but then f() is of a single type per call to g() (which is exactly what we do not want). However, you can get all of the equivalent functionality by using a trait extension instead of insisting on passing around a function. What I advocated in my article was:

    1) Creating a trait that imitates the structure of Scala's Function1 trait. Something like:

        abstract trait VectorFN {
          def apply[Y](x: Array[Y]): Y
        }

    2) Declaring def g(f: VectorFN, Double): Double (using the trait as the type).

    This works (people here on StackOverflow helped me find it, and I am happy with it), but am I misrepresenting Scala by missing an even better solution?
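    For illustration, a sketch of why the trait version recovers the lost flexibility (the body of g and all names here are hypothetical): because apply keeps its own type parameter, one f can be specialized at several element types inside a single call.

        trait VectorFN {
          def apply[Y](x: Array[Y]): Y
        }

        def g(f: VectorFN, data: Array[Double]): Double = {
          val d: Double = f(data)               // f specialized at Double
          val i: Int    = f(data.map(_.toInt))  // the same f specialized at Int
          d + i
        }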

    Read the article

  • Generic Event Generator and Handler from User Supplied Types?

    - by JaredBroad
    I'm trying to allow the user to supply custom data and manage the data with custom types. The user's algorithm will get time-synchronized events pushed into the event handlers they define. I'm not sure if this is possible, but here's the "proof of concept" code I'd like to build. It doesn't detect T in the for loop: "The type or namespace name 'T' could not be found".

        class Program
        {
            static void Main(string[] args)
            {
                Algorithm algo = new Algorithm();
                Dictionary<Type, string[]> userDataSources = new Dictionary<Type, string[]>();

                // "User" adding custom type and data source for algorithm to consume
                userDataSources.Add(typeof(Weather), new string[] { "temperature data1", "temperature data2" });

                for (int i = 0; i < 2; i++)
                {
                    foreach (Type T in userDataSources.Keys)
                    {
                        string line = userDataSources[typeof(T)][i]; // Iterate over CSV data..
                        var userObj = new T(line);
                        algo.OnData<typeof(T)>(userObj);
                    }
                }
            }
        }

        // User's algorithm pattern.
        interface IAlgorithm<TData> where TData : class
        {
            void OnData<TData>(TData data);
        }

        // User's algorithm.
        class Algorithm : IAlgorithm<Weather>
        {
            // Handle custom user data
            public void OnData<Weather>(Weather data)
            {
                Console.WriteLine(data.date.ToString());
                Console.ReadKey();
            }
        }

        // Example "user" custom type.
        public class Weather
        {
            public DateTime date = new DateTime();
            public double temperature = 0;

            public Weather(string line)
            {
                Console.WriteLine("Initializing weather object with: " + line);
                date = DateTime.Now;
                temperature = -1;
            }
        }
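    For what it's worth: since T here is a runtime Type value rather than a compile-time type parameter, one workaround is reflection plus dynamic dispatch. A sketch under that assumption (all names are made up, and it assumes C# 4 for dynamic):

        using System;
        using System.Collections.Generic;

        class Weather
        {
            public Weather(string line) { Console.WriteLine("init: " + line); }
        }

        class Algorithm
        {
            public void OnData(Weather data) { Console.WriteLine("weather event"); }
        }

        class Demo
        {
            static void Main()
            {
                var sources = new Dictionary<Type, string[]>
                {
                    { typeof(Weather), new[] { "temperature data1", "temperature data2" } }
                };
                var algo = new Algorithm();
                foreach (Type t in sources.Keys)
                    foreach (string line in sources[t])
                    {
                        // A Type is a runtime value, so construct via reflection...
                        object obj = Activator.CreateInstance(t, line);
                        // ...and let the runtime pick the matching OnData overload.
                        algo.OnData((dynamic)obj);
                    }
            }
        }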

    Read the article

  • WCF – interchangeable data-contract types

    - by nmarun
    In a WSDL-based environment, unlike the CLR world, we pass around the 'state' of an object and not a reference to an object. Well, firstly, what does 'state' mean, and does this also mean that we can send a struct where a class is expected (or vice-versa) as long as their 'state' is one and the same? Let's see. I have an operation contract defined as below:

        [ServiceContract]
        public interface ILearnWcfServiceExtend : ILearnWcfService
        {
            [OperationContract]
            Employee SaveEmployee(Employee employee);
        }

        [ServiceBehavior]
        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public Employee SaveEmployee(Employee employee)
            {
                employee.EmployeeId = 123;
                return employee;
            }
        }

    Quite a simplistic operation there (which translates to 'absolutely no business value'). Now, the data contract Employee mentioned above is a struct:

        public struct Employee
        {
            public int EmployeeId { get; set; }

            public string FName { get; set; }
        }

    After compilation and consumption of this service, my proxy (in the Reference.cs file) looks like below (I've ignored the rest of the details just to avoid unwanted confusion):

        public partial struct Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged

    I call the service with the code below:

        private static void CallWcfService()
        {
            Employee employee = new Employee { FName = "A" };
            Console.WriteLine("IsValueType: {0}", employee.GetType().IsValueType);
            Console.WriteLine("IsClass: {0}", employee.GetType().IsClass);
            Console.WriteLine("Before calling the service: {0} - {1}", employee.EmployeeId, employee.FName);
            employee = LearnWcfServiceClient.SaveEmployee(employee);
            Console.WriteLine("Return from the service: {0} - {1}", employee.EmployeeId, employee.FName);
        }

    The output is: (screenshot in the original post). I now change my Employee type from a struct to a class in the proxy class and run the application:

        public partial class Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged {

    The output this time is: (screenshot in the original post). The state of an object refers to its composition, the properties and the values of these properties, and not to whether it is a reference type (class) or a value type (struct). And as shown above, we're actually passing an object by its state and not by reference.

    Continuing on the same topic of 'type-interchangeability': WCF treats two data contracts as equivalent if they have the same 'wire-representation'. We can arrange that using the Name property of the DataContract and DataMember attributes:

        [DataContract]
        public struct Person
        {
            [DataMember]
            public int Id { get; set; }

            [DataMember]
            public string FirstName { get; set; }
        }

        [DataContract(Name = "Person")]
        public class Employee
        {
            [DataMember(Name = "Id")]
            public int EmployeeId { get; set; }

            [DataMember(Name = "FirstName")]
            public string FName { get; set; }
        }

    I've created two data contracts with the exact same wire-representation. Just remember that the names and the types of the data members need to match to be considered equivalent. The question then arises as to what gets generated in the proxy class. Despite us declaring two data contracts (Person and Employee), only one gets emitted: Person. This is because we're saying that the Employee type has the same wire-representation as the Person type.
    Note also that the signature of the SaveEmployee operation gets changed on the proxy side:

        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
        [System.ServiceModel.ServiceContractAttribute(ConfigurationName="ServiceProxy.ILearnWcfServiceExtend")]
        public interface ILearnWcfServiceExtend
        {
            [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployee", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployeeResponse")]
            ClientApplication.ServiceProxy.Person SaveEmployee(ClientApplication.ServiceProxy.Person employee);
        }

    But on the service side, SaveEmployee still accepts and returns an Employee data contract:

        [ServiceBehavior]
        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public Employee SaveEmployee(Employee employee)
            {
                employee.EmployeeId = 123;
                return employee;
            }
        }

    Despite all these changes, our output remains the same as the last one. This is type-interchangeability at work! Here's one more thing to ponder. Our Person type is a struct and our Employee type is a class. How is it, then, that the Person type got emitted as a 'class' in the proxy? It's worth mentioning that the WSDL describes a type called Employee and does not say whether it is a class or a struct (see the SOAP message below):

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:tem="http://tempuri.org/"
                          xmlns:ser="http://schemas.datacontract.org/2004/07/ServiceApplication">
          <soapenv:Header/>
          <soapenv:Body>
            <tem:SaveEmployee>
              <!--Optional:-->
              <tem:employee>
                <!--Optional:-->
                <ser:EmployeeId>?</ser:EmployeeId>
                <!--Optional:-->
                <ser:FName>?</ser:FName>
              </tem:employee>
            </tem:SaveEmployee>
          </soapenv:Body>
        </soapenv:Envelope>

    There are some differences between how 'Add Service Reference' and svcutil.exe generate the proxy class, but it turns out both do some kind of reflection to determine the type of the data contract and emit the code accordingly. So, since the Employee type is a class, the proxy 'Person' type gets generated as a class. In fact, reflecting on the svcutil.exe application, you'll see that there are a couple of places where a flag actually determines whether a type is a class or a struct. One example is in the ExportISerializableDataContract method in the System.Runtime.Serialization.CodeExporter class. It seems these flags have a say in deciding whether the type gets emitted as a struct or a class. This behavior is different if you use the WSDL tool, though: the WSDL tool does not do any kind of reflection on the data contract / serialized type; it emits the type as a class by default. You can check this using the two command lines shown in the original post. Note to self: remember 'state' and type-interchangeability when traversing the WSDL planet!

    Read the article

  • Checking type sizes in C with macros.

    - by Seisatsu
    I'm writing a program that needs to have unsigned types with definite sizes. I need a uint8, uint16, uint32, and uint64, and I need them defined in types.h, in a way that they will always be defined correctly regardless of platform. My question is, how can I check the sizes of different types on each platform using preprocessor macros, so that I can define my custom types correctly in the types.h header?
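    For what it's worth, a minimal sketch of the usual preprocessor approach: sizeof cannot be evaluated by the preprocessor, so the limit macros from <limits.h> stand in for it (C99's <stdint.h> already provides such typedefs where available; CHAR_BIT == 8 is assumed here):

        #include <limits.h>

        typedef unsigned char uint8;

        #if USHRT_MAX == 0xffff
          typedef unsigned short uint16;
        #elif UINT_MAX == 0xffff
          typedef unsigned int uint16;
        #endif

        #if UINT_MAX == 0xffffffff
          typedef unsigned int uint32;
        #elif ULONG_MAX == 0xffffffff
          typedef unsigned long uint32;
        #endif

        #if ULONG_MAX > 0xffffffff
          typedef unsigned long uint64;       /* long is 64-bit here */
        #else
          typedef unsigned long long uint64;  /* needs C99 long long */
        #endif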

    Read the article

  • SQL SERVER – DMV – sys.dm_os_waiting_tasks and sys.dm_exec_requests – Wait Type – Day 4 of 28

    - by pinaldave
    Previously, we covered the DMV sys.dm_os_wait_stats and saw how it can be useful for identifying the major resource bottleneck. However, at the same time, we discussed that it is only useful when we are looking at an instance-level picture. Quite often we want to know about the processes running on our server at a given instant. Here is the query for that. It is written with the following in mind: we want to analyze the queries that are currently running, or that ran recently and whose plan is still in the cache.

        SELECT dm_ws.wait_duration_ms,
               dm_ws.wait_type,
               dm_es.status,
               dm_t.TEXT,
               dm_qp.query_plan,
               dm_ws.session_ID,
               dm_es.cpu_time,
               dm_es.memory_usage,
               dm_es.logical_reads,
               dm_es.total_elapsed_time,
               dm_es.program_name,
               DB_NAME(dm_r.database_id) DatabaseName,
               -- Optional columns
               dm_ws.blocking_session_id,
               dm_r.wait_resource,
               dm_es.login_name,
               dm_r.command,
               dm_r.last_wait_type
        FROM sys.dm_os_waiting_tasks dm_ws
        INNER JOIN sys.dm_exec_requests dm_r ON dm_ws.session_id = dm_r.session_id
        INNER JOIN sys.dm_exec_sessions dm_es ON dm_es.session_id = dm_r.session_id
        CROSS APPLY sys.dm_exec_sql_text (dm_r.sql_handle) dm_t
        CROSS APPLY sys.dm_exec_query_plan (dm_r.plan_handle) dm_qp
        WHERE dm_es.is_user_process = 1
        GO

    You can change CROSS APPLY to OUTER APPLY if you want to see details that would otherwise be omitted because of the plan cache. Let us analyze the result of the above query and see how it can be helpful in identifying a query and the kind of wait it creates. The query returns various columns, several of which provide very important details:

        wait_duration_ms - the current wait for the query executing at that point in time
        wait_type - the current wait type for the query
        text - the query text
        query_plan - when clicked, displays the query plan

    Much other important information, like cpu_time, memory_usage, and logical_reads, can be read from the query as well. In future posts in this series, we will see how, once the wait type is identified, we can attempt to reduce it. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQLAuthority News – I am Presenting 2 Sessions at TechEd India

    - by pinaldave
    TechEd is the event I am always excited about. It is one of the largest technology events in India. Microsoft Tech Ed India 2011 is the premier technical education and networking event for tech professionals interested in learning, connecting and exploring a broad set of current and soon-to-be released Microsoft technologies, tools, platforms and services. I am going to speak at TechEd on two very interesting and advanced subjects.

    Venue: The LaLiT Ashok, Kumara Krupa High Grounds, Bangalore – 560001, Karnataka, India

    Session Date: March 25, 2011

    Understanding SQL Server Behavioral Pattern – SQL Server Extended Events (12:00 PM to 01:00 PM): History repeats itself! SQL Server 2008 has introduced a very powerful, yet very minimal reoccurring feature called Extended Events. This advanced session will teach experienced administrators capabilities that were not possible before. From T-SQL errors to CPU bottlenecks, error logins to deadlocks – Extended Events can detect them for you. Understanding the pattern of events can prevent future mistakes.

    SQL Server Waits and Queues – Your Gateway to Perf. Troubleshooting (04:15 PM to 05:15 PM): Just like a horoscope, SQL Server Waits and Queues can reveal your past, explain your present and predict your future. SQL Server performance tuning uses Waits and Queues as a proven method to identify the best opportunities to improve performance. A glance at wait types can tell where there is a bottleneck. Learn how to identify bottlenecks and potential resolutions in this fast paced, advanced performance tuning session.

    My sessions will be on the third day of the event, and I am very sure that everybody will be in the groove to learn new and interesting subjects. I will have a few giveaways during and at the end of each session. I will not tell you what they are, but they will surely be something you will love to have. Please make a point to reserve the above time slots and attend my sessions. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology Tagged: SQL Extended Events

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a 'global' application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server, and since I had been thinking about writing it for quite some time now, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always part of a bigger system; there are always other players in the game, whether it is a client application, a web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this (see the diagram in the original post). This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular Data Stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let's dive into an example: say that we have a web server hosting a web application which uses data from our SQL Server, hosted on another server. The network card of the web server, for some reason, is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: "My data is coming very slow." Now, let's move on to a bit more exciting example: imagine a similar setup as the example above, one web server and one database server, where the application is not using any stored procedure calls; instead, for every user request the application sends an 80kb query over the network to the SQL Server. (I really thought this did not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let's say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute at peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) and will travel over the network.
    On the other side, our SQL Server network card will receive the packets and pass them to our network layer; the packets will get assembled, and eventually SQL Server will start processing the query: parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead as well, until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let's say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server, and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.

        Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements separated by 'GO' commands, then there will be three different roundtrips.
        TDS packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests into TDS packets. This is the number of packets sent from the client; if the request is large, it may need more buffers, and eventually might even need more server roundtrips.
        TDS packets received from the server – the TDS packets sent by the server to the client during the query execution.
        Bytes sent from the client – the volume of data sent to our SQL Server, measured in bytes; i.e., how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a procedure name plus parameters, and this will minimize the network pressure.
        Bytes received from the server – the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.
        Client processing time – the amount of time in milliseconds between the first received response packet and the last received response packet at the client.
        Wait time on server replies – the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
        Total execution time – the sum of client processing time and wait time on server replies (the SQL Server internal processing time).

    Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large 'wait time on server replies' means the server took a long time to produce the very first row. This is usual for queries that have operators that need the entire sub-query to be evaluated before they proceed (for example, sort and top operators). However, a query with a very short 'wait time on server replies' means that the query was able to return the first row quickly. A long 'client processing time' does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result, and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party's world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about a similar setup as above, but add another server to the game. Let's say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each, our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don't mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of 'the big picture'. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring and improving network performance:

        Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by the number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: 'why?'.
        Monitor your network counters in Perfmon: Network Interface: Output queue length, Redirector: Network errors/sec, TCPv4: Segments retransmitted/sec and so on.
        Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they 'teamed', what are the settings – full duplex and so on.
        Find some time to read a bit about networking.

    In this short blog post I hope I have turned your attention to 'the big picture' and the fact that there are other factors affecting our SQL Server, aside from its internal workings.
    As further reading, I would highly recommend the Wait Stats series on this blog; I would also recommend having that coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in T-SQL, in late 2006 he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. Through all of this, he has not lost his focus on helping the larger community. I am honored that he has agreed to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan: SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, "So and so application is slow, what's wrong with the SQL Server." One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn't related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further. The SQL Server is slow; we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters shows Page Life Expectancy bouncing up and down in the 50-150 range, the Free Pages counter consistently hitting zero, and the Free List Stalls/sec counter jumping over 10, yet Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem? In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs: it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system.
    If we were to query sys.dm_exec_query_stats for the TOP 10 queries by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. OK, quick check: the plans are pretty big, I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly! If we were to look at how much memory the plan cache is taking, by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I'll give you a second to go back and read that again. Yes, you read it correctly: it says 75% of the memory under 8GB, but you don't have to take my word for it; you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server, and this leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to completely flush the buffer cache, not just once initially but another time during the query's execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember that the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the memory counters for the instance: Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is constantly fluctuating between 50 and 150 is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter the consistent bottoming out of Free List Pages along with Free List Stalls/sec consistently spiking over 10, you have the perfect memory pressure scenario. All of a sudden it may not be that our disk subsystem is the problem; it is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64-bit computing and the easy availability of larger amounts of memory in SQL Servers.
    As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: there are two things really wrong with this server. First, the plan cache is excessively consuming memory and is bloated in size, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on, there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements with concatenated filter criteria that are not likely to recur with any frequency. SQL Server 2005 doesn't natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan, by creating a plan guide for the statement and then dropping that plan guide, in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses properly parameterized calls to the database. You didn't write the app, and you can't change its design? OK, well, you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these helps, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen's blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB, the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn't much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn't until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause. I couldn't believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen.
    As the DBA, you have to sit back and listen to all that it's telling you, and then evaluate the big picture and how all the data you can gather from SQL about performance relates together. One of the greatest tools out there is actually free, in the form of Diagnostic Scripts for SQL Server 2005 and 2008, created by MVP Glenn Alan Berry. Glenn's scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you. When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what's really the problem. This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Drawing RAW buffer to CGBitmapContext

    - by Raj
    Hi all, I have a raw image buffer in RGB format. I need to draw it into a CGContext so that I get a new buffer in ARGB format. I accomplish this in the following way: create a data provider out of the raw buffer using CGDataProviderCreateWithData, create an image from the data provider with CGImageCreate, and then draw this image into the CGBitmapContext using CGContextDrawImage. Instead of creating an intermediate image, is there any way of writing the buffer directly to the CGContext so that I can avoid the image creation phase? Thanks
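    For reference, a sketch of the intermediate-image pipeline being described (the function name, a packed 24-bit RGB source, and the kCGImageAlphaNone choice are assumptions):

        #include <CoreGraphics/CoreGraphics.h>

        /* rgb: packed 24-bit RGB pixels, w*h*3 bytes; ctx: destination ARGB bitmap context */
        void DrawRawRGB(CGContextRef ctx, const uint8_t *rgb, size_t w, size_t h)
        {
            CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
            CGDataProviderRef dp = CGDataProviderCreateWithData(NULL, rgb, w * h * 3, NULL);
            CGImageRef img = CGImageCreate(w, h,
                                           8,       /* bits per component */
                                           24,      /* bits per pixel     */
                                           w * 3,   /* bytes per row      */
                                           cs, kCGImageAlphaNone, dp,
                                           NULL, false, kCGRenderingIntentDefault);
            CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), img);
            CGImageRelease(img);
            CGDataProviderRelease(dp);
            CGColorSpaceRelease(cs);
        }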

    Read the article

  • Remove HTTP headers from a raw response

    - by Ed
    Let's say we make a request to a URL and get back the raw response, like this:

        HTTP/1.1 200 OK
        Date: Wed, 28 Apr 2010 14:39:13 GMT
        Expires: -1
        Cache-Control: private, max-age=0
        Content-Type: text/html; charset=ISO-8859-1
        Set-Cookie: PREF=ID=e2bca72563dfffcc:TM=1272465553:LM=1272465553:S=ZN2zv8oxlFPT1BJG; expires=Fri, 27-Apr-2012 14:39:13 GMT; path=/; domain=.google.co.uk
        Server: gws
        X-XSS-Protection: 1; mode=block
        Connection: close

        <!doctype html><html><head>...</head><body>...</body></html>

    What would be the best way to remove the HTTP headers from the response in C#? With regexes? Parsing it into some kind of HTTPResponse object and using only the body? EDIT: I'm using SOCKS to make the request; that's why I get the raw response.

    Read the article

  • Raw socket sendto() failure in OS X

    - by user37278
    When I open a raw socket in OS X, construct my own UDP packet (headers and data), and call sendto(), I get the error "Invalid Argument". Here is a sample program, "rawudp.c", from the web site http://www.tenouk.com/Module43a.html that demonstrates this problem. The program (after adding string and stdlib #includes) runs under Fedora 10 but fails with "Invalid Argument" under OS X. Can anyone suggest why this fails on OS X? I have looked and looked and looked at the sendto() call, but all the parameters look good. I'm running the code as root, etc. Is there perhaps a kernel setting that prevents even uid 0 executables from sending packets through raw sockets in OS X Snow Leopard? Thanks.

    Read the article

  • C error conflicting types

    - by ambika
    I have the following errors:

        error: conflicting types for 'sprintf'
        error: conflicting types for 'vsprintf'
        error: conflicting types for 'vprintf'
        error: conflicting types for 'select'

    In my header file, the code is:

        extern char *sprintf(char*, const char*, ...);
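    For comparison, the standard declarations these conflict with (sprintf and friends return int, not char*); a likely fix, assuming the externs exist only to avoid including headers, is to drop them and include the standard headers instead:

        #include <stdio.h>       /* int sprintf(char *, const char *, ...);        */
        #include <stdarg.h>      /* va_list, used by vsprintf/vprintf              */
        #include <sys/select.h>  /* int select(int, fd_set *, fd_set *, fd_set *,
                                        struct timeval *);                         */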

    Read the article

  • parsing raw email in php

    - by Uberfuzzy
    I'm looking for good/working/simple-to-use PHP code for parsing raw email into parts. I've written a couple of brute-force solutions, but every time, one small change/header/space/something comes along, my whole parser fails, and the project falls apart. And before I get pointed at PEAR/PECL, I need actual code. My host has some screwy config or something; I can never seem to get the .so's to build right. If I do get the .so made, some difference in path/environment/php.ini doesn't always make it available (Apache vs cron vs CLI). Oh, and one last thing: I'm parsing the raw email text, NOT POP3, and NOT IMAP. It's being piped into the PHP script via a .qmail email redirect. I'm not expecting SOF to write it for me; I'm looking for some tips/starting points on doing it "right". This is one of those "wheel" problems that I know has already been solved.
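    As a starting point, a sketch (nowhere near full MIME handling, and the function name is made up) of the first step most parsers take: split headers from body at the first blank line and unfold continuation lines:

        <?php
        // Returns array($headers, $body); header names are lowercased.
        function parse_raw_email($raw) {
            $parts = preg_split("/\r?\n\r?\n/", $raw, 2);
            $head  = $parts[0];
            $body  = isset($parts[1]) ? $parts[1] : '';
            $head  = preg_replace("/\r?\n[ \t]+/", ' ', $head); // unfold folded headers
            $headers = array();
            foreach (preg_split("/\r?\n/", $head) as $line) {
                if (strpos($line, ':') !== false) {
                    list($name, $value) = explode(':', $line, 2);
                    $headers[strtolower(trim($name))] = trim($value);
                }
            }
            return array($headers, $body);
        }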

    Read the article

  • RAW key generation in IOS with AES

    - by sudheer
    I am a novice at cryptography in iOS. I have a requirement to convert this Android encryption code to iOS. Below is the Android code; I need the equivalent in iOS. Can anyone please help me out with this?

        private static byte[] getRawKey(byte[] seed) throws Exception {
            KeyGenerator kgen = KeyGenerator.getInstance("AES");
            SecureRandom sr = SecureRandom.getInstance("SHA1PRNG");
            sr.setSeed(seed);
            kgen.init(256, sr);
            SecretKey skey = kgen.generateKey();
            byte[] raw = skey.getEncoded();
            return raw;
        }

    Read the article

  • Are raw C++ pointers first class objects?

    - by Shailesh Kumar
    According to Wikipedia, an object is first-class when it:

        - can be stored in variables and data structures
        - can be passed as a parameter to a subroutine
        - can be returned as the result of a subroutine
        - can be constructed at runtime
        - has intrinsic identity (independent of any given name)

    Somebody had once told me that raw pointers are not first-class objects while smart pointers like std::auto_ptr are. But to me, a raw pointer (to an object or to a function) in C++ does seem to satisfy the conditions stated above to qualify as a first-class object. Am I missing something?

    Read the article

  • Doxygen autolink not working to global enum types

    - by MeThinks
    I am trying to use Doxygen automatic link generation to document some enum types. However, it is not generating links for the global enum types, though it does generate links for the global struct types. Is there something I am missing? I am using the example provided at the link above. As required, I have documented the file in which the types are defined. Update 1: I am using Doxygen version 1.6.3. Update 2: global structs are OK.

    Read the article

  • Sockets: RAW or STREAM

    - by user1415536
    Maybe the question is a bit stupid, but I'll ask it. I have read a lot about raw sockets on the network and have seen several examples. So, basically, with raw sockets it's possible to build your own stack of headers, like: stack = IP + TCP/UDP + OWN_HEADER. My question is: is it possible to get some kind of ready-made frame for the first two (IP + TCP/UDP) from the Linux kernel and then just append my own header to it? The operating system in question is Linux and the language is C. I cannot find any function which can do such a thing, but maybe I'm digging in the wrong direction.

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >