Search Results

Search found 5377 results on 216 pages for 'explicit cast operator'.


  • Templates, Function Pointers and C++0x

    - by user328543
    One of my personal experiments to understand some of the C++0x features: I'm trying to pass a function pointer to a template function to execute. Eventually the execution is supposed to happen in a different thread. But with all the different types of functions, I can't get the templates to work. #include `<functional`> int foo(void) {return 2;} class bar { public: int operator() (void) {return 4;}; int something(int a) {return a;}; }; template <class C> int func(C&& c) { //typedef typename std::result_of< C() >::type result_type; typedef typename std::conditional< std::is_pointer< C >::value, std::result_of< C() >::type, std::conditional< std::is_object< C >::value, std::result_of< typename C::operator() >::type, void> >::type result_type; result_type result = c(); return result; } int main(int argc, char* argv[]) { // call with a function pointer func(foo); // call with a member function bar b; func(b); // call with a bind expression func(std::bind(&bar::something, b, 42)); // call with a lambda expression func( [](void)->int {return 12;} ); return 0; } The result_of template alone doesn't seem to be able to find the operator() in class bar and the clunky conditional I created doesn't compile. Any ideas? Will I have additional problems with const functions?
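
    A minimal sketch of the same idea against C++11 as finally standardized (not the asker's draft compiler): a trailing return type with decltype(c()) deduces the result uniformly for function pointers, functors, bind expressions and lambdas, so neither the std::conditional dispatch nor std::result_of is needed. The names below mirror the question's code.

```cpp
#include <functional>
#include <iostream>

int foo() { return 2; }

struct bar {
    int operator()() const { return 4; }   // a const call operator works too
    int something(int a) { return a; }
};

// decltype(c()) is valid for anything callable with no arguments,
// which sidesteps the is_pointer/is_object dispatch entirely.
template <class C>
auto func(C&& c) -> decltype(c())
{
    return c();
}

int main()
{
    bar b;
    std::cout << func(foo) << ' '                               // 2
              << func(b) << ' '                                 // 4
              << func(std::bind(&bar::something, b, 42)) << ' ' // 42
              << func([]() -> int { return 12; })               // 12
              << '\n';
    return 0;
}
```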

    Read the article

  • F# How to tokenise user input: separating numbers, units, words?

    - by David White
    I am fairly new to F#, but have spent the last few weeks reading reference materials. I wish to process a user-supplied input string, identifying and separating the constituent elements. For example, for this input: XYZ Hotel: 6 nights at 220EUR / night plus 17.5% tax the output should resemble something like a list of tuples: [ ("XYZ", Word); ("Hotel:", Word); ("6", Number); ("nights", Word); ("at", Operator); ("220", Number); ("EUR", CurrencyCode); ("/", Operator); ("night", Word); ("plus", Operator); ("17.5", Number); ("%", PerCent); ("tax", Word) ] Since I'm dealing with user input, it could be anything. Thus, expecting users to comply with a grammar is out of the question. I want to identify the numbers (could be integers, floats, negative...), the units of measure (optional, but could include SI or Imperial physical units, currency codes, counts such as "night/s" in my example), mathematical operators (as math symbols or as words including "at" "per", "of", "discount", etc), and all other words. I have the impression that I should use active pattern matching -- is that correct? -- but I'm not exactly sure how to start. Any pointers to appropriate reference material or similar examples would be great.
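
    Active patterns are indeed a good fit here. A minimal sketch of the idea (the token names come from the question; the regular expressions and the splitting rule are illustrative assumptions, not a complete grammar):

```fsharp
open System.Text.RegularExpressions

type TokenKind = Number | Word | Operator | CurrencyCode | PerCent

// Partial active pattern: matches when the string satisfies the regex.
let (|MatchRe|_|) pattern input =
    if Regex.IsMatch(input, pattern) then Some () else None

let classify token =
    match token with
    | MatchRe @"^-?\d+(\.\d+)?$"           -> (token, Number)
    | MatchRe @"^(EUR|USD|GBP)$"           -> (token, CurrencyCode)
    | MatchRe @"^%$"                       -> (token, PerCent)
    | MatchRe @"^([/+*-]|at|per|plus|of)$" -> (token, Operator)
    | _                                    -> (token, Word)

// Split on number/word/symbol boundaries, then classify each piece.
let tokenise (input: string) =
    Regex.Matches(input, @"-?\d+(\.\d+)?|[A-Za-z]+:?|%|/")
    |> Seq.cast<Match>
    |> Seq.map (fun m -> classify m.Value)
    |> Seq.toList

// tokenise "XYZ Hotel: 6 nights at 220EUR / night plus 17.5% tax"
```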

    Read the article

  • Checked and Unchecked operators don't seem to be working when...

    - by flockofcode
    1) Is the UNCHECKED operator in effect only when the expression inside the unchecked context uses an explicit cast (such as byte b1 = unchecked((byte)2000);) and when the conversion to the particular type can happen implicitly? I'm assuming this since the following expression produces a compile-time error: byte b1 = unchecked(2000); // compile-time error 2) a) Do the CHECKED and UNCHECKED operators work only when the resulting value of an expression or conversion is of an integral type? I'm assuming this since in the first example (where a double is converted to an integral type) the CHECKED operator works as expected: double m = double.MaxValue; b = checked((byte)m); // reports an exception, while in the second example (where a double is converted to a float) the CHECKED operator doesn't seem to be working, since it doesn't throw an exception: double m = double.MaxValue; float f = checked((float)m); // no exception thrown b) Why don't the two operators also work with expressions where the type of the resulting value is a floating-point type? 3) The next quote is from Microsoft's site: "The unchecked keyword is used to control the overflow-checking context for integral-type arithmetic operations and conversions." I'm not sure I understand what exactly expressions and conversions such as unchecked((byte)(100+200)); have in common with integral types. Thank you
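
    What the asker is observing is by design: checked/unchecked affect only integral-type arithmetic and conversions, while floating-point conversions overflow to infinity and never throw. A small sketch illustrating both points (a standalone console program, values chosen to match the question):

```csharp
using System;

class OverflowDemo
{
    static void Main()
    {
        // The cast is required: without (byte) there is no conversion
        // for 'unchecked' to influence, so unchecked(2000) still fails
        // to compile when assigned to a byte.
        byte b1 = unchecked((byte)2000);        // 208, i.e. 2000 mod 256

        double m = double.MaxValue;

        // double -> byte targets an integral type, so 'checked' applies:
        try
        {
            byte b2 = checked((byte)m);
        }
        catch (OverflowException)
        {
            Console.WriteLine("double -> byte overflowed");
        }

        // double -> float is a floating-point conversion: 'checked' has
        // no effect and the value silently becomes positive infinity.
        float f = checked((float)m);
        Console.WriteLine(f);                   // positive infinity
        Console.WriteLine(b1);                  // 208
    }
}
```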

    Read the article

  • Fixing an SQL where statement that is ugly and confusing

    - by Mike Wills
    I am directly querying the back-end MS SQL Server for a software package. The key field (vehicle number) is defined as alpha, though we are entering numeric values in the field. There is only one exception to this: we place an "R" before the number when the vehicle is being retired (which means we sold it or the vehicle is junked). Assuming the users do this right, we should not run into a problem using this method. (Right or wrong isn't the issue here.) Fast forward to now. I am trying to query a subset of these vehicle numbers (800 - 899) for some special processing. By doing a range of '800' to '899' we also get 80, 81, etc. If I cast the vehicle number into an INT, I should be able to get the right range. Except that these "R" vehicles are kicking me in the butt now. I have tried where vehicleId not like 'R%' and cast(vehicleId as int) between 800 and 899; however, I get a casting error on one of these "R" vehicles. What does work is where vehicleId not between '800' and '899' and cast(vehicleId as int) between 800 and 899, but I feel there has to be a better and less confusing way. I have also tried other variations with HAVING and a sub-query, all producing a casting error.
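
    The underlying issue is that SQL Server gives no guarantee about the order in which AND-ed predicates are evaluated, so the LIKE test does not reliably protect the CAST. Two common workarounds, sketched against the column name from the question (the table name is a placeholder; TRY_CAST needs SQL Server 2012 or later):

```sql
-- SQL Server 2012 and later: TRY_CAST returns NULL for non-numeric values
SELECT *
FROM   dbo.Vehicles                     -- table name assumed for illustration
WHERE  TRY_CAST(vehicleId AS int) BETWEEN 800 AND 899;

-- Earlier versions: CASE only evaluates the THEN branch when the WHEN
-- condition holds, so the cast never sees an 'R' value
SELECT *
FROM   dbo.Vehicles
WHERE  CASE WHEN vehicleId NOT LIKE 'R%'
            THEN CAST(vehicleId AS int)
       END BETWEEN 800 AND 899;
```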

    Read the article

  • short-cutting equality checking in F#?

    - by John Clements
    In F#, the equality operator (=) is generally extensional, rather than intensional. That's great! Unfortunately, it appears to me that F# does not use pointer equality to short-cut these extensional comparisons. For instance, this code: type Z = MT | NMT of Z ref // create a Z: let a = ref MT // make it point to itself: a := NMT a // check to see whether it's equal to itself: printf "a = a: %A\n" (a = a) ... gives me a big fat segmentation fault[*], despite the fact that 'a' and 'a' both evaluate to the same reference. That's not so great. Other functional languages (e.g. PLT Scheme) get this right, using pointer comparisons conservatively, to return 'true' when it can be determined using a pointer comparison. So: I'll accept the fact that F#'s equality operator doesn't use short-cutting; is there some way to perform an intensional (pointer-based) equality check? The (==) operator is not defined on my types, and I'd love it if someone could tell me that it's available somehow. Or tell me that I'm wrong in my analysis of the situation: I'd love that, too... [*] That would probably be a stack overflow on Windows; there are things about Mono that I'm not that fond of...
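
    There is an intensional check available: both obj.ReferenceEquals and LanguagePrimitives.PhysicalEquality compare references rather than structure, so the cyclic value from the question can be tested without blowing the stack. A short sketch:

```fsharp
type Z = MT | NMT of Z ref

let a = ref MT
a := NMT a

// Structural (=) recurses through the cycle; a pointer comparison does not.
printfn "ReferenceEquals:  %b" (obj.ReferenceEquals(a, a))
printfn "PhysicalEquality: %b" (LanguagePrimitives.PhysicalEquality a a)
```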

    Read the article

  • Another boost error

    - by user1676605
    On this code I get the enourmous error static void ParseTheCommandLine(int argc, char *argv[]) { int count; int seqNumber; namespace po = boost::program_options; std::string appName = boost::filesystem::basename(argv[0]); po::options_description desc("Generic options"); desc.add_options() ("version,v", "print version string") ("help", "produce help message") ("sequence-number", po::value<int>(&seqNumber)->default_value(0), "sequence number") ("pem-file", po::value< vector<string> >(), "pem file") ; po::positional_options_description p; p.add("pem-file", -1); po::variables_map vm; po::store(po::command_line_parser(argc, argv). options(desc).positional(p).run(), vm); po::notify(vm); if (vm.count("pem file")) { cout << "Pem files are: " << vm["pem-file"].as< vector<string> >() << "\n"; } cout << "Sequence number is " << seqNumber << "\n"; exit(1); ../../../FIXMarketDataCommandLineParameters/FIXMarketDataCommandLineParameters.hpp|98|error: no match for ‘operator<<’ in ‘std::operator<< [with _Traits = std::char_traits](((std::basic_ostream &)(& std::cout)), ((const char*)"Pem files are: ")) << ((const boost::program_options::variable_value*)vm.boost::program_options::variables_map::operator[](((const std::string&)(& std::basic_string, std::allocator (((const char*)"pem-file"), ((const std::allocator&)((const std::allocator*)(& std::allocator()))))))))-boost::program_options::variable_value::as with T = std::vector, std::allocator , std::allocator, std::allocator ’|
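
    The error boils down to the cout line: std::vector<std::string> has no operator<<, so streaming vm["pem-file"].as<...>() directly cannot compile (note also that the code checks vm.count("pem file") while the option is registered as "pem-file"). A hedged sketch of printing the values element by element instead:

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>
#include <boost/program_options.hpp>

namespace po = boost::program_options;

void print_pem_files(const po::variables_map& vm)
{
    if (vm.count("pem-file"))
    {
        const std::vector<std::string>& files =
            vm["pem-file"].as< std::vector<std::string> >();

        std::cout << "Pem files are: ";
        std::copy(files.begin(), files.end(),
                  std::ostream_iterator<std::string>(std::cout, " "));
        std::cout << '\n';
    }
}
```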

    Read the article

  • Alert visualization recipe: Get out your blender, drop in some sp_send_dbmail, Google Charts API, add your favorite colors and sprinkle with html. Blend till it’s smooth and looks pretty enough to taste.

    - by Maria Zakourdaev
      I really like database monitoring. My email inbox have a constant flow of different types of alerts coming from our production servers with all kinds of information, sometimes more useful and sometimes less useful. Usually database alerts look really simple, it’s usually a plain text email saying “Prod1 Database data file on Server X is 80% used. You’d better grow it manually before some query triggers the AutoGrowth process”. Imagine you could have received email like the one below.  In addition to the alert description it could have also included the the database file growth chart over the past 6 months. Wouldn’t it give you much more information whether the data growth is natural or extreme? That’s truly what data visualization is for. Believe it or not, I have sent the graph below from SQL Server stored procedure without buying any additional data monitoring/visualization tool.   Would you like to visualize your database alerts like I do? Then like myself, you’d love the Google Charts. All you need to know is a little HTML and have a mail profile configured on your SQL Server instance regardless of the SQL Server version. First of all, I hope you know that the sp_send_dbmail procedure has a great parameter @body_format = ‘HTML’, which allows us to send rich and colorful messages instead of boring black and white ones. All that we need is to dynamically create HTML code. This is how, for instance, you can create a table and populate it with some data: DECLARE @html varchar(max) SET @html = '<html>' + '<H3><font id="Text" style='color: Green;'>Top Databases: </H3>' + '<table border="1" bordercolor="#3300FF" style='background-color:#DDF8CC' width='70%' cellpadding='3' cellspacing='3'>' + '<tr><font color="Green"><th>Database Name</th><th>Size</th><th>Physical Name</th></tr>' + CAST( (SELECT TOP 10                             td = name,'',                             td = size * 8/1024 ,'',                             td = physical_name              FROM sys.master_files               ORDER BY size DESC             FOR XML PATH ('tr'),TYPE ) AS VARCHAR(MAX)) + '</table>' EXEC msdb.dbo.sp_send_dbmail @recipients = '[email protected]', @subject ='Top databases', @body = @html, @body_format = 'HTML' This is the result:   If you want to add more visualization effects, you can use Google Charts Tools https://google-developers.appspot.com/chart/interactive/docs/index which is a free and rich library of data visualization charts, they’re also easy to populate and embed. There are two versions of the Google Charts Image based charts: https://google-developers.appspot.com/chart/image/docs/gallery/chart_gall This is an old version, it’s officially deprecated although it will be up for a next few years or so. I really enjoy using this one because it can be viewed within the email body. For mobile devices you need to change the “Load remote images” property in your email application configuration.           Charts based on JavaScript classes: https://google-developers.appspot.com/chart/interactive/docs/gallery This API is newer, with rich and highly interactive charts, and it’s much more easier to understand and configure. The only downside of it is that they cannot be viewed within the email body. Outlook, Gmail and many other email clients, as part of their security policy, do not run any JavaScript that’s placed within the email body. However, you can still enjoy this API by sending the report as an email attachment. 
Here is an example of the old version of Google Charts API, sending the same top databases report as in the previous example but instead of a simple table, this script is using a pie chart right from  the T-SQL code DECLARE @html  varchar(8000) DECLARE @Series  varchar(800),@Labels  varchar(8000),@Legend  varchar(8000);     SET @Series = ''; SET @Labels = ''; SET @Legend = ''; SELECT TOP 5 @Series = @Series + CAST(size * 8/1024 as varchar) + ',',                         @Labels = @Labels +CAST(size * 8/1024 as varchar) + 'MB'+'|',                         @Legend = @Legend + name + '|' FROM sys.master_files ORDER BY size DESC SELECT @Series = SUBSTRING(@Series,1,LEN(@Series)-1),         @Labels = SUBSTRING(@Labels,1,LEN(@Labels)-1),         @Legend = SUBSTRING(@Legend,1,LEN(@Legend)-1) SET @html =   '<H3><font color="Green"> '+@@ServerName+' top 5 databases : </H3>'+    '<br>'+    '<img src="http://chart.apis.google.com/chart?'+    'chf=bg,s,DDF8CC&'+    'cht=p&'+    'chs=400x200&'+    'chco=3072F3|7777CC|FF9900|FF0000|4A8C26&'+    'chd=t:'+@Series+'&'+    'chl='+@Labels+'&'+    'chma=0,0,0,0&'+    'chdl='+@Legend+'&'+    'chdlp=b"'+    'alt="'+@@ServerName+' top 5 databases" />'              EXEC msdb.dbo.sp_send_dbmail @recipients = '[email protected]',                             @subject = 'Top databases',                             @body = @html,                             @body_format = 'HTML' This is what you get. Isn’t it great? Chart parameters reference: chf     Gradient fill  bg - backgroud ; s- solid cht     chart type  ( p - pie) chs        chart size width/height chco    series colors chd        chart data string        1,2,3,2 chl        pir chart labels        a|b|c|d chma    chart margins chdl    chart legend            a|b|c|d chdlp    chart legend text        b - bottom of chart   Line graph implementation is also really easy and powerful DECLARE @html varchar(max) DECLARE @Series varchar(max) DECLARE @HourList varchar(max) SET @Series = ''; SET @HourList = ''; SELECT @HourList = @HourList + SUBSTRING(CONVERT(varchar(13),last_execution_time,121), 12,2)  + '|' ,              @Series = @Series + CAST( COUNT(1) as varchar) + ',' FROM sys.dm_exec_query_stats s     CROSS APPLY sys.dm_exec_sql_text(plan_handle) t WHERE last_execution_time > = getdate()-1 GROUP BY CONVERT(varchar(13),last_execution_time,121) ORDER BY CONVERT(varchar(13),last_execution_time,121) SET @Series = SUBSTRING(@Series,1,LEN(@Series)-1) SET @html = '<img src="http://chart.apis.google.com/chart?'+ 'chco=CA3D05,87CEEB&'+ 'chd=t:'+@Series+'&'+ 'chds=1,350&'+ 'chdl= Proc executions from cache&'+ 'chf=bg,s,1F1D1D|c,lg,0,363433,1.0,2E2B2A,0.0&'+ 'chg=25.0,25.0,3,2&'+ 'chls=3|3&'+ 'chm=d,CA3D05,0,-1,12,0|d,FFFFFF,0,-1,8,0|d,87CEEB,1,-1,12,0|d,FFFFFF,1,-1,8,0&'+ 'chs=600x450&'+ 'cht=lc&'+ 'chts=FFFFFF,14&'+ 'chtt=Executions for from' +(SELECT CONVERT(varchar(16),min(last_execution_time),121)          FROM sys.dm_exec_query_stats          WHERE last_execution_time > = getdate()-1) +' till '+ +(SELECT CONVERT(varchar(16),max(last_execution_time),121)     FROM sys.dm_exec_query_stats) + '&'+ 'chxp=1,50.0|4,50.0&'+ 'chxs=0,FFFFFF,12,0|1,FFFFFF,12,0|2,FFFFFF,12,0|3,FFFFFF,12,0|4,FFFFFF,14,0&'+ 'chxt=y,y,x,x,x&'+ 'chxl=0:|1|350|1:|N|2:|'+@HourList+'3:|Hour&'+ 'chma=55,120,0,0" alt="" />' EXEC msdb.dbo.sp_send_dbmail @recipients = '[email protected]', @subject ='Daily number of executions', @body = @html, @body_format = 'HTML' Chart parameters reference: chco    series colors chd        series data chds    scale 
format chdl    chart legend chf        background fills chg        grid line chls    line style chm        line fill chs        chart size cht        chart type chts    chart style chtt    chart title chxp    axis label positions chxs    axis label styles chxt    axis tick mark styles chxl    axis labels chma    chart margins If you don’t mind to get your charts as an email attachment, you can enjoy the Java based Google Charts which are even easier to configure, and have much more advanced graphics. In the example below, the sp_send_email procedure uses the parameter @query which will be executed at the time that sp_send_dbemail is executed and the HTML result of this execution will be attached to the email. DECLARE @html varchar(max),@query varchar(max) DECLARE @SeriesDBusers  varchar(800);     SET @SeriesDBusers = ''; SELECT @SeriesDBusers = @SeriesDBusers +  ' ["'+DB_NAME(r.database_id) +'", ' +cast(count(1) as varchar)+'],' FROM sys.dm_exec_requests r GROUP BY DB_NAME(database_id) ORDER BY count(1) desc; SET @SeriesDBusers = SUBSTRING(@SeriesDBusers,1,LEN(@SeriesDBusers)-1) SET @query = ' PRINT '' <html>   <head>     <script type="text/javascript" src="https://www.google.com/jsapi"></script>     <script type="text/javascript">       google.load("visualization", "1", {packages:["corechart"]});        google.setOnLoadCallback(drawChart);       function drawChart() {                      var data = google.visualization.arrayToDataTable([                        ["Database Name", "Active users"],                        '+@SeriesDBusers+'                      ]);                        var options = {                        title: "Active users",                        pieSliceText: "value"                      };                        var chart = new google.visualization.PieChart(document.getElementById("chart_div"));                      chart.draw(data, options);       };     </script>   </head>   <body>     <table>     <tr><td>         <div id="chart_div" style='width: 800px; height: 300px;'></div>         </td></tr>     </table>   </body> </html> ''' EXEC msdb.dbo.sp_send_dbmail    @recipients = '[email protected]',    @subject ='Active users',    @body = @html,    @body_format = 'HTML',    @query = @Query,     @attach_query_result_as_file = 1,     @query_attachment_filename = 'Results.htm' After opening the email attachment in the browser you are getting this kind of report: In fact, the above is not only for database alerts. It can be used for applicative reports if you need high levels of customization that you cannot achieve using standard methods like SSRS. If you need more information on how to customize the charts, you can try the following: Image Based Charts wizard https://google-developers.appspot.com/chart/image/docs/chart_wizard  Live Image Charts Playground https://google-developers.appspot.com/chart/image/docs/chart_playground Image Based Charts Parameters List https://google-developers.appspot.com/chart/image/docs/chart_params Java Script Charts Playground https://code.google.com/apis/ajax/playground/?type=visualization Use the above examples as a starting point for your procedures and I’d be more than happy to hear of your implementations of the above techniques. Yours, Maria

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • MSDeploy - possible to call setAcl on multiple destinations in one go?

    - by growse
    I'm building a nice little continuous integration environment for our development team, based on TeamCity. It's working rather nicely, as it can build a mix of .NET and PHP projects, and push them to our internal and external platforms. I'm primarily using MsDeploy to push everything to the internal platform, as that's all IIS based. However, there are a number of builds where I need to set directory permissions on the destination directory. I can use the setAcl operator just fine, but that only seems to take a single destination as an argument. Therefore, if I need to alter the permissions on 5 destination directories, I need to call MsDeploy 5 times, which seems like a lot of overhead. Is there a sensible way around this? Reading the documentation, I don't think MsDeploy takes more than a single argument for the setAcl operator, but I could be wrong. Is there a better way for a build server to set multiple directory permissions in one go?

    Read the article

  • Dropbox.py on RHEL 6

    - by Timothy R. Butler
    I'm trying to run a headless install of Dropbox on RHEL 6. The daemon seems to be running, but when I try to use Dropbox's associated dropbox.py tool to control the daemon, it fails to run with the following error: Traceback (most recent call last): File "./dropbox.py", line 26, in <module> import locale File "/usr/lib64/python2.6/locale.py", line 202, in <module> import re, operator ImportError: /home/dropbox/.dropbox-dist/operator.so: undefined symbol: _PyUnicodeUCS2_AsDefaultEncodedString I'm running the current RHEL build of Python 2.6: root@cedar [/home/dropbox/.dropbox-dist]# rpm -qv python python-2.6.6-29.el6_3.3.x86_64 (I'm not sure if this would be better suited to Stack Overflow since it is on the verge of being a programming issue, but since I'm trying to use a program straight from Dropbox, I placed it here.)
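
    That undefined symbol is the classic narrow/wide Unicode mismatch: the operator.so shipped in .dropbox-dist was built against a UCS2 (narrow) Python 2, while RHEL 6's system python is a UCS4 (wide) build. A quick way to confirm which build you are running (sketch):

```python
# 65535 indicates a UCS2 (narrow) build, 1114111 a UCS4 (wide) build.
import sys
print(sys.maxunicode)
```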

    Read the article

  • Mutt: apply command to all tagged messages

    - by mrucci
    From the mutt manual: Once you have tagged the desired messages, you can use the tag-prefix operator, which is the ; (semicolon) key by default. When the tag-prefix operator is used, the next operation will be applied to all tagged messages if that operation can be used in that manner. But it seems that I can only execute commands that are already bound to a specific keyboard shortcut. For example I can use ;d to delete all selected messages. What if I want to apply an "unbound" command (such as purge-message)? I have also tried using something based on :exec tag-prefix or :push tag-prefix without success.
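
    One workaround is to give the unbound function its own binding, since tag-prefix composes with any function inside a macro. A sketch for .muttrc (the key choice here is arbitrary):

```
# Apply purge-message to every tagged message with Esc-p
macro index \ep "<tag-prefix><purge-message>" "purge all tagged messages"
```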

    Read the article

  • How do I run multiple commands on one line in Powershell?

    - by David
    In cmd prompt, you can run two commands on one line like so: ipconfig /release & ipconfig /renew When I run this command in PowerShell, I get: Ampersand not allowed. The & operator is reserved for future use Does PowerShell have an operator that allows me to quickly produce the equivalent of & in cmd prompt? Any method of running two commands in one line will do. I know that I can make a script, but I'm looking for something a little more off the cuff.
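
    The direct equivalent is the semicolon, PowerShell's statement separator; like cmd's single &, it runs the second command regardless of how the first one fared. For example:

```powershell
# ';' separates statements on one line in PowerShell
ipconfig /release; ipconfig /renew
```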

    Read the article

  • web grid server pagination triggers multiple controller calls when changing page

    - by Thomas Scattolin
    When I server-filter on "au" my web grid and change page, multiple call to the controller are done : the first with 0 filtering, the second with "a" filtering, the third with "au" filtering. My table load huge data so the first call is longer than others. I see the grid displaying firstly the third call result, then the second, and finally the first call (this order correspond to the response time of my controller due to filter parameter) Why are all that controller call made ? Can't just my controller be called once with my total filter "au" ? What should I do ? Here is my grid : $("#" + gridId).kendoGrid({ selectable: "row", pageable: true, filterable:true, scrollable : true, //scrollable: { // virtual: true //false // Bug : Génère un affichage multiple... //}, navigatable: true, groupable: true, sortable: { mode: "multiple", // enables multi-column sorting allowUnsort: true }, dataSource: { type: "json", serverPaging: true, serverSorting: true, serverFiltering: true, serverGrouping:false, // Ne fonctionne pas... pageSize: '@ViewBag.Pagination', transport: { read: { url: Procvalue + "/LOV", type: "POST", dataType: "json", contentType: "application/json; charset=utf-8" }, parameterMap: function (options, type) { // Mise à jour du format d'envoi des paramètres // pour qu'ils puissent être correctement interprétés côté serveur. // Construction du paramètre sort : if (options.sort != null) { var sort = options.sort; var sort2 = ""; for (i = 0; i < sort.length; i++) { sort2 = sort2 + sort[i].field + '-' + sort[i].dir + '~'; } options.sort = sort2; } if (options.group != null) { var group = options.group; var group2 = ""; for (i = 0; i < group.length; i++) { group2 = group2 + group[i].field + '-' + group[i].dir + '~'; } options.group = group2; } if (options.filter != null) { var filter = options.filter.filters; var filter2 = ""; for (i = 0; i < filter.length; i++) { // Vérification si type colonne == string. // Parcours des colonnes pour trouver celle qui a le même nom de champ. var type = ""; for (j = 0 ; j < colonnes.length ; j++) { if (colonnes[j].champ == filter[i].field) { type = colonnes[j].type; break; } } if (filter2.length == 0) { if (type == "string") { // Avec '' autour de la valeur. filter2 = filter2 + filter[i].field + '~' + filter[i].operator + "~'" + filter[i].value + "'"; } else { // Sans '' autour de la valeur. filter2 = filter2 + filter[i].field + '~' + filter[i].operator + "~" + filter[i].value; } } else { if (type == "string") { // Avec '' autour de la valeur. filter2 = filter2 + '~' + options.filter.logic + '~' + filter[i].field + '~' + filter[i].operator + "~'" + filter[i].value + "'"; }else{ filter2 = filter2 + '~' + options.filter.logic + '~' + filter[i].field + '~' + filter[i].operator + "~" + filter[i].value; } } } options.filter = filter2; } var json = JSON.stringify(options); return json; } }, schema: { data: function (data) { return eval(data.data.Data); }, total: function (data) { return eval(data.data.Total); } }, filter: { logic: "or", filters:filtre(valeur) } }, columns: getColonnes(colonnes) }); Here is my controller : [HttpPost] public ActionResult LOV([DataSourceRequest] DataSourceRequest request) { return Json(CProduitsManager.GetProduits().ToDataSourceResult(request)); }

    Read the article

  • boost::serialization of mutual pointers

    - by KneLL
    First, please take a look at these code: class Key; class Door; class Key { public: int id; Door *pDoor; Key() : id(0), pDoor(NULL) {} private: friend class boost::serialization::access; template <typename A> void serialize(A &ar, const unsigned int ver) { ar & BOOST_SERIALIZATION_NVP(id) & BOOST_SERIALIZATION_NVP(pDoor); } }; class Door { public: int id; Key *pKey; Door() : id(0), pKey(NULL) {} private: friend class boost::serialization::access; template <typename A> void serialize(A &ar, const unsigned int ver) { ar & BOOST_SERIALIZATION_NVP(id) & BOOST_SERIALIZATION_NVP(pKey); } }; BOOST_CLASS_TRACKING(Key, track_selectively); BOOST_CLASS_TRACKING(Door, track_selectively); int main() { Key k1, k_in; Door d1, d_in; k1.id = 1; d1.id = 2; k1.pDoor = &d1; d1.pKey = &k1; // Save data { wofstream f1("test.xml"); boost::archive::xml_woarchive ar1(f1); // !!!!! (1) const Key *pK = &k1; const Door *pD = &d1; ar1 << BOOST_SERIALIZATION_NVP(pK) << BOOST_SERIALIZATION_NVP(pD); } // Load data { wifstream i1("test.xml"); boost::archive::xml_wiarchive ar1(i1); // !!!!! (2) A *pK = &k_in; B *pD = &d_in; // (2.1) //ar1 >> BOOST_SERIALIZATION_NVP(k_in) >> BOOST_SERIALIZATION_NVP(d_in); // (2.2) ar1 >> BOOST_SERIALIZATION_NVP(pK) >> BOOST_SERIALIZATION_NVP(pD); } } The first (1) is a simple question - is it possible to pass objects to archive without pointers? If simply pass objects 'as is' that boost throws exception about duplicated pointers. But I'm confused of creating pointers to save objects. The second (2) is a real trouble. If comment out string after (2.1) then boost will corectly load a first Key object (and init internal Door pointer pDoor), but will not init a second Door (d_in) object. After this I have an inited *k_in* object with valid pointer to Door and empty *d_in* object. If use string (2.2) then boost will create two Key and Door objects somewhere in memory and save addresses in pointers. But I want to have two objects *k_in* and *d_in*. So, if I copy a values of memory objects to local variables then I store only addresses, for example, I can write code after (2.2): d_in.id = pD->id; d_in.pKey = pD->pKey; But in this case I store only a pointer and memory object remains in memory and I cannot delete it, because *d_in.pKey* will be unvalid. And I cannot perform a deep copy with operator=(), because if I write code like this: Key &operator==(const Key &k) { if (this != &k) { id = k.id; // call to Door::operator=() that calls *pKey = *d.pKey and so on *pDoor = *k.pDoor; } return *this; } then I will get a something like recursion of operator=()s of Key and Door. How to implement proper serialization of such pointers?

    Read the article

  • boost::function & boost::lambda again

    - by John Dibling
    Follow-up to post: http://stackoverflow.com/questions/2978096/using-width-precision-specifiers-with-boostformat I'm trying to use boost::function to create a function that uses lambdas to format a string with boost::format. Ultimately what I'm trying to achieve is using width & precision specifiers for strings with format. boost::format does not support the use of the * width & precision specifiers, as indicated in the docs: Width or precision set to asterisk (*) are used by printf to read this field from an argument. e.g. printf("%1$d:%2$.*3$d:%4$.*3$d\n", hour, min, precision, sec); This class does not support this mechanism for now. so such precision or width fields are quietly ignored by the parsing. so I'm trying to find other ways to accomplish the same goal. Here is what I have so far, which isn't working: #include <string> #include <boost\function.hpp> #include <boost\lambda\lambda.hpp> #include <iostream> #include <boost\format.hpp> #include <iomanip> #include <boost\bind.hpp> int main() { using namespace boost::lambda; using namespace std; boost::function<std::string(int, std::string)> f = (boost::format("%s") % boost::io::group(setw(_1*2), setprecision(_2*2), _3)).str(); std::string s = (boost::format("%s") % f(15, "Hello")).str(); return 0; } This generates many compiler errors: 1>------ Build started: Project: hacks, Configuration: Debug x64 ------ 1>Compiling... 1>main.cpp 1>.\main.cpp(15) : error C2872: '_1' : ambiguous symbol 1> could be 'D:\Program Files (x86)\boost\boost_1_42\boost/lambda/core.hpp(69) : boost::lambda::placeholder1_type &boost::lambda::`anonymous-namespace'::_1' 1> or 'D:\Program Files (x86)\boost\boost_1_42\boost/bind/placeholders.hpp(43) : boost::arg<I> `anonymous-namespace'::_1' 1> with 1> [ 1> I=1 1> ] 1>.\main.cpp(15) : error C2664: 'std::setw' : cannot convert parameter 1 from 'boost::lambda::placeholder1_type' to 'std::streamsize' 1> No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called 1>.\main.cpp(15) : error C2872: '_2' : ambiguous symbol 1> could be 'D:\Program Files (x86)\boost\boost_1_42\boost/lambda/core.hpp(70) : boost::lambda::placeholder2_type &boost::lambda::`anonymous-namespace'::_2' 1> or 'D:\Program Files (x86)\boost\boost_1_42\boost/bind/placeholders.hpp(44) : boost::arg<I> `anonymous-namespace'::_2' 1> with 1> [ 1> I=2 1> ] 1>.\main.cpp(15) : error C2664: 'std::setprecision' : cannot convert parameter 1 from 'boost::lambda::placeholder2_type' to 'std::streamsize' 1> No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called 1>.\main.cpp(15) : error C2872: '_3' : ambiguous symbol 1> could be 'D:\Program Files (x86)\boost\boost_1_42\boost/lambda/core.hpp(71) : boost::lambda::placeholder3_type &boost::lambda::`anonymous-namespace'::_3' 1> or 'D:\Program Files (x86)\boost\boost_1_42\boost/bind/placeholders.hpp(45) : boost::arg<I> `anonymous-namespace'::_3' 1> with 1> [ 1> I=3 1> ] 1>.\main.cpp(15) : error C2660: 'boost::io::group' : function does not take 3 arguments 1>.\main.cpp(15) : error C2228: left of '.str' must have class/struct/union 1>Build log was saved at "file://c:\Users\john\Documents\Visual Studio 2005\Projects\hacks\x64\Debug\BuildLog.htm" 1>hacks - 7 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== My fundamental understanding of boost's lambdas and functions is probably lacking. How can I get this to work?
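
    The _1/_2/_3 ambiguity comes from having both boost::lambda's and boost::bind's placeholders in scope, and setw/setprecision cannot take lambda placeholders anyway. Since boost::format ignores the * specifiers, one workaround is a plain helper that applies the runtime width with a stream and feeds the padded string to the format call. A sketch (the helper name and sample width are my own):

```cpp
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>
#include <boost/format.hpp>

// Pads a string to a runtime width, as printf's "%*s" would.
// (Stream precision does not truncate strings; use substr for that.)
std::string pad(int width, const std::string& s)
{
    std::ostringstream oss;
    oss << std::setw(width) << s;
    return oss.str();
}

int main()
{
    std::string line = (boost::format("[%s]") % pad(15, "Hello")).str();
    std::cout << line << '\n';    // [          Hello]
    return 0;
}
```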

    Read the article

  • Const references when dereferencing iterator on set, starting from Visual Studio 2010

    - by Patrick
    Starting from Visual Studio 2010, iterating over a set seems to return an iterator that dereferences the data as 'const data' instead of non-const. The following code is an example of something that does compile on Visual Studio 2005, but not on 2010 (this is an artificial example, but clearly illustrates the problem we found on our own code). In this example, I have a class that stores a position together with a temperature. I define comparison operators (not all them, just enough to illustrate the problem) that only use the position, not the temperature. The point is that for me two instances are identical if the position is identical; I don't care about the temperature. #include <set> class DataPoint { public: DataPoint (int x, int y) : m_x(x), m_y(y), m_temperature(0) {} void setTemperature(double t) {m_temperature = t;} bool operator<(const DataPoint& rhs) const { if (m_x==rhs.m_x) return m_y<rhs.m_y; else return m_x<rhs.m_x; } bool operator==(const DataPoint& rhs) const { if (m_x!=rhs.m_x) return false; if (m_y!=rhs.m_y) return false; return true; } private: int m_x; int m_y; double m_temperature; }; typedef std::set<DataPoint> DataPointCollection; void main(void) { DataPointCollection points; points.insert (DataPoint(1,1)); points.insert (DataPoint(1,1)); points.insert (DataPoint(1,2)); points.insert (DataPoint(1,3)); points.insert (DataPoint(1,1)); for (DataPointCollection::iterator it=points.begin();it!=points.end();++it) { DataPoint &point = *it; point.setTemperature(10); } } In the main routine I have a set to which I add some points. To check the correctness of the comparison operator, I add data points with the same position multiple times. When writing the contents of the set, I can clearly see there are only 3 points in the set. The for-loop loops over the set, and sets the temperature. Logically this is allowed, since the temperature is not used in the comparison operators. This code compiles correctly in Visual Studio 2005, but gives compilation errors in Visual Studio 2010 on the following line (in the for-loop): DataPoint &point = *it; The error given is that it can't assign a "const DataPoint" to a [non-const] "DataPoint &". It seems that you have no decent (= non-dirty) way of writing this code in VS2010 if you have a comparison operator that only compares parts of the data members. Possible solutions are: Adding a const-cast to the line where it gives an error Making temperature mutable and making setTemperature a const method But to me both solutions seem rather 'dirty'. It looks like the C++ standards committee overlooked this situation. Or not? What are clean solutions to solve this problem? Did some of you encounter this same problem and how did you solve it? Patrick
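
    For what it's worth, VS2010 is enforcing what the standard requires: elements of a std::set are const through the iterator, because mutating them could silently break the ordering. The second option the asker lists (a mutable member plus a const setter) is the usual standards-conforming escape hatch when the field genuinely isn't part of the key; a trimmed sketch:

```cpp
#include <set>

class DataPoint
{
public:
    DataPoint(int x, int y) : m_x(x), m_y(y), m_temperature(0) {}

    // const, because it does not touch the members used by operator<
    void setTemperature(double t) const { m_temperature = t; }

    bool operator<(const DataPoint& rhs) const
    {
        return m_x == rhs.m_x ? m_y < rhs.m_y : m_x < rhs.m_x;
    }

private:
    int m_x;
    int m_y;
    mutable double m_temperature;   // not part of the ordering key
};

int main()
{
    std::set<DataPoint> points;
    points.insert(DataPoint(1, 1));
    points.insert(DataPoint(1, 2));

    for (std::set<DataPoint>::const_iterator it = points.begin();
         it != points.end(); ++it)
    {
        it->setTemperature(10);     // fine: const call, mutable member
    }
    return 0;
}
```

    If mutable feels too dirty, storing the temperature outside the key (for example in a std::map keyed on the position) is the other clean route.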

    Read the article

  • Issue with translating a delegate function from c# to vb.net for use with Google OAuth 2

    - by Jeremy
I've been trying to translate a Google OAuth 2 example from C# to Vb.net for a co-worker's project. I'm having no end of issues translating the following methods: private OAuth2Authenticator<WebServerClient> CreateAuthenticator() { // Register the authenticator. var provider = new WebServerClient(GoogleAuthenticationServer.Description); provider.ClientIdentifier = ClientCredentials.ClientID; provider.ClientSecret = ClientCredentials.ClientSecret; var authenticator = new OAuth2Authenticator<WebServerClient>(provider, GetAuthorization) { NoCaching = true }; return authenticator; } private IAuthorizationState GetAuthorization(WebServerClient client) { // If this user is already authenticated, then just return the auth state. IAuthorizationState state = AuthState; if (state != null) { return state; } // Check if an authorization request already is in progress. state = client.ProcessUserAuthorization(new HttpRequestInfo(HttpContext.Current.Request)); if (state != null && (!string.IsNullOrEmpty(state.AccessToken) || !string.IsNullOrEmpty(state.RefreshToken))) { // Store and return the credentials. HttpContext.Current.Session["AUTH_STATE"] = _state = state; return state; } // Otherwise do a new authorization request. string scope = TasksService.Scopes.TasksReadonly.GetStringValue(); OutgoingWebResponse response = client.PrepareRequestUserAuthorization(new[] { scope }); response.Send(); // Will throw a ThreadAbortException to prevent sending another response. return null; } The main issue is this line: var authenticator = new OAuth2Authenticator<WebServerClient>(provider, GetAuthorization) { NoCaching = true }; The method signature for this particular line reads as follows: Public Sub New(tokenProvider As TClient, authProvider As System.Func(Of TClient, DotNetOpenAuth.OAuth2.IAuthorizationState)) My understanding of delegate functions in VB.net isn't the greatest. However, I have read over all of the MSDN documentation and other relevant resources on the web, but I'm still stuck on how to translate this particular line. So far all of my attempts have resulted in either a cast error (see below) or no call to GetAuthorization. The Code (vb.net on .net 3.5) Private Function CreateAuthenticator() As OAuth2Authenticator(Of WebServerClient) ' Register the authenticator. Dim client As New WebServerClient(GoogleAuthenticationServer.Description, oauth.ClientID, oauth.ClientSecret) Dim authDelegate As Func(Of WebServerClient, IAuthorizationState) = AddressOf GetAuthorization Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, authDelegate) With {.NoCaching = True} 'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, GetAuthorization(client)) With {.NoCaching = True} 'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, New Func(Of WebServerClient, IAuthorizationState)(Function(c) GetAuthorization(c))) With {.NoCaching = True} 'Dim authenticator = New OAuth2Authenticator(Of WebServerClient)(client, New Func(Of WebServerClient, IAuthorizationState)(AddressOf GetAuthorization)) With {.NoCaching = True} Return authenticator End Function Private Function GetAuthorization(arg As WebServerClient) As IAuthorizationState ' If this user is already authenticated, then just return the auth state. Dim state As IAuthorizationState = AuthState If (Not state Is Nothing) Then Return state End If ' Check if an authorization request already is in progress.
state = arg.ProcessUserAuthorization(New HttpRequestInfo(HttpContext.Current.Request)) If (state IsNot Nothing) Then If ((String.IsNullOrEmpty(state.AccessToken) = False Or String.IsNullOrEmpty(state.RefreshToken) = False)) Then ' Store Credentials HttpContext.Current.Session("AUTH_STATE") = state _state = state Return state End If End If ' Otherwise do a new authorization request. Dim scope As String = AnalyticsService.Scopes.AnalyticsReadonly.GetStringValue() Dim _response As OutgoingWebResponse = arg.PrepareRequestUserAuthorization(New String() {scope}) ' Add Offline Access and forced Approval _response.Headers("location") += "&access_type=offline&approval_prompt=force" _response.Send() ' Will throw a ThreadAbortException to prevent sending another response. Return Nothing End Function The Cast Error Server Error in '/' Application. Unable to cast object of type 'DotNetOpenAuth.OAuth2.AuthorizationState' to type 'System.Func`2[DotNetOpenAuth.OAuth2.WebServerClient,DotNetOpenAuth.OAuth2.IAuthorizationState]'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidCastException: Unable to cast object of type 'DotNetOpenAuth.OAuth2.AuthorizationState' to type 'System.Func`2[DotNetOpenAuth.OAuth2.WebServerClient,DotNetOpenAuth.OAuth2.IAuthorizationState]'. I've spent the better part of a day on this, and it's starting to drive me nuts. Help is much appreciated.

    Read the article

  • How to get the value of an item from database and set it as the spinner value

    - by kulPrins
    I have a database and the database populates a listview. when i long press a list item from the listview i get a context menu where i have the option to edit. When the edit option is clicked i open an activity and get all the values of the respective fields from the database. Now, here I want to get a value from the database and show it in the spinner. the spinner already has these values and is getting populated from the database... I have tried the following, but i get an error saying I can't cast a SimpleCursorAdapter to an ArrayAdapter.. String osub = cursor.getString(Database.INDEX_SUBJECT); Cursor cs = mDBS.querySub(osub); String subs = cs.getString(DatabaseSub.INDEX_SSUBJECT); if(cs!=null){ ArrayAdapter<String> myAdap = (ArrayAdapter<String>) subSpinner.getAdapter(); //cast to an ArrayAdapter int spinnerPosition = myAdap.getPosition(subs); //set the default according to value subSpinner.setSelection(spinnerPosition); Please tell me how to do this.. Thank you.. I am relatively new so please let me know if I am overlooking something. Thanks.. EDIT querySub method public Cursor querySub(String sub) throws SQLException { Cursor cursor = mDatabase.query(true, DATABASE_TABLE, sAllColumns, "ssub like " + "'" + sub + "'", null, null, null, null, null); if (cursor.moveToNext()) { return cursor; } cursor.moveToFirst(); return cursor; } Error Log 06-05 12:11:14.144: E/AndroidRuntime(775): FATAL EXCEPTION: main 06-05 12:11:14.144: E/AndroidRuntime(775): java.lang.RuntimeException: Unable to start activity ComponentInfo{an.droid.kit/an.droid.kit.DetailsActivity}: java.lang.ClassCastException: android.widget.SimpleCursorAdapter cannot be cast to android.widget.ArrayAdapter 06-05 12:11:14.144: E/AndroidRuntime(775): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1956) 06-05 12:11:14.144: E/AndroidRuntime(775): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1981) 06-05 12:11:14.144: E/AndroidRuntime(775): at android.app.ActivityThread.access$600(ActivityThread.java:123) 06-05 12:11:14.144: E/AndroidRuntime(775): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1147) 06-05 12:11:14.144: E/AndroidRuntime(775): at android.os.Handler.dispatchMessage(Handler.java:99) 06-05 12:11:14.144: E/AndroidRuntime(775): at android.os.Looper.loop(Looper.java:137) 06-05 12:11:14.144: E/AndroidRuntime(775): at android.app.ActivityThread.main(ActivityThread.java:4424) 06-05 12:11:14.144: E/AndroidRuntime(775): at java.lang.reflect.Method.invokeNative(Native Method) 06-05 12:11:14.144: E/AndroidRuntime(775): at java.lang.reflect.Method.invoke(Method.java:511) 06-05 12:11:14.144: E/AndroidRuntime(775): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784) 06-05 12:11:14.144: E/AndroidRuntime(775): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551) 06-05 12:11:14.144: E/AndroidRuntime(775): at dalvik.system.NativeStart.main(Native Method) 06-05 12:11:14.144: E/AndroidRuntime(775): Caused by: java.lang.ClassCastException: android.widget.SimpleCursorAdapter cannot be cast to android.widget.ArrayAdapter 06-05 12:11:14.144: E/AndroidRuntime(775): at an.droid.kit.DetailsActivity.dbToUI(DetailsActivity.java:217) 06-05 12:11:14.144: E/AndroidRuntime(775): at an.droid.kit.DetailsActivity.onCreate(DetailsActivity.java:104) 06-05 12:11:14.144: E/AndroidRuntime(775): at android.app.Activity.performCreate(Activity.java:4465) 06-05 12:11:14.144: E/AndroidRuntime(775): at 
android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1049) 06-05 12:11:14.144: E/AndroidRuntime(775): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1920) 06-05 12:11:14.144: E/AndroidRuntime(775): ... 11 more

    Read the article

  • Critique my heap debugger

    - by FredOverflow
    I wrote the following heap debugger in order to demonstrate memory leaks, double deletes and wrong forms of deletes (i.e. trying to delete an array with delete p instead of delete[] p) to beginning programmers. I would love to get some feedback on that from strong C++ programmers because I have never done this before and I'm sure I've done some stupid mistakes. Thanks! #include <cstdlib> #include <iostream> #include <new> namespace { const int ALIGNMENT = 16; const char* const ERR = "*** ERROR: "; int counter = 0; struct heap_debugger { heap_debugger() { std::cerr << "*** heap debugger started\n"; } ~heap_debugger() { std::cerr << "*** heap debugger shutting down\n"; if (counter > 0) { std::cerr << ERR << "failed to release memory " << counter << " times\n"; } else if (counter < 0) { std::cerr << ERR << (-counter) << " double deletes detected\n"; } } } instance; void* allocate(size_t size, const char* kind_of_memory, size_t token) throw (std::bad_alloc) { void* raw = malloc(size + ALIGNMENT); if (raw == 0) throw std::bad_alloc(); *static_cast<size_t*>(raw) = token; void* payload = static_cast<char*>(raw) + ALIGNMENT; ++counter; std::cerr << "*** allocated " << kind_of_memory << " at " << payload << " (" << size << " bytes)\n"; return payload; } void release(void* payload, const char* kind_of_memory, size_t correct_token, size_t wrong_token) throw () { if (payload == 0) return; std::cerr << "*** releasing " << kind_of_memory << " at " << payload << '\n'; --counter; void* raw = static_cast<char*>(payload) - ALIGNMENT; size_t* token = static_cast<size_t*>(raw); if (*token == correct_token) { *token = 0xDEADBEEF; free(raw); } else if (*token == wrong_token) { *token = 0x177E6A7; std::cerr << ERR << "wrong form of delete\n"; } else { std::cerr << ERR << "double delete\n"; } } } void* operator new(size_t size) throw (std::bad_alloc) { return allocate(size, "non-array memory", 0x5AFE6A8D); } void* operator new[](size_t size) throw (std::bad_alloc) { return allocate(size, " array memory", 0x5AFE6A8E); } void operator delete(void* payload) throw () { release(payload, "non-array memory", 0x5AFE6A8D, 0x5AFE6A8E); } void operator delete[](void* payload) throw () { release(payload, " array memory", 0x5AFE6A8E, 0x5AFE6A8D); }
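A small driver of the sort one might compile together with the debugger above to exercise its reports — this driver is not part of the original post, and the particular allocations are only illustrative:

// Link this together with the heap debugger above; the replaced global
// operator new/delete route every allocation and release through it.
int main()
{
    int* leaked = new int(42);   // never deleted: counted as outstanding at shutdown
    (void)leaked;

    int* numbers = new int[8];
    delete numbers;              // wrong form of delete: reported by release()

    int* p = new int(7);
    delete p;                    // correct pairing: no complaint
    // delete p;                 // uncommenting this would trigger the double-delete report

    return 0;
}

Running it should print the allocation and release traces plus the "wrong form of delete" message, and the heap_debugger destructor then reports the allocation that was never released.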

    Read the article

  • Purpose of overloading operators in C++?

    - by Geo Drawkcab
    What is the main purpose of overloading operators in C++? In the code below, << and >> are overloaded; what is the advantage to doing so? #include <iostream> #include <string> using namespace std; class book { string name,gvari; double cost; int year; public: book(){}; book(string a, string b, double c, int d) { a=name;b=gvari;c=cost;d=year; } ~book() {} double setprice(double a) { return a=cost; } friend ostream& operator <<(ostream& , book&); void printbook(){ cout<<"wignis saxeli "<<name<<endl; cout<<"wignis avtori "<<gvari<<endl; cout<<"girebuleba "<<cost<<endl; cout<<"weli "<<year<<endl; } }; ostream& operator <<(ostream& out, book& a){ out<<"wignis saxeli "<<a.name<<endl; out<<"wignis avtori "<<a.gvari<<endl; out<<"girebuleba "<<a.cost<<endl; out<<"weli "<<a.year<<endl; return out; } class library_card : public book { string nomeri; int raod; public: library_card(){}; library_card( string a, int b){a=nomeri;b=raod;} ~library_card() {}; void printcard(){ cout<<"katalogis nomeri "<<nomeri<<endl; cout<<"gacemis raodenoba "<<raod<<endl; } friend ostream& operator <<(ostream& , library_card&); }; ostream& operator <<(ostream& out, library_card& b) { out<<"katalogis nomeri "<<b.nomeri<<endl; out<<"gacemis raodenoba "<<b.raod<<endl; return out; } int main() { book A("robizon kruno","giorgi",15,1992); library_card B("910CPP",123); A.printbook(); B.printbook(); A.setprice(15); B.printbook(); system("pause"); return 0; }
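To make the advantage concrete, here is a minimal, self-contained sketch (not the poster's class; the Book type and its fields are illustrative only): a single operator<< overload lets the type work with any std::ostream — cout, files, string streams — and chain like the built-in types, instead of being tied to a printbook() member that can only write to cout.

#include <iostream>
#include <sstream>
#include <string>

// Illustration of what overloading operator<< buys you: one overload makes
// the type usable with every std::ostream and composable in chained output.
struct Book
{
    std::string name;
    double cost;
};

std::ostream& operator<<(std::ostream& out, const Book& b)
{
    return out << b.name << " (" << b.cost << ")";
}

int main()
{
    Book a = { "Robinson Crusoe", 15.0 };
    Book b = { "Treasure Island", 12.5 };

    std::cout << a << '\n' << b << '\n';   // chains exactly like ints and strings

    std::ostringstream buffer;             // the same overload works for any stream
    buffer << a;
    std::cout << buffer.str() << '\n';
    return 0;
}

The same idea applies to operator>> for parsing input, and to arithmetic or comparison operators when the type genuinely behaves like a value.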

    Read the article

  • Adding to database. No repeat on refresh

    - by kevstarlive
    I have this code: Episode.php <?$feedback = new feedback; $articles = $feedback->fetch_all(); if (isset($_POST['name'], $_POST['post'])) { $cast = $_GET['id']; $name = $_POST['name']; $email = $_POST['email']; $post = nl2br ($_POST['post']); $ipaddress = $_SERVER['REMOTE_ADDR']; if (empty($name) or empty($post)) { $error = 'All Fields Are Required!'; }else{ $query = $pdo->prepare('INSERT INTO comments (cast, name, email, post, ipaddress) VALUES(?, ?, ?, ?, ?)'); $query->bindValue(1, $cast); $query->bindValue(2, $name); $query->bindValue(3, $email); $query->bindValue(4, $post); $query->bindValue(5, $ipaddress); $query->execute(); } }?> <div align="center"> <strong>Give us your feedback?</strong><br /><br /> <?php if (isset($error)) { ?> <small style="color:#aa0000;"><?php echo $error; ?></small><br /><br /> <?php } ?> <form action="episode.php?id=<?php echo $data['cast_id']; ?>" method="post" autocomplete="off" enctype="multipart/form-data"> <input type="text" name="name" placeholder="Name" /> / <input type="text" name="email" placeholder="Email" /><small style="color:#aa0000;">*</small><br /><br /> <textarea rows="10" cols="50" name="post" placeholder="Comment"></textarea><br /><br /> <input type="submit" onclick="myFunction()" value="Add Comment" /> <br /><br /> <small style="color:#aa0000;">* <b>Email will not be displayed publicly</b></small><br /> </form> </div> Include.php class feedback { public function fetch_all(){ global $pdo; $query = $pdo->prepare("SELECT * FROM comments"); $query->bindValue(1, $cast); $query->execute(); return $query->fetchAll(); } } This code updates to the database as it is suppose to. But after submission it reloads the current page as mentioned in the form action. But when I refresh the page to see the comment being added it asks to re submit. If I hit submit then the comment adds again. How can I stop this from happening? Maybe I could hide the comment box and display a thank you message but that would not stop a repeat entry. Please help. Thank you. Kev

    Read the article

  • Dynamic JSON Parsing in .NET with JsonValue

    - by Rick Strahl
    So System.Json has been around for a while in Silverlight, but it's relatively new for the desktop .NET framework and now moving into the lime-light with the pending release of ASP.NET Web API which is bringing a ton of attention to server side JSON usage. The JsonValue, JsonObject and JsonArray objects are going to be pretty useful for Web API applications as they allow you dynamically create and parse JSON values without explicit .NET types to serialize from or into. But even more so I think JsonValue et al. are going to be very useful when consuming JSON APIs from various services. Yes I know C# is strongly typed, why in the world would you want to use dynamic values? So many times I've needed to retrieve a small morsel of information from a large service JSON response and rather than having to map the entire type structure of what that service returns, JsonValue actually allows me to cherry pick and only work with the values I'm interested in, without having to explicitly create everything up front. With JavaScriptSerializer or DataContractJsonSerializer you always need to have a strong type to de-serialize JSON data into. Wouldn't it be nice if no explicit type was required and you could just parse the JSON directly using a very easy to use object syntax? That's exactly what JsonValue, JsonObject and JsonArray accomplish using a JSON parser and some sweet use of dynamic sauce to make it easy to access in code. Creating JSON on the fly with JsonValue Let's start with creating JSON on the fly. It's super easy to create a dynamic object structure. JsonValue uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JsonValue:[TestMethod] public void JsonValueOutputTest() { // strong type instance var jsonObject = new JsonObject(); // dynamic expando instance you can add properties to dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1977; album.Songs = new JsonArray() as dynamic; dynamic song = new JsonObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JsonObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces proper JSON just as you would expect: {"AlbumName":"Dirty Deeds Done Dirt Cheap","Artist":"AC\/DC","YearReleased":1977,"Songs":[{"SongName":"Dirty Deeds Done Dirt Cheap","SongLength":"4:11"},{"SongName":"Love at First Feel","SongLength":"3:10"}]} The important thing about this code is that there's no explicitly type that is used for holding the values to serialize to JSON. I am essentially creating this value structure on the fly by adding properties and then serialize it to JSON. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JsonObject() to create a new object and immediately cast it to dynamic. JsonObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JsonValue/JsonObject these values are stored in pseudo collections of key value pairs that are exposed as properties through the DynamicObject functionality in .NET. 
The syntax gets a little tedious only if you need to create child objects or arrays that have to be explicitly defined first. Other than that the syntax looks like normal object access sytnax. Always remember though these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the values you create are accessed consistently and without typos in your code. Note that you can also access the JsonValue instance directly and get access to the underlying type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JsonObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JsonValue internally stores properties keys and values in collections and you can iterate over them at runtime. You can also manipulate the collections if you need to to get the object structure to look exactly like you want. Again, if you've used ExpandoObject before JsonObject/Value are very similar in the behavior of the structure. Reading JSON strings into JsonValue The JsonValue structure supports importing JSON via the Parse() and Load() methods which can read JSON data from a string or various streams respectively. Essentially JsonValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can be then referenced using familiar dynamic object syntax. Here's a simple example:[TestMethod] public void JsonValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JsonValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties which is parsed into a JsonValue object and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JsonPrimitive and I have to assign them to their appropriate types first before I can do type comparisons. The dynamic properties will automatically cast to the right type expected as long as the compiler can resolve the type of the assignment or usage. The AreEqual() method oesn't as it expects two object instances and comparing json.Company to "West Wind" is comparing two different types (JsonPrimitive to String) which fails. So the intermediary assignment is required to make the test pass. The JSON structure can be much more complex than this simple example. 
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1977, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B00008BXJ4/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""67280fb8"", ""AlbumName"": ""Echoes, Silence, Patience & Grace"", ""Artist"": ""Foo Fighters"", ""YearReleased"": 2007, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/41mtlesQPVL._SL500_AA280_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B000UFAURI/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B000UFAURI"", ""Songs"": [ { ""AlbumId"": ""67280fb8"", ""SongName"": ""The Pretender"", ""SongLength"": ""4:29"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Let it Die"", ""SongLength"": ""4:05"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Erase/Replay"", ""SongLength"": ""4:13"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; dynamic albums = JsonValue.Parse(jsonString); foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName ); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName);} It's pretty sweet how easy it becomes to parse even complex JSON and then just run through the object using object syntax, yet without an explicit type in the mix. In fact it looks and feels a lot like if you were using JavaScript to parse through this data, doesn't it? And that's the point… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET  Web Api  JSON

    Read the article

  • SQL SERVER – Concat Strings in SQL Server using T-SQL – SQL in Sixty Seconds #035 – Video

    - by pinaldave
Concatenating strings is one of the most common tasks in SQL Server, and every developer comes across it. We have to concatenate strings when, for example, we want to display a person's full name built from the first name and last name. In this video we will see various methods to concatenate strings. SQL Server 2012 has introduced the new CONCAT function, which concatenates strings much more efficiently. When we concatenate values with ‘+’ in SQL Server, we have to make sure that the values are in string format; when we attempt to concatenate an integer, we have to convert it to a string or else it will throw an error. However, with the newly introduced CONCAT function in SQL Server 2012 we do not have to worry about this kind of issue: it concatenates strings and integers without casting or converting them. You can pass various values as parameters to the CONCAT function and it concatenates them together. Let us see how to concatenate the values in Sixty Seconds. Here is the script used in the video. -- Method 1: Concatenating two strings SELECT 'FirstName' + ' ' + 'LastName' AS FullName -- Method 2: Concatenating two Numbers SELECT CAST(1 AS VARCHAR(10)) + ' ' + CAST(2 AS VARCHAR(10)) -- Method 3: Concatenating values of table columns SELECT FirstName + ' ' + LastName AS FullName FROM AdventureWorks2012.Person.Person -- Method 4: SQL Server 2012 CONCAT function SELECT CONCAT('FirstName' , ' ' , 'LastName') AS FullName -- Method 5: SQL Server 2012 CONCAT function SELECT CONCAT('FirstName' , ' ' , 1) AS FullName Related Tips in SQL in Sixty Seconds: SQL SERVER – Concat Function in SQL Server – SQL Concatenation String Function – CONCAT() – A Quick Introduction 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage A Quick Trick about SQL Server 2012 CONCAT Function – PRINT A Quick Trick about SQL Server 2012 CONCAT function What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine knows the Best

    - by pinaldave
This is a follow-up to my earlier article SQL SERVER – Convert IN to EXISTS – Performance Talk; after reading all the comments I received, I felt I could write more on the same subject to clear a few things up. First, let us run the following four queries; all of them return exactly the same resultset. USE AdventureWorks GO -- use of = SELECT * FROM HumanResources.Employee E WHERE E.EmployeeID = ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA WHERE EA.EmployeeID = E.EmployeeID) GO -- use of in SELECT * FROM HumanResources.Employee E WHERE E.EmployeeID IN ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA WHERE EA.EmployeeID = E.EmployeeID) GO -- use of exists SELECT * FROM HumanResources.Employee E WHERE EXISTS ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA WHERE EA.EmployeeID = E.EmployeeID) GO -- Use of Join SELECT * FROM HumanResources.Employee E INNER JOIN HumanResources.EmployeeAddress EA ON E.EmployeeID = EA.EmployeeID GO Let us compare the execution plans of the queries listed above. It is quite clear from the execution plans that in the case of IN, EXISTS and JOIN the SQL Server engine is smart enough to figure out that a Merge Join is the optimal plan for the query and execute it. However, in the case of the Equal (=) operator, SQL Server is forced to use a Nested Loop, testing each result of the inner query against the outer query, which cuts performance. Please note that I am by no means suggesting that Nested Loop is bad or that Merge Join is better; this can very well vary with your machine and the amount of resources available on your computer. When I see the Equal (=) operator used in a query like the above, I usually recommend checking whether IN, EXISTS or JOIN can be used instead. As I said, this can vary from system to system. What is your take on the queries above? I believe the SQL Server engine is usually smart enough to figure out the ideal execution plan and use it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

< Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >