Search Results

Search found 5885 results on 236 pages for 'finally'.


  • Imperative vs. LINQ Performance on WP7

    - by Bil Simser
    Jesse Liberty had a nice post presenting the concepts around imperative, LINQ and fluent programming to populate a listbox. Check out the post as it’s a great example of some foundational things every .NET programmer should know. I was more interested in what IL is generated for the imperative version versus the LINQ version, and how the performance numbers differ. The code at the instruction level is interesting but not surprising. The imperative example, with its list creation and loops, weighs in at about 60 instructions:

        .method private hidebysig instance void ImperativeMethod() cil managed
        {
            .maxstack 3
            .locals init (
                [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
                [1] class [mscorlib]System.Collections.Generic.List`1<int32> inLoop,
                [2] int32 n,
                [3] class [mscorlib]System.Collections.Generic.IEnumerator`1<int32> CS$5$0000,
                [4] bool CS$4$0001)
            L_0000: nop
            L_0001: ldc.i4.1
            L_0002: ldc.i4.s 50
            L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
            L_0009: stloc.0
            L_000a: newobj instance void [mscorlib]System.Collections.Generic.List`1<int32>::.ctor()
            L_000f: stloc.1
            L_0010: nop
            L_0011: ldloc.0
            L_0012: callvirt instance class [mscorlib]System.Collections.Generic.IEnumerator`1<!0> [mscorlib]System.Collections.Generic.IEnumerable`1<int32>::GetEnumerator()
            L_0017: stloc.3
            L_0018: br.s L_003a
            L_001a: ldloc.3
            L_001b: callvirt instance !0 [mscorlib]System.Collections.Generic.IEnumerator`1<int32>::get_Current()
            L_0020: stloc.2
            L_0021: nop
            L_0022: ldloc.2
            L_0023: ldc.i4.5
            L_0024: cgt
            L_0026: ldc.i4.0
            L_0027: ceq
            L_0029: stloc.s CS$4$0001
            L_002b: ldloc.s CS$4$0001
            L_002d: brtrue.s L_0039
            L_002f: ldloc.1
            L_0030: ldloc.2
            L_0031: ldloc.2
            L_0032: mul
            L_0033: callvirt instance void [mscorlib]System.Collections.Generic.List`1<int32>::Add(!0)
            L_0038: nop
            L_0039: nop
            L_003a: ldloc.3
            L_003b: callvirt instance bool [mscorlib]System.Collections.IEnumerator::MoveNext()
            L_0040: stloc.s CS$4$0001
            L_0042: ldloc.s CS$4$0001
            L_0044: brtrue.s L_001a
            L_0046: leave.s L_005a
            L_0048: ldloc.3
            L_0049: ldnull
            L_004a: ceq
            L_004c: stloc.s CS$4$0001
            L_004e: ldloc.s CS$4$0001
            L_0050: brtrue.s L_0059
            L_0052: ldloc.3
            L_0053: callvirt instance void [mscorlib]System.IDisposable::Dispose()
            L_0058: nop
            L_0059: endfinally
            L_005a: nop
            L_005b: ldarg.0
            L_005c: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB1
            L_0061: ldloc.1
            L_0062: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
            L_0067: nop
            L_0068: ret
            .try L_0018 to L_0048 finally handler L_0048 to L_005a
        }

    Compare that to the IL generated for the LINQ version, which has about half the instructions and just gets the job done, no fluff:

        .method private hidebysig instance void LINQMethod() cil managed
        {
            .maxstack 4
            .locals init (
                [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
                [1] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> queryResult)
            L_0000: nop
            L_0001: ldc.i4.1
            L_0002: ldc.i4.s 50
            L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
            L_0009: stloc.0
            L_000a: ldloc.0
            L_000b: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_0010: brtrue.s L_0025
            L_0012: ldnull
            L_0013: ldftn bool PerfTest.MainPage::<LINQProgramming>b__4(int32)
            L_0019: newobj instance void [System.Core]System.Func`2<int32, bool>::.ctor(object, native int)
            L_001e: stsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_0023: br.s L_0025
            L_0025: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_002a: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0> [System.Core]System.Linq.Enumerable::Where<int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, bool>)
            L_002f: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_0034: brtrue.s L_0049
            L_0036: ldnull
            L_0037: ldftn int32 PerfTest.MainPage::<LINQProgramming>b__5(int32)
            L_003d: newobj instance void [System.Core]System.Func`2<int32, int32>::.ctor(object, native int)
            L_0042: stsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_0047: br.s L_0049
            L_0049: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_004e: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!1> [System.Core]System.Linq.Enumerable::Select<int32, int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, !!1>)
            L_0053: stloc.1
            L_0054: ldarg.0
            L_0055: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB2
            L_005a: ldloc.1
            L_005b: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
            L_0060: nop
            L_0061: ret
        }

    Again, not surprising here but a good indicator that you should consider using LINQ where possible.
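    For reference, here is a hedged sketch of the kind of C# that produces IL along these lines. It is not the original source: the 50-item range, the n > 5 filter and the n * n projection are read off the IL above, and the LB1/LB2 list boxes are assumptions taken from the ldfld instructions.

        using System.Collections.Generic;
        using System.Linq;
        using System.Windows.Controls;

        public partial class MainPage
        {
            private ListBox LB1;   // assumed: list boxes defined in the page XAML
            private ListBox LB2;

            // Imperative version: explicit list and loop, roughly matching ImperativeMethod above.
            private void ImperativeMethod()
            {
                IEnumerable<int> someData = Enumerable.Range(1, 50);
                var inLoop = new List<int>();
                foreach (var n in someData)
                {
                    if (n > 5)
                        inLoop.Add(n * n);
                }
                LB1.ItemsSource = inLoop;
            }

            // LINQ version: the query compiles down to the cached Where/Select delegates seen above.
            private void LINQMethod()
            {
                IEnumerable<int> someData = Enumerable.Range(1, 50);
                var queryResult = from n in someData
                                  where n > 5
                                  select n * n;
                LB2.ItemsSource = queryResult;
            }
        }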
In fact if you have ReSharper installed you’ll see a squiggly (technical term) in the imperative code that says “Hey Dude, I can convert this to LINQ if you want to be c00L!” (or something like that, it’s the 2010 geek version of Clippy). What about the fluent version? As Jon correctly pointed out in the comments, when you compare the IL for the LINQ code and the IL for the fluent code it’s the same. LINQ and the fluent interface are just syntactical sugar so you decide what you’re most comfortable with. At the end of the day they’re both the same. Now onto the numbers. Again I expected the imperative version to be better performing than the LINQ version (before I saw the IL that was generated). Call it womanly instinct. A gut feel. Whatever. Some of the numbers are interesting though. For Jesse’s example of 50 items, the numbers were interesting. The imperative sample clocked in at 7ms while the LINQ version completed in 4. As the number of items went up, the elapsed time didn’t necessarily climb exponentially. At 500 items they were pretty much the same and the results were similar up to about 50,000 items. After that I tried 500,000 items where the gap widened but not by much (2.2 seconds for imperative, 2.3 for LINQ). It wasn’t until I tried 5,000,000 items where things were noticeable. Imperative filled the list in 20 seconds while LINQ took 8 seconds longer (although personally I wouldn’t suggest you put 5 million items in a list unless you want your users showing up at your door with torches and pitchforks). Here’s the table with the full results. Method/Items 50 500 5,000 50,000 500,000 5,000,000 Imperative 7ms 7ms 38ms 223ms 2230ms 20974ms LINQ/Fluent 4ms 6ms 41ms 240ms 2310ms 28731ms Like I said, at the end of the day it’s not a huge difference and you really don’t want your users waiting around for 30 seconds on a mobile device filling lists. In fact if Windows Phone 7 detects you’re taking more than 10 seconds to do any one thing, it considers the app hung and shuts it down. The results here are for Windows Phone 7 but frankly they're the same for desktop and web apps so feel free to apply it generally. From a programming perspective, choose what you like. Some LINQ statements can get pretty hairy so I usually fall back with my simple mind and write it imperatively. If you really want to impress your friends, write it old school then let ReSharper do the hard work for! Happy programming!
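    As a tiny aside on the “LINQ versus fluent” point above, the two shapes below are just different spellings of the same thing and compile to the same Where/Select calls (the variable names are made up for the illustration):

        // Query-expression (LINQ) syntax.
        var squares1 = from n in Enumerable.Range(1, 50)
                       where n > 5
                       select n * n;

        // Fluent (method-chain) syntax - identical IL once compiled.
        var squares2 = Enumerable.Range(1, 50)
                                 .Where(n => n > 5)
                                 .Select(n => n * n);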

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful. About SQL Server disaster recovery Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require a new recovery plan testing, as these changes most probably induce a recovery plan changing and tweaking. What is a good SQL Server disaster recovery plan? A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential is to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple, but efficient backup strategy would be a full database backup every night, a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and differential database backup every night until next Friday when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans. Why are transaction logs important? Transaction log backups contain transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll back or forward the database to a point in time. In SQL Server disaster recovery situations, transaction logs enable to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing. These are the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases, it is very time-consuming, as SQL Server first restores a full database backup, and then restores transaction log backups, one after another, up to the recovery point. 
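    As a rough illustration of that restore sequence, here is a hedged T-SQL sketch; the database name, backup file names, and STOPAT time are assumptions, and each backup is restored WITH NORECOVERY until the last log backup stops at the recovery point:

        -- 1. Restore the last full database backup, leaving the database in the restoring state.
        RESTORE DATABASE AdventureWorks2014
            FROM DISK = N'D:\Backup\AW2014_Full.bak'
            WITH NORECOVERY, REPLACE;

        -- 2. Restore each transaction log backup in sequence.
        RESTORE LOG AdventureWorks2014
            FROM DISK = N'D:\Backup\AW2014_Log_0900.trn'
            WITH NORECOVERY;

        -- 3. Stop the final log restore just before the disaster and bring the database online.
        RESTORE LOG AdventureWorks2014
            FROM DISK = N'D:\Backup\AW2014_Log_1000.trn'
            WITH STOPAT = N'2014-06-01 09:42:00', RECOVERY;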
During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them. How to read a SQL Server transaction log? SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge, return a large number of not easy to read and understand columns, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match it to a value in the MDF file. When you finally read the transactions executed, you have to create a script for it. How to easily repair a SQL database? The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the read transactions. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log It will automatically detect all local servers. If not, click the icon right to the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add.   To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of various filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transactions, operation type (e.g. to read only data inserts), table name, SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read so if there were any schema changes, such as a new function created, or an existing table modified, they will be ignored and database will not be properly repaired. The data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next. ApexSQL Log offers three options. 
Select Open results in grid, to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu.   For a large number of transactions and in a critical situation, when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be directly scripted into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script , change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary: The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script using an integrated developer environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on ApexSQL Solution center. There are solutions for various situations when data needs to be recovered, restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
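    As a footnote to the undocumented route mentioned above, this is roughly what peeking at the online transaction log with fn_dblog looks like; it is a sketch only, and the operation names in the WHERE clause are just the common insert/update/delete log records:

        -- Undocumented: read the active portion of the online transaction log.
        SELECT [Current LSN],
               [Transaction ID],
               Operation,
               Context,
               AllocUnitName,
               [Begin Time]
        FROM   sys.fn_dblog(NULL, NULL)   -- NULL, NULL = no LSN range filter
        WHERE  Operation IN (N'LOP_INSERT_ROWS', N'LOP_MODIFY_ROW', N'LOP_DELETE_ROWS');

    The output is exactly the kind of hard-to-read, partly binary material described above, which is why a dedicated log reader saves so much time.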

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant ( the tradional image of the Hadoop framework) and the Oracle Iron Man (Big Data..) an english setter could be seen as the link to the right data Data, Data, Data, we are living in a world where data technology based on popular applications , search engines, Webservers, rich sms messages, email clients, weather forecasts and so on, have a predominant role in our life. More and more technologies are used to analyze/track our behavior, try to detect patterns, to propose us "the best/right user experience" from the Google Ad services, to Telco companies or large consumer sites (like Amazon:) ). The more we use all these technologies, the more we generate data, and thus there is a need of huge data marts and specific hardware/software servers (as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large scale distributed processing was an emerging infrastructure need and the solution seemed to be the "collocation of compute nodes with the data", which in turn leaded to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products are using the distributed / aggregation pattern for data calculation ( Coherence, NoSql, times ten ) so once that you are familiar with one of these technologies, lets says with coherence aggregators, you will find the whole Hadoop, MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged on a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab like" implementation of this concept is done on a single Linux X64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0, and a single node Apache hadoop-1.2.1 HDFS cluster, using the SQL connector for HDFS. The whole setup is fairly simple: Install on a Linux x64 server ( or virtual box appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen. Check the java version of your Linux server with the command: java -version java version "1.7.0_40" Java(TM) SE Runtime Environment (build 1.7.0_40-b43) Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode) Decompress the hadoop hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1 Modify your .bash_profile export HADOOP_HOME=/u01/hadoop-1.2.1 export PATH=$PATH:$HADOOP_HOME/bin export HIVE_HOME=/u01/hive-0.11.0 export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin (also see my sample .bash_profile) Set up ssh trust for Hadoop process, this is a mandatory step, in our case we have to establish a "local trust" as will are using a single node configuration copy the new public keys to the list of authorized keys connect and test the ssh setup to your localhost: We will run a "pseudo-Hadoop cluster", in what is called "local standalone mode", all the Hadoop java components are running in one Java process, this is enough for our demo purposes. 
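    The ssh “local trust” step mentioned above typically boils down to a few commands. A sketch, assuming a passphrase-less RSA key for the user that runs the Hadoop processes:

        # Generate a key pair for the Hadoop user (empty passphrase).
        ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

        # Copy the new public key to the list of authorized keys.
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys

        # Connect and test the ssh setup to your localhost.
        ssh localhost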
We need to "fine tune" some Hadoop configuration files, we have to go at our $HADOOP_HOME/conf, and modify the files: core-site.xml hdfs-site.xml mapred-site.xml check that the hadoop binaries are referenced correctly from the command line by executing: hadoop -version As Hadoop is managing our "clustered HDFS" file system we have to create "the mount point" and format it , the mount point will be declared to core-site.xml as: The layout under the /u01/hadoop-1.2.1/data will be created and used by other hadoop components (MapReduce = /mapred/...) HDFS is using the /dfs/... layout structure format the HDFS hadoop file system: Start the java components for the HDFS system As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configurations: Once our HDFS Hadoop setup is done you can use the HDFS file system to store data ( big data : )), and plug them back and forth to Oracle Databases by the means of the Big Data Connectors ( which is the next configuration step). You can create / use a Hive db, but in our case we will make a simple integration of "raw data" , through the creation of an External Table to a local Oracle instance ( on the same Linux box, we run the Hadoop HDFS one node cluster and one Oracle DB). Download some public "big data", I use the site: http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file: Download the Big Data Connector from the OTN (oraosch-2.2.0.zip), unzip it to your local file system (see picture below) Modify your environment in order to access the connector libraries , and make the following test: [oracle@dg1 bin]$./hdfs_stream Usage: hdfs_stream locationFile [oracle@dg1 bin]$ Load the data to the Hadoop hdfs file system: hadoop fs -mkdir bgtest_data hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt Check the content of the HDFS with the browser UI: Start the Oracle database, and run the following script in order to create the Oracle database user, the Oracle directories for the Oracle Big Data Connector (dg1 it’s my own db id replace accordingly yours): #!/bin/bash export ORAENV_ASK=NO export ORACLE_SID=dg1 . 
oraenv sqlplus /nolog <<EOF CONNECT / AS sysdba; CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin'; CREATE USER BGUSER IDENTIFIED BY oracle; GRANT CREATE SESSION, CREATE TABLE TO BGUSER; GRANT EXECUTE ON sys.utl_file TO BGUSER; GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER; CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs'; GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER; CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data'; GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER; EOF Put the following in a file named t3.sh and make it executable, hadoop jar $OSCH_HOME/jlib/orahdfs.jar \ oracle.hadoop.exttab.ExternalTable \ -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \ -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \ -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \ -D oracle.hadoop.exttab.columnCount=7 \ -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \ -D oracle.hadoop.connection.user=BGUSER \ -D oracle.hadoop.exttab.printStackTrace=true \ -createTable --noexecute then test the creation fo the external table with it: [oracle@dg1 samples]$ ./t3.sh ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory Oracle SQL Connector for HDFS Release 2.2.0 - Production Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved. Enter Database Password:] The create table command was not executed. The following table would be created. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081035-74-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files would be created. osch-20131022081035-74-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results: The create table command succeeded. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081719-3239-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files were created. 
osch-20131022081719-3239-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt This is the view from the SQL Developer: and finally the number of lines in the oracle table, imported from our Hadoop HDFS cluster SQL select count(*) from "BGUSER"."BGTEST_DP_XTAB"; COUNT(*) ---------- 1151 In a next post we will integrate data from a Hive database, and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a step to show you how these unstructured data world can be integrated to Oracle infrastructure. Hadoop, BigData, NoSql are great technologies, they are widely used and Oracle is offering a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the Oracle related technologies: NoSQL: Introduction to Oracle NoSQL Database Using Oracle NoSQL Database Big Data: Introduction to Big Data Oracle Big Data Essentials Oracle Big Data Overview Oracle Data Integrator: Oracle Data Integrator 12c: New Features Oracle Data Integrator 11g: Integration and Administration Oracle Data Integrator: Administration and Development Oracle Data Integrator 11g: Advanced Integration and Development Oracle Coherence 12c: Oracle Coherence 12c: New Features Oracle Coherence 12c: Share and Manage Data in Clusters Oracle Coherence 12c: Oracle GoldenGate 11g: Fundamentals for Oracle Oracle GoldenGate 11g: Fundamentals for SQL Server Oracle GoldenGate 11g Fundamentals for Oracle Oracle GoldenGate 11g Fundamentals for DB2 Oracle GoldenGate 11g Fundamentals for Teradata Oracle GoldenGate 11g Fundamentals for HP NonStop Oracle GoldenGate 11g Management Pack: Overview Oracle GoldenGate 11g Troubleshooting and Tuning Oracle GoldenGate 11g: Advanced Configuration for Oracle Other Resources: Apache Hadoop : http://hadoop.apache.org/ is the homepage for these technologies. "Hadoop Definitive Guide 3rdEdition" by Tom White is a classical lecture for people who want to know more about Hadoop , and some active "googling " will also give you some more references. About the author: Eugene Simos is based in France and joined Oracle through the BEA-Weblogic Acquisition, where he worked for the Professional Service, Support, end Education for major accounts across the EMEA Region. He worked in the banking sector, ATT, Telco companies giving him extensive experience on production environments. Eugen currently specializes in Oracle Fusion Middleware teaching an array of courses on Weblogic/Webcenter, Content,BPM /SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite) throughout the EMEA region.
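    Returning to the configuration step from the walkthrough above: on a single-node lab like this, the three conf files usually need only a handful of properties. The sketch below is illustrative only; the values, including the 19000 NameNode port implied by the hdfs://localhost:19000 URIs shown earlier and the /u01/hadoop-1.2.1/data working area, are assumptions for this particular setup:

        <!-- core-site.xml : the HDFS "mount point" and working area -->
        <configuration>
          <property>
            <name>fs.default.name</name>
            <value>hdfs://localhost:19000</value>
          </property>
          <property>
            <name>hadoop.tmp.dir</name>
            <value>/u01/hadoop-1.2.1/data</value>
          </property>
        </configuration>

        <!-- hdfs-site.xml : a single copy of each block on a one-node cluster -->
        <configuration>
          <property>
            <name>dfs.replication</name>
            <value>1</value>
          </property>
        </configuration>

        <!-- mapred-site.xml : local job tracker (port is an assumption) -->
        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>localhost:9001</value>
          </property>
        </configuration>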

    Read the article

  • C#: String Concatenation vs Format vs StringBuilder

    - by James Michael Hare
    I was looking through my groups’ C# coding standards the other day and there were a couple of legacy items in there that caught my eye.  They had been passed down from committee to committee so many times that no one even thought to second guess and try them for a long time.  It’s yet another example of how micro-optimizations can often get the best of us and cause us to write code that is not as maintainable as it could be for the sake of squeezing an extra ounce of performance out of our software. So the two standards in question were these, in paraphrase: Prefer StringBuilder or string.Format() to string concatenation. Prefer string.Equals() with case-insensitive option to string.ToUpper().Equals(). Now some of you may already know what my results are going to show, as these items have been compared before on many blogs, but I think it’s always worth repeating and trying these yourself.  So let’s dig in. The first test was a pretty standard one.  When concattenating strings, what is the best choice: StringBuilder, string concattenation, or string.Format()? So before we being I read in a number of iterations from the console and a length of each string to generate.  Then I generate that many random strings of the given length and an array to hold the results.  Why am I so keen to keep the results?  Because I want to be able to snapshot the memory and don’t want garbage collection to collect the strings, hence the array to keep hold of them.  I also didn’t want the random strings to be part of the allocation, so I pre-allocate them and the array up front before the snapshot.  So in the code snippets below: num – Number of iterations. strings – Array of randomly generated strings. results – Array to hold the results of the concatenation tests. timer – A System.Diagnostics.Stopwatch() instance to time code execution. start – Beginning memory size. stop – Ending memory size. after – Memory size after final GC. So first, let’s look at the concatenation loop: 1: // build num strings using concattenation. 2: for (int i = 0; i < num; i++) 3: { 4: results[i] = "This is test #" + i + " with a result of " + strings[i]; 5: } Pretty standard, right?  Next for string.Format(): 1: // build strings using string.Format() 2: for (int i = 0; i < num; i++) 3: { 4: results[i] = string.Format("This is test #{0} with a result of {1}", i, strings[i]); 5: }   Finally, StringBuilder: 1: // build strings using StringBuilder 2: for (int i = 0; i < num; i++) 3: { 4: var builder = new StringBuilder(); 5: builder.Append("This is test #"); 6: builder.Append(i); 7: builder.Append(" with a result of "); 8: builder.Append(strings[i]); 9: results[i] = builder.ToString(); 10: } So I take each of these loops, and time them by using a block like this: 1: // get the total amount of memory used, true tells it to run GC first. 2: start = System.GC.GetTotalMemory(true); 3:  4: // restart the timer 5: timer.Reset(); 6: timer.Start(); 7:  8: // *** code to time and measure goes here. *** 9:  10: // get the current amount of memory, stop the timer, then get memory after GC. 11: stop = System.GC.GetTotalMemory(false); 12: timer.Stop(); 13: other = System.GC.GetTotalMemory(true); So let’s look at what happens when I run each of these blocks through the timer and memory check at 500,000 iterations: 1: Operator + - Time: 547, Memory: 56104540/55595960 - 500000 2: string.Format() - Time: 749, Memory: 57295812/55595960 - 500000 3: StringBuilder - Time: 608, Memory: 55312888/55595960 – 500000   Egad!  
    string.Format brings up the rear and + triumphs, well, at least in terms of speed.  The concat burns more memory than StringBuilder but less than string.Format().  This shows two main things: StringBuilder is not always the panacea many think it is. The difference between any of the three is minuscule! The second point is extremely important!  You will often hear people who will grasp at results and say, “look, operator + is 10% faster than StringBuilder so always use +.”  Statements like this are a disservice and often misleading.  For example, if I had a good guess at what the size of the string would be, I could have preallocated my StringBuilder like so:   1: for (int i = 0; i < num; i++) 2: { 3: // pre-declare StringBuilder to have 100 char buffer. 4: var builder = new StringBuilder(100); 5: builder.Append("This is test #"); 6: builder.Append(i); 7: builder.Append(" with a result of "); 8: builder.Append(strings[i]); 9: results[i] = builder.ToString(); 10: }   Now let’s look at the times: 1: Operator + - Time: 551, Memory: 56104412/55595960 - 500000 2: string.Format() - Time: 753, Memory: 57296484/55595960 - 500000 3: StringBuilder - Time: 525, Memory: 59779156/55595960 - 500000   Whoa!  All of a sudden StringBuilder is back on top again!  But notice, it takes more memory now.  This makes perfect sense if you examine the IL behind the scenes.  Whenever you do a string concat (+) in your code, it examines the lengths of the arguments and creates a StringBuilder behind the scenes of the appropriate size for you. But even IF we know the approximate size of our StringBuilder, look how much less readable it is!  That’s why I feel you should always take into account both readability and performance.  After all, consider that all these timings are over 500,000 iterations.  That’s at most a 0.0004 ms difference per call, which is negligible.  The key is to pick the best tool for the job.  What do I mean?  Consider these awesome words of wisdom: Concatenate (+) is best at concatenating.  StringBuilder is best at building. Format is best at formatting. Totally Earth-shattering, right!  But if you consider it carefully, it actually has a lot of beauty in its simplicity.  Remember, there is no magic bullet.  If one of these always beat the others we’d only have one and not three choices. The fact is, the concatenation operator (+) has been optimized for speed and looks the cleanest for joining together a known set of strings in the simplest manner possible. StringBuilder, on the other hand, excels when you need to build a string of indeterminate length.  Use it in those times when you are looping till you hit a stop condition and building a result and it won’t steer you wrong. String.Format seems to be the loser from the stats, but consider which of these is more readable.  Yes, ignore the fact that you could do this with ToString() on a DateTime.  1: // build a date via concatenation 2: var date1 = (month < 10 ? "0" : string.Empty) + month + '/' 3: + (day < 10 ? "0" : string.Empty) + day + '/' + year; 4:  5: // build a date via string builder 6: var builder = new StringBuilder(10); 7: if (month < 10) builder.Append('0'); 8: builder.Append(month); 9: builder.Append('/'); 10: if (day < 10) builder.Append('0'); 11: builder.Append(day); 12: builder.Append('/'); 13: builder.Append(year); 14: var date2 = builder.ToString(); 15:  16: // build a date via string.Format 17: var date3 = string.Format("{0:00}/{1:00}/{2:0000}", month, day, year); 18:  So the strength in string.Format is that it makes constructing a formatted string easy to read.  Yes, it’s slower, but look at how much more elegant it is to do zero-padding and anything else string.Format does. So my lesson is, don’t look for the silver bullet!  Choose the best tool.  Micro-optimization almost always bites you in the end because you’re sacrificing readability for performance, which is almost exactly the wrong choice 90% of the time. I love the rules of optimization.  They’ve been stated before in many forms, but here’s how I always remember them: For Beginners: Do not optimize. For Experts: Do not optimize yet. It’s so true.  Most of the time on today’s modern hardware, a micro-second optimization at the cost of readability will net you nothing because it won’t be your bottleneck.  Code for readability, choose the best tool for the job, which will usually be the most readable and maintainable as well.  Then, and only then, if you need that extra performance boost after profiling your code and exhausting all other options… then you can start to think about optimizing.
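    To make the measurement approach described above concrete, here is a hedged, self-contained sketch of one harness run; the iteration count, string length and random-string generator are assumptions, and the GC.GetTotalMemory(true) / GetTotalMemory(false) / final-GC pattern simply mirrors the snippets shown earlier:

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Text;

        static class ConcatBenchmark
        {
            static void Main()
            {
                const int num = 500000;
                var rng = new Random(42);
                // Pre-allocate the inputs and the results array so they are not counted in the snapshot.
                string[] strings = Enumerable.Range(0, num)
                    .Select(_ => new string('x', rng.Next(5, 20)))
                    .ToArray();
                var results = new string[num];
                var timer = new Stopwatch();

                long start = GC.GetTotalMemory(true);   // force a GC, then take the starting snapshot
                timer.Reset();
                timer.Start();

                for (int i = 0; i < num; i++)
                {
                    var builder = new StringBuilder(100);   // pre-sized buffer variant
                    builder.Append("This is test #");
                    builder.Append(i);
                    builder.Append(" with a result of ");
                    builder.Append(strings[i]);
                    results[i] = builder.ToString();
                }

                long stop = GC.GetTotalMemory(false);   // snapshot without collecting
                timer.Stop();
                long after = GC.GetTotalMemory(true);   // snapshot after a final GC

                Console.WriteLine("StringBuilder - Time: {0}, Memory: {1}/{2} - {3}",
                    timer.ElapsedMilliseconds, stop, after, results.Length);
            }
        }

    Swapping the body of the loop for the + or string.Format() variants gives the other two rows of the tables above.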

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July in Microsoft’s offices in London Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course along side Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber and they have worked together very effectively in brining this course to fruition. This course is the brain child of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attending the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken if you will Find a course and register Download this syllabus Download the Scrum Guide What is the Professional Scrum Developer course all about? Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010. Who should attend this course? This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team. What should you know by the end of the course? Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas: Form effective teams Explore and understand legacy “Brownfield” architecture Define quality attributes, acceptance criteria, and “done” Create automated builds How to handle software hotfixes Verify that bugs are identified and eliminated Plan releases and sprints Estimate product backlog items Create and manage a sprint backlog Hold an effective sprint review Improve your process by using retrospectives Use emergent architecture to avoid technical debt Use Test Driven Development as a design tool Setup and leverage continuous integration Use Test Impact Analysis to decrease testing times Manage SQL Server development in an Agile way Use .NET and T-SQL refactoring effectively Build, deploy, and test SQL Server databases Create and manage test plans and cases Create, run, record, and play back manual tests Setup a branching strategy and branch code Write more maintainable code Identify and eliminate people and process dysfunctions Inspect and improve your team’s software development process What does the week look like? 
This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it. The Sprints Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½ day sprints will roughly follow this schedule: Component Description Minutes Instruction Presentation and demonstration of new and relevant tools & practices 60 Sprint planning meeting Product owner presents backlog; each team commits to delivering functionality 10 Sprint planning meeting Each team determines how to build the functionality 10 The Sprint The team self-organizes and self-manages to complete their tasks 120 Sprint Review meeting Each team will present their increment of functionality to the other teams = 30 Sprint Retrospective A group retrospective meeting will be held to inspect and adapt 10 Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support. Module 1: INTRODUCTION This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day by day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Trainer and student introductions Professional Scrum Developer program Agenda Logistics Team formation Retrospective Module 2: SCRUMDAMENTALS This module provides a level-setting understanding of the Scrum framework including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Scrum overview Scrum roles Scrum timeboxes (ceremonies) Scrum artifacts Simulation Retrospective It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course. MODULE 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010 This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development. Mapping Scrum to Visual Studio 2010 User Story work items Task work items Bug work items Demonstration Simulation Retrospective Module 4: THE CASE STUDY In this module the team is introduced to their problem domain for the week. 
A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what that will take during the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Introduction to the case study Download the source code, build, and explore the application Define the quality attributes for the project Define “done” How to file effective bugs in Visual Studio 2010 Retrospective Module 5: HOTFIX This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. How to use Architecture Explorer to visualize and explore Create a unit test to validate the existence of a bug Find and fix the bug Validate and close the bug Retrospective Module 6: PLANNING This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Release vs. Sprint planning Release planning and the Product Backlog Product Backlog prioritization Acceptance criteria and tests Sprint planning and the Sprint Backlog Creating and linking Sprint tasks Retrospective At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done. Module 7: EMERGENT ARCHITECTURE This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Architecture and Scrum Emergent architecture Principles, patterns, and practices Visual Studio 2010 modeling tools UML and layer diagrams SPRINT 1 Retrospective Module 8: TEST DRIVEN DEVELOPMENT This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should setup Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Continuous integration Team Foundation Build Test Driven Development (TDD) Refactoring Test Impact Analysis SPRINT 2 Retrospective Module 9: AGILE DATABASE DEVELOPMENT This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle. 
Agile database development Visual Studio database projects Importing schema and scripts Building and deploying Generating data Unit testing SPRINT 3 Retrospective Module 10: SHIP IT Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline. Acceptance criteria Testing in Visual Studio 2010 Microsoft Test Manager Writing and running manual tests Branching SPRINT 4 Retrospective Module 11: OVERCOMING DYSFUNCTION This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class. Scrum-butts and flaccid Scrum Best practices working as a team Team challenges ScrumMaster challenges Product Owner challenges Stakeholder challenges Course Retrospective What will be expected of you and you team? This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to: Pay attention to all lectures and demonstrations Participate in team and group discussions Work collaboratively with other team members Obey the timebox for each activity Commit to work and do your best to deliver All teams should have these skills: Understanding of Scrum Familiarity with Visual Studio 201 C#, .NET 4.0 & ASP.NET 4.0 experience*  SQL Server 2008 development experience Software testing experience * Check with the instructor ahead of time for the exact technologies Self-organising teams Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum! Who should NOT take this course? 
Because of the nature of this course, as explained above, certain types of people should probably not attend this course: Students requiring command and control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course Students who are unwilling to work within a timebox Students who are unwilling to work collaboratively on a team Students who don’t have any skill in any of the software development disciplines Students who are unable to commit fully to their team – not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience Find a course and register Download this syllabus Download the Scrum Guide Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate. 
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • CodePlex Daily Summary for Thursday, March 25, 2010

    CodePlex Daily Summary for Thursday, March 25, 2010New ProjectsAccessibilityChecker: Accessibility Checker is custom feature developed to check accessibility requirements in a SharePoint PortalAnne Epstein - Personal Repository: Project Description This project contains multiple samples with various snippets and projects from blog posts, user group talks, and conference se...BatterySaver: BatterySaver is a simple application, in C#, that allows laptop users to perform actions based on battery notification events (switching from batte...dtxJson: C# coded JSON (JavaScript Object Notation) parser.eCamp: eCamp is a modular and extensible electronic camp management application. Written in C# and WPF, it follows many of the latest technology trends su...epdevplatform: epdevplatformERP: Environment Colaborative Resources ProjectFaceLight - Simple Silverlight Face Detection: FaceLight is a simple facial recognition method that can be used with Silverlight 's webcam. It searches for a certain sized skin color region in a...Forum PAF - The Open Source .Net Forum - From Viet Nam - By Thomas John (jntpaf): The Open Source .Net Forum - From Viet Nam ------------------------- Các phần mềm cần thiết để chạy Forum PAF: 1. .Net Framework 2.0 (trở lên) 2....Gawam Savel - Sistema de Avaliação Eletrônica: Projeto de TCC ...Html5 Helpers and tools for Asp.Net MVC: Html5 Helper aims to provide a generic helper context to produce HTML5 content in ASP.NET MVCIfeanyi Echeruo's WPF Recipes: WPF Recipes C# code samples showing how to solve some non-trivial problems in WPFITM 495 - iPhone App: school project iphone appKnowledge Exchange: Stack Overflow Inspired Knowledge ExchangeMailCheck: Mail检查程序。NetBoard: NetBoard is a lightweight system designed to act as the Blackboard in a micro-blackboard architecture for use within an OO system - even when withi...RodBass.com: RodBass.comsemanticrest: This is a vision of semantics mashups for rest web services.StatSpaceUI: StatSpaceUITFS Merge Tool: A small tool for merging changesets between TFS branches.The Interface To End All Interfaces: We interfaced everything, so that you can implement anything...Tim - Open Source Projects And Samples: Open source projects / Samples for http://tim.bellette.netWindows XNA: A place for those who enjoy there XNA Game Studio programing on Windows. For a place to share XNA Game Studio games for Windows in English. I'm loo...XAML Code Snippets addin for Visual Studio 2010: Provides support for adding XAML code snippets in the Visual Studio 2010 code editor for XAML in WPF and Silverlight projects.New ReleasesAnyWorks: AnyWorks1.2Bin: AnyWorks1.2AnyWorks: AnyWorks1.2Src: AnyWorks1.2AppFabric Caching Admin Tool: AppFabric Caching Admin Tool 1.0: System Requirements:.NET 4.0 RC AppFabric Caching Beta2 Test On:Win 7 (64x) Note: Must run as Administrator !!!ASP.NET Wiki Control: Release 1.1: - Modified text and varchar columns to nvarchar for unicode support. - Modified path info logic to disable its use if the page's raw url currently...B&W Port Scanner: Black`n`White Port Scanner 2.0: Fast Cross-Platform Port Scanner with Vulnerability Detection Tools. 3 vulnerability detection tools are included in this version: - Detection of ...BatterySaver: 0.1: Initial Release This is the initial release of the application. The application is very much beta with lots of changes upcoming. 
Known Issues The...BatterySaver: 0.2: Changes+ Add support for enabling and disabling devices (6)Compare .NET Objects: Version 1.2.0.0: New Features: Compare Generic Classes that Implement IList Indexers Compare Datasets Compare DataTables Compare DataRows Consider IList and...Controlled Vocabulary: 1.0.0.3: System Requirements Outlook 2007 / 2010 .Net Framework 3.5 Installation 1. Close Outlook (Use Task Manager to ensure no running instances in the b...crudwork is a library of reuseable classes for developing .NET applications: crudwork 2.2.0.2: minor changes. new guid for msi and new strongly named guidDigitallyCreated Utilities: DigitallyCreated Utilities v1.0.0: This release is the v1.0.0 version of DigitallyCreated Utilities. Binary Distribution The binary distribution contains the following: Compiled bin...DirectQ: Release 1.8.2: Adds several bugfixes and improved functionality. This release supersedes 1.8.1 which will be shortly removed. A very big THANK YOU to everyone w...DotNetNuke® Community Edition: 05.03.01: Major Highlights Issue fixed issue with the email notifications where the From and To addresses were swapped. Issue fixed with signature ch...Encrypted Notes: Encrypted Notes 1.5: This is the latest version of Encrypted Notes (1.5). It has an installer - it will create a directory 'CPascoe' in My Documents. Once you have ext...EnhSim: Release v1.9.8.1: Release v1.9.8.1Adding in the Glyph of Flame Shock changes in 3.3.3FlickrNet API Library: 3.0 Beta: A brand new version of the FlickrNet library, exposing 100% of the Flickr API's methods, along with streamlined class and method names. All classe...Forum PAF - The Open Source .Net Forum - From Viet Nam - By Thomas John (jntpaf): Forum PAF - The Open Source .Net Forum: A, Các phần mềm cần thiết để chạy Forum PAF: 1. .Net Framework 2.0 (trở lên) 2. Ajax Extension 1.0 (trở lên) 3. Sql Server 2005 (Sql Server Expr...HydroDesktop - CUAHSI Hydrologic Information System Desktop Application: HydroDesktop 0.7.3735 Alpha Installer: This is the testing release of the HydroDesktop 0.7 alpha version. Features supported in this version include: Search for data and download of Hydr...MDownloader: MDownloader-0.15.9.56953: Fixed Uploading.com links detection.MiniTwitter: 1.10: MiniTwitter 1.10 更新内容 追加 未読管理時に未読数をタブに表示する機能を実装 サイレントモードを実装(通知領域アイコンを右クリックして出るメニューから切り替え) 修正 「お気に入りワードを含む項目だけ表示する」オプションが機能していなかった問題を修正NoteExpress User Tools (NEUT) - Do it by ourselves!: NoteExpress User Tools 1.9.1: 测试版本:NoteExpress 2.5.0.1147 #修正一个改动的bugOneCMS: OneCMS 2.6: OneCMS 2.6 is finally here! Along with various bug fixes 2.6 also brings with it many new features such as the videos module, plugins system, and m...Quantity System Framework: Quantity System Calculator 1.1.9.93: Experience the new edition of the quantity system with text support and function treated as values now you can multiply functions and divide funct...Selection Maker: Selection Maker 1.4: some minor bugs fixed. icon added for running and uninstalling the application.sPATCH: sPatcher v0.8a: + Disabled patchers proxy settings to increase connection speed sPatch - Server Example *Contains a sample Patch that "downgrades" PWI 1.4.2 Clien...VSTT 2008 Quick Reference Guide: VS Performance Testing Quick Reference V2.0: Visual Studio Performance Testing Quick Reference Guide (Version 2.0)WeatherBar: WeatherBar 2.0: WeatherBar 2.0 Changelog: Introduced application settings. Modified UI. Ability to switch between Fahrenheit and Celsius (application-wide). 
...WillStrohl.LightboxGallery Module for DotNetNuke: WillStrohl.LightboxGallery v1.02.01: This version of the Lightbox Gallery Module adds the following features: Upgraded the Autocomplete jQuery plugin Fixed an IE8 error that was occu...Windows XNA: Base Defense Alpha 0.339: Alpha 0.338 had a really bad bug that made the game crash, that is what I get for coding after 3am... I also made some AI for the Raptor. So now it...WPF Dynamic Data Display: Silverlight DynamicDataDisplay v0.2 - Spring 2010: Silverlight version of WPF DynamicDataDisplay charting library The version 0.2 shows a greater performance comparing with version 0.1 while having...Most Popular ProjectsMetaSharpRawrWBFS ManagerASP.NET Ajax LibrarySilverlight ToolkitMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)ASP.NETMost Active ProjectsRawrjQuery Library for SharePoint Web ServicesFarseer Physics EngineBlogEngine.NETFacebook Developer ToolkitNB_Store - Free DotNetNuke Ecommerce Catalog ModulePHPExcelTable2ClassFluent Ribbon Control SuiteLINQ to Twitter

    Read the article

  • What's up with LDoms: Part 1 - Introduction & Basic Concepts

    - by Stefan Hinker
    LDoms - the correct name is Oracle VM Server for SPARC - have been around for quite a while now.  But to my surprise, I get more and more requests to explain how they work or to give advise on how to make good use of them.  This made me think that writing up a few articles discussing the different features would be a good idea.  Now - I don't intend to rewrite the LDoms Admin Guide or to copy and reformat the (hopefully) well known "Beginners Guide to LDoms" by Tony Shoumack from 2007.  Those documents are very recommendable - especially the Beginners Guide, although based on LDoms 1.0, is still a good place to begin with.  However, LDoms have come a long way since then, and I hope to contribute to their adoption by discussing how they work and what features there are today.  In this and the following posts, I will use the term "LDoms" as a common abbreviation for Oracle VM Server for SPARC, just because it's a lot shorter and easier to type (and presumably, read). So, just to get everyone on the same baseline, lets briefly discuss the basic concepts of virtualization with LDoms.  LDoms make use of a hypervisor as a layer of abstraction between real, physical hardware and virtual hardware.  This virtual hardware is then used to create a number of guest systems which each behave very similar to a system running on bare metal:  Each has its own OBP, each will install its own copy of the Solaris OS and each will see a certain amount of CPU, memory, disk and network resources available to it.  Unlike some other type 1 hypervisors running on x86 hardware, the SPARC hypervisor is embedded in the system firmware and makes use both of supporting functions in the sun4v SPARC instruction set as well as the overall CPU architecture to fulfill its function. The CMT architecture of the supporting CPUs (T1 through T4) provide a large number of cores and threads to the OS.  For example, the current T4 CPU has eight cores, each running 8 threads, for a total of 64 threads per socket.  To the OS, this looks like 64 CPUs.  The SPARC hypervisor, when creating guest systems, simply assigns a certain number of these threads exclusively to one guest, thus avoiding the overhead of having to schedule OS threads to CPUs, as do typical x86 hypervisors.  The hypervisor only assigns CPUs and then steps aside.  It is not involved in the actual work being dispatched from the OS to the CPU, all it does is maintain isolation between different guests. Likewise, memory is assigned exclusively to individual guests.  Here,  the hypervisor provides generic mappings between the physical hardware addresses and the guest's views on memory.  Again, the hypervisor is not involved in the actual memory access, it only maintains isolation between guests. During the inital setup of a system with LDoms, you start with one special domain, called the Control Domain.  Initially, this domain owns all the hardware available in the system, including all CPUs, all RAM and all IO resources.  If you'd be running the system un-virtualized, this would be what you'd be working with.  To allow for guests, you first resize this initial domain (also called a primary domain in LDoms speak), assigning it a small amount of CPU and memory.  This frees up most of the available CPU and memory resources for guest domains.  IO is a little more complex, but very straightforward.  When LDoms 1.0 first came out, the only way to provide IO to guest systems was to create virtual disk and network services and attach guests to these services.  
In the meantime, several different ways to connect guest domains to IO have been developed, the most recent one being SR-IOV support for network devices released in version 2.2 of Oracle VM Server for SPARC. I will cover these more advanced features in detail later.  For now, lets have a short look at the initial way IO was virtualized in LDoms: For virtualized IO, you create two services, one "Virtual Disk Service" or vds, and one "Virtual Switch" or vswitch.  You can, of course, also create more of these, but that's more advanced than I want to cover in this introduction.  These IO services now connect real, physical IO resources like a disk LUN or a networt port to the virtual devices that are assigned to guest domains.  For disk IO, the normal case would be to connect a physical LUN (or some other storage option that I'll discuss later) to one specific guest.  That guest would be assigned a virtual disk, which would appear to be just like a real LUN to the guest, while the IO is actually routed through the virtual disk service down to the physical device.  For network, the vswitch acts very much like a real, physical ethernet switch - you connect one physical port to it for outside connectivity and define one or more connections per guest, just like you would plug cables between a real switch and a real system. For completeness, there is another service that provides console access to guest domains which mimics the behavior of serial terminal servers. The connections between the virtual devices on the guest's side and the virtual IO services in the primary domain are created by the hypervisor.  It uses so called "Logical Domain Channels" or LDCs to create point-to-point connections between all of these devices and services.  These LDCs work very similar to high speed serial connections and are configured automatically whenever the Control Domain adds or removes virtual IO. To see all this in action, now lets look at a first example.  I will start with a newly installed machine and configure the control domain so that it's ready to create guest systems. In a first step, after we've installed the software, let's start the virtual console service and downsize the primary domain.  root@sun # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-c-- UART 512 261632M 0.3% 2d 13h 58m root@sun # ldm add-vconscon port-range=5000-5100 \ primary-console primary root@sun # svcadm enable vntsd root@sun # svcs vntsd STATE STIME FMRI online 9:53:21 svc:/ldoms/vntsd:default root@sun # ldm set-vcpu 16 primary root@sun # ldm set-mau 1 primary root@sun # ldm start-reconf primary root@sun # ldm set-memory 7680m primary root@sun # ldm add-config initial root@sun # shutdown -y -g0 -i6 So what have I done: I've defined a range of ports (5000-5100) for the virtual network terminal service and then started that service.  The vnts will later provide console connections to guest systems, very much like serial NTS's do in the physical world. Next, I assigned 16 vCPUs (on this platform, a T3-4, that's two cores) to the primary domain, freeing the rest up for future guest systems.  I also assigned one MAU to this domain.  A MAU is a crypto unit in the T3 CPU.  These need to be explicitly assigned to domains, just like CPU or memory.  (This is no longer the case with T4 systems, where crypto is always available everywhere.) Before I reassigned the memory, I started what's called a "delayed reconfiguration" session.  
That avoids actually doing the change right away, which would take a considerable amount of time in this case. Instead, I'll need to reboot once I'm all done. I've assigned 7680MB of RAM to the primary. That's 8GB less the 512MB which the hypervisor uses for its own private purposes. You can, depending on your needs, work with less. I'll spend a dedicated article on sizing, discussing the pros and cons in detail. Finally, just before the reboot, I saved my work on the ILOM, to make this configuration available after a power-cycle of the box. (It'll always be available after a simple reboot, but the ILOM needs to know the configuration of the hypervisor after a power-cycle, before the primary domain is booted.) Now, let's create a first disk service and a first virtual switch which is connected to the physical network device igb2. We will later use these to connect virtual disks and virtual network ports of our guest systems to real-world storage and network. root@sun # ldm add-vds primary-vds primary root@sun # ldm add-vswitch net-dev=igb2 switch-primary primary You are free to choose whatever names you like for the virtual disk service and the virtual switch. I strongly recommend that you choose names that make sense to you and describe the function of each service in the context of your implementation. For the vswitch, for example, you could choose names like "admin-vswitch" or "production-network" etc. This already concludes the configuration of the control domain. We've freed up considerable amounts of CPU and RAM for guest systems and created the necessary infrastructure - console, vds and vswitch - so that guest systems can actually interact with the outside world. The system is now ready to create guests, which I'll describe in the next section. For further reading, here are some recommendable links: The LDoms 2.2 Admin Guide The "Beginners Guide to LDoms" The LDoms Information Center on MOS LDoms on OTN
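As a teaser for that promised next section, here is a minimal sketch of what creating a first guest could look like with the services defined above. This is not from the article; the domain name (ldg1), the CPU and memory sizes and the backing device path are invented, so treat it as an outline rather than a recipe:

root@sun # ldm add-domain ldg1
root@sun # ldm add-vcpu 8 ldg1
root@sun # ldm add-memory 8g ldg1
root@sun # ldm add-vdsdev /dev/dsk/c3t0d1s2 ldg1-vol1@primary-vds
root@sun # ldm add-vdisk vdisk1 ldg1-vol1@primary-vds ldg1
root@sun # ldm add-vnet vnet0 switch-primary ldg1
root@sun # ldm bind ldg1
root@sun # ldm start ldg1
root@sun # telnet localhost 5000    # guest console, port taken from the vconscon range set earlier

Once bound and started, the guest sees its virtual disk and network port, boots into its own OBP, and can be installed with Solaris much like a physical machine.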

    Read the article

  • Native packaging for JavaFX

    - by igor
    JavaFX 2.2 adds new packaging option for JavaFX applications, allowing you to package your application as a "native bundle". This gives your users a way to install and run your application without any external dependencies on a system JRE or FX SDK. I'd like to give you an overview of what is it, motivation behind it, and finally explain how to get started with it. Screenshots may give you some idea of user experience but first hand experience is always the best. Before we go into all of the boring details, here are few different flavors of Ensemble for you to try: exe, msi, dmg, rpm installers and zip of linux bundle for non-rpm aware systems. Alternatively, check out native packages for JFXtras 2. Whats wrong with existing deployment options? JavaFX 2 applications are easy to distribute as a standalone application or as an application deployed on the web (embedded in the web page or as link to launch application from the webpage). JavaFX packaging tools, such as ant tasks and javafxpackager utility, simplify the creation of deployment packages even further. Why add new deployment options? JavaFX applications have implicit dependency on the availability of Java and JavaFX runtimes, and while existing deployment methods provide a means to validate the system requirements are met -- and even guide user to perform required installation/upgrades -- they do not fully address all of the important scenarios. In particular, here are few examples: the user may not have admin permissions to install new system software if the application was certified to run in the specific environment (fixed version of Java and JavaFX) then it may be hard to ensure user has this environment due to an autoupdate of the system version of Java/JavaFX (to ensure they are secure). Potentially, other apps may have a requirement for a different JRE or FX version that your app is incompatible with. your distribution channel may disallow dependencies on external frameworks (e.g. Mac AppStore) What is a "native package" for JavaFX application? In short it is  A Wrapper for your JavaFX application that makes is into a platform-specific application bundle Each Bundle is self-contained and includes your application code and resources (same set as need to launch standalone application from jar) Java and JavaFX runtimes (private copies to be used by this application only) native application launcher  metadata (icons, etc.) No separate installation is needed for Java and JavaFX runtimes Can be distributed as .zip or packaged as platform-specific installer No application changes, the same jar app binaries can be deployed as a native bundle, double-clickable jar, applet, or web start app What is good about it: Easy deployment of your application on fresh systems, without admin permissions when using .zip or a user-level installer No-hassle compatibility.  Your application is using a private copy of Java and JavaFX. The developer (you!) controls when these are updated. Easily package your application for Mac AppStore (or Windows, or...) Process name of running application is named after your application (and not just java.exe)  Easily deploy your application using enterprise deployment tools (e.g. deploy as MSI) Support is built in into JDK 7u6 (that includes JavaFX 2.2) Is it a silver bullet for the deployment that other deployment options will be deprecated? No.  There are no plans to deprecate other deployment options supported by JavaFX, each approach addresses different needs. 
Deciding whether native packaging is the best way to deploy your application depends on your requirements. A few caveats to consider: "Download and run" user experience: Unlike web deployment, the user experience is not about "launch app from web". It is more of a "download, install and run" process, and the user may need to go through additional steps to get the application launched - e.g. accepting a browser security dialog or finding and launching the application installer from the "downloads" folder. Larger download size: In general, the size of a bundled application will be noticeably higher than the size of an unbundled app, as a private copy of the JRE and JavaFX are included.  We're working to reduce the size through compression and customizable "trimming", but it will always be substantially larger than an app that depends on a "system JRE". Bundle per target platform: Bundle formats are platform specific. Currently a native bundle can only be produced for the same system you are building on.  That is, if you want to deliver native app bundles on Windows, Linux and Mac you will have to build your project on all three platforms. Application updates are the responsibility of the developer: Web-deployed Java applications automatically download application updates from the web as soon as they are available. The Java Autoupdate mechanism takes care of updating the Java and JavaFX runtimes to the latest secure version several times every year. There is no built-in support for this for bundled applications. It is possible to use 3rd party libraries (like Sparkle on Mac) to add autoupdate support at the application level.  In a future version of JavaFX we may include built-in support for autoupdate (add yourself as a watcher for RT-22211 if you are interested in this). Getting started with native bundles: First, you need to get the latest JDK 7u6 beta build (build 14 or later is recommended). On Windows/Mac/Linux it comes with the JavaFX 2.2 SDK as part of the JDK installation and contains the JavaFX packaging tools, including: bin/javafxpackager: Command line utility to produce JavaFX packages. lib/ant-javafx.jar: Set of ant tasks to produce JavaFX packages (the most recommended way to deploy apps). For general information on how to use them refer to the Deploying JavaFX Application guide. Once you know how to use these tools to package your JavaFX application for other deployment methods, there are only a few minor tweaks necessary to produce native bundles: make sure java is used from the JDK 7u6 bundle you have installed, and adjust your PATH settings if needed; if you are using the ant tasks, add the nativeBundles="all" attribute to the fx:deploy task; if you are using javafxpackager, pass the "-native" option to the deploy command (or, if you are using the makeall command, it will try to build native packages by default); the resulting bundles will be in the "bundles" folder next to the other deployment artifacts. Note that building some types of native packages (e.g. .exe or .msi) may require additional free 3rd party software to be installed and available on PATH. 
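For example, a command-line run might look roughly like this. It is a sketch rather than something taken from the article - the jar name, directories and main class are invented - but the -deploy and -native options shown are the ones described above:

javafxpackager -deploy -native -outdir packages -outfile MyApp -srcdir dist -srcfiles MyApp.jar -appclass com.example.MyApp -name "MyApp"

Run from the project directory, this should leave the platform-specific bundles in a "bundles" folder under the output directory, next to the usual deployment artifacts.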
As of JDK 7u6 build 14 you could build following types of packages: Windows bundle image EXE Inno Setup 5 or later is required Result exe will perform user level installation (no admin permissions are required) At least one shortcut will be created (menu or desktop) Application will be launched at the end of install MSI WiX 3.0 or later is required Result MSI will perform user level installation (no admin permissions are required) At least one shortcut will be created (menu or desktop)  MacOS bundle image dmg (drag and drop) installer Linux bundle image rpm rpmbuild is required shortcut will be added to the programs menu If you are using Netbeans for producing the deployment packages then you will need to add custom build step to the build.xml to execute the fx:deploy task with native bundles enabled. Here is what we do for BrickBreaker sample: <target name="-post-jfx-deploy"> <fx:deploy width="${javafx.run.width}" height="${javafx.run.height}" nativeBundles="all" outdir="${basedir}/${dist.dir}" outfile="${application.title}"> <fx:application name="${application.title}" mainClass="${javafx.main.class}"> <fx:resources> <fx:fileset dir="${basedir}/${dist.dir}" includes="BrickBreaker.jar"/> </fx:resources> <info title="${application.title}" vendor="${application.vendor}"/> </fx:application> </fx:deploy> </target> This is pretty much regular use of fx:deploy task, the only special thing here is nativeBundles="all". Perhaps the easiest way to try building native bundles is to download the latest JavaFX samples bundle and build Ensemble, BrickBreaker or SwingInterop. Please give it a try and share your experience. We need your feedback! BTW, do not hesitate to file bugs and feature requests to JavaFX bug database! Wait! How can i ... This entry is not a comprehensive guide into native bundles, and we plan to post on this topic more. However, I am sure that once you play with native bundles you will have a lot of questions. We may not have all the answers, but please do not hesitate to ask! Knowing all of the questions is the first step to finding all of the answers.

    Read the article

  • CodePlex Daily Summary for Sunday, December 26, 2010

    CodePlex Daily Summary for Sunday, December 26, 2010Popular ReleasesNoSimplerAccounting: NoSimplerAccounting 6.0: -Fixed a bug in expense category report.Temporary Data Storage Folder: TDS Folder source code: This is latest version 0.2 Beta code. To know the password request at neutron.request@gmail.comNHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (Thanks to Angelo)NewLife XCode: XCode v6.5.2010.1223 ????(????v3.5??): XCode v6.5.2010.1223 ????,??: NewLife.Core ??? NewLife.Net ??? XControl ??? XTemplate ????,??C#?????? XAgent ???? NewLife.CommonEnitty ??????(???,XCode??????) XCode?? ?????????,??????????????????,?????95% XCode v3.5.2009.0714 ??,?v3.5?v6.0???????????????,?????????。v3.5???????????,??????????????。 XCoder ??XTemplate?????????,????????XCode??? XCoder_Src ???????(????XTemplate????),??????????????????Windows Workflow Foundation on Codeplex: Microsoft.Activities.UnitTesting 1.71: New in this release Episode as Tasks using the Task Parallel Library CHM Help file now included in release. StyleCop compliance is resulting in massive refatoring but should not cause breaking changes Breaking Change DefaultTimeout is now a TimeSpan Removed default parameters using int timeout = DefaultTimeout and added overloads insteadMiniTwitter: 1.64: MiniTwitter 1.64 ???? ?? 1.63 ??? URL ??????????????Ajax ASP.Net Forum: InSeCla Forum Software v0.1.9: *VERSION: 0.1.9* HAPPY CHRISTMAS FEATURES ADDED Added features customizabled per category level (Customize at ADMIN/Categories Tab) Allow Anonymous Threads, Allow Anonymous Post Virtual URLs (friendly urls) has finally added And you can have some forum (category) using virtual urls and other using normal urls. Check !, as this improve the SEO indexing results Moderation Instant On: Delete Thread, Move Thread Available to users being members of moderators or administrators InstantO...VivoSocial: VivoSocial 7.4.0: Please see changes: http://support.vivoware.com/project/ChangeLog.aspx?PROJID=48Umbraco CMS: Umbraco 4.6 Beta - codename JUNO: The Umbraco 4.6 beta (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and contains more than 89 bug fixes since the 4.5.2 release. Improved installer experience Updated Starter Kits (Simple, Blog, Personal, Business) Beautiful, free, customizable skins included Skinning engine and Skin customization (see Skinning Documentation Kit) Default dashboards on install with hide option Updated Login t...SSH.NET Library: 2010.12.23: This release includes some bug fixes and few new fetures. Fixes Allow to retrieve big directory structures ssh-dss algorithm is fixed Populate sftp file attributes New Features Support for passhrase when private key is used Support added for diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha256 and diffie-hellman-group-exchange-sha1 key exchange algorithms Allow to provide multiple key files for authentication Add support for "keyboard-interactive" authentication method...ASP.NET MVC SiteMap provider: MvcSiteMapProvider 2.3.0: Using NuGet?MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation!Donate via PayPal via PayPal. Release notesThis will be the last release targeting ASP.NET MVC 2 and .NET 3.5. 
MvcSiteMapProvider 3.0.0 will be targeting ASP.NET MVC 3 and .NET 4 Web.config setting skipAssemblyScanOn has been deprecated in favor of excludeAssembliesForScan and includeAssembliesForScan ISiteMapNodeUrlResolver is now completely responsible for generating th...SuperSocket, an extensible socket application framework: SuperSocket 1.3 beta 2: Compared with SuperSocket 1.3 beta 1, the changes listed below have been done in SuperSocket 1.3 beta 2: added supports for .NET 3.5 replaced Logging Application Block of EntLib with Log4Net improved the code about logging fixed a bug in QuickStart sample project added IPv6 supportTibiaPinger: TibiaPinger v1.0: TibiaPinger v1.0Media Companion: Media Companion 3.400: Extract the entire archive to a folder which has user access rights, eg desktop, documents etc. A manual is included to get you startedMulticore Task Framework: MTF 1.0.1: Release 1.0.1 of Multicore Task Framework.SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 7: 1. added script save/load in user query window 2. fixed problem with connection dialog when choosing windows auth but still ask for user name 3. auto open user table when double click one table node 4. improved alert message, added log only methodEnhSim: EnhSim 2.2.6 ALPHA: 2.2.6 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Fixing up some r...ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4.3: Helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: Improvements for confirm, popup, popup form RenderView controller extension the user experience for crud in live demo has been substantially improved + added search all the features are shown in the live demoGanttPlanner: GanttPlanner V1.0: GanttPlanner V1.0 include GanttPlanner.dll and also a Demo application.N2 CMS: 2.1 release candidate 3: * Web platform installer support available N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) A bunch of bugs were fixed File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the templ...New ProjectsA.I. Semantic Net Tests: A test using "semantic net" artificial intelligence developed in C++.Awesome.ClientID: Awesome.ClientID is a HTTPModule which serializes the ClientID of server controls, and page/usercontrol properties to a Json to make working with JavaScript easier without the need for outputting ClientID's. It's a solution for clean ClientIDs for .NET 2.0/3.5BitTweet: BitTweet is a fast, simple and easy to use Twitter client with tweeting, reading and writing functions included. It is know for being the most secure and fast Twitter client out there! It is written in Visual Basic .NET 2008. 
~Tweet in a bit!CarRental001: carCrudo: CRUDO - The MCG (Model-Controller-Generator) CGF (Code Generation Framework) Visit The Project HomePage: http://adityayadav.com/CRUDO_The_MCG_Model_Controller_Generator_CGF_Code_Generation_Framework.aspx Licenses: 1) GPL v2 2) Commercial (contact us for information)Effeks: FX trading sample application sing prismEnhanced WebPart: Enhanced Webpart is a webpart with enhanced properties editor. It allows you display properties of most common types, and provides easy way to add custom controls to display your own types in webpart properties editor. Enhanced Webpart fully supports localization.Esteffin's graphic: Esercitazioni grafica 2010farasun: Source code Repository for farasun.wordpress.comGCTF: Desafio .NET Realizado na FACISAImgshow Plugin for Windows Live Writer: By using Imgshow Query you are able to publish multimedia content such as Youtube, Facebook Video, PDF, and other Imgshow supported service.Instant Messenger Using WPF and WCF: instant messenger using wcf and wpfjMenu: Create a dotnetnuke module that cover all kind of jquery menu pluginsMcopy API: Copy all files and directores in windows shell. Support long path (less then 32000 chars) and network path (eg. \\server\share or \\127.0.0.1\share)MOSSWebServices: This project contains code for interacting with the 21 web services provided by MOSS. This is targeted towards WSS developers This currently contains code to interact with "UserGroups" web service. I will keep adding code to interact with other MOSS web services soon. Nest Hub - Twitter Client: Nest Hub is a free Twitter client for PC. It's developed in VB.NET.Neural Networks Library: Neural networks Library by SefnajParallelism: Parallelism- Unified Parallel & Concurrency Framework Visit the Project HomePage: http://adityayadav.com/Parallelism.aspx Licenses: 1) GPL v2 2) Commercial (contact us for information)PDF IRM Protector For SharePoint 2007 And 2010: The PDF IRM Protector controls the conversion of PDF documents to their encrypted, rights-managed format and the decryption of PDF documents from their rights-managed format back to their original format. The project targets both SharePoint 2007 and 2010. PHP Form Validator - Isis: Project Isis simplifies the process of form validation for PHP. Offering a robust flag and function system, allowing you to encode the forms with the correct validation data and be done.Recycle Me: Open Source Project is to help and educate people in regarding reuse and recycling of daily life items. This is a Social project so we need funds for research and surveys. Volunteers are most welcome to contribute and certifications for their participation are available. ThanksSencha ExtJS Import Library for Script#: Script# import library for Sencha Ext JS Javascript Framework Separating Vertices: Given a graph as a collection of edges and vertices, it identifies the separating vertices, biconnected components, and edges. SIG - SISTEMA DE GESTÃO INTERNA: Projeto feito para a empresa MODEMTELECOM, consultoria em telecomunicoções.Smart Card COM: Smart Card COM Library, useful to access smart cards from a Web page in Internet ExplorerSocial Hub: Social HubStock Research: Test project for stocke researchtaokeba: ????????????????????,???????????。Temporary Data Storage Folder: This software creates a folder that stores temporary data that you want to store for temporary tasks. Temporary tasks such as storing a file that is to be deleted after some time. It automatically deleted those files on exit. 
Learn more at www.neutronsoftwares.weebly.comTukUI Updater: A OpenSource Windows Updater tool for the World of Warcraft(R) addon TukUI Developed in .NET 3.5 Discusion thread are found at the official TukUI forums: http://www.tukui.org/v2/forums/topic.php?id=4982 WebUtil: This is a project for using all of features by the web.

    Read the article

  • Routing to a Controller with no View in Angular

    - by Rick Strahl
I've finally had some time to put Angular to use this week in a small project I'm working on for fun. Angular's routing is great and makes it real easy to map URL routes to controllers and model data into views. But what if you don't actually need a view, if you effectively need a headless controller that just runs code, but doesn't render a view?
Preserve the View
When Angular navigates a route and presents a new view, it loads the controller and then renders the view from scratch. Views are not cached or stored, but displayed and then removed. So if you have routes configured like this:
'use strict';
// Declare app level module which depends on filters, and services
window.myApp = angular.module('myApp',
    ['myApp.filters', 'myApp.services', 'myApp.directives', 'myApp.controllers']).
    config(['$routeProvider', function($routeProvider) {
        $routeProvider.when('/map', {
            templateUrl: "partials/map.html",
            controller: 'mapController',
            reloadOnSearch: false,
            animation: 'slide'
        });
        …
        $routeProvider.otherwise({redirectTo: '/map'});
    }]);
Angular routes to the mapController and then re-renders the map.html template with the new data from the $scope filled in.
But, but… I don't want a new View!
Now in most cases this works just fine. If I'm rendering plain DOM content, or textboxes in a form interface, that is all fine and dandy - it's perfectly fine to completely re-render the UI. But in some cases, the UI that's being managed has state and shouldn't be redrawn. In this case the main page in question has a Google Map on it. The map is going to be manipulated throughout the lifetime of the application and the rest of the pages. In my application I have a toolbar on the bottom and the rest of the content is replaced/switched out by the Angular Views: The problem is that the map shouldn't be redrawn each time the Location view is activated. It should maintain its state, such as the current position selected (which can move), and shouldn't redraw due to the overhead of re-rendering the initial map. Originally I set up the map, exactly like all my other views - as a partial, that is rendered with a separate file, but that didn't work.
The Workaround - Controller Only Routes
The workaround for this goes decidedly against Angular's way of doing things: setting up a template-less route, in-lining the map view directly into the main page, and hiding and showing the map view manually. Let's see how this works.
Controller Only Route
The template-less route is basically a route that doesn't have any template to render. This is not directly supported by Angular, but thankfully easy to fake. The end goal here is that I want to simply have the Controller fire and then have the controller manage the display of the already active view by hiding and showing the map and any other view content, in effect bypassing Angular's view display management. In short - I want a controller action, but no view rendering. The controller-only or template-less route looks like this:
$routeProvider.when('/map', {
    template: " ", // just fire controller
    controller: 'mapController',
    animation: 'slide'
});
Notice I'm using the template property rather than templateUrl (used in the first example above), which allows specifying a string template, and leaving it blank. The template property basically allows you to provide a templated string using Angular's Handlebars-like binding syntax, which can be useful at times. You can use plain strings or strings with template code in the template, or as I'm doing here a blank string to essentially fake 'just clear the view'. 
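One detail worth noting (my reading of the trick, not something the article spells out): the template is a single space rather than an empty string. ngView only instantiates a route's controller when the route resolves to some template content, and an empty string is falsy, so the one-character template appears to be what makes the controller fire while still rendering nothing visible:

// with template: "" (empty, falsy) nothing renders and the controller never runs;
// the single space is what makes the 'controller only' route fire:
$routeProvider.when('/map', {
    template: " ",
    controller: 'mapController'
});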
In-lined ViewSo if there's no view where does the HTML go? Because I don't want Angular to manage the view the map markup is in-lined directly into the page. So instead of rendering the map into the Angular view container, the content is simply set up as inline HTML to display as a sibling to the view container.<div id="MapContent" data-icon="LocationIcon" ng-controller="mapController" style="display:none"> <div class="headerbar"> <div class="right-header" style="float:right"> <a id="btnShowSaveLocationDialog" class="iconbutton btn btn-sm" href="#/saveLocation" style="margin-right: 2px;"> <i class="icon-ok icon-2x" style="color: lightgreen; "></i> Save Location </a> </div> <div class="left-header">GeoCrumbs</div> </div> <div class="clearfix"></div> <div id="Message"> <i id="MessageIcon"></i> <span id="MessageText"></span> </div> <div id="Map" class="content-area"> </div> </div> <div id="ViewPlaceholder" ng-view></div>Note that there's the #MapContent element and the #ViewPlaceHolder. The #MapContent is my static map view that is always 'live' and is initially hidden. It is initially hidden and doesn't get made visible until the MapController controller activates it which does the initial rendering of the map. After that the element is persisted with the map data already loaded and any future access only updates the map with new locations/pins etc.Note that default route is assigned to the mapController, which means that the mapController is fired right as the page loads, which is actually a good thing in this case, as the map is the cornerstone of this app that is manipulated by some of the other controllers/views.The Controller handles some UISince there's effectively no view activation with the template-less route, the controller unfortunately has to take over some UI interaction directly. Specifically it has to swap the hidden state between the map and any of the other views.Here's what the controller looks like:myApp.controller('mapController', ["$scope", "$routeParams", "locationData", function($scope, $routeParams, locationData) { $scope.locationData = locationData.location; $scope.locationHistory = locationData.locationHistory; if ($routeParams.mode == "currentLocation") { bc.getCurrentLocation(false); } bc.showMap(false,"#LocationIcon"); }]);bc.showMap is responsible for a couple of display tasks that hide/show the views/map and for activating/deactivating icons. The code looks like this:this.showMap = function (hide,selActiveIcon) { if (!hide) $("#MapContent").show(); else { $("#MapContent").hide(); } self.fitContent(); if (selActiveIcon) { $(".iconbutton").removeClass("active"); $(selActiveIcon).addClass("active"); } };Each of the other controllers in the app also call this function when they are activated to basically hide the map and make the View Content area visible. The map controller makes the map.This is UI code and calling this sort of thing from controllers is generally not recommended, but I couldn't figure out a way using directives to make this work any more easily than this. It'd be easy to hide and show the map and view container using a flag an ng-show, but it gets tricky because of scoping of the $scope. I would have to resort to storing this setting on the $rootscope which I try to avoid. The same issues exists with the icons.It sure would be nice if Angular had a way to explicitly specify that a View shouldn't be destroyed when another view is activated, so currently this workaround is required. 
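To make that concrete, here is a hypothetical sketch of what one of those sibling controllers might look like - the controller name and icon selector are invented for illustration; only bc.showMap() and the locationData service come from the article:

myApp.controller('historyController', ["$scope", "locationData", function($scope, locationData) {
    // expose the shared location history to this view's template
    $scope.locationHistory = locationData.locationHistory;

    // hide the always-present map, show the routed view content,
    // and highlight this view's toolbar icon
    bc.showMap(true, "#HistoryIcon");
}]);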
Searching around, I saw a number of whacky hacks to get around this, but this solution I'm using here seems much easier than any of that I could dig up, even if it doesn't quite fit the 'Angular way'.
Angular nice, until it's not
Overall I really like Angular and the way it works, although it took me a bit of time to get my head around how all the pieces fit together. Once I got the idea of how the app/routes, the controllers and views snap together, putting together Angular pages becomes fairly straightforward. You can get quite a bit done never going beyond those basics. For most common things Angular's default routing and view presentation works very well. But, when you do something a bit more complex, where there are multiple dependencies, or as in this case where Angular doesn't appear to support a feature that's absolutely necessary, you're on your own. Finding information on more advanced topics is not trivial, especially since versions are changing so rapidly and the low level behaviors are changing frequently, so finding something that works is often an exercise in trial and error. Not that this is surprising. Angular is a complex piece of kit, as are all the frameworks that try to hack JavaScript into submission to do something that it was really never designed to. After all, everything about a framework like Angular is an elaborate hack. A lot of shit has to happen to make this all work together, and at that Angular (and Ember, Durandal etc.) are pretty amazing pieces of JavaScript code. So no harm, no foul, but I just can't help feeling like I'm working in a toy sandbox at times :-)
© Rick Strahl, West Wind Technologies, 2005-2013. Posted in Angular, JavaScript

    Read the article

  • CodePlex Daily Summary for Wednesday, October 03, 2012

    CodePlex Daily Summary for Wednesday, October 03, 2012Popular ReleasesSharePoint Column & View Permission: SharePoint Column and View Permission v1.5: Version 1.5 of this project. If you will find any bugs please let me know at enti@zoznam.sk or post your findings in Issue TrackerZ3: Z3 4.1.1 source code: Snapshot corresponding to version 4.1.1.DirectX Tool Kit: October 2012: October 2, 2012 Added ScreenGrab module Added CreateGeoSphere for drawing a geodesic sphere Put DDSTextureLoader and WICTextureLoader into the DirectX C++ namespace Renamed project files for better naming consistency Updated WICTextureLoader for Windows 8 96bpp floating-point formats Win32 desktop projects updated to use Windows Vista (0x0600) rather than Windows 7 (0x0601) APIs Tweaked SpriteBatch.cpp to workaround ARM NEON compiler codegen bugHome Access Plus+: v8.1: HAP+ Web v8.1.1003.000079318 Fixed: Issue with the Help Desk and updating a ticket as an admin 79319 Fixed: formatting issue with the booking system admin header 79321 Moved to using the arrow with a circle symbol on the homepage instead of the > and < 79541 Added: 480px wide mobile theme to login page 79541 Added: 480px wide mobile theme to home page 79541 Added: slide events for homepage 79553 Fixed: Booking System Multiple Lesson Bug 79553 Fixed: IE Error Message 79684 Fixed: jQuery issue ...System.Net.FtpClient: System.Net.FtpClient 2012.10.02: This is the first release of the new code base. It is not compatible with the old API, I repeat it is not a drop in update for projects currently using System.Net.FtpClient. New users should download this release. The old code base (Branch: System.Net.FtpClient_1) will continue to be supported while the new code matures. This release is a complete re-write of System.Net.FtpClient. The API and code are simpler than ever before. There are some new features included as well as an attempt at be...CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1002.3): Visual Ribbon Editor 1.3.1002.3 What's New: Multi-language support for Labels/Tooltips for custom buttons and groups Support for base language other than English (1033) Connect dialog will not require organization name for ADFS / IFD connections Automatic creation of missing labels for all provisioned languages Minor connection issues fixed Notes: Before saving the ribbon to CRM server, editor will check Ribbon XML for any missing <Title> elements inside existing <LocLabel> elements...YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib 2.10: See change-log for the list of new features added and bugs fixedRenameApp: RenameApp 1.0: First release of RenameAppJsonToStaticTypeGenerator: JsonToStaticTypeGenerator 0.1: This is the first alpha release of JsonToStaticTypeGenerator.XiaoKyun: XiaoKyun V1.00: https://xiaokyun.codeplex.com/CatchThatException: Release 1.12: Wow a very fast change and a much better and faster writing to the text fileNaked Objects: Naked Objects Release 5.0.0: Corresponds to the packaged version 5.0.0 available via NuGet. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the Official NuGet Package Source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. If you do wish to re-build the framework, consul the file HowToBuild.txt in the release. 
Major enhancementsNaked Objects 5.0 is desi...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...D3 Loot Tracker: 1.4.1: This version will automatically save a recording session on application exit if the user didn't stop the current session.SubExtractor: Release 1029: Feature: Added option to make i and ¡ characters movie-specific for improved OCR on Spanish subs (Special Characters tab in Options) Feature: Allow switch to Word Spacing dialog directly from Spell Check dialog Fix: Added more default word spacings for accented characters Fix: Changed Word Spacing dialog to show all OCR'd characters in current sub Fix: Removed application focus grab during OCR Fix: Tightened HD subs fuzzy logic to reduce false matches in small characters Fix: Improved Arrow k...MCEBuddy 2.x: MCEBuddy 2.2.18: Reccomended download Changelog for 2.2.18 (32bit and 64bit) 1. Added support for checking if Showanalyzer has hung and cancelling it 2. New version of comskip, 0.81.48 3. Speeding up comskip 4. Fixed a build bug in 64bit 2.2.17 5. Added a new comkip.ini, better commercial detection for international channels and less aggressive. Old one has been retained as comskip_old.ini 6. Added support for Audio Offset on Conversion Task page in GUI (this overrides the profiles AudioDelay when specified)Readable Passphrase Generator: KeePass Plugin 0.7.1: See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. Changes Built against KeePass 2.20Windows 8 Toolkit - Charts and More: Beta 1.0: The First Compiled Version of my LibraryPDF.NET: PDF.NET.Ver4.5-OpenSourceCode: PDF.NET Ver4.5 ????,????Web??????。 PDF.NET Ver4.5 Open Source Code,include a sample Web application project.Visual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1 Contains the following improvements: Better support for detecting the installed languages The extract & inject commands won’t run if Visual Studio is running You may now run in extract or inject mode The p/invoke code was cleaned up based on Code Analysis recommendations When a p/invoke method fails the Win32 error message is now displayed Error messages use red text Status messages use green textNew Projects.Net Exception Reporter: A reusable and extensible exception reporter for Microsoft .NET projects.Aesha Broker: A rich client Auction House Broker application. Built upon Blizzard's new REST API. Provides a client experience which caches historical auction data to provideASP.NET Friendly URLs: A library that enables automatic resolving of extensionless URLs to ASP.NET file-based handlers, e.g. ASPX pages.Astro Power CMS: Astro Power CMS build on GraffitiCMS, a product of Telligent. GraffitiCMS stop develop, I create this project with name is Astro Power CMSaTester: Here is a good place. And now, I can upload my soruce to it. It's very good.Automacao Residencial: O Netduino é uma plataforma onde voce utiliza a linguagem C# para controlar hardware. 
O objetivo é criar uma estrutura de comunicaçao com o netduino.Derbster: Explore and learn about modern C# architecture and programming by implementing software to support the modern game of roller derby. Dot FPE - A free Format-preserving encryption implementation for .net: There aren't any widely available implementations of a format-preserving encryption in .NET. Thus we aim to be the first!DotNetEx: .NET Framework extended functionality for data access, working with Tasks and asynchronous programming, encryption algorithms such as SkipJack and other stuff.Elemental Development Toolchain (.NET version): A complete toolchain built around the Æthere langauge.elFinder ASP.NET Connector: The one and only .NET connector for the amazing elFinder 2.X web-based file manager. Finally you can manage your files easily right from your browser!Geosynkronisering: Prosjekt for utarbeidelse av spesifikasjoner for grensesnitt som muliggjør synkronisering av datalager med geografisk datainnhold på tvers av ulike plattformerGIII_P1: Jesli wszyscy w Ciebie zwatpili pokaz ze sie mylili !IntroduceCompany: Website gi?i thi?u doanh nghi?p - công ty.JsonToStaticTypeGenerator: This is the JsonToStaticTypeGenerator project that gives the possibility to generate c# classes out of Json data.kwerty: Coming soonMachine Learning: My machine learning project. Just to figure out things...MicroManager: MaNGOS Web-based ManagerMvcContrib3: This is the version of mvccontrib which works with ASP.Net MVC 3Oracle Destination via ODP.Net (Custom Destination Component): SSIS 2008 R2 solution (custom destination component) to write to oracle via ODP.NetOrchard Commerce History with PayPal: Project expands on Nwazet.Commerce module (and is required for this module to work). Adds a purchase history, product role associations, and PayPal.Phoenix Trans: Web Phoenix Trans v?n t?i hàng hóa trong và ngoài nu?cPowerState: PowerState is .NET application for sending Wake-On-LAN (WOL) requests to computers. It can also shutdown, log off and reboot computers using the WMI.RenameApp: RenameApp is a free and very simple to use renaming software for Windows. RenameApp allows you to easily rename files based on the specified criteria and order.Rose-Hulman User Experience Design: This project will contain labs intended for use in Rose-Hulman's Computer Science and Software Engineering department.Server d? phòng: Ðây là server d? phòng, SharePoint BCS External Connector Caching Pattern Library: Library for enabling caching on SharePoint BCS external connectors. Enables BCS .Net Assemblies to be written that are scalable and performant for search.SharpDX.WPF: This projects provides a DirectX 9, DirectX 10 and DirectX 11 support for WPF. 
The assembly contains DXElement - an easy to use WPF-FrameworkElement.Simple Password Generator Library: The password generator library, written in C#, is a simple assembly which allow generation of passwords with length anywhere from 1-99.SisEagle.NET: Esse sistema foi desenvolvido pra fins de apresentação do TCC referente ao ano de 2012 na UDF-BrasiliaSWebshop: SWebshop is a PHP based webshop system which allows you to insert, edit and delete data easily and is easy to use for customers.Tabular Database Powershell Cmdlets: This project provides a sample of PowerShell Cmdlets to manage Tabular models, from Analysis Services.University timetable using java: the project is using java language to create timetable (full timetable with exam tables and labs tables) and it will be free for all users with sql databaseURLShoter: This project for shorting URL for ASP.NETWeb Input Form Control: This control allow developer to create the input form by configuring the control in html modeWeibo: rtWorkoutMemo: Project descritpion(first draft): Memorise your workout. Keep archive records of your daily trening such: - series of excercise, - quantity of each serie, - weWPF - Automate Acrobat Security Policy: This WPF Tool was created to quickly password protect batches of PDF documents, using a random generator to generate the passwords.XiaoKyun: Hello Page for Web.Z3: Z3 is a high-performance theorem prover being developed at Microsoft Research.

    Read the article

  • CodePlex Daily Summary for Saturday, October 13, 2012

    CodePlex Daily Summary for Saturday, October 13, 2012Popular ReleasesArduino Installer For Atmel Studio 6: Arduino Installer - Version 1.01 Beta2: Bug Fixes - Handling spaces in avrdude path - Handling spaces in avrdude config path - Handling spaces in project names - Handling spaces in project path - hard coded directories pointing to my space has been removed New Features - Total of 7 project templates included - C program - C library - C++ program - C++ library - Arduino Unit Tests - Arduino library - Arduino program (sketch) - Group all supporting scripts under a script directory in the solution - Support for calling multiple pre-...AcDown????? - AcDown Downloader Framework: AcDown????? v4.2: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...PHPExcel: PHPExcel 1.7.8: See Change Log for details of the new features and bugfixes included in this release, and methods that are now deprecated. Note changes to the PDF Writer: tcPDF is no longer bundled with PHPExcel, but should be installed separately if you wish to use that 3rd-Party library with PHPExcel. Alternatively, you can choose to use mPDF or DomPDF as PDF Rendering libraries instead: PHPExcel now provides a configurable wrapper allowing you a choice of PDF renderer. See the documentation, or the PDF s...DirectX Tool Kit: October 12, 2012: October 12, 2012 Added PrimitiveBatch for drawing user primitives Debug object names for all D3D resources (for PIX and debug layer leak reporting)Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.70: Fixed issue described in discussion #399087: variable references within case values weren't getting resolved.GoogleMap Control: GoogleMap Control 6.1: Some important bug fixes and couple of new features were added. There are no major changes to the sample website. Source code could be downloaded from the Source Code section selecting branch release-6.1. Thus just builds of GoogleMap Control are issued here in this release. NuGet Package GoogleMap Control 6.1 NuGet Package FeaturesBounds property to provide ability to create a map by center and bounds as well; Setting in markup <artem:GoogleMap ID="GoogleMap1" runat="server" MapType="HY...mojoPortal: 2.3.9.3: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2393-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you to use .NET 4, we will probably drop support for .NET 3.5 once .NET 4.5 is available The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...OstrivDB: OstrivDB 0.1: - Storage configuration: objects serialization (Xml, Json), storage file compressing, data block size. - Caching for Select queries. - Indexing. - Batch of queries. - No special query language (LINQ used). - Integrated sorting and paging. 
- Multithreaded data processing.D3 Loot Tracker: 1.5.4: Fixed a bug where the server ip was not logged properly in the stats file.Captcha MVC: Captcha Mvc 2.1.2: v 2.1.2: Fixed problem with serialization. Made all classes from a namespace Jetbrains.Annotaions as the internal. Added autocomplete attribute and autocorrect attribute for captcha input element. Minor changes. v 2.1.1: Fixed problem with serialization. Minor changes. v 2.1: Added support for storing captcha in the session or cookie. See the updated example. Updated example. Minor changes. v 2.0.1: Added support for a partial captcha. Now you can easily customize the layout, s...DotNetNuke® Community Edition CMS: 06.02.04: Major Highlights Fixed issue where the module printing function was only visible to administrators Fixed issue where pane level skinning was being assigned to a default container for any content pane Fixed issue when using password aging and FB / Google authentication Fixed issue that was causing the DateEditControl to not load the assigned value Fixed issue that stopped additional profile properties to be displayed in the member directory after modifying the template Fixed er...Advanced DataGridView with Excel-like auto filter: 1.0.0.0: ?????? ??????WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes NOTE:...VidCoder: 1.4.4 Beta: Fixed inability to create new presets with "Save As".MCEBuddy 2.x: MCEBuddy 2.3.2: Changelog for 2.3.2 (32bit and 64bit) 1. Added support for generating XBMC XML NFO files for files in the conversion queue (store it along with the source video with source video name.nfo). Right click on the file in queue and select generate XML 2. UI bugifx, start and end trim box locations interchanged 3. Added support for removing commercials from non DVRMS/WTV files (MP4, AVI etc) 4. Now checking for Firewall port status before enabling (might help with some firewall problems) 5. User In...Sandcastle Help File Builder: SHFB v1.9.5.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release supports the Sandcastle October 2012 Release (v2.7.1.0). It includes full support for generating, installing, and removing MS Help Viewer files. This new release suppor...The GLMET Project: Shutdown Manager: Shutdown, Log off and Restart Timer Set time for shutdown, log off and restartClosedXML - The easy way to OpenXML: ClosedXML 0.68.0: ClosedXML now resolves formulas! Yes it finally happened. If you call cell.Value and it has a formula the library will try to evaluate the formula and give you the result. 
For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...Json.NET: Json.NET 4.5 Release 10: New feature - Added Portable build to NuGet package New feature - Added GetValue and TryGetValue with StringComparison to JObject Change - Improved duplicate object reference id error message Fix - Fixed error when comparing empty JObjects Fix - Fixed SecAnnotate warnings Fix - Fixed error when comparing DateTime JValue with a DateTimeOffset JValue Fix - Fixed serializer sometimes not using DateParseHandling setting Fix - Fixed error in JsonWriter.WriteToken when writing a DateT...Readable Passphrase Generator: KeePass Plugin 0.7.2: Changes: Tested against KeePass 2.20.1 Tested under Ubuntu 12.10 (and KeePass 2.20) Added GenerateAsUtf8 method returning the encrypted passphrase as a UTF8 byte array.New ProjectsAaron.Core: It's a special core be used to help your project become standardization. It provides the standard platform, including core systems, data flows,...AFSSignarlRServer: Server ready tryAgileFramework: Agile Framework in order to build easily application based on WCF, NHibernate , WPF, and multithreadingala, A Programming Language: ala, A Programming LanguageBaseSRS (Basic Service Request System): BaseSRS is a basic "Service Request System" or "SRS" which can be adapted and used by anyone. ContainerVariations: ContainerVariations is a collection of similar unit tests projects, each applied to a different Inversion of Control container. If successful, it will provide a consistent and comprehensive set of examples for popular .NET IoC containers. It is developed in C#.COST Policies - ART Work: COST Policies: ART Work ? ??????? ?????? ?? ??????? ?? ??????????: -baceCP -guideCPCS Script Runner: Provide users an easy way of executing C# programs (scripts) that are compiled on the fly.DbUtility: DbUtility is a free utility to display databases info like, size, backup date, instance name, database name, last backup log date, with export to excel feature.EchoLink Monitor: EchoLink Monitor is a management tool for EchoLink sysops maintaining remote EchoLink nodes.NextUI: We are on the way...Notas Alexandre: Apenas teste ainda...será modificado depois...Orchard CMS Amba.HtmlBlocks: Amba.HtmlBlocks module for Orchard CMS 1.5.1Portable Basemap Server: multiple map data source<--PBS-->multiple map apiRonYee: ??????????????。 ????(viewer)???????????????????,???????????????。 ????(customer)?????????????????,???????,?????????????????。 ??????(user)??????????????????。ShangWu: ????,???StoreIpAddress: Examine different means to store IP addressesSummon for Umbraco: Summon for Umbraco is a .NET solution for Summon API, provided by Serials Solutions.teasingg: this is for testing code plexWeb Scripting and Content Creation assignment: A stub project.WriteableBitmapEx for Windows Embedded: WriteableBitmapEx for Windows Embedded Compact 7 and Silverlight for Windows Embedded. Requires XAML In The Hand for managed code development.X.Web.Sitemap: X.Web.Sitemap is a part of X-Framework library. X.Web.Sitemap allows quickly and easily generate a Google-compatible filesZeropaste: A pastebin with minimal features.??: ?????EPUB????。

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within a SSRS report would be simple, but unfortunately I was wrong. Customer Transaction Summary Example Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters: Start Date – beginning of the date range for which the report will summarize customer transactions End Date – end of the date range for which the report will summarize customer transactions Customer Ids – One or more customer Ids representing the customers that will be included in the report The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008): When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically. As part of this process the following dialog pops-up: Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting this error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind-the-scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report  to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted. The Text Query Approach Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure. 
In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work. Manually Define Parameters First, we’ll need to manually define the parameters for report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll need to define the ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a mutli-valued Integer parameter: In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area: 1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts + 2: ' EXEC rpt_CustomerTransactionSummary 3: @startDate=''' + @startDate + ''', 4: @endDate='''+ @endDate + ''', 5: @customerIds=@customerIdList')   By using the ‘Text’ query type we can enter any arbitrary SQL that we we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters: @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won’t actually ever get passed into the stored procedure. I’ll go into how this will work in a bit. @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3. @endDate – This is another simple data parameter that will get passed through into the @endDate parameter of the stored procedure on line 4. At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return without needing to specify any values for query when prompted to. Once the dataset has been correctly defined we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog: Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET. 
Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want): 1: Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String 2: Dim insertStatements As New System.Text.StringBuilder() 3: For Each paramValue As Object In paramValues 4: insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue)) 5: Next 6: Return insertStatements.ToString() 7: End Function   This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run-time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to: 1: =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)   In order to invoke our custom code method we simply need to invoke “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report. Limitations And Final Thoughts When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using mutli-valued parameters in a stored procedure.
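For contrast with the text-query hack above, here is a minimal sketch of the "special steps" for calling the same stored procedure with a table-valued parameter from plain ADO.NET, outside of SSRS. None of this code comes from the report itself: the connection string is a placeholder, the dbo schema prefix and the single "Value" column on IntegerListTableType are assumptions (TVP columns bind by position, so the DataTable column name does not have to match the type's), and only the procedure and parameter names are taken from the article.

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    public static class CustomerTransactionSummaryRunner
    {
        public static DataTable GetSummary(string connectionString, DateTime startDate,
                                           DateTime endDate, IEnumerable<int> customerIds)
        {
            // Build a DataTable shaped like the IntegerListTableType user-defined type.
            var idTable = new DataTable();
            idTable.Columns.Add("Value", typeof(int)); // column name assumed; TVPs bind by ordinal
            foreach (var id in customerIds)
            {
                idTable.Rows.Add(id);
            }

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("rpt_CustomerTransactionSummary", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@startDate", startDate);
                command.Parameters.AddWithValue("@endDate", endDate);

                // The table-valued parameter must be marked Structured and given its type name.
                var tvp = command.Parameters.AddWithValue("@customerIds", idTable);
                tvp.SqlDbType = SqlDbType.Structured;
                tvp.TypeName = "dbo.IntegerListTableType";

                var results = new DataTable();
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    results.Load(reader);
                }
                return results;
            }
        }
    }

This is roughly the call the SSRS designer would need to generate on our behalf, which is presumably why its nvarchar-typed guess at the parameter falls over.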

    Read the article

  • VLOOKUP in Excel, part 2: Using VLOOKUP without a database

    - by Mark Virtue
    In a recent article, we introduced the Excel function called VLOOKUP and explained how it could be used to retrieve information from a database into a cell in a local worksheet.  In that article we mentioned that there were two uses for VLOOKUP, and only one of them dealt with querying databases.  In this article, the second and final in the VLOOKUP series, we examine this other, lesser known use for the VLOOKUP function. If you haven’t already done so, please read the first VLOOKUP article – this article will assume that many of the concepts explained in that article are already known to the reader. When working with databases, VLOOKUP is passed a “unique identifier” that serves to identify which data record we wish to find in the database (e.g. a product code or customer ID).  This unique identifier must exist in the database, otherwise VLOOKUP returns us an error.  In this article, we will examine a way of using VLOOKUP where the identifier doesn’t need to exist in the database at all.  It’s almost as if VLOOKUP can adopt a “near enough is good enough” approach to returning the data we’re looking for.  In certain circumstances, this is exactly what we need. We will illustrate this article with a real-world example – that of calculating the commissions that are generated on a set of sales figures.  We will start with a very simple scenario, and then progressively make it more complex, until the only rational solution to the problem is to use VLOOKUP.  The initial scenario in our fictitious company works like this:  If a salesperson creates more than $30,000 worth of sales in a given year, the commission they earn on those sales is 30%.  Otherwise their commission is only 20%.  So far this is a pretty simple worksheet: To use this worksheet, the salesperson enters their sales figures in cell B1, and the formula in cell B2 calculates the correct commission rate they are entitled to receive, which is used in cell B3 to calculate the total commission that the salesperson is owed (which is a simple multiplication of B1 and B2). The cell B2 contains the only interesting part of this worksheet – the formula for deciding which commission rate to use: the one below the threshold of $30,000, or the one above the threshold.  This formula makes use of the Excel function called IF.  For those readers that are not familiar with IF, it works like this: IF(condition,value if true,value if false) Where the condition is an expression that evaluates to either true or false.  In the example above, the condition is the expression B1<B5, which can be read as “Is B1 less than B5?”, or, put another way, “Are the total sales less than the threshold”.  If the answer to this question is “yes” (true), then we use the value if true parameter of the function, namely B6 in this case – the commission rate if the sales total was below the threshold.  If the answer to the question is “no” (false), then we use the value if false parameter of the function, namely B7 in this case – the commission rate if the sales total was above the threshold. As you can see, using a sales total of $20,000 gives us a commission rate of 20% in cell B2.  If we enter a value of $40,000, we get a different commission rate: So our spreadsheet is working. Let’s make it more complex.  Let’s introduce a second threshold:  If the salesperson earns more than $40,000, then their commission rate increases to 40%: Easy enough to understand in the real world, but in cell B2 our formula is getting more complex.  
If you look closely at the formula, you’ll see that the third parameter of the original IF function (the value if false) is now an entire IF function in its own right.  This is called a nested function (a function within a function).  It’s perfectly valid in Excel (it even works!), but it’s harder to read and understand. We’re not going to go into the nuts and bolts of how and why this works, nor will we examine the nuances of nested functions.  This is a tutorial on VLOOKUP, not on Excel in general. Anyway, it gets worse!  What about when we decide that if they earn more than $50,000 then they’re entitled to 50% commission, and if they earn more than $60,000 then they’re entitled to 60% commission? Now the formula in cell B2, while correct, has become virtually unreadable.  No-one should have to write formulae where the functions are nested four levels deep!  Surely there must be a simpler way? There certainly is.  VLOOKUP to the rescue! Let’s redesign the worksheet a bit.  We’ll keep all the same figures, but organize it in a new way, a more tabular way: Take a moment and verify for yourself that the new Rate Table works exactly the same as the series of thresholds above. Conceptually, what we’re about to do is use VLOOKUP to look up the salesperson’s sales total (from B1) in the rate table and return to us the corresponding commission rate.  Note that the salesperson may have indeed created sales that are not one of the five values in the rate table ($0, $30,000, $40,000, $50,000 or $60,000).  They may have created sales of $34,988.  It’s important to note that $34,988 does not appear in the rate table.  Let’s see if VLOOKUP can solve our problem anyway… We select cell B2 (the location we want to put our formula), and then insert the VLOOKUP function from the Formulas tab: The Function Arguments box for VLOOKUP appears.  We fill in the arguments (parameters) one by one, starting with the Lookup_value, which is, in this case, the sales total from cell B1.  We place the cursor in the Lookup_value field and then click once on cell B1: Next we need to specify to VLOOKUP what table to lookup this data in.  In this example, it’s the rate table, of course.  We place the cursor in the Table_array field, and then highlight the entire rate table – excluding the headings: Next we must specify which column in the table contains the information we want our formula to return to us.  In this case we want the commission rate, which is found in the second column in the table, so we therefore enter a 2 into the Col_index_num field: Finally we enter a value in the Range_lookup field. Important:  It is the use of this field that differentiates the two ways of using VLOOKUP.  To use VLOOKUP with a database, this final parameter, Range_lookup, must always be set to FALSE, but with this other use of VLOOKUP, we must either leave it blank or enter a value of TRUE.  When using VLOOKUP, it is vital that you make the correct choice for this final parameter. To be explicit, we will enter a value of true in the Range_lookup field.  It would also be fine to leave it blank, as this is the default value: We have completed all the parameters.  We now click the OK button, and Excel builds our VLOOKUP formula for us: If we experiment with a few different sales total amounts, we can satisfy ourselves that the formula is working. Conclusion In the “database” version of VLOOKUP, where the Range_lookup parameter is FALSE, the value passed in the first parameter (Lookup_value) must be present in the database.  
In other words, we’re looking for an exact match.  But in this other use of VLOOKUP, we are not necessarily looking for an exact match.  In this case, “near enough is good enough”.  But what do we mean by “near enough”?  Let’s use an example:  When searching for a commission rate on a sales total of $34,988, our VLOOKUP formula will return us a value of 30%, which is the correct answer.  Why did it choose the row in the table containing 30% ?  What, in fact, does “near enough” mean in this case?  Let’s be precise: When Range_lookup is set to TRUE (or omitted), VLOOKUP will look in column 1 and match the highest value that is not greater than the Lookup_value parameter. It’s also important to note that for this system to work, the table must be sorted in ascending order on column 1! If you would like to practice with VLOOKUP, the sample file illustrated in this article can be downloaded from here.
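For reference, the finished formula that the Function Arguments dialog builds in cell B2 would look something like the line below. The rate-table range (A5:B9 here) is an assumption for illustration only (point it at wherever your own rate table actually sits); the lookup value, the column index of 2 and the TRUE come straight from the steps above:

    =VLOOKUP(B1, A5:B9, 2, TRUE)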

    Read the article

  • Caching NHibernate Named Queries

    - by TStewartDev
    I recently started a new job and one of my first tasks was to implement a "popular products" design. The parameters were that it be done with NHibernate and be cached for 24 hours at a time because the query will be pretty taxing and the results do not need to be constantly up to date. This ended up being tougher than it sounds. The database schema meant a minimum of four joins with filtering and ordering criteria. I decided to use a stored procedure rather than letting NHibernate create the SQL for me. Here is a summary of what I learned (even if I didn't ultimately use all of it): You can't, at the time of this writing, use Fluent NHibernate to configure SQL named queries or imports You can return persistent entities from a stored procedure and there are a couple ways to do that You can populate POCOs using the results of a stored procedure, but it isn't quite as obvious You can reuse your named query result mapping other places (avoid duplication) Caching your query results is not at all obvious Testing to see if your cache is working is a pain NHibernate does a lot of things right. Having unified, up-to-date, comprehensive, and easy-to-find documentation is not one of them. By the way, if you're new to this, I'll use the terms "named query" and "stored procedure" (from NHibernate's perspective) fairly interchangeably. Technically, a named query can execute any SQL, not just a stored procedure, and a stored procedure doesn't have to be executed from a named query, but for reusability, it seems to me like the best practice. If you're here, chances are good you're looking for answers to a similar problem. You don't want to read about the path, you just want the result. So, here's how to get this thing going. The Stored Procedure NHibernate has some guidelines when using stored procedures. For Microsoft SQL Server, you have to return a result set. The scalar value that the stored procedure returns is ignored as are any result sets after the first. Other than that, it's nothing special. CREATE PROCEDURE GetPopularProducts @StartDate DATETIME, @MaxResults INT AS BEGIN SELECT [ProductId], [ProductName], [ImageUrl] FROM SomeTableWithJoinsEtc END The Result Class - PopularProduct You have two options to transport your query results to your view (or wherever is the final destination): you can populate an existing mapped entity class in your model, or you can create a new entity class. If you go with the existing model, the advantage is that the query will act as a loader and you'll get full proxied access to the domain model. However, this can be a disadvantage if you require access to the related entities that aren't loaded by your results. For example, my PopularProduct has image references. Unless I tie them into the query (thus making it even more complicated and expensive to run), they'll have to be loaded on access, requiring more trips to the database. Since we're trying to avoid trips to the database by using a second-level cache, we should use the second option, which is to create a separate entity for results. This approach is (I believe) in the spirit of the Command-Query Separation principle, and it allows us to flatten our data and optimize our report-generation process from data source to view. public class PopularProduct { public virtual int ProductId { get; set; } public virtual string ProductName { get; set; } public virtual string ImageUrl { get; set; } } The NHibernate Mappings (hbm) Next up, we need to let NHibernate know about the query and where the results will go. 
Below is the markup for the PopularProduct class. Notice that I'm using the <resultset> element and that it has a name attribute. The name allows us to drop this into our query map and any others, giving us reusability. Also notice the <import> element which lets NHibernate know about our entity class. <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <import class="PopularProduct, Infrastructure.NHibernate, Version=1.0.0.0"/> <resultset name="PopularProductResultSet"> <return-scalar column="ProductId" type="System.Int32"/> <return-scalar column="ProductName" type="System.String"/> <return-scalar column="ImageUrl" type="System.String"/> </resultset> </hibernate-mapping>  And now the PopularProductsMap: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <sql-query name="GetPopularProducts" resultset-ref="PopularProductResultSet" cacheable="true" cache-mode="normal"> <query-param name="StartDate" type="System.DateTime" /> <query-param name="MaxResults" type="System.Int32" /> exec GetPopularProducts @StartDate = :StartDate, @MaxResults = :MaxResults </sql-query> </hibernate-mapping>  The two most important things to notice here are the resultset-ref attribute, which links in our resultset mapping, and the cacheable attribute. The Query Class – PopularProductsQuery So far, this has been fairly obvious if you're familiar with NHibernate. This next part, maybe not so much. You can implement your query however you want to; for me, I wanted a self-encapsulated Query class, so here's what it looks like: public class PopularProductsQuery : IPopularProductsQuery { private static readonly IResultTransformer ResultTransformer; private readonly ISessionBuilder _sessionBuilder;   static PopularProductsQuery() { ResultTransformer = Transformers.AliasToBean<PopularProduct>(); }   public PopularProductsQuery(ISessionBuilder sessionBuilder) { _sessionBuilder = sessionBuilder; }   public IList<PopularProduct> GetPopularProducts(DateTime startDate, int maxResults) { var session = _sessionBuilder.GetSession(); var popularProducts = session .GetNamedQuery("GetPopularProducts") .SetCacheable(true) .SetCacheRegion("PopularProductsCacheRegion") .SetCacheMode(CacheMode.Normal) .SetReadOnly(true) .SetResultTransformer(ResultTransformer) .SetParameter("StartDate", startDate.Date) .SetParameter("MaxResults", maxResults) .List<PopularProduct>();   return popularProducts; } }  Okay, so let's look at each line of the query execution. The first, GetNamedQuery, matches up with our NHibernate mapping for the sql-query. Next, we set it as cacheable (this is probably redundant since our mapping also specified it, but it can't hurt, right?). Then we set the cache region which we'll get to in the next section. Set the cache mode (optional, I believe), and my cache is read-only, so I set that as well. The result transformer is very important. This tells NHibernate how to transform your query results into a non-persistent entity. You can see I've defined ResultTransformer in the static constructor using the AliasToBean transformer. The name is obviously leftover from Java/Hibernate. Finally, set your parameters and then call a result method which will execute the query. Because this is set to cached, you execute this statement every time you run the query and NHibernate will know based on your parameters whether to use its cached version or a fresh version. 
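Before moving on to the configuration, here is a small usage sketch to make the intent of the query class concrete. Only PopularProductsQuery, IPopularProductsQuery, PopularProduct and ISessionBuilder come from the article (and it is assumed the interface exposes the same GetPopularProducts signature as the class); the consuming class, the date range and the result count below are made up for illustration, and in practice the query would be resolved through whatever container wires up ISessionBuilder.

    using System;
    using System.Collections.Generic;

    public class PopularProductsWidget
    {
        private readonly IPopularProductsQuery _popularProductsQuery;

        public PopularProductsWidget(IPopularProductsQuery popularProductsQuery)
        {
            _popularProductsQuery = popularProductsQuery;
        }

        public IList<PopularProduct> GetLastThirtyDaysTopTen()
        {
            // Same parameter values within the 24-hour window should be served from the query cache.
            return _popularProductsQuery.GetPopularProducts(DateTime.Today.AddDays(-30), 10);
        }
    }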
The Configuration – hibernate.cfg.xml and Web.config You need to explicitly enable second-level caching in your hibernate configuration: <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> [...] <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property> <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider,NHibernate.Caches.SysCache</property> <property name="cache.use_query_cache">true</property> <property name="cache.use_second_level_cache">true</property> [...] </session-factory> </hibernate-configuration> Both properties "use_query_cache" and "use_second_level_cache" are necessary. As this is for a web deployment, we're using SysCache which relies on ASP.NET's caching. Be aware of this if you're not deploying to the web! You'll have to use a different cache provider. We also need to tell our cache provider (in this case, SysCache) about our caching region: <syscache> <cache region="PopularProductsCacheRegion" expiration="86400" priority="5" /> </syscache> Here I've set the cache to be valid for 24 hours. This XML snippet goes in your Web.config (or in a separate file referenced by Web.config, which helps keep things tidy). The Payoff That should be it! At this point, your queries should run once against the database for a given set of parameters and then use the cache thereafter until it expires. You can, of course, adjust settings to work in your particular environment. Testing Testing your application to ensure it is using the cache is a pain, but if you're like me, you want to know that it's actually working. It's a bit involved, though, so I'll create a separate post for it if comments indicate there is interest.
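As a follow-up to the testing point above, one rough way to get a signal that the cache is being hit is NHibernate's statistics API. This sketch is not part of the original walkthrough; it assumes the generate_statistics configuration property has been turned on (otherwise the counters simply stay at zero) and that the ISessionFactory and query object are reachable from the test.

    using System;
    using NHibernate;

    public static class PopularProductsCacheSmokeTest
    {
        public static void Run(ISessionFactory sessionFactory, IPopularProductsQuery query)
        {
            long hitsBefore = sessionFactory.Statistics.QueryCacheHitCount;

            query.GetPopularProducts(DateTime.Today.AddDays(-30), 10); // first call: expect a miss and a cache put
            query.GetPopularProducts(DateTime.Today.AddDays(-30), 10); // second call: expect a cache hit

            long hits = sessionFactory.Statistics.QueryCacheHitCount - hitsBefore;
            Console.WriteLine("Query cache hits during test: " + hits);
        }
    }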

    Read the article

  • Metro: Understanding Observables

    - by Stephen.Walther
    The goal of this blog entry is to describe how the Observer Pattern is implemented in the WinJS library. You learn how to create observable objects which trigger notifications automatically when their properties are changed. Observables enable you to keep your user interface and your application data in sync. For example, by taking advantage of observables, you can update your user interface automatically whenever the properties of a product change. Observables are the foundation of declarative binding in the WinJS library. The WinJS library is not the first JavaScript library to include support for observables. For example, both the KnockoutJS library and the Microsoft Ajax Library (now part of the Ajax Control Toolkit) support observables. Creating an Observable Imagine that I have created a product object like this: var product = { name: "Milk", description: "Something to drink", price: 12.33 }; Nothing very exciting about this product. It has three properties named name, description, and price. Now, imagine that I want to be notified automatically whenever any of these properties are changed. In that case, I can create an observable product from my product object like this: var observableProduct = WinJS.Binding.as(product); This line of code creates a new JavaScript object named observableProduct from the existing JavaScript object named product. This new object also has a name, description, and price property. However, unlike the properties of the original product object, the properties of the observable product object trigger notifications when the properties are changed. Each of the properties of the new observable product object has been changed into accessor properties which have both a getter and a setter. For example, the observable product price property looks something like this: price: { get: function () { return this.getProperty(“price”); } set: function (value) { this.setProperty(“price”, value); } } When you read the price property then the getProperty() method is called and when you set the price property then the setProperty() method is called. The getProperty() and setProperty() methods are methods of the observable product object. The observable product object supports the following methods and properties: · addProperty(name, value) – Adds a new property to an observable and notifies any listeners. · backingData – An object which represents the value of each property. · bind(name, action) – Enables you to execute a function when a property changes. · getProperty(name) – Returns the value of a property using the string name of the property. · notify(name, newValue, oldValue) – A private method which executes each function in the _listeners array. · removeProperty(name) – Removes a property and notifies any listeners. · setProperty(name, value) – Updates a property and notifies any listeners. · unbind(name, action) – Enables you to stop executing a function in response to a property change. · updateProperty(name, value) – Updates a property and notifies any listeners. So when you create an observable, you get a new object with the same properties as an existing object. However, when you modify the properties of an observable object, then you can notify any listeners of the observable that the value of a particular property has changed automatically. Imagine that you change the value of the price property like this: observableProduct.price = 2.99; In that case, the following sequence of events is triggered: 1. 
The price setter calls the setProperty(“price”, 2.99) method 2. The setProperty() method updates the value of the backingData.price property and calls the notify() method 3. The notify() method executes each function in the collection of listeners associated with the price property Creating Observable Listeners If you want to be notified when a property of an observable object is changed, then you need to register a listener. You register a listener by using the bind() method like this: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Change the price observableProduct.price = 2.99; } }; app.start(); })(); In the code above, the bind() method is used to associate the price property with a function. When the price property is changed, the function logs the new value of the price property to the Visual Studio JavaScript console. The price property is associated with the function using the following line of code: // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); Coalescing Notifications If you make multiple changes to a property – one change immediately following another – then separate notifications won’t be sent. Instead, any listeners are notified only once. The notifications are coalesced into a single notification. For example, in the following code, the product price property is updated three times. However, only one message is written to the JavaScript console. Only the last value assigned to the price property is written to the JavaScript Console window: // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Change the price observableProduct.price = 3.99; observableProduct.price = 2.99; observableProduct.price = 1.99; Only the last value assigned to price, the value 1.99, appears in the console: If there is a time delay between changes to a property then changes result in different notifications. For example, the following code updates the price property every second: // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Add 1 to price every second window.setInterval(function () { observableProduct.price += 1; }, 1000); In this case, separate notification messages are logged to the JavaScript Console window: If you need to prevent multiple notifications from being coalesced into one then you can take advantage of promises. 
I discussed WinJS promises in a previous blog entry: http://stephenwalther.com/blog/archive/2012/02/22/windows-web-applications-promises.aspx Because the updateProperty() method returns a promise, you can create different notifications for each change in a property by using the following code: // Change the price observableProduct.updateProperty("price", 3.99) .then(function () { observableProduct.updateProperty("price", 2.99) .then(function () { observableProduct.updateProperty("price", 1.99); }); }); In this case, even though the price is immediately changed from 3.99 to 2.99 to 1.99, separate notifications for each new value of the price property are sent. Bypassing Notifications Normally, if a property of an observable object has listeners and you change the property then the listeners are notified. However, there are certain situations in which you might want to bypass notification. In other words, you might need to change a property value silently without triggering any functions registered for notification. If you want to change a property without triggering notifications then you should change the property by using the backingData property. The following code illustrates how you can change the price property silently: // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Change the price silently observableProduct.backingData.price = 5.99; console.log(observableProduct.price); // Writes 5.99 The price is changed to the value 5.99 by changing the value of backingData.price. Because the observableProduct.price property is not set directly, any listeners associated with the price property are not notified. When you change the value of a property by using the backingData property, the change in the property happens synchronously. However, when you change the value of an observable property directly, the change is always made asynchronously. Summary The goal of this blog entry was to describe observables. In particular, we discussed how to create observables from existing JavaScript objects and bind functions to observable properties. You also learned how notifications are coalesced (and ways to prevent this coalescing). Finally, we discussed how you can use the backingData property to update an observable property without triggering notifications. In the next blog entry, we’ll see how observables are used with declarative binding to display the values of properties in an HTML document.

    Read the article

  • guvcview recording video and audio out of synchronisation in Ubuntu 10.10

    - by SIJAR
    I finally got Guvcview, a great software for Logitech webcam and it does all the stuff that one wants out of it. But I'm not satisfy with the video recording, video and audio out of synchronisation also video seems to be in slow motion. Please help so that I can tweak in and get a good video recording with the webcam. Below is the log of Guvcview ------------------------------------------------------------------------------- guvcview 1.4.1 video_device: /dev/video0 vid_sleep: 0 cap_meth: 1 resolution: 640 x 480 windowsize: 1024 x 715 vert pane: 578 spin behavior: 0 mode: mjpg fps: 1/25 Display Fps: 0 bpp: 0 hwaccel: 1 avi_format: 4 sound: 1 sound Device: 4 sound samp rate: 0 sound Channels: 0 Sound delay: 0 nanosec Sound Format: 85 Pan Step: 2 degrees Tilt Step: 2 degrees Video Filter Flags: 0 image inc: 0 profile(default):/home/sijar/default.gpfl starting portaudio... bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) Cannot connect to server socket err = No such file or directory Cannot connect to server socket jack server is not running or cannot be started language catalog= dir:/usr/share/locale type:UTF-8 lang:en_US.utf8 cat:guvcview.mo mjpg: setting format to 1196444237 capture method = 1 video device: /dev/video0 libv4lconvert: warning more framesizes then I can handle! libv4lconvert: warning more framesizes then I can handle! /dev/video0 - device 1 libv4lconvert: warning more framesizes then I can handle! libv4lconvert: warning more framesizes then I can handle! Init. UVC Camera (046d:0825) (location: usb-0000:00:1d.7-5) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 176 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 432, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 544, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 640, height = 360 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, ... repeats a couple of times ... 
vid:046d pid:0825 driver:uvcvideo Adding control for Pan (relative) UVCIOC_CTRL_ADD - Error: Operation not permitted checking format: 1196444237 VIDIOC_G_COMP:: Invalid argument compression control not supported fps is set to 1/25 drawing controls control[0]: 0x980900 Brightness, 0:255:1, default 128 control[0]: 0x980901 Contrast, 0:255:1, default 32 control[0]: 0x980902 Saturation, 0:255:1, default 32 control[0]: 0x98090c White Balance Temperature, Auto, 0:1:1, default 1 control[0]: 0x980913 Gain, 0:255:1, default 0 control[0]: 0x980918 Power Line Frequency, 0:2:1, default 2 control[0]: 0x98091a White Balance Temperature, 0:10000:10, default 4000 control[0]: 0x98091b Sharpness, 0:255:1, default 24 control[0]: 0x98091c Backlight Compensation, 0:1:1, default 1 control[0]: 0x9a0901 Exposure, Auto, 0:3:1, default 3 control[0]: 0x9a0902 Exposure (Absolute), 1:10000:1, default 166 control[0]: 0x9a0903 Exposure, Auto Priority, 0:1:1, default 0 resolutions of format(2) = 19 frame rates of 1º resolution=6 Def. Res: 0 numb. fps:6 --------------------------------------- device #0 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 (hw:0,0) Host API = ALSA Max inputs = 2, Max outputs = 2 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #1 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - MIC ADC (hw:0,1) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #2 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - MIC2 ADC (hw:0,2) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #3 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - ADC2 (hw:0,3) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #4 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - IEC958 (hw:0,4) Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #5 Name = USB Device 0x46d:0x825: USB Audio (hw:1,0) Host API = ALSA Max inputs = 1, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #6 Name = front Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.012 Def. high input latency = -1.000 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #7 Name = iec958 Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. 
sample rate = 48000.00 --------------------------------------- device #8 Name = spdif Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #9 Name = pulse Host API = ALSA Max inputs = 32, Max outputs = 32 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #10 Name = dmix Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.043 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #11 [ Default Input, Default Output ] Name = default Host API = ALSA Max inputs = 32, Max outputs = 32 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 ---------------------------------------------- SampleRate:0 Channels:0 Video driver: x11 A window manager is available VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_CTRL for user class controls control(0x0098091a) "White Balance Temperature" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_EXT_CTRLS on single controls for class: 0x009a0000 control(0x009a0902) "Exposure (Absolute)" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_CTRL for user class controls control(0x0098091a) "White Balance Temperature" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_EXT_CTRLS on single controls for class: 0x009a0000 control(0x009a0902) "Exposure (Absolute)" failed to set (error -1) Cap Video toggled: 1 (/home/sijar/Videos/Webcam) 25371756K bytes free on a total of 39908968K (used: 36 %) treshold=51200K using audio codec: 0x0055 Audio frame size is 1152 samples for selected codec IO thread started...OK [libx264 @ 0x8cbd8b0]using cpu capabilities: MMX2 SSE2 Cache64 [libx264 @ 0x8cbd8b0]profile Baseline, level 3.0 [libx264 @ 0x8cbd8b0]non-strictly-monotonic PTS shift sound by -9 ms shift sound by -9 ms shift sound by -9 ms AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... AUDIO: droping audio data (/home/sijar/Videos/Webcam) 25371748K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... Cap Video toggled: 0 Shuting Down IO Thread AUDIO: droping audio data stop= 4426644744000 start=4416533023000 VIDEO: 146 frames in 10111.000000 ms = 14.439719 fps Stoping audio stream Closing audio stream... 
close avi Last message repeated 145 times [libx264 @ 0x8cbd8b0]frame I:2 Avg QP:14.10 size: 24492 [libx264 @ 0x8cbd8b0]frame P:103 Avg QP:16.06 size: 20715 [libx264 @ 0x8cbd8b0]mb I I16..4: 48.4% 0.0% 51.6% [libx264 @ 0x8cbd8b0]mb P I16..4: 57.5% 0.0% 0.0% P16..4: 40.2% 0.0% 0.0% 0.0% 0.0% skip: 2.3% [libx264 @ 0x8cbd8b0]final ratefactor: 62.05 [libx264 @ 0x8cbd8b0]coded y,uvDC,uvAC intra: 79.7% 92.2% 68.4% inter: 62.4% 87.5% 48.0% [libx264 @ 0x8cbd8b0]i16 v,h,dc,p: 23% 17% 41% 19% [libx264 @ 0x8cbd8b0]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 30% 24% 26% 2% 5% 3% 3% 3% 4% [libx264 @ 0x8cbd8b0]i8c dc,h,v,p: 53% 20% 23% 4% [libx264 @ 0x8cbd8b0]ref P L0: 63.0% 37.0% [libx264 @ 0x8cbd8b0]kb/s:-0.00 total frames encoded: 0 total audio frames encoded: 0 IO thread finished...OK IO Thread finished enabling controls Cap Video toggled: 1 (/home/sijar/Videos/Webcam) 25379744K bytes free on a total of 39908968K (used: 36 %) treshold=51200K using audio codec: 0x0055 Audio frame size is 1152 samples for selected codec IO thread started...OK [libx264 @ 0x8cfba20]using cpu capabilities: MMX2 SSE2 Cache64 [libx264 @ 0x8cfba20]profile Baseline, level 3.0 [libx264 @ 0x8cfba20]non-strictly-monotonic PTS shift sound by -236 ms shift sound by -236 ms shift sound by -236 ms (/home/sijar/Videos/Webcam) 25377044K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25373408K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... (/home/sijar/Videos/Webcam) 25370696K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... (/home/sijar/Videos/Webcam) 25367680K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25364052K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25360312K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25356628K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25352908K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25349316K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25345552K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25341828K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25338092K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25334412K bytes free on a total of 39908968K (used: 36 %) treshold=51200K Cap Video toggled: 0 Shuting Down IO Thread stop= 4708817235000 start=4578624714000 VIDEO: 1604 frames in 130192.000000 ms = 12.320265 fps Stoping audio stream Closing audio stream... 
close avi Last message repeated 1603 times [libx264 @ 0x8cfba20]frame I:16 Avg QP:14.78 size: 42627 [libx264 @ 0x8cfba20]frame P:1547 Avg QP:16.44 size: 28599 [libx264 @ 0x8cfba20]mb I I16..4: 21.6% 0.0% 78.4% [libx264 @ 0x8cfba20]mb P I16..4: 28.1% 0.0% 0.0% P16..4: 70.5% 0.0% 0.0% 0.0% 0.0% skip: 1.4% [libx264 @ 0x8cfba20]final ratefactor: 88.17 [libx264 @ 0x8cfba20]coded y,uvDC,uvAC intra: 74.4% 95.8% 83.2% inter: 75.2% 94.6% 69.2% [libx264 @ 0x8cfba20]i16 v,h,dc,p: 27% 17% 40% 16% [libx264 @ 0x8cfba20]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 25% 21% 3% 6% 4% 5% 4% 7% [libx264 @ 0x8cfba20]i8c dc,h,v,p: 61% 18% 18% 4% [libx264 @ 0x8cfba20]ref P L0: 64.0% 36.0% [libx264 @ 0x8cfba20]kb/s:-0.00 total frames encoded: 0 total audio frames encoded: 0 IO thread finished...OK IO Thread finished enabling controls Shuting Down Thread Thread terminated... cleaning Thread allocations: 100% SDL Quit Video Thread finished write /home/sijar/.guvcviewrc OK free audio mutex closed v4l2 strutures free controls free controls - vidState cleaned allocations - 100% Closing portaudio ...OK Closing GTK... OK

    Read the article

  • CodePlex Daily Summary for Wednesday, June 16, 2010

    CodePlex Daily Summary for Wednesday, June 16, 2010New ProjectsAtomFeedBuilder: Simple and lightweight Atom feed builder. Developed in VB.Net.Cable and Wire harness tester: If you build lots of cable/wire harness' you know that testing them is a pain. I have wanted an automated cable tester for a while now but commerci...Carmenta Engine Power Pack: The target of Carmenta Engine Power Pack is to provide extensions, utilities and wrapper classes that allows developers to work more efficiently w...Customer Book: Customer Book, its like address book with facility for generating quotation for a business or a supplier to the clients.Dialector: Using this program, you can convert pure Turkish texts into different dialects; such as: Emmi, Kufurbaz, Kusdili, Laz, Peltek, Tiki, and many more....Downline Commision Generator: Analyze the compensations plan of the organizations in multi-level marketing or network marketing. Check with this tool the commision plan of the c...EmbeddedSpark 2010 Project M: Project M is a system for seamlessly interfacing a tabletop interface to portable devices placed upon it. Using image recognition and projectors, P...Event Log Creator by eVestment Alliance: Provides a simple utility to create a new source and log in the Windows event log. The utility checks if the current user is an administrator, and...ExchangeHog: Desktop/daemon application that aggregates emails from multiple pop3-accounts into single Microsoft Exchange 2010 account. For users receiving ema...Extra Time Calculator: Extra Time Calculator allows exam end times to be easily calculated for students receiving an extra time accommodation.Generic WCF Hosting Service: The Generic Host Service provides a simple, reusable, and reliable mechanism for hosting WCF services. Google Storage for .NET: Google Storage for .NET (GSN) is an open source library that provides .NET developers with easy access to the Google Storage API. The library allo...Helium: The Helium XNA game engine is a light portable game engine designed to work on many platforms and soon to be expanded on more. Currently the helium...IconizedButton Control Set: ASP.NET WebForms IconizedButton Custom Control Set. Replaces the dull Button/LinkButton/HyperLink controls with styling and left and right aligned...Jedi Council PM List: Allows for users to process Private Message Lists on the Jedi Council forums for TheForce.Net.JetPumpDesign: 本软件为蒸气喷射泵设计计算软件 作者:申阳 单位:西安交通大学过程装备与控制工程61班log4Nez: An high personalized implementation of a logging libraryMutantFramework: Provides a common set of building blocks for building enterprise applicationsNUnit Add-in for Growl Notifications: NUnit add-in which allows to send notifications to Growl when test run is started or finished, when a first test failure occurs and so on.Object Reports: Object Reports is a "proof of concept" application which provides users the ability to visualy build queries based on data stored in the relational...openTrionyx: openTrionyx is a set of tools to make easier web application development. Includes Data, Web and plain text documents tools. Developed in C#, compl...Partial Rendering control for MVC 2: This project shows a web custom control that allow to have partial rendering using async post-back (through JQuery) in a MVC 2 web application.PowerGUI Visual Studio Extension: The PowerGUI Visual Studio Extension exposes PowerGUI as an editor in Visual Studio. 
PowerShell developers can now write scripts directly in Visual...PowerShell Script Provider: Write your own PowerShell provider using only script, no C# required. Module definition is provided by a Windows PowerShell 2.0 Module, which may b...Scholar: Scholar is a solution/framework for .Net developers to help with the creation of distributed data processing (think SETI@home style apps). It is in...scrabb: Scrabb help people play scrabble over net.SharePointNuke: A DotNetNuke module that connects to a SharePoint server using web services API and displays the content of a specified list. SolidWorksBackConverter: a Project to Convert a solidwork file to an older version Soma - Sql Oriented MApping framework: Sql Oriented MApping framework.SPCreate: SPCreate auto store procedure creator. It's developed in c#. SpCreate as output ADO.NET Class (C# or VB.Net) and SQL Server or MS Access Store pro...std::streambuf wrapper for COM IStream: This provides a subclass of std::streambuf that wraps a COM IStream, so you can use an IStream with any C++ code that uses iostreams or the STL alg...VACID solutions: Solutions of verification problems posed in paper "Verification of Ample Correctness of Invariants of Data-structures". Developed with various tool...Viewer: Our Goal is to create a C# project that will centeralize Image and Movie Viewing in a forms application, It will also have a Specialized Webbrowser...vsXPathTester: vsXPathTester is a utility for Developer. This help them load XML file and the run their XPath Query. The Resultant is shown in window. It save the...New Releases.Net Max Framework: Version 1.0.0: Version 1.0.0 - EstableAndrew's XNA Helpers: V1.2: Features upgraded features based off of the V1.1 code for both X86 and XBOX Additions/Changes Reworked the Texture2D and Rectangle extender namesp...BaseCalendar: BaseControls 1.2: BaseControls 1.2 contains the BaseCalendar ASP.NET control. Changes: 1.2 Exposed EffectiveVisibleDate and FirstVisibleDay methods 1.1 Rendering ...Customer Book: Customer Book Code: Bronze Release PostgreSQL database dump for Customer Book. Open PgAdmin III and restore the database dump into your server. Notice User Name for t...Data Connection Suite: Data Connections Suite v1.0.0.0: This is the first release of this incomplete component, but good enought to use in a production environment (it's what we do).DigitArchive: Build 8: Now the software works on .NET 3.5 and above. So if you have Windows 7 it installs without any pre-requisites. Changes: -Works on .NET 3.5 -Now t...Doom 64 Ex (SVN Builds): Doom 64 Ex r-738: Finally a new build after so many months. There are way to many updates to even begin to write about here just download and frag away. There is a s...DotNetNuke® Media: 03.03.00a: This release is Beta!! There is no guaranteed upgrade path to the 03.03.00 release version! Please use this to help us and test what we have. Repor...Downline Commision Generator: Downline Commision Generator: Downline Commision GeneratorElmah2 : An extensable error logger for ASP.net: 1.0 Beta 1: This is a beta release be sure to report any errors etc. Be sure to check out the documentation tab on information on how to install and configure...EPiServer Template Foundation: First compiled release: First compiled release for experimenting only! :) An introductory post will be published shortly on the blog.Helium: Initial Release: This is the initial release of the Helium Engine. Please check out the documentation link for information on how to use the engine. 
To see a ful...IconizedButton Control Set: IconizedButton Control Set: Taking a line from Google's play book - marking everything as Beta. Seriously, I'd like to hear some feedback before moving the Development Status...JetPumpDesign: JetPumpDesign 1.0: 当前的软件可以设计5级以内的蒸汽喷射泵。Microsoft Silverlight Analytics Framework: Version 1.4.4 Installer: Tools TargetingVisual Studio 2010 Expression Blend 4 (part of Expression Studio 4) Analytics Services Included Vendor Behavior Silverlight 3...NHibernate Sidekick Library: 0.7.0: Added a few methods for use with the NHibernate 2nd level cache (EvictAllObjectsFromCache and EvictPersistentClass). I also added the boolean optio...NHibernate Sidekick Library: 0.7.5: Fix for http://nhprof.com/Learn/Alerts/DoNotUseImplicitTransactionsNito.KitchenSink: Version 9: Dependencies Nito.Linq 0.6 Beta (released 2010-06-14) Rx 1.0.2563.0 (released 2010-06-09) Supported Platforms .NET 4.0 Client Profile, with Rx. ...NQueue: Version 1.0.0.0: Version 1.0.0.0NUnit Add-in for Growl Notifications: NUnit Add-in for Growl Notifications 1.0 build 0: The very first stable releasePartial Rendering control for MVC 2: Partial Rendering control for MVC 2: Here there is the source code and a MVC 2 web site as testPowerShell Script Provider: PSProvider 0.1: Requires PowerShell 2.0 RTM The functions in the attached ps1 script are the bare minimum for a working container-style provider (no subfolders.) ...Quick Performance Monitor: Version 1.4.3: Fixed issue where if an instance name contains backslash characters (\) the program would not load the performance counter properly. Also added sta...SharePointNuke: SharePointNuke 2.00.08: SharePointNuke 2.00.08 - Binary DotNetNuke 5.x module.Skype Voice Changer: 1.0 Updated Sample Code: This updated release is the accompanying code for the Skype Voice Changer article on Coding4Fun. Changes in this release: Added support for PreEmp...std::streambuf wrapper for COM IStream: Beta release (tested in a commercial project): This code has been tested in a custom Windows Search filter and property handler I wrote for a proprietary binary format. There may be some bugs, b...Sunlit World Scheme: Sunlit World Scheme - 20100615 - source and binary: This is the result of building the current source code in Debug mode. The source code is included. The binaries are in the SchemeCode folder along...Timo-Design / 40FINGERS DotNetNuke® Skinning Extensions: Style Helper Skin Object Beta: The 40FINGERS Style Helper Skin object allows you to add CSS and Javascript links and meta tags to the head of your page. It can also remove CSS l...Umbraco CMS: Umbraco 4.1 RC: This is the final test version of Umbraco 4.1 before the final release. PLEASE BE AWARE THAT UMBRACO 4.1 RC IS A .NET 4.0 RELEASE AND WON'T WORK O...VCC: Latest build, v2.1.30615.0: Automatic drop of latest buildWCF 4 Templates for Visual Studio 2010: UserNameForCertificate Template: Produces a WCF service application supporting username and password authentication, relying on message security to protect messages en route. 
Suppl...WCF 4 Templates for Visual Studio 2010: UserNameOverHttps Template: Produces a WCF service application supporting username and password authentication over HTTPS/SSL, relying on transport security to protect message...xUnit.net Contrib: xunitcontrib 0.4.1 alpha (ReSharper 5.1.1709 only): xunitcontrib release 0.4.1 (ReSharper runner) This release targets the current nightly build of ReSharper 5.1's Early Access Programme (build 1709)...Most Popular ProjectsCommunity Forums NNTP bridgeRIA Services EssentialsNeatUploadBxf (Basic XAML Framework).NET Transactional File ManagerSOLID by exampleSSIS Expression Editor & TesterWEI ShareChirpy - VS Add In For Handling Js, Css, and DotLess FilesASP.NET MVC Time PlannerMost Active ProjectsdotSpatialRhyduino - Arduino and Managed CodeCassandraemonpatterns & practices – Enterprise LibraryCommunity Forums NNTP bridgeLightweight Fluent Workflowpatterns & practices: Enterprise Library ContribNB_Store - Free DotNetNuke Ecommerce Catalog ModuleBlogEngine.NETjQuery Library for SharePoint Web Services

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160 bit (20 byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via the libmd(3LIB) library, the industry-standard libpkcs11(3LIB) library, and the Solaris kernel module sha1. The optimized code is used automatically on systems with an x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include the Nehalem, Westmere, and Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge). Although SHA-1 is considered obsolete because of weaknesses found in the SHA-1 algorithm (NIST recommends using at least SHA-256), SHA-1 is still widely used and will be with us for a while longer. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is still used heavily in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS. Optimizations Review SHA-1 operates by reading an arbitrary amount of data. The data is read in 512 bit (64 byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64 byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here's what each round does: The first 16 rounds, rounds 0 to 15, read the 512 bit block 32 bits at-a-time. These 32 bits are used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically, round i XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round are a series of AND, XOR, and ROTATE-LEFT operations on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i. The 32-bit result of the final round, W[79], is the SHA-1 checksum. Optimization: Vectorization The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds: because of step 2 above, computing round i depends on the result of round i-3, W[i-3], so one can only vectorize 3 rounds at-a-time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with: W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1 Initialize W[i] as follows: W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2 That means that 6 rounds could be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture (although the microprocessor has to support vectorization to use it) and exploits one of the weaknesses of SHA-1. Optimization: SSSE3 Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. 
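    To make the dependency change concrete, here is a minimal scalar sketch in C# (my own illustration, not the Solaris or Intel code; the helper names are mine) that builds the 80-entry message schedule for one 64-byte block with the standard recurrence and then checks the refactored recurrence against it for rounds 32 and up:

    static uint RotateLeft(uint x, int n)
    {
        return (x << n) | (x >> (32 - n));
    }

    // Computes the SHA-1 message schedule W[0..79] for a single 64-byte block and
    // verifies that the refactored recurrence (valid for i >= 32) gives the same values.
    static void CheckSchedule(byte[] block)
    {
        uint[] w = new uint[80];

        // Rounds 0-15: read the 512-bit block 32 bits at-a-time (big-endian).
        for (int i = 0; i < 16; i++)
            w[i] = ((uint)block[4 * i] << 24) | ((uint)block[4 * i + 1] << 16) |
                   ((uint)block[4 * i + 2] << 8) | block[4 * i + 3];

        // Rounds 16-79: standard recurrence; the nearest dependency is only 3 rounds back.
        for (int i = 16; i < 80; i++)
            w[i] = RotateLeft(w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16], 1);

        // Refactored recurrence: the nearest dependency is now 6 rounds back,
        // so twice as many rounds can be computed independently of each other.
        for (int i = 32; i < 80; i++)
        {
            uint alt = RotateLeft(w[i - 6] ^ w[i - 16] ^ w[i - 28] ^ w[i - 32], 2);
            System.Diagnostics.Debug.Assert(alt == w[i]);
        }
    }

    Because the refactored form reaches back no closer than W[i-6], several consecutive W values can be computed with no dependency between them, which is exactly what the vectorized %xmm code below exploits four rounds at a time.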
The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions: movdqa W_minus_04, W_TMP pxor W_minus_28, W // W equals W[i-32:i-29] before XOR // W = W[i-32:i-29] ^ W[i-28:i-25] palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3], combined from // W[i-4:i-1] and W[i-8:i-5] vectors pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13] pxor W_TMP, W // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) movdqa W, W_TMP // 4 dwords in W are rotated left by 2 psrld $30, W // rotate left by 2 W = (W >> 30) | (W << 2) pslld $2, W_TMP por W, W_TMP movdqa W_TMP, W // four new W values W[i:i+3] are now calculated paddd (K_XMM), W_TMP // adding 4 current round's values of K movdqa W_TMP, (WK(i)) // storing for downstream GPR instructions to read A window of the 32 previous results, W[i-1] to W[i-32] is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds is like this (each "R" represents 1 round of SHA-1 computation): RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR With vectorization, 4 rounds can be computed in parallel: RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR Optimization: AVX The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, a input and an output. AVX allows three operands, two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above: pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13] pxor W_TMP, W // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) With AVX they can be combined in one instruction: vpxor W_minus_16, W, W_TMP // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader, AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of 128-bit %xmm0 - %xmm15). Can %ymm registers be used to parallelize the code even more? Optimization: Solaris-specific In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code: Increased the digest(1) and mac(1) command's buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well. Optimized encode functions, which byte swap the input and output data, to copy/byte-swap 4 or 8 bytes at-a-time instead of 1 byte-at-a-time. Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-). Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bits) Performance I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @3.2GHz). Turbo mode is disabled for consistent performance measurement. 
Graphs are better than words and numbers, so here they are: The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half GByte file in swapfs (/tmp) and execution time decreased from 1.35 seconds to 0.98 seconds. The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128 byte buffer with 10,000 iterations. The results show operations increased from 320,000 to 416,000 operations per second. Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64Kbyte buffer with 100 iterations. The results show that for 1 kernel thread, operations increased from 410 to 600 MBytes/second. For 8 kernel threads, operations increased from 1540 to 1940 MBytes/second. Availability This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example, nehalem $ isainfo -v 64-bit amd64 applications sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu 32-bit i386 applications sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu If the output also shows "avx", Solaris executes the even-more optimized 3-operand AVX instructions for SHA-1 mentioned above: sandybridge $ isainfo -v 64-bit amd64 applications avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu 32-bit i386 applications avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine whether they are running on an SSSE3- or AVX-capable machine and execute the correctly-tuned code for that microprocessor. Summary The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporated a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release. References "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010). The source for these SHA-1 optimizations used in Solaris "SHA-1", Wikipedia Good overview of SHA-1 FIPS 180-1 SHA-1 standard (FIPS, 1995) NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006)

    Read the article

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspxMicrosoft just released a bunch of new features for Azure on 22nd and one of them I was interested in most is DocumentDB, a document NoSQL database service on the cloud.   Quick Look at DocumentDB We can try DocumentDB from the new azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In resource group section we can select which region our DocumentDB will be located. Same as other azure services select the same location with your consumers of the DocumentDB, for example the website, web services, etc.. After several minutes the DocumentDB will be ready. Click the KEYS button we can find the URI and primary key, which will be used when connecting. Now let's open Visual Studio and try to use the DocumentDB we had just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelase" in NuGet Package Manager window since this library was not yet released. Next we will create a new database and document collection under our DocumentDB account. The code below created an instance of DocumentClient with the URI and primary key we just copied from azure portal, and create a database and collection. And it also prints the document and collection link string which will be used later to insert and query documents. 1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7: Run(client).Wait(); 8:  9: Console.WriteLine("done"); 10: Console.ReadKey(); 11: } 12:  13: static async Task Run(DocumentClient client) 14: { 15:  16: var database = new Database() { Id = "testdb" }; 17: database = await client.CreateDatabaseAsync(database); 18: Console.WriteLine("database link = {0}", database.SelfLink); 19:  20: var collection = new DocumentCollection() { Id = "testcol" }; 21: collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection); 22: Console.WriteLine("collection link = {0}", collection.SelfLink); 23: } Below is the result from the console window. We need to copy the collection link string for future usage. Now if we back to the portal we will find a database was listed with the name we specified in the code. Next we will insert a document into the database and collection we had just created. In the code below we pasted the collection link which copied in previous step, create a dynamic object with several properties defined. As you can see we can add some normal properties contains string, integer, we can also add complex property for example an array, a dictionary and an object reference, unless they can be serialized to JSON. 
1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7:  8: // collection link pasted from the result in previous demo 9: var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/"; 10:  11: // document we are going to insert to database 12: dynamic doc = new ExpandoObject(); 13: doc.firstName = "Shaun"; 14: doc.lastName = "Xu"; 15: doc.roles = new string[] { "developer", "trainer", "presenter", "father" }; 16:  17: // insert the document 18: InsertADoc(client, collectionLink, doc).Wait(); 19:  20: Console.WriteLine("done"); 21: Console.ReadKey(); 22: } The insert code is very simple, as below: just provide the collection link and the object we are going to insert. 1: static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc) 2: { 3: var document = await client.CreateDocumentAsync(collectionLink, doc); 4: Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented)); 5: } Below is the result after the object had been inserted. Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us to retrieve all documents in it. 1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7:  8: var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/"; 9:  10: SelectDocs(client, collectionLink); 11:  12: Console.WriteLine("done"); 13: Console.ReadKey(); 14: } 15:  16: static void SelectDocs(DocumentClient client, string collectionLink) 17: { 18: var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList(); 19: foreach(var doc in docs) 20: { 21: Console.WriteLine(doc); 22: } 23: } Since there's only one document in my collection, below is the result when I executed the code. As you can see, all properties, including the array, were retrieved at the same time. DocumentDB also attached some properties we didn't specify, such as "_rid", "_ts", "_self", etc., which are controlled by the service.   DocumentDB Benefit DocumentDB is a document NoSQL database service. Unlike a traditional database, a document database is truly schema-free. In a nutshell, you can save anything in the same database and collection as long as it can be serialized to JSON. When you query the document database, all sub documents are retrieved at the same time. This means you don't need to join other tables as you would with a traditional database. A document database is very useful when building a high performance system with a hierarchical data structure. For example, assume we need to build a blog system: there will be many blog posts, and each of them contains content and comments. A comment can be commented on as well. If we were using a traditional database, let's say SQL Server, the database schema might be defined as below. When we need to display a post we need to load the post content from the Posts table, as well as the comments from the Comments table. We also need to build the comment tree based on the CommentID field. But if we were using DocumentDB, all we need to do is save the post as a document with a list containing all comments. 
Under each comment, its sub comments are stored as a nested list. When we display this post we just need to query the post document; the content and all comments will be loaded in the proper structure. 1: { 2: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 3: "title": "xxxxx", 4: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 5: "postedOn": "08/25/2014 13:55", 6: "comments": 7: [ 8: { 9: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 10: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 11: "commentedOn": "08/25/2014 14:00", 12: "commentedBy": "xxx" 13: }, 14: { 15: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 16: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 17: "commentedOn": "08/25/2014 14:10", 18: "commentedBy": "xxx", 19: "comments": 20: [ 21: { 22: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 23: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 24: "commentedOn": "08/25/2014 14:18", 25: "commentedBy": "xxx", 26: "comments": 27: [ 28: { 29: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 30: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 31: "commentedOn": "08/25/2014 18:22", 32: "commentedBy": "xxx", 33: } 34: ] 35: }, 36: { 37: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 38: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 39: "commentedOn": "08/25/2014 15:02", 40: "commentedBy": "xxx", 41: } 42: ] 43: }, 44: { 45: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 46: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 47: "commentedOn": "08/25/2014 14:30", 48: "commentedBy": "xxx" 49: } 50: ] 51: }   DocumentDB vs. Table Storage DocumentDB and Table Storage are both NoSQL services in Microsoft Azure. One common question is "when should we use DocumentDB rather than Table Storage?" Here are some ideas from me and some MVPs. First of all, they are different kinds of NoSQL database. DocumentDB is a document database while table storage is a key-value database. Second, table storage is cheaper. DocumentDB supports scaling out from one capacity unit to 5 during the preview period, and each capacity unit provides 10GB of local SSD storage. The price is $0.73/day, which includes a 50% discount. For the storage service the highest price is $0.061/GB, which is almost 10% of the DocumentDB price. Third, table storage provides local replication, geo-replication and read-access geo-replication, while DocumentDB does not. Fourth, there is a local emulator for table storage but none for DocumentDB. We have to connect to DocumentDB in the cloud when developing locally. But DocumentDB supports some cool features that table storage doesn't have. It supports stored procedures, triggers and user-defined functions. It supports rich indexing, while table storage only supports indexing against the partition key and row key. It supports transactions; table storage does as well, but restricted to the Entity Group Transaction scope. And last, table storage is GA but DocumentDB is still in preview.   Summary In this post I gave a quick demonstration and introduction of the new DocumentDB service in Azure. It's very easy to interact with through .NET, and it also supports a REST API, a Node.js SDK and a Python SDK. Then I explained the concept and benefits of using a document database, and compared it with table storage.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How I Record Screencasts

    - by Daniel Moth
    I get this asked a lot so here is my brain dump on the topic. What A screencast is just a demo that you present to yourself while recording the screen. As such, my advice for clearing your screen for demo purposes and setting up Visual Studio still applies here (adjusting for the fact I wrote those blog posts when I was running Vista and VS2008, not Windows 8 and VS2012). To see examples of screencasts, watch any of my screencasts on channel9. Why If you are a technical presenter, think of when you get best reactions from a developer audience in your sessions: when you are doing demos, of course. Imagine if you could package those alone and share them with folks to watch over and over? If you have ever gone through a tutorial trying to recreate steps to explore a feature, think how much more helpful it would be if you could watch a video and follow along. Think of how many folks you "touch" with a conference presentation, and how many more you can reach with an online shorter recording of the demo. If you invest so much of your time for the first type of activity, isn't the second type of activity also worth an investment? Fact: If you are able to record a screencast of a demo, you will be much better prepared to deliver it in person. In fact lately I will force myself to make a screencast of any demo I need to present live at an upcoming event. It is also a great backup - if for whatever reason something fails (software, network, etc) during an attempt of a live demo, you can just play the recorded video for the live audience. There are other reasons (e.g. internal sharing of the latest implemented feature) but the context above is the one within which I create most of my screencasts. Software & Hardware I use Camtasia from Tech Smith, version 7.1.1. Microsoft has a variety of options for capturing the screen to video, but I have been using this software for so long now that I have not invested time to explore alternatives… I also use whatever cheapo headset is near me, but sometimes I get some complaints from some folks about the audio so now I try to remember to use "the good headset". I do not use a web camera as I am not a huge fan of PIP. Preparation First you have to know your technology and demo. Once you think you know it, write down the outline and major steps of the demo. Keep it short 5-20 minutes max. I break that rule sometimes but try not to. The longer the video is the more chances that people will not have the patience to sit through it and the larger the download wmv file ends up being. Run your demo a few times, timing yourself each time to ensure that you have the planned timing correct, but also to make sure that you are comfortable with what you are going to demo. Unlike with a live audience, there is no live reaction/feedback to steer you, so it can be a bit unnerving at first. It can also lead you to babble too much, so try extra hard to be succinct when demoing/screencasting on your own. TIP: Before recording, hide your desktop/taskbar clock if it is showing. Recording To record you start the Camtasia Recorder tool Configure the settings thought the menus Capture menu to choose custom size or full screen. I try to use full screen and remember to lower the resolution of your screen to as low as possible, e.g. 1024x768 or 1360x768 or something like that. From the Tools -> Options dialog you can choose to record audio and the volume level. Effects menu I typically leave untouched but you should explore and experiment to your liking, e.g. 
how the mouse pointer is captured, and whether there should be a delay for the recording when you start it. Once you've configured these settings, typically you just launch this tool and hit the F9 key to start recording. TIP: As you record, if you ever start to "lose your way" hit F9 again to pause recording, regroup your thoughts and flow, and then hit F9 again to resume. Finally, hit F10 to stop recording. At that point the video starts playing for you in the recorder. This is where you can preview the video to see that you are happy with it before saving. If you are happy, hit the Save As menu to choose where you want to save the video.     TIP: If you've really lost your way to the extent where you'll need to do some editing, hit F10 to stop recording, save the video and then record some more - you'll be able to stitch the videos together later and this will make it easier for you to delete the parts where you messed up. TIP: Before you commit to recording the whole demo, every time you should record 5 seconds and preview them to ensure that you are capturing the screen the way you want to and that your audio is still correctly configured and at the right level. Trust me, you do not want to be recording 15 minutes only to find out that you messed up on the configuration somewhere. Editing To edit the video you launch another Camtasia app, the Camtasia Studio. File->New Project. File->Save Project and choose location. File->Import Media and choose the video(s) you saved earlier. These adds them to the area at the top/middle but not at the timeline at the bottom. Right click on the video and choose Add to timeline. It will prompt you for the Editing dimensions and I always choose Recording Dimensions. Do whatever edits you want to do for this video, then add the next video if you have one to stitch and repeat. In terms of edits there are many options. The simplest is to do nothing, which is the option I did when I first starting doing these in 2006. Nowadays, I typically cut out pieces that I don't like and also lower/mute the audio in other areas and also speed up the video in some areas. A full tutorial on how to do this is beyond the scope of this blog post, but your starting point is to select portions on the timeline and then open the Edit menu at the very top (tip: the context menu doesn't have all options). You can spend hours editing a recording, so don’t lose track of time! When you are done editing, save again, and you are now ready to Produce. Producing Production is specific to where you will publish. I've only ever published on channel9, so for that I do the following File -> Produce and share. This opens a wizard dialog In the dropdown choose Custom production settings Hit Next and then choose WMV Hit Next and keep the default of Camtasia Studio Best Quality and File Size (recommended) Hit Next and choose Editing dimensions video size Hit Next, hit Options and you get a dialog. Enter a Title for the project tab and then on the author tab enter the Creator and Homepage. Hit OK Hit Next. Hit Next again. Enter a video file name in the Production name textbox and then hit Finish. Now do other stuff while you wait for the video to be produced and you hear it playing. After the video is produced watch it to ensure it was produced correctly (e.g. sometimes you get mouse issues) and then you are ready for publishing it. Publishing Follow the instructions of the place where you are going to publish. 
If you are MSFT internal and want to choose channel9 then contact those folks so they can share their instructions (if you don't know who they are ping me and I'll connect you but they are easy to find in the GAL). For me this involves using a tool to point to the video, choosing a file name (again), choosing an image from the video to display when it is not playing, choosing what output formats I want, and then later on a webpage adding tags, adding a description, and adding a title. That’s all folks, have fun! Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Click Once Deployment Process and Issue Resolution

    - by Geordie
    Introduction We are adopting Click Once as a deployment standard for thick .NET application clients.  The latest version of this tool has matured to a point where it can be used in an enterprise environment.  This guide will identify how to use Click Once deployment and promote code through the dev, test and production environments. Why Use Click Once over SCCM If we already use SCCM, why add Click Once to the deployment options?  The advantage of Click Once is its ability to update the code in a single location and have the update flow automatically down to the user community.  There have been challenges in the past with getting configuration updates to download, but these can now be achieved.  With SCCM you can do the same thing, but it then needs to be packaged and pushed out to users.  Each time a new user is added to an application, time needs to be spent by an administrator to push out any required application packages.  With Click Once the user would go to a web link and the application and prerequisites will automatically get installed. New Deployment Steps Overview The deployment in an enterprise environment includes several steps as the solution moves through the development life cycle before being released into production.  To mitigate risk during the release phase, it is important to ensure the solution is not deployed directly into production from the development tools.  Although this is the easiest path, it can introduce untested code into production and lead to unexpected results. 1. Deploy the client application to a development web server using the Visual Studio 2008 Click Once deployment tools.  Once potential production versions of the solution are being generated, ensure the production install URL is specified when deploying code from Visual Studio.  (For details see ‘Deploying Click Once Code from Visual Studio’) 2. xCopy the code to the test server.  Run the MageUI tool to update the URLs, signing and version numbers to match the test server. (For details see ‘Moving Click Once Code to a new Server without using Visual Studio’) 3. xCopy the code to the production server.  Run the MageUI tool to update the URLs, signing and version numbers to match the production server. The certificate used to sign the code should be provided by a certificate authority that will be trusted by the client machines.  Finally, make sure the setup.exe contains the production install URL.  If not, redeploy the solution from Visual Studio to the dev environment specifying the production install URL.  Then xcopy the setup.exe file from dev to production.  (For details see ‘Moving Click Once Code to a new Server without using Visual Studio’) Detailed Deployment Steps Deploying Click Once Code From Visual Studio Open Visual Studio and create a new WinForms or WPF project.  In the solution explorer right click on the project and select ‘Publish’ in the context menu.  The ‘Publish Wizard’ will start.  Enter the development deployment path.  This could be a local directory or web site.  When first publishing the solution set this to a development web site and Visual Studio will create a site with an install.htm page.  Click Next.  Select whether the application will be available both online and offline. Then click Finish. Once the initial deployment is completed, republish the solution, this time mapping to the directory that holds the code that was just published.  This time the Publish Wizard contains an additional option. 
The setup.exe file that is created has the install URL hardcoded in it.  It is this screen that allows you to specify the URL to use.  At some point a setup.exe file must be generated for production.  Enter the production URL and deploy the solution to the dev folder.  This file can then be saved for later use in deployment to production.  During development this URL should point to the development site to avoid accidentally installing the production application. Visual Studio will publish the application to the desired location; in the process it will create an anonymous ‘pfx’ certificate to sign the deployment configuration files.  A production certificate should be acquired in preparation for deployment to production.   Directory structure created by Visual Studio     Application files created by Visual Studio   Development web site (install.htm) created by Visual Studio Migrating Click Once Code to a new Server without using Visual Studio To migrate the Click Once application code to a new server, a tool called MageUI is needed to modify the .application and .manifest files.  The MageUI tool is usually located in the ‘C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin’ folder, or it can be downloaded from the web. When deploying to a new environment, copy all files in the project folder to the new server.  In this case, the ‘ClickOnceSample’ folder and its contents.  The old application versions can be deleted, in this case ‘ClickOnceSample_1_0_0_0’ and ‘ClickOnceSample_1_0_0_1’.  Open IIS Manager and create a virtual directory that points to the project folder.  Also make the publish.htm the default web page.   Run the MageUI tool and then open the .application file in the root project folder (in this case in the ‘ClickOnceSample’ folder). Click on the Deployment Options in the left hand list, update the URL to the new server URL and save the changes.   When MageUI tries to save the file it will prompt for the file to be signed.   This step cannot be bypassed if you want the Click Once deployment to work from a web site.  The easiest solution to this for test environments is to use the auto-generated certificate that Visual Studio created for the project.  This certificate can be found with the project source code.   To save time, go to File>Preferences and configure the ‘Use default signing certificate’ fields.   Future deployments will only require application files to be transferred to the new server.  The only difference is then updating the .application file: the ‘Version’ must be updated to match the new version and the ‘Application Reference’ has to be updated to point to the new .manifest file.     Updating the Configuration File of a Click Once Deployment Package without using Visual Studio When an update to the configuration file is required, modifying the ClickOnceSample.exe.config.deploy file will not result in current users getting the new configuration.  We do not want to go back to Visual Studio and generate a new version, as this might introduce unexpected code changes.  A new version of the application can be created by copying the folder (in this case ClickOnceSample_1_0_0_2) and pasting it into the Application Files directory.  Rename the directory ‘ClickOnceSample_1_0_0_3’.  In the new folder open the configuration file in Notepad and make the configuration changes. Run MageUI and open the manifest file in the newly copied directory (ClickOnceSample_1_0_0_3).   Edit the manifest version to reflect the newly copied files (in this case 1.0.0.3).  Then save the file.  
Open the .application file in the root folder.  Again update the version to 1.0.0.3.  Since the file has not changed, the Deployment Options/Start Location URL should still be correct.  The Application Reference needs to be updated to point to the new version's .manifest file.  Save the file. The next time a user runs the application the new version of the configuration file will be downloaded.  It is worth noting that there are 2 different types of configuration parameters: application and user.  With Click Once deployment the difference is significant.  When an application is downloaded the configuration file is also brought down to the client machine.  The developer may have written code to update the user parameters in the application.  As a result, each time a new version of the application is downloaded the user parameters are at risk of being overwritten.  With Click Once deployment the system knows if the user parameters are still the default values.  If they are, they will be overwritten with the new default values in the configuration file.  If they have been updated by the user, they will not be overwritten. Settings configuration view in Visual Studio Production Deployment When deploying the code to production it is prudent to disable the development and test deployment sites.  This will allow errors such as an incorrect URL to be quickly identified in the initial testing after deployment.  If the sites are active there is no way to know if the application was downloaded from the production deployment and not redirected to test or dev.   Troubleshooting Clicking the install button on the install.htm page fails. Error: URLDownloadToCacheFile failed with HRESULT '-2146697210' Error: An error occurred trying to download <file>   This is due to the setup.exe file pointing to the wrong location. ‘The setup.exe file that is created has the install URL hardcoded in it.  It is this screen that allows you to specify the URL to use.  At some point a setup.exe file must be generated for production.  Enter the production URL and deploy the solution to the dev folder.  This file can then be saved for later use in deployment to production.  During development this URL should point to the development site to avoid accidentally installing the production application.’
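    As an aside (this is not part of the deployment guide above, just an illustrative sketch), once a client is installed through Click Once it can also check for and apply updates programmatically through the System.Deployment.Application API. A minimal example, assuming a WinForms or WPF project that references System.Deployment.dll; the class and method names here are my own, not from this guide:

    using System;
    using System.Deployment.Application; // requires a reference to System.Deployment.dll

    static class UpdateHelper
    {
        // Checks the deployment provider URL (the one baked into the .application file)
        // for a newer version and applies it; the new version runs on the next start.
        public static void UpdateIfAvailable()
        {
            // Only true when the app was launched from a Click Once installation.
            if (!ApplicationDeployment.IsNetworkDeployed)
                return;

            ApplicationDeployment deployment = ApplicationDeployment.CurrentDeployment;
            UpdateCheckInfo info = deployment.CheckForDetailedUpdate();

            if (info.UpdateAvailable)
            {
                deployment.Update();
                Console.WriteLine("Updated to {0}; restart the application to use it.",
                    info.AvailableVersion);
            }
            else
            {
                Console.WriteLine("Running the latest version: {0}", deployment.CurrentVersion);
            }
        }
    }

    This is optional: by default Click Once checks for updates according to the update settings chosen in the Publish Wizard, and a newer version is picked up the next time the application is started.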

    Read the article

  • How did I get here? My route to Android, iPhone, Windows Phone 7, and interest in Mobile Devices

    - by Wallym
    I get asked all the time how/why I got interested in mobile and jumped on this fairly early.  I tend to give half answers because it wasn't just one thing that took me to mobile, but a whole host of separate events culminating in a specific event when I was doing market research in May/June 2008.  Let me throw out the events and the facts about me: I tend to like new, different, cool stuff.  I jumped on .NET early on.  I jumped on Ajax early on.  I don't jump on every new technology that comes down the road; I'm probably the only person on the planet that doesn't "get" MVC, though I acknowledge that a lot of people do and it solves a number of problems in the default settings of ASP.NET WebForms. I remember buying an early Windows CE device. It was interesting, but dang, this stylus thing sucks. After I lost my third stylus, I just gave up.  I got my first mobile phone in early 1999.  Reception was crappy, but I could see the value in being mobile. In 1999, I worked on a manufacturing systems project.  One piece of the project was a set of handheld devices on the shop floor.  While the UI was a crappy DOS-based one (yes, I said DOS, as in Disk Operating System version 6.22), I could see that the wireless world was a direction I wanted to be in. In 2000, Microsoft released the first public alpha of .NET.  Very cool stuff indeed.  One piece of the puzzle was a set of mobile controls for ASP.NET.  I built numerous test apps as well as mobile versions using these mobile controls.  Now, the mobile UIs of the time were based on WML, which was crap. I could read all the analysis of mobile and read all about growth rates.  Now, you have to realize that growth rates can be impressive when dealing with small numbers, but I knew it was a comer. In our first book, I got talked out of mobile because of the line from the publisher "Wally, mobile doesn't sell." Blackberry was the dominant device of the mid 2000s.  Its users were referred to as "Crackberry addicts."  Unfortunately, the mobile development experience for native apps was crap and the web experience was fairly rough as well, but if they could get the ecosystem started, other phones and better Blackberries would come out.  I finally jumped into using a Blackberry. Sometime around 2006, I heard "Wally, mobile doesn't sell" again.  Now, anyone that knows me knows that someone saying something like this to me means I'll keep trying it. The phones of the mid 2000s were moving to be more graphical, but there were too many that had this idea that they had to use a stylus.  Styluses suck.  They get lost too easily. I worked on a project in 2007 and 2008 for a startup trying to answer the question of "What is there to do where I am at?"  For some reason, they wanted to be tied to PCs.  As it became obvious that they were having problems, their investor asked us to do some market research and to figure out what the marketplace did want.  One of the important things that I figured out was that we lived in a mobile world and that if you had a mobile app, it needed to be on a mobile device, not tied to a desktop/laptop/netbook device.  If there was any single event, this was it - I was doing some market research and sat and talked to people in a bar/restaurant in Atlanta called "The Grove" on Lavista.  The consensus of the people that I talked to was that they wanted their data wherever they were: laptop, PC, mobile, wherever. In 2007, Apple released the iPhone.  Wow, what an impressive device, even with all the problems of a 1st generation device.  
I bought an iPod Touch 1st generation to understand touch better, one of the best decisions I ever made. I decided in late 2008 to make a move into the cloud, for a number of reasons. I was working on an example app. In April 2009, one of my friends at Microsoft said "don't mention my name with this, but you need an iPhone front end for this app." How do you get on the iPhone? Well, there are a number of ways, including:

Objective-C. It's hard to teach an old dog new tricks, and this dog knows .NET, not Objective-C.

HTML: a web, JavaScript-optimized interface. Yeah, this is possible.

PhoneGap. Now, this is interesting: take an HTML interface and get it to run on the iPhone, Android, BlackBerry, and other platforms. I thought that this way made the most sense for me until...

MonoTouch. In May/June 2009, Novell announced a way for .NET/C# developers to write apps for the iPhone. This is the way that made the most sense to me.

Titanium by Appcelerator. This is similar in concept to PhoneGap. I haven't played with this much but do want to learn more about it.

In July 2009, I emailed one of my contacts at Wrox to see if they would be interested in a short MonoTouch ebook in their Wrox Blox format. I fully expected another response along the lines of "Wally, mobile doesn't sell." The response I got was "Wally, iPhone is H O T, get started immediately, can you have this to me before Labor Day?" Not quite the response I expected. Thankfully, we didn't make the Labor Day first-draft date. I kept pushing back because I had a feeling that things were not going to be quite as polished and feature-rich as necessary. After all, Novell doesn't have the resources of Microsoft's developer division. The ebook shipped on November 30, 2009.

On about December 15, 2009, my editor emailed and said "Your ebook is selling really well, let's do a full book and have it by March 1, so get started." Thankfully, guys like Craig Dunn and Chris Hardy were interested, and Martin and Rory joined us later on. I bought my wife an iPhone 3GS in early 2010 to go along with all my iPod Touch devices.

I tried to pretend in 2010 that I wasn't that interested in mobile and still had interest in the desktop technologies. I love those technologies and continue to use them today, but that isn't where my interest is right now. I'm just about all mobile all the time with my energies. Our book shipped in the beginning of July 2010, right in the middle of the Apple FUD. I've been looking at Mobile Web as a way around the app store and Apple FUD problems of 2010. With all the Apple self-FUD, we became interested in Android.

I went up to Dino Esposito at DevConnections in Las Vegas and introduced myself. I've always tried to keep up with what Dino has been doing. I was shocked, he wanted to meet me. We must have talked for 1.5 hours. It was way more time than I deserved. If you get a chance, go and introduce yourself to Dino. He's a great guy.

Microsoft released Windows Phone 7 in the fall of 2010. I'm not doing development on that platform at this time. I think they have a very interesting user interface, and the devices are being positively reviewed, but for my purposes, the devices are limited at this point in time. We'll see what 2011 brings as far as updates to the operating system. I need multitasking/background processing and HTML5 in the browser. Add that as well as acceptance in the marketplace and I'll be more interested in the device. Obviously, I'm now working on a MonoDroid book.
I own Android and iPhone/iOS devices. I am currently working on some startup ideas and am exploring as much in that area as I can. For 2011, I'm planning on speaking at the Android Developer's Conference (AnDevCon) and Mobile Connections. I'm really excited about this. I have a couple of magazine articles coming out in 2011 on Android and iPhone development with the Mono technologies.

Is Mono "The Answer"? What's "The Question"? I think it will work for me. It might work for you, it might not. It depends on your situation. It's the current horse that I am riding. I might find a better horse tomorrow.

So, that's how I got here. I'm in love with mobile. Mobile native apps on the device as well as mobile web. I'm into all this cool stuff. Where are you at?

    Read the article

  • How I understood monads, part 1/2: sleepless and self-loathing in Seattle

    - by Bertrand Le Roy
    For some time now, I had been noticing some interest for monads, mostly in the form of unintelligible (to me) blog posts and comments saying “oh, yeah, that’s a monad” about random stuff as if it were absolutely obvious and if I didn’t know what they were talking about, I was probably an uneducated idiot, ignorant about the simplest and most fundamental concepts of functional programming. Fair enough, I am pretty much exactly that. Being the kind of guy who can spend eight years in college just to understand a few interesting concepts about the universe, I had to check it out and try to understand monads so that I too can say “oh, yeah, that’s a monad”. Man, was I hit hard in the face with the limitations of my own abstract thinking abilities. All the articles I could find about the subject seemed to be vaguely understandable at first but very quickly overloaded the very few concept slots I have available in my brain. They also seemed to be consistently using arcane notation that I was entirely unfamiliar with.

It finally all clicked together one Friday afternoon during the team’s beer symposium when Louis was patient enough to break it down for me in a language I could understand (C#). I don’t know if being intoxicated helped. Feel free to read this with or without a drink in hand.

So here it is in a nutshell: a monad allows you to manipulate stuff in interesting ways. Oh, OK, you might say. Yeah. Exactly. Let’s start with a trivial case:

public static class Trivial {
    public static TResult Execute<T, TResult>(
        this T argument, Func<T, TResult> operation) {
        return operation(argument);
    }
}

This is not a monad. I removed most concepts here to start with something very simple. There is only one concept here: the idea of executing an operation on an object. This is of course trivial and it would actually be simpler to just apply that operation directly on the object. But please bear with me, this is our first baby step. Here’s how you use that thing:

"some string"
    .Execute(s => s + " processed by trivial proto-monad.")
    .Execute(s => s + " And it's chainable!");

What we’re doing here is analogous to having an assembly chain in a factory: you can feed it raw material (the string here) and a number of machines that each implement a step in the manufacturing process and you can start building stuff. The Trivial class here represents the empty assembly chain, the conveyor belt if you will, but it doesn’t care what kind of raw material gets in, what gets out or what each machine is doing. It is pure process.

A real monad will need a couple of additional concepts. Let’s say the conveyor belt needs the material to be processed to be contained in standardized boxes, just so that it can safely and efficiently be transported from machine to machine or so that tracking information can be attached to it. Each machine knows how to treat raw material or partly processed material, but it doesn’t know how to treat the boxes so the conveyor belt will have to extract the material from the box before feeding it into each machine, and it will have to box it back afterwards. This conveyor belt with boxes is essentially what a monad is. It has one method to box stuff, one to extract stuff from its box and one to feed stuff into a machine. So let’s reformulate the previous example but this time with the boxes, which will do nothing for the moment except containing stuff.
public class Identity<T> {
    public Identity(T value) { Value = value; }

    public T Value { get; private set; }

    public static Identity<T> Unit(T value) {
        return new Identity<T>(value);
    }

    public static Identity<U> Bind<U>(
        Identity<T> argument, Func<T, Identity<U>> operation) {
        return operation(argument.Value);
    }
}

Now this is a true to the definition Monad, including the weird naming of the methods. It is the simplest monad, called the identity monad and of course it does nothing useful. Here’s how you use it:

Identity<string>.Bind(
    Identity<string>.Unit("some string"),
    s => Identity<string>.Unit(
        s + " was processed by identity monad.")).Value

That of course is seriously ugly. Note that the operation is responsible for re-boxing its result. That is a part of strict monads that I don’t quite get and I’ll take the liberty to lift that strange constraint in the next examples. To make this more readable and easier to use, let’s build a few extension methods:

public static class IdentityExtensions {
    public static Identity<T> ToIdentity<T>(this T value) {
        return new Identity<T>(value);
    }

    public static Identity<U> Bind<T, U>(
        this Identity<T> argument, Func<T, U> operation) {
        return operation(argument.Value).ToIdentity();
    }
}

With those, we can rewrite our code as follows:

"some string".ToIdentity()
    .Bind(s => s + " was processed by monad extensions.")
    .Bind(s => s + " And it's chainable...")
    .Value;

This is considerably simpler but still retains the qualities of a monad. But it is still pointless. Let’s look at a more useful example, the state monad, which is basically a monad where the boxes have a label. It’s useful to perform operations on arbitrary objects that have been enriched with an attached state object.

public class Stateful<TValue, TState> {
    public Stateful(TValue value, TState state) {
        Value = value;
        State = state;
    }

    public TValue Value { get; private set; }
    public TState State { get; set; }
}

public static class StateExtensions {
    public static Stateful<TValue, TState> ToStateful<TValue, TState>(
        this TValue value, TState state) {
        return new Stateful<TValue, TState>(value, state);
    }

    public static Stateful<TResult, TState> Execute<TValue, TState, TResult>(
        this Stateful<TValue, TState> argument,
        Func<TValue, TResult> operation) {
        return operation(argument.Value)
            .ToStateful(argument.State);
    }
}

You can get a stateful version of any object by calling the ToStateful extension method, passing the state object in. You can then execute ordinary operations on the values while retaining the state:

var statefulInt = 3.ToStateful("This is the state");
var processedStatefulInt = statefulInt
    .Execute(i => ++i)
    .Execute(i => i * 10)
    .Execute(i => i + 2);
Console.WriteLine("Value: {0}; state: {1}",
    processedStatefulInt.Value, processedStatefulInt.State);

This monad differs from the identity by enriching the boxes. There is another way to give value to the monad, which is to enrich the processing. An example of that is the writer monad, which can be typically used to log the operations that are being performed by the monad (a minimal sketch of one appears after the links below). Of course, the richest monads enrich both the boxes and the processing.

That’s all for today. I hope with this you won’t have to go through the same process that I did to understand monads and that you haven’t gone into concept overload like I did. Next time, we’ll examine some examples that you already know but we will shine the monadic light, hopefully illuminating them in a whole new way.
Realizing that this pattern is actually in many places but mostly unnoticed is what will enable the truly casual “oh, yes, that’s a monad” comments.

Here’s the code for this article: http://weblogs.asp.net/blogs/bleroy/Samples/Monads.zip

The Wikipedia article on monads: http://en.wikipedia.org/wiki/Monads_in_functional_programming

This article was invaluable for me in understanding how to express the canonical monads in C# (interesting Linq stuff in there): http://blogs.msdn.com/b/wesdyer/archive/2008/01/11/the-marvels-of-monads.aspx
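The post mentions the writer monad as the example of enriching the processing rather than the boxes, but doesn't show code for it. Here is a minimal sketch of what one could look like, written in the same relaxed extension-method style as the examples above rather than with a strict Unit/Bind pair. The Logged<T> type, its member names and the string-based log are assumptions made for illustration, not something taken from the article or its sample download.

using System;

// A writer-style box: it carries a value plus a log that accumulates
// one line for every operation executed on it.
public class Logged<T> {
    public Logged(T value, string log) {
        Value = value;
        Log = log;
    }

    public T Value { get; private set; }
    public string Log { get; private set; }
}

public static class LoggedExtensions {
    // Boxing: wrap a raw value with an empty log.
    public static Logged<T> ToLogged<T>(this T value) {
        return new Logged<T>(value, "");
    }

    // The conveyor belt: unbox the value, run the machine on it,
    // box the result back and append a message describing the step.
    public static Logged<TResult> Execute<T, TResult>(
        this Logged<T> argument,
        Func<T, TResult> operation,
        string message) {
        return new Logged<TResult>(
            operation(argument.Value),
            argument.Log + message + Environment.NewLine);
    }
}

Usage mirrors the stateful example:

var loggedInt = 3.ToLogged()
    .Execute(i => i + 1, "Added one")
    .Execute(i => i * 10, "Multiplied by ten");
Console.WriteLine("Value: {0}\nLog:\n{1}", loggedInt.Value, loggedInt.Log);

The difference with Stateful<TValue, TState> is that here each processing step contributes to the box's content, which is the essence of the writer idea; a stricter version would have the operation itself return both the result and its log entry instead of taking the message as a separate argument.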

    Read the article
