Search Results

Search found 560 results on 23 pages for 'recall'.


  • External USB attached drive works in Windows XP but not in Windows 7. How to fix?

    - by irrational John
    Earlier this week I purchased this "N52300 EZQuest Pro" external hard drive enclosure from here. I can connect the enclosure using USB 2.0 and access the files in both NTFS partitions on the MBR partitioned drive when I use either Windows XP (SP3) or Mac OS X 10.6. So it works as expected in XP & Snow Leopard. However, the enclosure does not work in Windows 7 (Home Premium), either 64-bit or 32-bit, or in Ubuntu 10.04 (kernel 2.6.32-23-generic).

    I'm thinking this must be a Windows 7 driver problem because the enclosure works in XP & Snow Leopard. I do know that no special drivers are required to use this enclosure. It is supported using the USB mass storage drivers included with XP and OS X. It should also work fine using the mass storage support in Windows 7, no? FWIW, I have also tried using 32-bit Windows 7 on both my desktop, a Gigabyte GA-965P-DS3 with a Pentium Dual-Core E6500 @ 2.93GHz, and on my early 2008 MacBook. I see the same failure in both cases that I see with 64-bit Windows 7. So it doesn't appear to be specific to one hardware platform.

    I'm hoping someone out there can help me either get the enclosure to work in Windows 7 or convince me that the enclosure hardware is bad and should be RMAed. At the moment though an RMA seems pointless since this appears to be a (Windows 7) device driver problem. I have tried to track down any updates to the mass storage drivers included with Windows 7 but have so far come up empty. Heck, I can't even figure out how to place a bug report with Microsoft since apparently the grace period for Windows 7 email support is only a few months. I came across a link to some USB troubleshooting steps in another question. I haven't had a chance to look over the suggestions on that site or try them yet. Maybe tomorrow if I have time ... ;-)

    I'll finish up with some more details about the problem. When I connect the enclosure using USB to Windows 7, at first it appears everything works. Windows detects the drive and installs a driver for it. Looking in Device Manager there is an entry under the Hard Drives section with the title Hitachi HDT721010SLA360 USB Device. When you open Windows Disk Management the first time after the enclosure has been attached, the drive appears as "Not initialized" and I'm prompted to initialize it. This is bogus. After all, the drive worked fine in XP so I know it has already been initialized, partitioned, and formatted. So of course I never try to initialize it "again". (It's a 1 TB drive and I don't want to lose the data on it.)

    Except for this first time, the drive never shows up in Disk Management again unless I uninstall the Hitachi HDT721010SLA360 USB Device entry under Hard Drives, unplug, and then replug the enclosure. If I do that then the process in the previous paragraph repeats.

    In Ubuntu the enclosure never shows up at all at the file system level. Below are an excerpt from kern.log and an excerpt from the result of lsusb -v after attaching the enclosure. It appears that Ubuntu at first recognizes the enclosure and attempts to attach it, but encounters errors which prevent it from doing so. Unfortunately, I don't know whether any of this info is useful or not.
excerpt from kern.log

[ 2684.240015] usb 1-2: new high speed USB device using ehci_hcd and address 22
[ 2684.393618] usb 1-2: configuration #1 chosen from 1 choice
[ 2684.395399] scsi17 : SCSI emulation for USB Mass Storage devices
[ 2684.395570] usb-storage: device found at 22
[ 2684.395572] usb-storage: waiting for device to settle before scanning
[ 2689.390412] usb-storage: device scan complete
[ 2689.390894] scsi 17:0:0:0: Direct-Access Hitachi HDT721010SLA360 ST6O PQ: 0 ANSI: 4
[ 2689.392237] sd 17:0:0:0: Attached scsi generic sg7 type 0
[ 2689.395269] sd 17:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[ 2689.395632] sd 17:0:0:0: [sde] Write Protect is off
[ 2689.395636] sd 17:0:0:0: [sde] Mode Sense: 11 00 00 00
[ 2689.395639] sd 17:0:0:0: [sde] Assuming drive cache: write through
[ 2689.412003] sd 17:0:0:0: [sde] Assuming drive cache: write through
[ 2689.412009] sde: sde1 sde2
[ 2689.455759] sd 17:0:0:0: [sde] Assuming drive cache: write through
[ 2689.455765] sd 17:0:0:0: [sde] Attached SCSI disk
[ 2692.620017] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2707.740014] usb 1-2: device descriptor read/64, error -110
[ 2722.970103] usb 1-2: device descriptor read/64, error -110
[ 2723.200027] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2738.320019] usb 1-2: device descriptor read/64, error -110
[ 2753.550024] usb 1-2: device descriptor read/64, error -110
[ 2753.780020] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2758.810147] usb 1-2: device descriptor read/8, error -110
[ 2763.940142] usb 1-2: device descriptor read/8, error -110
[ 2764.170014] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2769.200141] usb 1-2: device descriptor read/8, error -110
[ 2774.330137] usb 1-2: device descriptor read/8, error -110
[ 2774.440069] usb 1-2: USB disconnect, address 22
[ 2774.440503] sd 17:0:0:0: Device offlined - not ready after error recovery
[ 2774.590023] usb 1-2: new high speed USB device using ehci_hcd and address 23
[ 2789.710020] usb 1-2: device descriptor read/64, error -110
[ 2804.940020] usb 1-2: device descriptor read/64, error -110
[ 2805.170026] usb 1-2: new high speed USB device using ehci_hcd and address 24
[ 2820.290019] usb 1-2: device descriptor read/64, error -110
[ 2835.520027] usb 1-2: device descriptor read/64, error -110
[ 2835.750018] usb 1-2: new high speed USB device using ehci_hcd and address 25
[ 2840.780085] usb 1-2: device descriptor read/8, error -110
[ 2845.910079] usb 1-2: device descriptor read/8, error -110
[ 2846.140023] usb 1-2: new high speed USB device using ehci_hcd and address 26
[ 2851.170112] usb 1-2: device descriptor read/8, error -110
[ 2856.300077] usb 1-2: device descriptor read/8, error -110
[ 2856.410027] hub 1-0:1.0: unable to enumerate USB device on port 2
[ 2856.730033] usb 3-2: new full speed USB device using uhci_hcd and address 11
[ 2871.850017] usb 3-2: device descriptor read/64, error -110
[ 2887.080014] usb 3-2: device descriptor read/64, error -110
[ 2887.310011] usb 3-2: new full speed USB device using uhci_hcd and address 12
[ 2902.430021] usb 3-2: device descriptor read/64, error -110
[ 2917.660013] usb 3-2: device descriptor read/64, error -110
[ 2917.890016] usb 3-2: new full speed USB device using uhci_hcd and address 13
[ 2922.911623] usb 3-2: device descriptor read/8, error -110
[ 2928.051753] usb 3-2: device descriptor read/8, error -110
[ 2928.280013] usb 3-2: new full speed USB device using uhci_hcd and address 14
[ 2933.301876] usb 3-2: device descriptor read/8, error -110
[ 2938.431993] usb 3-2: device descriptor read/8, error -110
[ 2938.540073] hub 3-0:1.0: unable to enumerate USB device on port 2

excerpt from lsusb -v

Bus 001 Device 017: ID 0dc4:0000 Macpower Peripherals, Ltd
Device Descriptor:
  bLength 18
  bDescriptorType 1
  bcdUSB 2.00
  bDeviceClass 0 (Defined at Interface level)
  bDeviceSubClass 0
  bDeviceProtocol 0
  bMaxPacketSize0 64
  idVendor 0x0dc4 Macpower Peripherals, Ltd
  idProduct 0x0000
  bcdDevice 0.01
  iManufacturer 1 EZ QUEST
  iProduct 2 USB Mass Storage
  iSerial 3 220417
  bNumConfigurations 1
  Configuration Descriptor:
    bLength 9
    bDescriptorType 2
    wTotalLength 32
    bNumInterfaces 1
    bConfigurationValue 1
    iConfiguration 5 Config0
    bmAttributes 0xc0 Self Powered
    MaxPower 0mA
    Interface Descriptor:
      bLength 9
      bDescriptorType 4
      bInterfaceNumber 0
      bAlternateSetting 0
      bNumEndpoints 2
      bInterfaceClass 8 Mass Storage
      bInterfaceSubClass 6 SCSI
      bInterfaceProtocol 80 Bulk (Zip)
      iInterface 4 Interface0
      Endpoint Descriptor:
        bLength 7
        bDescriptorType 5
        bEndpointAddress 0x01 EP 1 OUT
        bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data
        wMaxPacketSize 0x0200 1x 512 bytes
        bInterval 0
      Endpoint Descriptor:
        bLength 7
        bDescriptorType 5
        bEndpointAddress 0x81 EP 1 IN
        bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data
        wMaxPacketSize 0x0200 1x 512 bytes
        bInterval 0
Device Qualifier (for other device speed):
  bLength 10
  bDescriptorType 6
  bcdUSB 2.00
  bDeviceClass 0 (Defined at Interface level)
  bDeviceSubClass 0
  bDeviceProtocol 0
  bMaxPacketSize0 64
  bNumConfigurations 1
Device Status: 0x0001 Self Powered

Update: Results using Firewire to connect.

Today I received a 1394b 9 pin to 1394a 6 pin cable which allowed me to connect the "EZQuest Pro" via Firewire. Everything works. When I use Firewire I can connect whether I'm using Windows 7 or Ubuntu 10.04. I even tried booting my Gigabyte desktop as an OS X 10.6.3 Hackintosh and it worked there as well. (Though if I recall correctly, it also worked when using USB 2.0 and booting OS X on the desktop. Certainly it works with USB 2.0 and my MacBook.) I believe the firmware on the device is at the latest level available, v1.07. I base this on the excerpt below from the OS X System Profiler which shows Firmware Revision: 0x107.

Bottom line: It's nice that the enclosure is actually usable when I connect with Firewire. But I am still searching for an answer as to why it does not work correctly when using USB 2.0 in Windows 7 (and Ubuntu ... but really Windows 7 is my biggest concern).

OXFORD IDE Device 1:
  Manufacturer: EZ QUEST
  Model: 0x0
  GUID: 0x1D202E0220417
  Maximum Speed: Up to 800 Mb/sec
  Connection Speed: Up to 400 Mb/sec
  Sub-units:
    OXFORD IDE Device 1 Unit:
      Unit Software Version: 0x10483
      Unit Spec ID: 0x609E
      Firmware Revision: 0x107
      Product Revision Level: ST6O
      Sub-units:
        OXFORD IDE Device 1 SBP-LUN:
          Capacity: 1 TB (1,000,204,886,016 bytes)
          Removable Media: Yes
          BSD Name: disk3
          Partition Map Type: MBR (Master Boot Record)
          S.M.A.R.T. status: Not Supported

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • Syncing Data with a Server using Silverlight and HTTP Polling Duplex

    - by dwahlin
    Many applications have the need to stay in-sync with data provided by a service. Although web applications typically rely on standard polling techniques to check if data has changed, Silverlight provides several interesting options for keeping an application in-sync that rely on server “push” technologies. A few years back I wrote several blog posts covering different “push” technologies available in Silverlight that rely on sockets or HTTP Polling Duplex. We recently had a project that looked like it could benefit from pushing data from a server to one or more clients so I thought I’d revisit the subject and provide some updates to the original code posted. If you’ve worked with AJAX before in Web applications then you know that until browsers fully support web sockets or other duplex (bi-directional communication) technologies that it’s difficult to keep applications in-sync with a server without relying on polling. The problem with polling is that you have to check for changes on the server on a timed-basis which can often be wasteful and take up unnecessary resources. With server “push” technologies, data can be pushed from the server to the client as it changes. Once the data is received, the client can update the user interface as appropriate. Using “push” technologies allows the client to listen for changes from the data but stay 100% focused on client activities as opposed to worrying about polling and asking the server if anything has changed. Silverlight provides several options for pushing data from a server to a client including sockets, TCP bindings and HTTP Polling Duplex.  Each has its own strengths and weaknesses as far as performance and setup work with HTTP Polling Duplex arguably being the easiest to setup and get going.  In this article I’ll demonstrate how HTTP Polling Duplex can be used in Silverlight 4 applications to push data and show how you can create a WCF server that provides an HTTP Polling Duplex binding that a Silverlight client can consume.   What is HTTP Polling Duplex? Technologies that allow data to be pushed from a server to a client rely on duplex functionality. Duplex (or bi-directional) communication allows data to be passed in both directions.  A client can call a service and the server can call the client. HTTP Polling Duplex (as its name implies) allows a server to communicate with a client without forcing the client to constantly poll the server. It has the benefit of being able to run on port 80 making setup a breeze compared to the other options which require specific ports to be used and cross-domain policy files to be exposed on port 943 (as with sockets and TCP bindings). Having said that, if you’re looking for the best speed possible then sockets and TCP bindings are the way to go. But, they’re not the only game in town when it comes to duplex communication. The first time I heard about HTTP Polling Duplex (initially available in Silverlight 2) I wasn’t exactly sure how it was any better than standard polling used in AJAX applications. I read the Silverlight SDK, looked at various resources and generally found the following definition unhelpful as far as understanding the actual benefits that HTTP Polling Duplex provided: "The Silverlight client periodically polls the service on the network layer, and checks for any new messages that the service wants to send on the callback channel. The service queues all messages sent on the client callback channel and delivers them to the client when the client polls the service." 
Although the previous definition explained the overall process, it sounded as if standard polling was used. Fortunately, Microsoft’s Scott Guthrie provided me with a more clear definition several years back that explains the benefits provided by HTTP Polling Duplex quite well (used with his permission): "The [HTTP Polling Duplex] duplex support does use polling in the background to implement notifications – although the way it does it is different than manual polling. It initiates a network request, and then the request is effectively “put to sleep” waiting for the server to respond (it doesn’t come back immediately). The server then keeps the connection open but not active until it has something to send back (or the connection times out after 90 seconds – at which point the duplex client will connect again and wait). This way you are avoiding hitting the server repeatedly – but still get an immediate response when there is data to send." After hearing Scott’s definition the light bulb went on and it all made sense. A client makes a request to a server to check for changes, but instead of the request returning immediately, it parks itself on the server and waits for data. It’s kind of like waiting to pick up a pizza at the store. Instead of calling the store over and over to check the status, you sit in the store and wait until the pizza (the request data) is ready. Once it’s ready you take it back home (to the client). This technique provides a lot of efficiency gains over standard polling techniques even though it does use some polling of its own as a request is initially made from a client to a server. So how do you implement HTTP Polling Duplex in your Silverlight applications? Let’s take a look at the process by starting with the server. Creating an HTTP Polling Duplex WCF Service Creating a WCF service that exposes an HTTP Polling Duplex binding is straightforward as far as coding goes. Add some one way operations into an interface, create a client callback interface and you’re ready to go. The most challenging part comes into play when configuring the service to properly support the necessary binding and that’s more of a cut and paste operation once you know the configuration code to use. To create an HTTP Polling Duplex service you’ll need to expose server-side and client-side interfaces and reference the System.ServiceModel.PollingDuplex assembly (located at C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\Libraries\Server on my machine) in the server project. For the demo application I upgraded a basketball simulation service to support the latest polling duplex assemblies. The service simulates a simple basketball game using a Game class and pushes information about the game such as score, fouls, shots and more to the client as the game changes over time. Before jumping too far into the game push service, it’s important to discuss two interfaces used by the service to communicate in a bi-directional manner. The first is called IGameStreamService and defines the methods/operations that the client can call on the server (see Listing 1). The second is IGameStreamClient which defines the callback methods that a server can use to communicate with a client (see Listing 2).   [ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(IGameStreamClient))] public interface IGameStreamService { [OperationContract(IsOneWay = true)] void GetTeamData(); } Listing 1. The IGameStreamService interface defines server operations that can be called on the server.   
[ServiceContract] public interface IGameStreamClient { [OperationContract(IsOneWay = true)] void ReceiveTeamData(List<Team> teamData); [OperationContract(IsOneWay = true, AsyncPattern=true)] IAsyncResult BeginReceiveGameData(GameData gameData, AsyncCallback callback, object state); void EndReceiveGameData(IAsyncResult result); } Listing 2. The IGameStreamClient interfaces defines client operations that a server can call.   The IGameStreamService interface is decorated with the standard ServiceContract attribute but also contains a value for the CallbackContract property.  This property is used to define the interface that the client will expose (IGameStreamClient in this example) and use to receive data pushed from the service. Notice that each OperationContract attribute in both interfaces sets the IsOneWay property to true. This means that the operation can be called and passed data as appropriate, however, no data will be passed back. Instead, data will be pushed back to the client as it’s available.  Looking through the IGameStreamService interface you can see that the client can request team data whereas the IGameStreamClient interface allows team and game data to be received by the client. One interesting point about the IGameStreamClient interface is the inclusion of the AsyncPattern property on the BeginReceiveGameData operation. I initially created this operation as a standard one way operation and it worked most of the time. However, as I disconnected clients and reconnected new ones game data wasn’t being passed properly. After researching the problem more I realized that because the service could take up to 7 seconds to return game data, things were getting hung up. By setting the AsyncPattern property to true on the BeginReceivedGameData operation and providing a corresponding EndReceiveGameData operation I was able to get around this problem and get everything running properly. I’ll provide more details on the implementation of these two methods later in this post. Once the interfaces were created I moved on to the game service class. The first order of business was to create a class that implemented the IGameStreamService interface. Since the service can be used by multiple clients wanting game data I added the ServiceBehavior attribute to the class definition so that I could set its InstanceContextMode to InstanceContextMode.Single (in effect creating a Singleton service object). Listing 3 shows the game service class as well as its fields and constructor.   [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.Single)] public class GameStreamService : IGameStreamService { object _Key = new object(); Game _Game = null; Timer _Timer = null; Random _Random = null; Dictionary<string, IGameStreamClient> _ClientCallbacks = new Dictionary<string, IGameStreamClient>(); static AsyncCallback _ReceiveGameDataCompleted = new AsyncCallback(ReceiveGameDataCompleted); public GameStreamService() { _Game = new Game(); _Timer = new Timer { Enabled = false, Interval = 2000, AutoReset = true }; _Timer.Elapsed += new ElapsedEventHandler(_Timer_Elapsed); _Timer.Start(); _Random = new Random(); }} Listing 3. The GameStreamService implements the IGameStreamService interface which defines a callback contract that allows the service class to push data back to the client. 
By implementing the IGameStreamService interface, GameStreamService must supply a GetTeamData() method which is responsible for supplying information about the teams that are playing as well as individual players.  GetTeamData() also acts as a client subscription method that tracks clients wanting to receive game data.  Listing 4 shows the GetTeamData() method. public void GetTeamData() { //Get client callback channel var context = OperationContext.Current; var sessionID = context.SessionId; var currClient = context.GetCallbackChannel<IGameStreamClient>(); context.Channel.Faulted += Disconnect; context.Channel.Closed += Disconnect; IGameStreamClient client; if (!_ClientCallbacks.TryGetValue(sessionID, out client)) { lock (_Key) { _ClientCallbacks[sessionID] = currClient; } } currClient.ReceiveTeamData(_Game.GetTeamData()); //Start timer which when fired sends updated score information to client if (!_Timer.Enabled) { _Timer.Enabled = true; } } Listing 4. The GetTeamData() method subscribes a given client to the game service and returns. The key the line of code in the GetTeamData() method is the call to GetCallbackChannel<IGameStreamClient>().  This method is responsible for accessing the calling client’s callback channel. The callback channel is defined by the IGameStreamClient interface shown earlier in Listing 2 and used by the server to communicate with the client. Before passing team data back to the client, GetTeamData() grabs the client’s session ID and checks if it already exists in the _ClientCallbacks dictionary object used to track clients wanting callbacks from the server. If the client doesn’t exist it adds it into the collection. It then pushes team data from the Game class back to the client by calling ReceiveTeamData().  Since the service simulates a basketball game, a timer is then started if it’s not already enabled which is then used to randomly send data to the client. When the timer fires, game data is pushed down to the client. Listing 5 shows the _Timer_Elapsed() method that is called when the timer fires as well as the SendGameData() method used to send data to the client. void _Timer_Elapsed(object sender, ElapsedEventArgs e) { int interval = _Random.Next(3000, 7000); lock (_Key) { _Timer.Interval = interval; _Timer.Enabled = false; } SendGameData(_Game.GetGameData()); } private void SendGameData(GameData gameData) { var cbs = _ClientCallbacks.Where(cb => ((IContextChannel)cb.Value).State == CommunicationState.Opened); for (int i = 0; i < cbs.Count(); i++) { var cb = cbs.ElementAt(i).Value; try { cb.BeginReceiveGameData(gameData, _ReceiveGameDataCompleted, cb); } catch (TimeoutException texp) { //Log timeout error } catch (CommunicationException cexp) { //Log communication error } } lock (_Key) _Timer.Enabled = true; } private static void ReceiveGameDataCompleted(IAsyncResult result) { try { ((IGameStreamClient)(result.AsyncState)).EndReceiveGameData(result); } catch (CommunicationException) { // empty } catch (TimeoutException) { // empty } } LIsting 5. _Timer_Elapsed is used to simulate time in a basketball game. When _Timer_Elapsed() fires the SendGameData() method is called which iterates through the clients wanting to be notified of changes. As each client is identified, their respective BeginReceiveGameData() method is called which ultimately pushes game data down to the client. Recall that this method was defined in the client callback interface named IGameStreamClient shown earlier in Listing 2. 
Notice that BeginReceiveGameData() accepts _ReceiveGameDataCompleted as its second parameter (an AsyncCallback delegate defined in the service class) and passes the client callback as the third parameter. The initial version of the sample application had a standard ReceiveGameData() method in the client callback interface. However, sometimes the client callbacks would work properly and sometimes they wouldn’t which was a little baffling at first glance. After some investigation I realized that I needed to implement an asynchronous pattern for client callbacks to work properly since 3 – 7 second delays are occurring as a result of the timer. Once I added the BeginReceiveGameData() and ReceiveGameDataCompleted() methods everything worked properly since each call was handled in an asynchronous manner. The final task that had to be completed to get the server working properly with HTTP Polling Duplex was adding configuration code into web.config. In the interest of brevity I won’t post all of the code here since the sample application includes everything you need. However, Listing 6 shows the key configuration code to handle creating a custom binding named pollingDuplexBinding and associate it with the service’s endpoint.   <bindings> <customBinding> <binding name="pollingDuplexBinding"> <binaryMessageEncoding /> <pollingDuplex maxPendingSessions="2147483647" maxPendingMessagesPerSession="2147483647" inactivityTimeout="02:00:00" serverPollTimeout="00:05:00"/> <httpTransport /> </binding> </customBinding> </bindings> <services> <service name="GameService.GameStreamService" behaviorConfiguration="GameStreamServiceBehavior"> <endpoint address="" binding="customBinding" bindingConfiguration="pollingDuplexBinding" contract="GameService.IGameStreamService"/> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services>   Listing 6. Configuring an HTTP Polling Duplex binding in web.config and associating an endpoint with it. Calling the Service and Receiving “Pushed” Data Calling the service and handling data that is pushed from the server is a simple and straightforward process in Silverlight. Since the service is configured with a MEX endpoint and exposes a WSDL file, you can right-click on the Silverlight project and select the standard Add Service Reference item. After the web service proxy is created you may notice that the ServiceReferences.ClientConfig file only contains an empty configuration element instead of the normal configuration elements created when creating a standard WCF proxy. You can certainly update the file if you want to read from it at runtime but for the sample application I fed the service URI directly to the service proxy as shown next: var address = new EndpointAddress("http://localhost.:5661/GameStreamService.svc"); var binding = new PollingDuplexHttpBinding(); _Proxy = new GameStreamServiceClient(binding, address); _Proxy.ReceiveTeamDataReceived += _Proxy_ReceiveTeamDataReceived; _Proxy.ReceiveGameDataReceived += _Proxy_ReceiveGameDataReceived; _Proxy.GetTeamDataAsync(); This code creates the proxy and passes the endpoint address and binding to use to its constructor. It then wires the different receive events to callback methods and calls GetTeamDataAsync().  Calling GetTeamDataAsync() causes the server to store the client in the server-side dictionary collection mentioned earlier so that it can receive data that is pushed.  
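For completeness, here is a rough sketch of what the team-data callback wired up above might look like. This is an assumption rather than code from the sample: the ReceiveTeamDataReceivedEventArgs type and its teamData member are inferred by analogy with the ReceiveGameDataReceivedEventArgs handler shown in Listing 8 below, and the Team properties used are placeholders.

void _Proxy_ReceiveTeamDataReceived(object sender, ReceiveTeamDataReceivedEventArgs e)
{
    // e.teamData is assumed to carry the List<Team> pushed by ReceiveTeamData()
    List<Team> teams = e.teamData;

    // Placeholder UI update; the real sample binds team and player details
    this.tbTeam1.Text = teams[0].Name;   // Name is an assumed property of Team
    this.tbTeam2.Text = teams[1].Name;
}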
As the server-side timer fires and game data is pushed to the client, the user interface is updated as shown in Listing 7. Listing 8 shows the _Proxy_ReceiveGameDataReceived() method responsible for handling the data and calling UpdateGameData() to process it.   Listing 7. The Silverlight interface. Game data is pushed from the server to the client using HTTP Polling Duplex. void _Proxy_ReceiveGameDataReceived(object sender, ReceiveGameDataReceivedEventArgs e) { UpdateGameData(e.gameData); } private void UpdateGameData(GameData gameData) { //Update Score this.tbTeam1Score.Text = gameData.Team1Score.ToString(); this.tbTeam2Score.Text = gameData.Team2Score.ToString(); //Update ball visibility if (gameData.Action != ActionsEnum.Foul) { if (tbTeam1.Text == gameData.TeamOnOffense) { AnimateBall(this.BB1, this.BB2); } else //Team 2 { AnimateBall(this.BB2, this.BB1); } } if (this.lbActions.Items.Count > 9) this.lbActions.Items.Clear(); this.lbActions.Items.Add(gameData.LastAction); if (this.lbActions.Visibility == Visibility.Collapsed) this.lbActions.Visibility = Visibility.Visible; } private void AnimateBall(Image onBall, Image offBall) { this.FadeIn.Stop(); Storyboard.SetTarget(this.FadeInAnimation, onBall); Storyboard.SetTarget(this.FadeOutAnimation, offBall); this.FadeIn.Begin(); } Listing 8. As the server pushes game data, the client’s _Proxy_ReceiveGameDataReceived() method is called to process the data. In a real-life application I’d go with a ViewModel class to handle retrieving team data, setup data bindings and handle data that is pushed from the server. However, for the sample application I wanted to focus on HTTP Polling Duplex and keep things as simple as possible.   Summary Silverlight supports three options when duplex communication is required in an application including TCP bindins, sockets and HTTP Polling Duplex. In this post you’ve seen how HTTP Polling Duplex interfaces can be created and implemented on the server as well as how they can be consumed by a Silverlight client. HTTP Polling Duplex provides a nice way to “push” data from a server while still allowing the data to flow over port 80 or another port of your choice.   Sample Application Download

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns.  Last week’s post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  Today's post discusses the ConcurrentDictionary<T> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!).  Finally next week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.

Recap

As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained, and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much.

With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  This cuts both ways in that you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work.

With .NET 4.0, we get the best of both worlds in generic collections.  A new breed of collections was born called the concurrent collections in the System.Collections.Concurrent namespace.  These amazing collections are fine-tuned to have the best overall performance for situations requiring concurrent access.  They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms.

Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T> which provide classic LIFO and FIFO collections with a concurrent twist.  As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()).

Now, let's take a look at the next in our series of concurrent collections! For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

ConcurrentDictionary – the fully thread-safe dictionary

The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey, TValue> collection.  Obviously, both are designed for quick – O(1) – lookups of data based on a key.  If you think of algorithms where you need lightning fast lookups of data and don’t care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go. Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList which are stored as an ordered tree and an ordered list respectively.  While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering -- and still greatly outperform a linear search.
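As a concrete, hedged illustration of that "alternative to creating your own locking mechanisms" point, the sketch below contrasts a lock-protected Dictionary lookup-or-insert with the equivalent single call on ConcurrentDictionary. The cache names and the length-based value are invented purely for illustration:

using System.Collections.Generic;        // Dictionary<TKey,TValue>
using System.Collections.Concurrent;     // ConcurrentDictionary<TKey,TValue>

public static class CacheSketch
{
    // Hand-rolled synchronization around a plain generic Dictionary.
    private static readonly object _lock = new object();
    private static readonly Dictionary<string, int> _plainCache = new Dictionary<string, int>();

    public static int LookupWithLock(string key)
    {
        lock (_lock)
        {
            int value;
            if (!_plainCache.TryGetValue(key, out value))
            {
                value = key.Length;          // stand-in for an expensive computation
                _plainCache[key] = value;
            }
            return value;
        }
    }

    // The same "get it, or compute and add it" step as one call on ConcurrentDictionary.
    private static readonly ConcurrentDictionary<string, int> _concurrentCache =
        new ConcurrentDictionary<string, int>();

    public static int LookupConcurrent(string key)
    {
        // The value factory may run more than once under a race,
        // but only one result is ever stored for the key.
        return _concurrentCache.GetOrAdd(key, k => k.Length);
    }
}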
Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading, you do not need any locking.  Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference.  In such cases locking is completely non-productive. However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates.  This is where the ConcurrentDictionary really shines!  It achieves its thread-safety with no common lock to improve efficiency.  It actually uses a series of locks to provide concurrent updates, and has lockless reads!  This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have. ConcurrentDictionary and Dictionary differences For the most part, the ConcurrentDictionary<TKey,TValue> behaves like its Dictionary<TKey,TValue> counterpart with a few differences.  Some notable examples are: Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd().  It also means that you can’t use a collection initializer with the concurrent dictionary. TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation to check if the item exists, and if not, add it while still under an atomic lock. TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be.  If all these are true, we can update the item in one atomic step. TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first; this method checks for existence and removes under an atomic lock. AddOrUpdate() was added to attempt a thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does.  This allows you to make a thread-safe add-or-update. GetOrAdd() was added to attempt a thread-safe query/insert. Sometimes you want to query for whether an item exists in the cache, and if it doesn’t, insert a starting value for it.  This allows you to get the value if it exists and insert it if not. Count, Keys, Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution. ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked, and then copied to an array as an O(n) operation.  GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary.  The only downside is that, depending on timing, you may get dirty reads. Dirty reads during iteration The last point on GetEnumerator() bears some explanation.  Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration, the dictionary gets updated.  This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly.  The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update. 
Thus you may get a combination of items that are “stale” because you iterated before the update, and “fresh” because they were updated after GetEnumerator() but before the iteration reached them. Let’s illustrate with an example, let’s say you load up a concurrent dictionary like this: 1: // load up a dictionary. 2: var dictionary = new ConcurrentDictionary<string, int>(); 3:  4: dictionary["A"] = 1; 5: dictionary["B"] = 2; 6: dictionary["C"] = 3; 7: dictionary["D"] = 4; 8: dictionary["E"] = 5; 9: dictionary["F"] = 6; Then you have one task (using the wonderful TPL!) to iterate using dirty reads: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); And one task to attempt updates in a separate thread (probably): 1: // attempt updates in a separate thread 2: var updateTask = new Task(() => 3: { 4: // iterates, and updates the value by one 5: foreach (var pair in dictionary) 6: { 7: dictionary[pair.Key] = pair.Value + 1; 8: } 9: }); Now that we’ve done this, we can fire up both tasks and wait for them to complete: 1: // start both tasks 2: updateTask.Start(); 3: iterationTask.Start(); 4:  5: // wait for both to complete. 6: Task.WaitAll(updateTask, iterationTask); Now, if I you didn’t know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6).  However, because the reads are dirty, we will quite possibly get a combination of some updated, some original.  My own run netted this result: 1: F:6 2: E:6 3: D:5 4: C:4 5: B:3 6: A:2 Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered.  Also note that both E and F show the value 6.  This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively). If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary and no subsequent updates during iteration), we should iterate over a call to ToArray() instead: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary.ToArray()) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); The atomic Try…() methods As you can imagine TryAdd() and TryRemove() have few surprises.  Both first check the existence of the item to determine if it can be added or removed based on whether or not the key currently exists in the dictionary: 1: // try add attempts an add and returns false if it already exists 2: if (dictionary.TryAdd("G", 7)) 3: Console.WriteLine("G did not exist, now inserted with 7"); 4: else 5: Console.WriteLine("G already existed, insert failed."); TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key: 1: // attempt to remove the value, if it exists it is removed and the original is returned 2: int removedValue; 3: if (dictionary.TryRemove("C", out removedValue)) 4: Console.WriteLine("Removed C and its value was " + removedValue); 5: else 6: Console.WriteLine("C did not exist, remove failed."); Now TryUpdate() is an interesting creature.  
You might think from its name that TryUpdate() first checks for an item’s existence, and then updates if the item exists, otherwise it returns false.  Well, not quite... It turns out when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update.  If the item exists in the dictionary, and it has the value you expected, it will update it to the new value atomically and return true.  If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned. 1: // attempt to update the value, if it exists and if it has the expected original value 2: if (dictionary.TryUpdate("G", 42, 7)) 3: Console.WriteLine("G existed and was 7, now it's 42."); 4: else 5: Console.WriteLine("G either didn't exist, or wasn't 7."); The composite Add methods The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item does not exist at the time of the update or get. The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn’t exist, or update the existing item if it does.  For example, let’s say you are creating a dictionary of counts of stock ticker symbols you’ve subscribed to from a market data feed: 1: public sealed class SubscriptionManager 2: { 3: private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>(); 4:  5: // adds a new subscription, or increments the count of the existing one. 6: public void AddSubscription(string tickerKey) 7: { 8: // add a new subscription with count of 1, or update existing count by 1 if exists 9: var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1); 10:  11: // now check the result to see if we just incremented the count, or inserted first count 12: if (resultCount == 1) 13: { 14: // subscribe to symbol... 15: } 16: } 17: } Notice the update value factory Func delegate.  If the key does not exist in the dictionary, the add value is used (in this case 1, representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate which computes the new value to be stored in the dictionary.  The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated). Likewise, GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value.  This can be handy in cases where you wish to cache data: you would query the cache to see if the item exists, and if it doesn’t you would put the item into the cache for the first time: 1: public sealed class PriceCache 2: { 3: private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>(); 4:  5: // returns the cached price, or queries and caches it if it isn't there yet. 6: public double QueryPrice(string tickerKey) 7: { 8: // check for the price in the cache, if it doesn't exist it will call the delegate to create value. 9: return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol)); 10: } 11:  12: private double GetCurrentPrice(string tickerKey) 13: { 14: // do code to calculate actual true price. 
15: } 16: } There are other variations of these two methods which vary whether a value is provided or a factory delegate, but otherwise they work much the same. Oddities with the composite Add methods The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic.  It is important to note that the methods that use delegates execute those delegates outside of the lock.  This was done intentionally so that a user delegate (over which the ConcurrentDictionary has no control, of course) does not take too long and lock out other threads. This is not necessarily an issue, per se, but it is something you must consider in your design.  The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned!  Consider this scenario: thread A calls GetOrAdd() and sees that the key does not currently exist, so it calls the delegate.  Now thread B also calls GetOrAdd() and also sees that the key does not currently exist, and for whatever reason in this race condition its delegate completes first and it adds its new value to the dictionary.  Now thread A is done and goes to get the lock, and sees that the item now exists.  In this case, even though it called the delegate to create the item, it will pitch it because an item arrived between the time it attempted to create one and the time it attempted to add it. Let’s illustrate: assume this totally contrived example program which has a dictionary of char to int.  In this dictionary we want to store a char and its ordinal (that is, A = 1, B = 2, etc.).  So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked): 1: public static class Program 2: { 3: private static int _nextNumber = 0; 4:  5: // the holder of the char to ordinal 6: private static ConcurrentDictionary<char, int> _dictionary 7: = new ConcurrentDictionary<char, int>(); 8:  9: // get the next id value 10: public static int NextId 11: { 12: get { return Interlocked.Increment(ref _nextNumber); } 13: } Then, we add a method that will perform our insert: 1: public static void Inserter() 2: { 3: for (int i = 0; i < 26; i++) 4: { 5: _dictionary.GetOrAdd((char)('A' + i), key => NextId); 6: } 7: } Finally, we run our test by starting two tasks to do this work and get the results… 1: public static void Main() 2: { 3: // 2 tasks attempting to get/insert 4: var tasks = new List<Task> 5: { 6: new Task(Inserter), 7: new Task(Inserter) 8: }; 9:  10: tasks.ForEach(t => t.Start()); 11: Task.WaitAll(tasks.ToArray()); 12:  13: foreach (var pair in _dictionary.OrderBy(p => p.Key)) 14: { 15: Console.WriteLine(pair.Key + ":" + pair.Value); 16: } 17: } If you run this with only one task, you get the expected A:1, B:2, ..., Z:26.  But running this in parallel you will get something a bit more complex.  My run netted these results: 1: A:1 2: B:3 3: C:4 4: D:5 5: E:6 6: F:7 7: G:8 8: H:9 9: I:10 10: J:11 11: K:12 12: L:13 13: M:14 14: N:15 15: O:16 16: P:17 17: Q:18 18: R:19 19: S:20 20: T:21 21: U:22 22: V:23 23: W:24 24: X:25 25: Y:26 26: Z:27 Notice that B is 3?  This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3.  However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won and the 3 was inserted and the 2 got discarded. 
This is why on these methods your factory delegates should be careful not to have any logic that would be unsafe if the value they generate is pitched in favor of another item generated at roughly the same time.  As such, it is probably a good idea to keep those generators as stateless as possible. Summary The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection.  It has all the benefits of type-safety that its generic collection counterpart does, and in addition is extremely efficient, especially when there are more reads than writes concurrently. Technorati Tags: C#, .NET, Concurrent Collections, Collections, Little Wonders, Black Rabbit Coder, James Michael Hare
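As an editorial side note on the GetOrAdd() race described above (this mitigation is not part of the original post, so treat it as a sketch; the _lazyDictionary field and GetOrAddOrdinal method are my names): one common approach is to store Lazy<T> values in the dictionary, so that the expensive factory body runs at most once per key no matter how many threads race on the add.
// Minimal sketch; assumes the NextId property from the example above.
// Requires System and System.Collections.Concurrent.
private static readonly ConcurrentDictionary<char, Lazy<int>> _lazyDictionary =
    new ConcurrentDictionary<char, Lazy<int>>();

public static int GetOrAddOrdinal(char key)
{
    // Both racing threads may create a Lazy<int> wrapper, but only the wrapper that
    // wins the add ever has its Value read, so NextId is consumed once per key.
    return _lazyDictionary.GetOrAdd(key, k => new Lazy<int>(() => NextId)).Value;
}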

    Read the article

  • What is auto-mounting my media volume?

    - by user285277
    Something is repeatedly mounting my "media" share, doing something with it, then quietly un-mounting it with no notifications at the user level. from the little I can gleaned from the console messages below, I thought I'd managed to stop it, if not understand it last night when I followed instructions for deleting all traces of the Google Update Daemon. I've not been using any Google apps whatsoever, so I was surprised to see that in Console. What's more surprising, and perhaps a little distressing, is that the same thing occurred this evening, when the Google Daemon is long gone. I don't have that log because I can't recall precisely what time it occurred. I'll do a search and provide it if you wish, though. In the meantime, any help with this would be extremely appreciated. I've asked over at Apple Discussions but I think it might be a little deeper than those manning the boards this weekend are comfortable with. It's certainly beyond my meager skills. With apologies in advance if this is more lines thank you need. Please note that I've transformed the Google links a little because the forum here requires more reputation points before one can post more than two links. 12/27/13 10:47:31.000 PM kernel[0]: memorystatus_thread: idle exiting pid 53629 [distnoted] 12/27/13 10:48:10.433 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 0; not stale; error: The file doesn’t exist. 12/27/13 10:48:10.434 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 1; not stale; error: The file doesn’t exist. 12/27/13 10:48:10.950 PM com.apple.SecurityServer[17]: Session 103257 created 12/27/13 10:48:34.328 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 2; not stale; error: The file doesn’t exist. 12/27/13 10:48:34.000 PM kernel[0]: AFP_VFS afpfs_mount: /Volumes/Media Archive-1, pid 53641 12/27/13 10:48:34.000 PM kernel[0]: AFP_VFS afpfs_mount : succeeded on volume 0xffffff80d6355008 /Volumes/Media Archive-1 (error = 0, retval = 0) 12/27/13 10:49:32.000 PM kernel[0]: wlEvent: en0 en0 Link DOWN virtIf = 0 12/27/13 10:49:32.000 PM kernel[0]: AirPort: Link Down on en0. Reason 8 (Disassociated because station leaving). 12/27/13 10:49:32.000 PM kernel[0]: en0::IO80211Interface::postMessage bssid changed 12/27/13 10:49:33.681 PM configd[16]: network changed: v4(en0-:10.0.1.12) DNS- Proxy- SMB 12/27/13 10:49:33.697 PM configd[16]: network changed: DNS* Proxy 12/27/13 10:49:35.475 PM KernelEventAgent[57]: tid 00000000 received event(s) VQ_NOTRESP (1) 12/27/13 10:49:35.000 PM kernel[0]: ASP_TCP Disconnect: triggering reconnect by bumping reconnTrigger from curr value 0 on so 0xffffff802176b4a0 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect started /Volumes/Media Archive-1 prevTrigger 0 currTrigger 1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: doing reconnect on /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: posting to KEA EINPROGRESS for /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: Max reconnect time: 600 secs, Connect timeout: 15 secs for /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 
12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 1 seconds and then try again 12/27/13 10:49:35.479 PM KernelEventAgent[57]: tid 00000000 type 'afpfs', mounted on '/Volumes/Media Archive-1', from '//Me@Capsule._afpovertcp._tcp.local/Media%20Archive', not responding 12/27/13 10:49:35.487 PM KernelEventAgent[57]: tid 00000000 found 1 filesystem(s) with problem(s) 12/27/13 10:49:36.692 PM com.bourgeoisbits.cloak.agent[14503]: NetworkProfile: (null), (null), (null) (Connected: NO, Airport: NO, Open: NO) [trusted] 12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 2 seconds and then try again 12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 4 seconds and then try again 12/27/13 10:49:41.000 PM kernel[0]: CODE SIGNING: cs_invalid_page(0x1000): p=53662[GoogleSoftwareUp] clearing CS_VALID 12/27/13 10:49:42.102 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon logServiceState] GoogleSoftwareUpdate daemon (1.1.0.3659) vending: com.google.Keystone.Daemon.UpdateEngine: 2 connection(s) com.google.Keystone.Daemon.Administration: 0 connection(s) 12/27/13 10:49:42.113 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine updateProductID:] KSUpdateEngine updating product ID: "com.google.Keystone" 12/27/13 10:49:42.116 PM GoogleSoftwareUpdateDaemon[53663]: -[KSCheckAction performAction] KSCheckAction checking 1 ticket(s). 
12/27/13 10:49:42.121 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction starting update check for ticket(s): {( <KSTicket:0x531870 productID=com.google.Keystone version=1.1.0.3659 xc=<KSPathExistenceChecker:0x5302d0 path=/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 > )} Using server: <KSOmahaServer:0x534340 engine=<KSDaemonUpdateEngine:0x52e530> params={ EngineVersion = "1.1.0.3659"; ActivesInfo = { "com.google.talkplugin" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; }; "com.google.Keystone" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; LastActivePingDate = 2013-10-06 07:00:00 +0000; LastActiveDate = 2013-12-28 03:49:42 +0000; }; "com.google.picasa" = { LastActiveDate = 2012-08-29 10:15:42 +0000; }; }; UserInitiated = 0; IsSystem = 1; OmahaOSVersion = "10.8.5_i486"; Identity = KeystoneDaemon; AllowedSubdomains = ( ".omaha.sandbox.google.com", ".tools.google.com", ".www.google.com", ".corp.google.com" ); } > 12/27/13 10:49:42.130 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction running KSServerUpdateRequest: <KSOmahaServerUpdateRequest:0x1a31a90 server=<KSOmahaServer:0x534340> url="htt[PeeEs]://tools.google.com/service/update2" runningFetchers=0 tickets=1 activeTickets=1 rollCallTickets=1 body= <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <o:gupdate xmlns:o="htt[Pee]://www.google.com/update2/request" protocol="2.0" version="KeystoneDaemon-1.1.0.3659" ismachine="1"> <o:os platform="mac" version="MacOSX" sp="10.8.5_i486"></o:os> <o:app appid="com.google.Keystone" version="1.1.0.3659" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82" a="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> </o:gupdate> > 12/27/13 10:49:42.291 PM GoogleSoftwareUpdateDaemon[53663]: -[KSOutOfProcessFetcher(PrivateMethods) helperDidTerminate:] The Internet connection appears to be offline. [NSURLErrorDomain:-1009] 12/27/13 10:49:42.291 PM GoogleSoftwareUpdateDaemon[53663]: -[KSServerUpdateRequest(PrivateMethods) fetcher:failedWithError:] KSServerUpdateRequest fetch failed. (productIDs: com.google.Keystone) [com.google.UpdateEngine.CoreErrorDomain:702 - 'htt[PeeEs]://tools.google.com/service/update2'] (The Internet connection appears to be offline. [NSURLErrorDomain:-1009]) 12/27/13 10:49:42.292 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction(PrivateMethods) finishAction] KSUpdateCheckAction found updates: {( )} 12/27/13 10:49:42.295 PM GoogleSoftwareUpdateDaemon[53663]: -[KSPrefetchAction performAction] KSPrefetchAction no updates to prefetch. 12/27/13 10:49:42.295 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSSilentUpdateAction had no updates to apply. 12/27/13 10:49:42.296 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSPromptAction had no updates to apply. 12/27/13 10:49:42.299 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine(PrivateMethods) updateFinish] KSUpdateEngine update processing complete. 12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 
12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 8 seconds and then try again 12/27/13 10:49:43.152 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine updateAllProducts] KSUpdateEngine updating all installed products. 12/27/13 10:49:43.153 PM GoogleSoftwareUpdateDaemon[53663]: -[KSCheckAction performAction] KSCheckAction checking 2 ticket(s). 12/27/13 10:49:43.158 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction starting update check for ticket(s): {( <KSTicket:0x18367a0 productID=com.google.Keystone version=1.1.0.3659 xc=<KSPathExistenceChecker:0x1837e10 path=/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 >, <KSTicket:0x1834750 productID=com.google.talkplugin version=4.7.0.15362 xc=<KSPathExistenceChecker:0x1833890 path=/Library/Application Support/Google/GoogleTalkPlugin.app> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 > )} Using server: <KSOmahaServer:0x52e930 engine=<KSDaemonUpdateEngine:0x52e530> params={ EngineVersion = "1.1.0.3659"; ActivesInfo = { "com.google.talkplugin" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; }; "com.google.Keystone" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; LastActivePingDate = 2013-10-06 07:00:00 +0000; LastActiveDate = 2013-12-28 03:49:42 +0000; }; "com.google.picasa" = { LastActiveDate = 2012-08-29 10:15:42 +0000; }; }; UserInitiated = 0; IsSystem = 1; OmahaOSVersion = "10.8.5_i486"; Identity = KeystoneDaemon; AllowedSubdomains = ( ".omaha.sandbox.google.com", ".tools.google.com", ".www.google.com", ".corp.google.com" ); } > 12/27/13 10:49:43.159 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction running KSServerUpdateRequest: <KSOmahaServerUpdateRequest:0x53a210 server=<KSOmahaServer:0x52e930> url="htt[PeeEs]://tools.google.com/service/update2" runningFetchers=0 tickets=2 activeTickets=1 rollCallTickets=2 body= <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <o:gupdate xmlns:o="htt[Pee]://www.google.com/update2/request" protocol="2.0" version="KeystoneDaemon-1.1.0.3659" ismachine="1"> <o:os platform="mac" version="MacOSX" sp="10.8.5_i486"></o:os> <o:app appid="com.google.Keystone" version="1.1.0.3659" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82" a="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> <o:app appid="com.google.talkplugin" version="4.7.0.15362" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> </o:gupdate> > 12/27/13 10:49:43.243 PM GoogleSoftwareUpdateDaemon[53663]: -[KSOutOfProcessFetcher(PrivateMethods) helperDidTerminate:] The Internet connection appears to be offline. [NSURLErrorDomain:-1009] 12/27/13 10:49:43.243 PM GoogleSoftwareUpdateDaemon[53663]: -[KSServerUpdateRequest(PrivateMethods) fetcher:failedWithError:] KSServerUpdateRequest fetch failed. (productIDs: com.google.Keystone, ... (2)) [com.google.UpdateEngine.CoreErrorDomain:702 - 'htt[PeeEs]://tools.google.com/service/update2'] (The Internet connection appears to be offline. [NSURLErrorDomain:-1009]) 12/27/13 10:49:43.244 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction(PrivateMethods) finishAction] KSUpdateCheckAction found updates: {( )} 12/27/13 10:49:43.247 PM GoogleSoftwareUpdateDaemon[53663]: -[KSPrefetchAction performAction] KSPrefetchAction no updates to prefetch. 
12/27/13 10:49:43.248 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSSilentUpdateAction had no updates to apply. 12/27/13 10:49:43.248 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSPromptAction had no updates to apply. 12/27/13 10:49:43.250 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine(PrivateMethods) updateFinish] KSUpdateEngine update processing complete. 12/27/13 10:49:45.570 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon logServiceState] GoogleSoftwareUpdate daemon (1.1.0.3659) vending: com.google.Keystone.Daemon.UpdateEngine: 1 connection(s) com.google.Keystone.Daemon.Administration: 0 connection(s) 12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 10 seconds and then try again 12/27/13 10:49:53.828 PM KernelEventAgent[57]: tid 00000000 unmounting 1 filesystems 12/27/13 10:49:53.000 PM kernel[0]: AFP_VFS afpfs_unmount: /Volumes/Media Archive-1, flags 524288, pid 57 12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: get the reconnect token 12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: GetReconnectToken failed 32 /Volumes/Media Archive-1 12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_unmount : afpfs_DoReconnect sent signal for unmount to proceed 12/27/13 10:50:12.104 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon main] GoogleSoftwareUpdateDaemon inactive, shutdown. 12/27/13 10:50:29.396 PM Dock[93157]: no information back from LS about running process

    Read the article

  • How LINQ to Object statements work

    - by rajbk
This post goes into detail as to how LINQ statements work when querying a collection of objects. This topic assumes you have an understanding of how generics, delegates, implicitly typed variables, lambda expressions, object/collection initializers, extension methods and the yield statement work. I would also recommend you read my previous two posts: Using Delegates in C# Part 1 Using Delegates in C# Part 2 We will start by writing some methods to filter a collection of data. Assume we have an Employee class like so: 1: public class Employee { 2: public int ID { get; set;} 3: public string FirstName { get; set;} 4: public string LastName {get; set;} 5: public string Country { get; set; } 6: } and a collection of employees like so: 1: var employees = new List<Employee> { 2: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 3: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 4: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 5: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" }, 6: }; Filtering We wish to find all employees that have an even ID. We could start off by writing a method that takes in a list of employees and returns a filtered list of employees with an even ID. 1: static List<Employee> GetEmployeesWithEvenID(List<Employee> employees) { 2: var filteredEmployees = new List<Employee>(); 3: foreach (Employee emp in employees) { 4: if (emp.ID % 2 == 0) { 5: filteredEmployees.Add(emp); 6: } 7: } 8: return filteredEmployees; 9: } The method can be rewritten to return an IEnumerable<Employee> using the yield return keyword. 1: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) { 2: foreach (Employee emp in employees) { 3: if (emp.ID % 2 == 0) { 4: yield return emp; 5: } 6: } 7: } We put these together in a console application. 1: using System; 2: using System.Collections.Generic; 3: //No System.Linq 4:  5: public class Program 6: { 7: [STAThread] 8: static void Main(string[] args) 9: { 10: var employees = new List<Employee> { 11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" }, 15: }; 16: var filteredEmployees = GetEmployeesWithEvenID(employees); 17:  18: foreach (Employee emp in filteredEmployees) { 19: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 20: emp.ID, emp.FirstName, emp.LastName, emp.Country); 21: } 22:  23: Console.ReadLine(); 24: } 25: 26: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) { 27: foreach (Employee emp in employees) { 28: if (emp.ID % 2 == 0) { 29: yield return emp; 30: } 31: } 32: } 33: } 34:  35: public class Employee { 36: public int ID { get; set;} 37: public string FirstName { get; set;} 38: public string LastName {get; set;} 39: public string Country { get; set; } 40: } Output: ID 2 First_Name Jim Last_Name Ashlock Country UK ID 4 First_Name Jill Last_Name Anderson Country AUS Our filtering method is too specific. Let us change it so that it is capable of doing different types of filtering, and let's give our method the name Where ;-) We will add another parameter to our Where method. 
This additional parameter will be a delegate with the following declaration. public delegate bool Filter(Employee emp); The idea is that the delegate parameter in our Where method will point to a method that contains the logic to do our filtering thereby freeing our Where method from any dependency. The method is shown below: 1: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 2: foreach (Employee emp in employees) { 3: if (filter(emp)) { 4: yield return emp; 5: } 6: } 7: } Making the change to our app, we create a new instance of the Filter delegate on line 14 with a target set to the method EmployeeHasEvenId. Running the code will produce the same output. 1: public delegate bool Filter(Employee emp); 2:  3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, filterDelegate); 16:  17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36:  37: public class Employee { 38: public int ID { get; set;} 39: public string FirstName { get; set;} 40: public string LastName {get; set;} 41: public string Country { get; set; } 42: } Lets use lambda expressions to inline the contents of the EmployeeHasEvenId method in place of the method. The next code snippet shows this change (see line 15).  For brevity, the Employee class declaration has been skipped. 1: public delegate bool Filter(Employee emp); 2:  3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 16:  17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36:  The output displays the same two employees.  
Our Where method is too restricted since it works with a collection of Employees only. Lets change it so that it works with any IEnumerable<T>. In addition, you may recall from my previous post,  that .NET 3.5 comes with a lot of predefined delegates including public delegate TResult Func<T, TResult>(T arg); We will get rid of our Filter delegate and use the one above instead. We apply these two changes to our code. 1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12:  13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14:  15: foreach (Employee emp in filteredEmployees) { 16: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 17: emp.ID, emp.FirstName, emp.LastName, emp.Country); 18: } 19: Console.ReadLine(); 20: } 21: 22: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 23: foreach (var x in source) { 24: if (filter(x)) { 25: yield return x; 26: } 27: } 28: } 29: } We have successfully implemented a way to filter any IEnumerable<T> based on a  filter criteria. Projection Now lets enumerate on the items in the IEnumerable<Employee> we got from the Where method and copy them into a new IEnumerable<EmployeeFormatted>. The EmployeeFormatted class will only have a FullName and ID property. 1: public class EmployeeFormatted { 2: public int ID { get; set; } 3: public string FullName {get; set;} 4: } We could “project” our existing IEnumerable<Employee> into a new collection of IEnumerable<EmployeeFormatted> with the help of a new method. We will call this method Select ;-) 1: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 2: foreach (var emp in employees) { 3: yield return new EmployeeFormatted { 4: ID = emp.ID, 5: FullName = emp.LastName + ", " + emp.FirstName 6: }; 7: } 8: } The changes are applied to our app. 
1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12:  13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14: var formattedEmployees = Select(filteredEmployees); 15:  16: foreach (EmployeeFormatted emp in formattedEmployees) { 17: Console.WriteLine("ID {0} Full_Name {1}", 18: emp.ID, emp.FullName); 19: } 20: Console.ReadLine(); 21: } 22:  23: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 24: foreach (var x in source) { 25: if (filter(x)) { 26: yield return x; 27: } 28: } 29: } 30: 31: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 32: foreach (var emp in employees) { 33: yield return new EmployeeFormatted { 34: ID = emp.ID, 35: FullName = emp.LastName + ", " + emp.FirstName 36: }; 37: } 38: } 39: } 40:  41: public class Employee { 42: public int ID { get; set;} 43: public string FirstName { get; set;} 44: public string LastName {get; set;} 45: public string Country { get; set; } 46: } 47:  48: public class EmployeeFormatted { 49: public int ID { get; set; } 50: public string FullName {get; set;} 51: } Output: ID 2 Full_Name Ashlock, Jim ID 4 Full_Name Anderson, Jill We have successfully selected employees who have an even ID and then shaped our data with the help of the Select method so that the final result is an IEnumerable<EmployeeFormatted>.  Lets make our Select method more generic so that the user is given the freedom to shape what the output would look like. We can do this, like before, with lambda expressions. Our Select method is changed to accept a delegate as shown below. TSource will be the type of data that comes in and TResult will be the type the user chooses (shape of data) as returned from the selector delegate. 1:  2: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 3: foreach (var x in source) { 4: yield return selector(x); 5: } 6: } We see the new changes to our app. On line 15, we use lambda expression to specify the shape of the data. In this case the shape will be of type EmployeeFormatted. 
1:  2: public class Program 3: { 4: [STAThread] 5: static void Main(string[] args) 6: { 7: var employees = new List<Employee> { 8: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 9: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 10: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 11: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 12: }; 13:  14: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 15: var formattedEmployees = Select(filteredEmployees, (emp) => 16: new EmployeeFormatted { 17: ID = emp.ID, 18: FullName = emp.LastName + ", " + emp.FirstName 19: }); 20:  21: foreach (EmployeeFormatted emp in formattedEmployees) { 22: Console.WriteLine("ID {0} Full_Name {1}", 23: emp.ID, emp.FullName); 24: } 25: Console.ReadLine(); 26: } 27: 28: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 29: foreach (var x in source) { 30: if (filter(x)) { 31: yield return x; 32: } 33: } 34: } 35: 36: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 37: foreach (var x in source) { 38: yield return selector(x); 39: } 40: } 41: } The code outputs the same result as before. On line 14 we filter our data and on line 15 we project our data. What if we wanted to be more expressive and concise? We could combine both line 14 and 15 into one line as shown below. Assuming you had to perform several operations like this on our collection, you would end up with some very unreadable code! 1: var formattedEmployees = Select(Where(employees, emp => emp.ID % 2 == 0), (emp) => 2: new EmployeeFormatted { 3: ID = emp.ID, 4: FullName = emp.LastName + ", " + emp.FirstName 5: }); A cleaner way to write this would be to give the appearance that the Select and Where methods were part of the IEnumerable<T>. This is exactly what extension methods give us. Extension methods have to be defined in a static class. Let us make the Select and Where extension methods on IEnumerable<T> 1: public static class MyExtensionMethods { 2: static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 3: foreach (var x in source) { 4: if (filter(x)) { 5: yield return x; 6: } 7: } 8: } 9: 10: static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 11: foreach (var x in source) { 12: yield return selector(x); 13: } 14: } 15: } The creation of the extension method makes the syntax much cleaner as shown below. We can write as many extension methods as we want and keep on chaining them using this technique. 1: var formattedEmployees = employees 2: .Where(emp => emp.ID % 2 == 0) 3: .Select (emp => new EmployeeFormatted { ID = emp.ID, FullName = emp.LastName + ", " + emp.FirstName }); Making these changes and running our code produces the same result. 
1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new EmployeeFormatted { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (EmployeeFormatted emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } 55:  56: public class EmployeeFormatted { 57: public int ID { get; set; } 58: public string FullName {get; set;} 59: } Let’s change our code to return a collection of anonymous types and get rid of the EmployeeFormatted type. We see that the code produces the same output. 1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (var emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } To be more expressive, C# allows us to write our extension method calls as a query expression. 
Line 16 can be rewritten as a query expression like so: 1: var formattedEmployees = from emp in employees 2: where emp.ID % 2 == 0 3: select new { 4: ID = emp.ID, 5: FullName = emp.LastName + ", " + emp.FirstName 6: }; When the compiler encounters an expression like the above, it simply rewrites it as calls to our extension methods.  So far we have been using our own extension methods. The System.Linq namespace contains several extension methods for objects that implement IEnumerable<T>. You can see a listing of these methods in the Enumerable class in the System.Linq namespace. Let’s get rid of our extension methods (which I purposefully wrote to be of the same signature as the ones in the Enumerable class) and use the ones provided in the Enumerable class. Our final code is shown below: 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; //Added 4:  5: public class Program 6: { 7: [STAThread] 8: static void Main(string[] args) 9: { 10: var employees = new List<Employee> { 11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 15: }; 16:  17: var formattedEmployees = from emp in employees 18: where emp.ID % 2 == 0 19: select new { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: }; 23:  24: foreach (var emp in formattedEmployees) { 25: Console.WriteLine("ID {0} Full_Name {1}", 26: emp.ID, emp.FullName); 27: } 28: Console.ReadLine(); 29: } 30: } 31:  32: public class Employee { 33: public int ID { get; set;} 34: public string FirstName { get; set;} 35: public string LastName {get; set;} 36: public string Country { get; set; } 37: } 38:  39: public class EmployeeFormatted { 40: public int ID { get; set; } 41: public string FullName {get; set;} 42: } This post has given you a basic overview of how LINQ to Objects works by showing you how an expression is converted to a sequence of calls to extension methods when working directly with objects. It gets more interesting when working with LINQ to SQL, where an expression tree is constructed – an in-memory data representation of the expression. The C# compiler compiles these expressions into code that builds an expression tree at runtime. The provider can then traverse the expression tree and generate the appropriate SQL query. You can read more about expression trees in this MSDN article.
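To make that last point a little more concrete, here is a minimal sketch (mine, not from the article) showing that the same lambda, when typed as Expression<Func<Employee, bool>> instead of Func<Employee, bool>, becomes an inspectable data structure rather than a compiled delegate:
// Requires System and System.Linq.Expressions; reuses the Employee class from the post.
Expression<Func<Employee, bool>> filterExpr = emp => emp.ID % 2 == 0;

// The lambda is now data a provider can walk (for example, to build a SQL WHERE clause).
Console.WriteLine(filterExpr);               // emp => ((emp.ID % 2) == 0)
Console.WriteLine(filterExpr.Body.NodeType); // Equal

// It can still be compiled back into a delegate and executed in memory.
Func<Employee, bool> filter = filterExpr.Compile();
Console.WriteLine(filter(new Employee { ID = 2 })); // True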

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionState and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers and even all the way back to IE8, so you can use them safely in your Web applications. SessionState and LocalStorage are easy The APIs that make up sessionState and localStorage are very simple. Both object feature the same API interface which  is a simple, string based key value store that has getItem, setItem, removeitem, clear and  key methods. The objects are also pseudo array objects and so can be iterated like an array with  a length property and you have array indexers to set and get values with. Basic usage  for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):// set var lastAccess = new Date().getTime(); if (sessionStorage) sessionStorage.setItem("myapp_time", lastAccess.toString()); // retrieve in another page or on a refresh var time = null; if (sessionStorage) time = sessionStorage.getItem("myapp_time"); if (time) time = new Date(time * 1); else time = new Date(); sessionState stores data that is browser session specific and that has a liftetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the lifetime of the data is permanently stored in the browsers storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store especially since other applications might be running on the same domain and also use the storage mechanisms. That said 2.5mb worth of character data is quite a bit and would go a long way. The easiest way to get a feel for how sessionState and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview which looks like this: Plunker is an online HTML/JavaScript editor that lets you write and run Javascript code and similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact. 
The code to deal with the SessionStorage (and LocalStorage, not shown here) in the example is isolated into a couple of wrapper methods to simplify the code: function getSessionCount() { var count = 0; if (sessionStorage) { var count = sessionStorage.getItem("ss_count"); count = !count ? 0 : count * 1; } $("#txtSession").val(count); return count; } function setSessionCount(count) { if (sessionStorage) sessionStorage.setItem("ss_count", count.toString()); } These two functions essentially load and store a session counter value. The two key methods used here are: sessionStorage.getItem(key); sessionStorage.setItem(key,stringVal); Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough serialize data to JSON, so it's quite possible to pass complex objects and store them into a single sessionStorage value: var user = { name: "Rick", id: "ricks", level: 8 }; sessionStorage.setItem("app_user", JSON.stringify(user)); to retrieve it: var user = sessionStorage.getItem("app_user"); if (user) user = JSON.parse(user); Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab.   You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can either store some state information and data on the client (i.e. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example. Why do I need Client-side Page Caching for Server Rendered HTML? I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal. 
On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially anytime you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disruptive. Content Caching If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this. A real World Use Case Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was for mobile displays, which work horribly with frames. So there's a somewhat mobile-friendly interface to the app, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile (or browse the base url with your browser width under 800px)   Here's what the mobile, non-frames view looks like:   As you can see this means that the list of classifieds posts now is a list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded, but rather there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items are no longer there because the list was now rendering as if it was the first page hit. The additional listings, any filters, and the selection of an item were all lost. Major Suckage! 
Using Client SessionStorage to cache Server Rendered Content

To work around this problem I decided to cache the rendered page content from the list in sessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in sessionStorage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionStorage cache and simply inserts it into the page. It sounds pretty simple, and overall the process really is easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation.

Let's start with the server side here because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL when returning to the list page from the display page (or a host of other pages) looks like this:

https://classifieds.gorge.net/list?back=True

The query string value is a flag that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:

@model MessageListViewModel
@{
    ViewBag.Title = "Classified Listing";
    bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
}
<form method="post" action="@Url.Action("list")">
    <div id="SizingContainer">
        @if (!isBack)
        {
            @Html.Partial("List_CommandBar_Partial", Model)
            <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                @Html.Partial("List_Items_Partial", Model)
                @if (Model.RequireLoadEntry)
                {
                    <div class="postitem loadpostitems" style="padding: 15px;">
                        <div id="LoadProgress" class="smallprogressright"></div>
                        <div class="control-progress">
                            Load additional listings...
                        </div>
                    </div>
                }
            </div>
        }
    </div>
</form>

As you can see, the query string flag drives a conditional block: when back is set, the block is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored.

Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic that saves and restores values from sessionStorage into separate functions because they are almost always used in several places.

page.saveData = function(id) {
    if (!sessionStorage)
        return;
    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };
    sessionStorage.setItem("list_html", JSON.stringify(data));
};
page.restoreData = function() {
    if (!sessionStorage)
        return;
    var data = sessionStorage.getItem("list_html");
    if (!data)
        return null;
    return JSON.parse(data);
};

The data that is saved is an object which contains an id - the element that was selected when the user clicked - and a scroll position. These two values are used to reset the scroll position when the data is used from the cache.
Finally the html from the #SizingContainer element is stored, which makes for the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization overhead that's OK, and since I'm using sessionStorage the storage space isn't an immediate concern (browsers typically allow several megabytes per origin).

Next is the core logic to handle saving and restoring the page state. At first this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

$( function() {
    …
    var isBack = getUrlEncodedKey("back", location.href);
    if (isBack) {
        // remove the back key from URL
        setUrlEncodedKey("back", "", location.href);

        var data = page.restoreData();  // restore from sessionStorage
        if (!data) {
            // no data - force redisplay of the server side default list
            window.location = "list";
            return;
        }
        $("#SizingContainer").html(data.html);

        var el = $(".postitem[data-id=" + data.id + "]");
        $(".postitem").removeClass("highlight");
        el.addClass("highlight");
        $("#PostItemContainer").scrollTop(data.scroll);

        setTimeout(function() {
            el.removeClass("highlight");
        }, 2500);
    }
    else if (window.noFrames)
        page.saveData(null);  // save when page loads

    $("#SizingContainer").on("click", ".postitem", function() {
        var id = $(this).attr("data-id");
        if (!id)
            return true;

        if (window.noFrames)
            page.saveData(id);

        var contentFrame = window.parent.frames["Content"];
        if (contentFrame)
            contentFrame.location.href = "show/" + id;
        else
            window.location.href = "show/" + id;
        return false;
    });
    …

The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If it's set, the cached data structure is read from sessionStorage. It's important here to check whether data was actually returned. If the user had back=true on the query string but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always!

If there is data, it's loaded and the data.html is restored back into the document by simply injecting the HTML into the document's #SizingContainer element:

$("#SizingContainer").html(data.html);

It's that simple, and it's quite quick even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every initial page load and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates it with the additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) and retrieve the id from that. That id is then used to navigate to the actual entry as well as to store that id value in the saved cache data. The id is used to reset the selection by searching for the data-id value in the restored elements.
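The getUrlEncodedKey() and setUrlEncodedKey() helpers called above are part of the page's own script and aren't shown in the post. A rough sketch of what such helpers might look like - the names match the calls above, but the implementation here is my assumption, not the original code:

// Hypothetical implementations - the post doesn't show the real ones.
// Reads a single query string value from a URL ("" if not present).
function getUrlEncodedKey(key, url) {
    var match = new RegExp("[?&]" + key + "=([^&#]*)").exec(url || location.href);
    return match ? decodeURIComponent(match[1].replace(/\+/g, " ")) : "";
}

// Returns the URL with the key set to the given value (removed if value is "").
function setUrlEncodedKey(key, value, url) {
    url = url || location.href;
    var parts = url.split("?");
    var pairs = parts[1] ? parts[1].split("&") : [];
    // drop any existing occurrence of the key
    pairs = pairs.filter(function (p) { return p.indexOf(key + "=") !== 0; });
    if (value)
        pairs.push(key + "=" + encodeURIComponent(value));
    var newUrl = parts[0] + (pairs.length ? "?" + pairs.join("&") : "");
    if (window.history && history.replaceState)
        history.replaceState(null, "", newUrl);  // assumption: also keep the visible URL clean
    return newUrl;
}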
The overall save/restore process is pretty straightforward and doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

Some things to watch out for

As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

- Event handling logic
- Timing of manipulating DOM elements
- Inline script code
- Bookmarking the cache URL when no cache exists

Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you'd normally expect in the DOM rendering cycle.

JavaScript Event Hookups

The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:

$("#btnSearch").click( function() {…});

that works fine when the page loads with server rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:

$("#SelectionContainer").on("click", "#btnSearch", function() {…});

which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document the bindings still work. For any cached content, use deferred events.

Timing of manipulating DOM Elements

Along the same lines, make sure that your DOM manipulation code follows the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

Inline Script Code

Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it using JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As with the last bullet - it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).
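If you do need to keep a script-driven picker for desktop browsers, a common pattern is to feature-detect native date input support and only fall back to a widget where it's missing. A small sketch of that check (the fallback assumes jQuery UI is loaded; swap in whatever picker you actually use):

// Feature-detect native <input type="date"> support.
function supportsDateInput() {
    var input = document.createElement("input");
    input.setAttribute("type", "date");
    // Unsupporting browsers silently fall back to type="text"
    return input.type === "date";
}

$(function () {
    if (!supportsDateInput())
        $("input[type=date]").datepicker();   // assumption: jQuery UI datepicker is available
});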
Bookmarking Cached Urls

When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server. In the Classifieds app I didn't render the server side content, so if the user comes to the page with back=True and there is no cached content I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and, if not, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

Don't use <body> to replace Content

Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and other elements, and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap it just around the actual content you want to replace. In the app above, #SizingContainer is that container.

Other Approaches

The approach I've used for this application is kind of specific to the existing server rendered application we're running, so it's just one approach you can take with caching. However, for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance with the existing server rendering logic. Here are a few other ways that come to mind:

Using Partial HTML Rendering via AJAX: Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a Partial View and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (i.e. no checking for a back=true switch). All the logic related to caching lives on the client in this case.

Using JSON Data and Client Rendering: The hardcore client option is to do the whole UI SPA style - pull data from the server and render it using templates or client side databinding with Knockout/Angular et al. As with the Partial Rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full on SPA app then caching may not even be required - the list could just stay in memory and be hidden and reactivated.

I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution in general.

State of the Session

sessionStorage and localStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features for content and data.
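As a rough illustration of the first alternative - not code from the Classifieds app - the client-side load could look something like this, assuming a hypothetical /list/partial endpoint that returns the rendered list HTML and reusing the list_html cache key:

// Sketch of the "Partial HTML Rendering via AJAX" idea.
// "/list/partial" is an assumed endpoint, not part of the post.
function loadList() {
    var cached = sessionStorage.getItem("list_html");
    if (cached) {
        // same code path as a fresh render - just a different HTML source
        $("#SizingContainer").html(cached);
        return;
    }
    $.get("/list/partial", function (html) {
        $("#SizingContainer").html(html);
        sessionStorage.setItem("list_html", html);
    });
}

$(loadList);   // run on document ready

The appeal of this variant is symmetry: the page always starts empty, and the only branch is where the HTML comes from, so the back=true handshake with the server disappears entirely.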
In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state, and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

Resources

- Overview of localStorage (also applies to sessionStorage)
- Web Storage Compatibility
- Modernizr Test Suite

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript  HTML5  ASP.NET  MVC

    Read the article

  • LSI 9285-8e and Supermicro SC837E26-RJBOD1 duplicate enclosure ID and slot numbers

    - by Andy Shinn
    I am working with 2 x Supermicro SC837E26-RJBOD1 chassis connected to a single LSI 9285-8e card in a Supermicro 1U host. There are 28 drives in each chassis for a total of 56 drives in 28 RAID1 mirrors. The problem I am running in to is that there are duplicate slots for the 2 chassis (the slots list twice and only go from 0 to 27). All the drives also show the same enclosure ID (ID 36). However, MegaCLI -encinfo lists the 2 enclosures correctly (ID 36 and ID 65). My question is, why would this happen? Is there an option I am missing to use 2 enclosures effectively? This is blocking me rebuilding a drive that failed in slot 11 since I can only specify enclosure and slot as parameters to replace a drive. When I do this, it picks the wrong slot 11 (device ID 46 instead of device ID 19). Adapter #1 is the LSI 9285-8e, adapter #0 (which I removed due to space limitations) is the onboard LSI. Adapter information: Adapter #1 ============================================================================== Versions ================ Product Name : LSI MegaRAID SAS 9285-8e Serial No : SV12704804 FW Package Build: 23.1.1-0004 Mfg. Data ================ Mfg. Date : 06/30/11 Rework Date : 00/00/00 Revision No : 00A Battery FRU : N/A Image Versions in Flash: ================ BIOS Version : 5.25.00_4.11.05.00_0x05040000 WebBIOS Version : 6.1-20-e_20-Rel Preboot CLI Version: 05.01-04:#%00001 FW Version : 3.140.15-1320 NVDATA Version : 2.1106.03-0051 Boot Block Version : 2.04.00.00-0001 BOOT Version : 06.253.57.219 Pending Images in Flash ================ None PCI Info ================ Vendor Id : 1000 Device Id : 005b SubVendorId : 1000 SubDeviceId : 9285 Host Interface : PCIE ChipRevision : B0 Number of Frontend Port: 0 Device Interface : PCIE Number of Backend Port: 8 Port : Address 0 5003048000ee8e7f 1 5003048000ee8a7f 2 0000000000000000 3 0000000000000000 4 0000000000000000 5 0000000000000000 6 0000000000000000 7 0000000000000000 HW Configuration ================ SAS Address : 500605b0038f9210 BBU : Present Alarm : Present NVRAM : Present Serial Debugger : Present Memory : Present Flash : Present Memory Size : 1024MB TPM : Absent On board Expander: Absent Upgrade Key : Absent Temperature sensor for ROC : Present Temperature sensor for controller : Absent ROC temperature : 70 degree Celcius Settings ================ Current Time : 18:24:36 3/13, 2012 Predictive Fail Poll Interval : 300sec Interrupt Throttle Active Count : 16 Interrupt Throttle Completion : 50us Rebuild Rate : 30% PR Rate : 30% BGI Rate : 30% Check Consistency Rate : 30% Reconstruction Rate : 30% Cache Flush Interval : 4s Max Drives to Spinup at One Time : 2 Delay Among Spinup Groups : 12s Physical Drive Coercion Mode : Disabled Cluster Mode : Disabled Alarm : Enabled Auto Rebuild : Enabled Battery Warning : Enabled Ecc Bucket Size : 15 Ecc Bucket Leak Rate : 1440 Minutes Restore HotSpare on Insertion : Disabled Expose Enclosure Devices : Enabled Maintain PD Fail History : Enabled Host Request Reordering : Enabled Auto Detect BackPlane Enabled : SGPIO/i2c SEP Load Balance Mode : Auto Use FDE Only : No Security Key Assigned : No Security Key Failed : No Security Key Not Backedup : No Default LD PowerSave Policy : Controller Defined Maximum number of direct attached drives to spin up in 1 min : 10 Any Offline VD Cache Preserved : No Allow Boot with Preserved Cache : No Disable Online Controller Reset : No PFK in NVRAM : No Use disk activity for locate : No Capabilities ================ RAID Level Supported : RAID0, RAID1, RAID5, RAID6, 
RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span Supported Drives : SAS, SATA Allowed Mixing: Mix in Enclosure Allowed Mix of SAS/SATA of HDD type in VD Allowed Status ================ ECC Bucket Count : 0 Limitations ================ Max Arms Per VD : 32 Max Spans Per VD : 8 Max Arrays : 128 Max Number of VDs : 64 Max Parallel Commands : 1008 Max SGE Count : 60 Max Data Transfer Size : 8192 sectors Max Strips PerIO : 42 Max LD per array : 16 Min Strip Size : 8 KB Max Strip Size : 1.0 MB Max Configurable CacheCade Size: 0 GB Current Size of CacheCade : 0 GB Current Size of FW Cache : 887 MB Device Present ================ Virtual Drives : 28 Degraded : 0 Offline : 0 Physical Devices : 59 Disks : 56 Critical Disks : 0 Failed Disks : 0 Supported Adapter Operations ================ Rebuild Rate : Yes CC Rate : Yes BGI Rate : Yes Reconstruct Rate : Yes Patrol Read Rate : Yes Alarm Control : Yes Cluster Support : No BBU : No Spanning : Yes Dedicated Hot Spare : Yes Revertible Hot Spares : Yes Foreign Config Import : Yes Self Diagnostic : Yes Allow Mixed Redundancy on Array : No Global Hot Spares : Yes Deny SCSI Passthrough : No Deny SMP Passthrough : No Deny STP Passthrough : No Support Security : No Snapshot Enabled : No Support the OCE without adding drives : Yes Support PFK : Yes Support PI : No Support Boot Time PFK Change : Yes Disable Online PFK Change : No PFK TrailTime Remaining : 0 days 0 hours Support Shield State : Yes Block SSD Write Disk Cache Change: Yes Supported VD Operations ================ Read Policy : Yes Write Policy : Yes IO Policy : Yes Access Policy : Yes Disk Cache Policy : Yes Reconstruction : Yes Deny Locate : No Deny CC : No Allow Ctrl Encryption: No Enable LDBBM : No Support Breakmirror : No Power Savings : Yes Supported PD Operations ================ Force Online : Yes Force Offline : Yes Force Rebuild : Yes Deny Force Failed : No Deny Force Good/Bad : No Deny Missing Replace : No Deny Clear : No Deny Locate : No Support Temperature : Yes Disable Copyback : No Enable JBOD : No Enable Copyback on SMART : No Enable Copyback to SSD on SMART Error : Yes Enable SSD Patrol Read : No PR Correct Unconfigured Areas : Yes Enable Spin Down of UnConfigured Drives : Yes Disable Spin Down of hot spares : No Spin Down time : 30 T10 Power State : Yes Error Counters ================ Memory Correctable Errors : 0 Memory Uncorrectable Errors : 0 Cluster Information ================ Cluster Permitted : No Cluster Active : No Default Settings ================ Phy Polarity : 0 Phy PolaritySplit : 0 Background Rate : 30 Strip Size : 64kB Flush Time : 4 seconds Write Policy : WB Read Policy : Adaptive Cache When BBU Bad : Disabled Cached IO : No SMART Mode : Mode 6 Alarm Disable : Yes Coercion Mode : None ZCR Config : Unknown Dirty LED Shows Drive Activity : No BIOS Continue on Error : No Spin Down Mode : None Allowed Device Type : SAS/SATA Mix Allow Mix in Enclosure : Yes Allow HDD SAS/SATA Mix in VD : Yes Allow SSD SAS/SATA Mix in VD : No Allow HDD/SSD Mix in VD : No Allow SATA in Cluster : No Max Chained Enclosures : 16 Disable Ctrl-R : Yes Enable Web BIOS : Yes Direct PD Mapping : No BIOS Enumerate VDs : Yes Restore Hot Spare on Insertion : No Expose Enclosure Devices : Yes Maintain PD Fail History : Yes Disable Puncturing : No Zero Based Enclosure Enumeration : No PreBoot CLI Enabled : Yes LED Show Drive Activity : Yes Cluster Disable : Yes SAS Disable : No Auto Detect BackPlane Enable 
: SGPIO/i2c SEP Use FDE Only : No Enable Led Header : No Delay during POST : 0 EnableCrashDump : No Disable Online Controller Reset : No EnableLDBBM : No Un-Certified Hard Disk Drives : Allow Treat Single span R1E as R10 : No Max LD per array : 16 Power Saving option : Don't Auto spin down Configured Drives Max power savings option is not allowed for LDs. Only T10 power conditions are to be used. Default spin down time in minutes: 30 Enable JBOD : No TTY Log In Flash : No Auto Enhanced Import : No BreakMirror RAID Support : No Disable Join Mirror : No Enable Shield State : Yes Time taken to detect CME : 60s Exit Code: 0x00 Enclosure information: # /opt/MegaRAID/MegaCli/MegaCli64 -encinfo -a1 Number of enclosures on adapter 1 -- 3 Enclosure 0: Device ID : 36 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port B Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 65 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11820 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 48 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 1: Device ID : 65 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port A Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 36 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11760 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 47 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 2: Device ID : 252 Number of Slots : 8 Number of Power Supplies : 0 Number of Fans : 0 Number of Temperature Sensors : 0 Number of Alarms : 0 Number of SIM Modules : 1 Number of Physical Drives : 0 Status : Normal Position : 1 Connector Name : Unavailable Enclosure type : SGPIO Failed in first Inquiry commnad FRU Part Number : 
N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : Unavailable Inquiry data : Vendor Identification : LSI Product Identification : SGPIO Product Revision Level : N/A Vendor Specific : Exit Code: 0x00 Now, notice that each slot 11 device shows an enclosure ID of 36, I think this is where the discrepancy happens. One should be 36. But the other should be on enclosure 65. Drives in slot 11: Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 5, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 48 WWN: Sequence Number: 11 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : YES Device Firmware Level: A5C0 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8a53 Connected Port Number: 1(path0) Inquiry Data: MJ1311YNG6YYXAHitachi HDS5C3030ALA630 MEAOA5C0 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 19, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 19 WWN: Sequence Number: 4 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : NO Device Firmware Level: A580 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8e53 Connected Port Number: 0(path0) Inquiry Data: MJ1313YNG1VA5CHitachi HDS5C3030ALA630 MEAOA580 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Update 06/28/12: I finally have some new information about (what we think) the root cause of this problem so I thought I would share. After getting in contact with a very knowledgeable Supermicro tech, they provided us with a tool called Xflash (doesn't appear to be readily available on their FTP). When we gathered some information using this utility, my colleague found something very strange: root@mogile2 test]# ./xflash.dat -i get avail Initializing Interface. Expander: SAS2X36 (SAS2x36) 1) SAS2X36 (SAS2x36) (50030480:00EE917F) (0.0.0.0) 2) SAS2X36 (SAS2x36) (50030480:00E9D67F) (0.0.0.0) 3) SAS2X36 (SAS2x36) (50030480:0112D97F) (0.0.0.0) This lists the connected enclosures. You see the 3 connected (we have since added a 3rd and a 4th which is not yet showing up) with their respective SAS address / WWN (50030480:00EE917F). 
Now we can use this address to get information on the individual enclosures: [root@mogile2 test]# ./xflash.dat -i 5003048000EE917F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00EE917F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 5003048000E9D67F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00E9D67F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 500304800112D97F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:0112D97F Enclosure Logical Id: 50030480:0112D97F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 Did you catch it? The first 2 enclosures logical ID is partially masked out where the 3rd one (which has a correct unique enclosure ID) is not. We pointed this out to Supermicro and were able to confirm that this address is supposed to be set during manufacturing and there was a problem with a certain batch of these enclosures where the logical ID was not set. We believe that the RAID controller is determining the ID based on the logical ID and since our first 2 enclosures have the same logical ID, they get the same enclosure ID. We also confirmed that 0000007F is the default which comes from LSI as an ID. The next pointer that helps confirm this could be a manufacturing problem with a run of JBODs is the fact that all 6 of the enclosures that have this problem begin with 00E. I believe that between 00E8 and 00EE Supermicro forgot to program the logical IDs correctly and neglected to recall or fix the problem post production. Fortunately for us, there is a tool to manage the WWN and logical ID of the devices from Supermicro: ftp://ftp.supermicro.com/utility/ExpanderXtools_Lite/. Our next step is to schedule a shutdown of these JBODs (after data migration) and reprogram the logical ID and see if it solves the problem. Update 06/28/12 #2: I just discovered this FAQ at Supermicro while Google searching for "lsi 0000007f": http://www.supermicro.com/support/faqs/faq.cfm?faq=11805. I still don't understand why, in the last several times we contacted Supermicro, they would have never directed us to this article :\

    Read the article

  • How to Make a Game like Space Invaders - Ray Wenderlich (why do my space invaders scroll off screen)

    - by Erv Noel
    I'm following this tutorial(http://www.raywenderlich.com/51068/how-to-make-a-game-like-space-invaders-with-sprite-kit-tutorial-part-1) and I've run into a problem right after the part where I add [self determineInvaderMovementDirection]; to my GameScene.m file (specifically to my moveInvadersForUpdate method) The tutorial states that the space invaders should be moving accordingly after adding this piece of code but when I run they move to the left and they do not come back. I'm not sure what I am doing wrong as I have followed this tutorial very carefully. Any help or clarification would be greatly appreciated. Thanks in advance ! Here is the full GameScene.m #import "GameScene.h" #import <CoreMotion/CoreMotion.h> #pragma mark - Custom Type Definitions /* The type definition and constant definitions 1,2,3 take care of the following tasks: 1.Define the possible types of invader enemies. This can be used in switch statements later when things like displaying different sprites images for each enemy type. The typedef makes InvaderType a formal Obj-C type that is type checked for method arguments and variables.This is so that the wrong method argument is not used or assigned to the wrong variable. 2. Define the size of the invaders and that they'll be laid out in a grid of rows and columns on the screen. 3. Define a name that will be used to identify invaders when searching for them. */ //1 typedef enum InvaderType { InvaderTypeA, InvaderTypeB, InvaderTypeC } InvaderType; /* Invaders move in a fixed pattern: right, right, down, left, down, right right. InvaderMovementDirection tracks the invaders' progress through this pattern */ typedef enum InvaderMovementDirection { InvaderMovementDirectionRight, InvaderMovementDirectionLeft, InvaderMovementDirectionDownThenRight, InvaderMovementDirectionDownThenLeft, InvaderMovementDirectionNone } InvaderMovementDirection; //2 #define kInvaderSize CGSizeMake(24,16) #define kInvaderGridSpacing CGSizeMake(12,12) #define kInvaderRowCount 6 #define kInvaderColCount 6 //3 #define kInvaderName @"invader" #define kShipSize CGSizeMake(30, 16) //stores the size of the ship #define kShipName @"ship" // stores the name of the ship stored on the sprite node #define kScoreHudName @"scoreHud" #define kHealthHudName @"healthHud" /* this class extension allows you to add “private” properties to GameScene class, without revealing the properties to other classes or code. You still get the benefit of using Objective-C properties, but your GameScene state is stored internally and can’t be modified by other external classes. As well, it doesn’t clutter the namespace of datatypes that your other classes see. This class extension is used in the method didMoveToView */ #pragma mark - Private GameScene Properties @interface GameScene () @property BOOL contentCreated; @property InvaderMovementDirection invaderMovementDirection; @property NSTimeInterval timeOfLastMove; @property NSTimeInterval timePerMove; @end @implementation GameScene #pragma mark Object Lifecycle Management #pragma mark - Scene Setup and Content Creation /*This method simply invokes createContent using the BOOL property contentCreated to make sure you don’t create your scene’s content more than once. 
This property is defined in an Objective-C Class Extension found near the top of the file()*/ - (void)didMoveToView:(SKView *)view { if (!self.contentCreated) { [self createContent]; self.contentCreated = YES; } } - (void)createContent { //1 - Invaders begin by moving to the right self.invaderMovementDirection = InvaderMovementDirectionRight; //2 - Invaders take 1 sec for each move. Each step left, right or down // takes 1 second. self.timePerMove = 1.0; //3 - Invaders haven't moved yet, so set the time to zero self.timeOfLastMove = 0.0; [self setupInvaders]; [self setupShip]; [self setupHud]; } /* Creates an invade sprite of a given type 1. Use the invadeType parameterr to determine color of the invader 2. Call spriteNodeWithColor:size: of SKSpriteNode to alloc and init a sprite that renders as a rect of the given color invaderColor with size kInvaderSize */ -(SKNode*)makeInvaderOfType:(InvaderType)invaderType { //1 SKColor* invaderColor; switch (invaderType) { case InvaderTypeA: invaderColor = [SKColor redColor]; break; case InvaderTypeB: invaderColor = [SKColor greenColor]; break; case InvaderTypeC: invaderColor = [SKColor blueColor]; break; } //2 SKSpriteNode* invader = [SKSpriteNode spriteNodeWithColor:invaderColor size:kInvaderSize]; invader.name = kInvaderName; return invader; } -(void)setupInvaders { //1 - loop over the rows CGPoint baseOrigin = CGPointMake(kInvaderSize.width / 2, 180); for (NSUInteger row = 0; row < kInvaderRowCount; ++row) { //2 - Choose a single InvaderType for all invaders // in this row based on the row number InvaderType invaderType; if (row % 3 == 0) invaderType = InvaderTypeA; else if (row % 3 == 1) invaderType = InvaderTypeB; else invaderType = InvaderTypeC; //3 - Does some math to figure out where the first invader // in the row should be positioned CGPoint invaderPosition = CGPointMake(baseOrigin.x, row * (kInvaderGridSpacing.height + kInvaderSize.height) + baseOrigin.y); //4 - Loop over the columns for (NSUInteger col = 0; col < kInvaderColCount; ++col) { //5 - Create an invader for the current row and column and add it // to the scene SKNode* invader = [self makeInvaderOfType:invaderType]; invader.position = invaderPosition; [self addChild:invader]; //6 - update the invaderPosition so that it's correct for the //next invader invaderPosition.x += kInvaderSize.width + kInvaderGridSpacing.width; } } } -(void)setupShip { //1 - creates ship using makeShip. makeShip can easily be used later // to create another ship (ex. to set up more lives) SKNode* ship = [self makeShip]; //2 - Places the ship on the screen. In SpriteKit the origin is at the lower //left corner of the screen. The anchorPoint is based on a unit square with (0, 0) at the lower left of the sprite's area and (1, 1) at its top right. Since SKSpriteNode has a default anchorPoint of (0.5, 0.5), i.e., its center, the ship's position is the position of its center. Positioning the ship at kShipSize.height/2.0f means that half of the ship's height will protrude below its position and half above. If you check the math, you'll see that the ship's bottom aligns exactly with the bottom of the scene. 
ship.position = CGPointMake(self.size.width / 2.0f, kShipSize.height/2.0f); [self addChild:ship]; } -(SKNode*)makeShip { SKNode* ship = [SKSpriteNode spriteNodeWithColor:[SKColor greenColor] size:kShipSize]; ship.name = kShipName; return ship; } -(void)setupHud { //Sets the score label font to Courier SKLabelNode* scoreLabel = [SKLabelNode labelNodeWithFontNamed:@"Courier"]; //1 - Give the score label a name so it becomes easy to find later when // the score needs to be updated. scoreLabel.name = kScoreHudName; scoreLabel.fontSize = 15; //2 - Color the score label green scoreLabel.fontColor = [SKColor greenColor]; scoreLabel.text = [NSString stringWithFormat:@"Score: %04u", 0]; //3 - Positions the score label near the top left corner of the screen scoreLabel.position = CGPointMake(20 + scoreLabel.frame.size.width/2, self.size.height - (20 + scoreLabel.frame.size.height/2)); [self addChild:scoreLabel]; //Applies the font of the health label SKLabelNode* healthLabel = [SKLabelNode labelNodeWithFontNamed:@"Courier"]; //4 - Give the health label a name so it can be referenced later when it needs // to be updated to display the health healthLabel.name = kHealthHudName; healthLabel.fontSize = 15; //5 - Colors the health label red healthLabel.fontColor = [SKColor redColor]; healthLabel.text = [NSString stringWithFormat:@"Health: %.1f%%", 100.0f]; //6 - Positions the health Label on the upper right hand side of the screen healthLabel.position = CGPointMake(self.size.width - healthLabel.frame.size.width/2 - 20, self.size.height - (20 + healthLabel.frame.size.height/2)); [self addChild:healthLabel]; } #pragma mark - Scene Update - (void)update:(NSTimeInterval)currentTime { //Makes the invaders move [self moveInvadersForUpdate:currentTime]; } #pragma mark - Scene Update Helpers //This method will get invoked by update -(void)moveInvadersForUpdate:(NSTimeInterval)currentTime { //1 - if it's not yet time to move, exit the method. moveInvadersForUpdate: // is invoked 60 times per second, but you don't want the invaders to move // that often since the movement would be too fast to see if (currentTime - self.timeOfLastMove < self.timePerMove) return; //2 - Recall that the scene holds all the invaders as child nodes; which were // added to the scene using addChild: in setupInvaders identifying each invader // by its name property. Invoking enumerateChildNodesWithName:usingBlock only loops over the invaders because they're named kInvaderType; which makes the loop skip the ship and the HUD. The guts og the block moves the invaders 10 pixels either right, left or down depending on the value of invaderMovementDirection [self enumerateChildNodesWithName:kInvaderName usingBlock:^(SKNode *node, BOOL *stop) { switch (self.invaderMovementDirection) { case InvaderMovementDirectionRight: node.position = CGPointMake(node.position.x - 10, node.position.y); break; case InvaderMovementDirectionLeft: node.position = CGPointMake(node.position.x - 10, node.position.y); break; case InvaderMovementDirectionDownThenLeft: case InvaderMovementDirectionDownThenRight: node.position = CGPointMake(node.position.x, node.position.y - 10); break; InvaderMovementDirectionNone: default: break; } }]; //3 - Record that you just moved the invaders, so that the next time this method is invoked (1/60th of a second from when it starts), the invaders won't move again until the set time period of one second has elapsed. 
self.timeOfLastMove = currentTime; //Makes it so that the invader movement direction changes only when the invaders are actually moving. Invaders only move when the check on self.timeOfLastMove passes (when conditional expression is true) [self determineInvaderMovementDirection]; } #pragma mark - Invader Movement Helpers -(void)determineInvaderMovementDirection { //1 - Since local vars accessed by block are default const(means they cannot be changed), this snippet of code qualifies proposedMovementDirection with __block so that you can modify it in //2 __block InvaderMovementDirection proposedMovementDirection = self.invaderMovementDirection; //2 - Loops over the invaders in the scene and refers to the block with the invader as an argument [self enumerateChildNodesWithName:kInvaderName usingBlock:^(SKNode *node, BOOL *stop) { switch (self.invaderMovementDirection) { case InvaderMovementDirectionRight: //3 - If the invader's right edge is within 1pt of the right edge of the scene, it's about to move offscreen. Sets proposedMovementDirection so that the invaders move down then left. You compare the invader's frame(the frame that contains its content in the scene's coordinate system) with the scene width. Since the scene has an anchorPoint of (0,0) by default and is scaled to fill it's parent view, this comparison ensures you're testing against the view's edges. if (CGRectGetMaxX(node.frame) >= node.scene.size.width - 1.0f) { proposedMovementDirection = InvaderMovementDirectionDownThenLeft; *stop = YES; } break; case InvaderMovementDirectionLeft: //4 - If the invader's left edge is within 1 pt of the left edge of the scene, it's about to move offscreen. Sets the proposedMovementDirection so invaders move down then right if (CGRectGetMinX(node.frame) <= 1.0f) { proposedMovementDirection = InvaderMovementDirectionDownThenRight; *stop = YES; } break; case InvaderMovementDirectionDownThenLeft: //5 - If invaders are moving down then left, they already moved down at this point, so they should now move left. proposedMovementDirection = InvaderMovementDirectionLeft; *stop = YES; break; case InvaderMovementDirectionDownThenRight: //6 - if the invaders are moving down then right, they already moved down so they should now move right. proposedMovementDirection = InvaderMovementDirectionRight; *stop = YES; break; default: break; } }]; //7 - if the proposed invader movement direction is different than the current invader movement direction, update the current direction to the proposed direction if (proposedMovementDirection != self.invaderMovementDirection) { self.invaderMovementDirection = proposedMovementDirection; } } #pragma mark - Bullet Helpers #pragma mark - User Tap Helpers #pragma mark - HUD Helpers #pragma mark - Physics Contact Helpers #pragma mark - Game End Helpers @end

    Read the article

  • client problems - misaligned expectations & not following SDLC protocols

    - by louism
    hi guys, im having some serious problems with a client on a project - i could use some advice please the short version i have been working with this client now for almost 6 months without any problems (a classified website project in the range of 500 hours) over the last few days things have drastically deteriorated to the point where ive had to place the project on-hold whilst i work-out what to do (this has pissed the client off even more) to be simplistic, the root cause of the issue is this: the client doesnt read the specs i make for him, i code the feature, he than wants to change things, i tell him its not to the agreed spec and that that change will have to be postponed and possibly charged for, he gets upset and rants saying 'hes paid for the feature' and im not keeping to the agreement (<- misalignment of expectations) i think the root cause of the root cause is my clients failure to take my SDLC protocols seriously. i have a bug tracking system in place which he practically refuses to use (he still emails me bugs), he doesnt seem to care to much for the protocols i use for dealing with scope creep and change control the whole situation came to a head recently where he 'cracked it' (an aussie term for being fed-up). the more terms like 'postponed for post-launch implementation', 'costed feature addition', and 'not to agreed spec' i kept using, the worse it got finally, he began to bully me - basically insisting i shut-up and do the work im being paid for. i wrote a long-winded email explaining how wrong he was on all these different points, and explaining what all the SDLC protocols do to protect the success of the project. than i deleted that email and wrote a new one in the new email, i suggested as a solution i write up a list of grievances we both had. we than review the list and compromise on different points: he gets some things he wants, i get some things i want. sometimes youve got to give ground to get ground his response to this suggestion was flat-out refusal, and a restatement that i should just get on with the work ive been paid to do so there you have the very subjective short version. if you have the time and inclination, the long version may be a little less bias as it has the email communiques between me and my client the long version (with background) the long version works by me showing you the email communiques which lead to the situation coming to a head. so here it is, judge for yourself where the trouble started... 1. client asked me why something was missing from a feature i just uploaded, my response was to show him what was in the spec: it basically said the item he was looking for was never going to be included 2. [clients response...] Memo Louis, We are following your own title fields and keeping a consistent layout. Why the big fuss about not adding "Part". It simply replaces "model" and is consistent with your current title fields. 3. [my response...] hi [client], the 'part' field appeared to me as a redundancy / mistake. 
i requested clarification but never received any in a timely manner (about 2 weeks ago) the specification for this feature also indicated it wasnt going to be included: RE: "Why the big fuss about not adding "Part" " it may not appear so, but it would actually be a lot of work for me to now add a 'Part' field it could take me up to 15-20 minutes to properly explain why its such a big undertaking to do this, but i would prefer to use that time instead to work on completing your v1.1 features as a simplistic explanation - it connects to the change in paradigm from a 'generic classified ad' model to a 'specific attributes for specific categories' model basically, i am saying it is a big fuss, but i understand that it doesnt look that way - after all, it is just one ity-bitty field :) if you require a fuller explanation, please let me know and i will commit the time needed to write that out also, if you recall when we first started on the project, i said that with the effort/time required for features, you would likely not know off the top of your head. you may think something is really complex, but in reality its quite simple, you might think something is easy - but it could actually be a massive trauma to code (which is the case here with the 'Part' field). if you also recalled, i said the best course of action is to just ask, and i would let you know on a case-by-case basis 4. [email from me to client...] hi [client], the online catalogue page is now up live (see my email from a few days ago for information on how it works) note: the window of opportunity for input/revisions on what data the catalogue stores has now closed (as i have put the code up live now) RE: the UI/layout of the online catalogue page you may still do visual/ui tweaks to the page at the moment (this window for input/revisions will close in a couple of days time) 5. [email from client to me...] *(note: i had put up the feature & asked the client to review it, never heard back from them for a few days)* Memo Louis, Here you go again. CLOSED without a word of input from the customer. I don't think so. I will reply tomorrow regarding the content and functionality we require from this feature. 5. [from me to client...] hi [client]: RE: from my understanding, you are saying that the mini-sale yard control would change itself based on the fact someone was viewing for parts & accessories <- is that correct? this change is outside the scope of the v1.1 mini-spec and therefore will need to wait 'til post launch for costing/implementation 6. [email from client to me...] Memo Louis, Following your v1.1 mini-spec and all your time paid in full for the work selected. We need to make the situation clear. There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon. You have undertaken to complete the Parts and accessories feature as follows. Obviously, as part of this process the "mini search" will be effected, and will require "adaption to make sense". 7. [email from me to client...] hi [client], RE: "There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon." a few points to consider: 1) the specification for the 'parts & accessories' feature was as follows: (i.e. [what] "...we have agreed upon.") 2) you have received the 'parts & accessories' feature free of charge (you have paid $0 for it). 
ive spent two days coding that feature as a gesture of good will i would request that you please consider these two facts carefully and sincerely 8. [email from client to me...] Memo Louis, I don't see how you are giving us anything for free. From your original fee proposal you have deleted more than 30 hours of included features. Your title "shelved features". Further you have charged us twice by adding back into the site, at an addition cost, some of those "shelved features" features. See v1.1 mini-spec. Did include in your original fee proposal a change request budget but then charge without discussion items included in v1.1 mini-spec. Included a further Features test plan for a regression test, a fee of 10 hours that would not have been required if the "shelved features" were not left out of the agreed fee proposal. I have made every attempt to satisfy your your uneven business sense by offering you everything your heart desired, in the v1.1 mini-spec, to be left once again with your attitude of "its too hard, lets leave it for post launch". I am no longer accepting anything less than what we have contracted you to do. That is clearly defined in v1.1 mini-spec, and you are paid in advance for delivering those items as an acceptable function. a few notes about the above email... i had to cull features from the original spec because it didnt fit into the budget. i explained this to the client at the start of the project (he wanted more features than he had budget hours to do them all) nothing has been charged for twice, i didnt charge the client for culled features. im charging him to now do those culled features the draft version of the project schedule included a change request budget of 10 hours, but i had to remove that to meet the budget (the client may not have been aware of this to be fair to them) what the client refers to as my attitude of 'too hard/leave it for post-launch', i called a change request protocol and a method for keeping scope creep under control 9. [email from me to client...] hi [client], RE: "...all your grievances..." i had originally written out a long email response; it was fantastic, it had all these great points of how 'you were wrong' and 'i was right', you would of loved it (and by 'loved it', i mean it would of just infuriated you more) so, i decided to deleted it start over, for two reasons: 1) a long email is being disrespectful of your time (youre a busy businessman with things to do) 2) whos wrong or right gets us no closer to fixing the problems we are experiencing what i propose is this... i prepare a bullet point list of your grievances and my grievances (yes, im unhappy too about how things are going - and it has little to do with money) i submit this list to you for you to add to as necessary we then both take a good hard look at this list, and we decide which areas we are willing to give ground on as an example, the list may look something like this: "louis, you keep taking away features you said you would do" [your grievance 2] [your grievance 3] [your grievance ...] "[client], i feel you dont properly read the specs i prepare for you..." [my grievance 2] [my grievance 3] [my grievance ...] if you are willing to give this a try, let me know will it work? who knows. but if it doesnt, we can always go back to arguing some more :) obviously, this will only work if you are willing to give it a genuine try, and you can accept that you may have to 'give some ground to get some ground' what do you think? 10. [email from client to me ...] 
Memo Louis, Instead of wasting your time listing grievances, I would prefer you complete the items in v1.1 mini-spec, to a satisfactory conclusion. We almost had the website ready for launch until you brought the v1.1 mini-spec into the frame. Obviously I expected you could complete the v1.1 mini-spec in a two-week time frame as you indicated and give the site a more profession presentation. Most of the problems have been caused by you not following our instructions, but deciding to do what you feel like at the time. And then arguing with us how the missing information is not necessary. For instance "Parts and Accessories". Why on earth would you leave out the parts heading, when it ties-in with the fields you have already developed. It replaces "model" and is just as important in the context of information that appears in the "Details" panel. We are at a stage where the the v1.1 mini-spec needs to be completed without further time wasting and the site is complete (subject to all features working). We are on standby at this end to do just that. Let me know when you are back, working on the site and we will process and complete each v1.1 mini-spec, item by item, until the job is complete. 11. [last email from me to client...] hi [client], based on this reply, and your demonstrated unwillingness to compromise/give any ground on issues at hand, i have decided to place your project on-hold for the moment i will be considering further options on how to over-come our challenges over the next few days i will contact you by monday 17/may to discuss any new options i have come up with, and if i believe it is appropriate to restart work on your project at that point or not told you it was long... what do you think?

    Read the article
