Search Results

Search found 3019 results on 121 pages for 'arrange act assert'.


  • SQL Server – Undelete a Table and Restore a Single Table from Backup

    - by Mladen Prajdic
    This post is part of the monthly community event called T-SQL Tuesday, started by Adam Machanic (blog|twitter) and hosted by someone else each month. This month the host is Sankar Reddy (blog|twitter) and the topic is Misconceptions in SQL Server. You can follow posts for this theme on Twitter by looking at the #TSQL2sDay hashtag.

    Let me start by saying: this code is a crazy hack that is never to be used unless you really, really have to. Really! And I don’t think there’s a time when you would really have to use it for real. Because it’s a hack there are a number of things that can go wrong, so play with it knowing that. I’ve managed to totally corrupt one database. :) Oh… and for those saying: yeah yeah… you have a single table in a filegroup and you’re restoring that, I say “nay nay” to you. As we all know, SQL Server can’t do single table restores from backup. This is kind of an obvious thing due to relational integrity (RI) concerns. Since we have to maintain that, we have to restore all tables represented in an RI graph. For this exercise I say BAH! to those concerns.

    Note that this method “works” only for simple tables that don’t have LOB or off-row data. The code can be expanded to include those, but I’ve tried to leave things “simple”. Note that for this to work our table needs to be relatively static data-wise; this doesn’t work for OLTP tables. Products are a perfect example of static data. They don’t change much between backups, pretty much everything depends on them, and their table is one of those tables that are relatively easy to accidentally delete everything from. This only works if the database is in Full or Bulk-Logged recovery mode for tables whose contents have been deleted or truncated, but NOT when a table was dropped. Everything we’ll talk about has to be done before the data pages are reused for other purposes. After deletion or truncation the pages are marked as reusable, so you have to act fast. The best thing is probably to put the database into single-user mode ASAP while you’re performing this procedure and return it to multi-user after you’re done.

    How do we do it? We will be using an undocumented but well-known DBCC command, DBCC PAGE, an undocumented function, sys.fn_dblog, and the little-known RESTORE DATABASE ... PAGE option. All tests will be on a copy of the Production.Product table in the AdventureWorks database, called Production.Product1, because the original table has FK constraints that prevent us from truncating it for testing.

        -- create a duplicate table. This doesn't preserve indexes!
        SELECT *
        INTO AdventureWorks.Production.Product1
        FROM AdventureWorks.Production.Product

    After we run this code, take a full backup to perform further testing.

    First let’s see what the difference between DELETE and TRUNCATE is when it comes to logging. With DELETE, every row deletion is logged in the transaction log. With TRUNCATE, only whole data page deallocations are logged in the transaction log. Getting deleted data pages is simple: all we have to look for are the row delete entries in the sys.fn_dblog output. But getting data pages that were truncated from the transaction log presents a bit of an interesting problem. I will not go into the depths of IAM (Index Allocation Map) and PFS (Page Free Space) pages, but suffice it to say that every IAM page has intervals that tell us which data pages are allocated to a table and which aren’t.
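    As a quick aside before we get to the page bookkeeping, you can see this logging difference for yourself. The following is a minimal sketch, assuming only the Production.Product1 copy created above and a quiet test database: it deletes a handful of rows inside a transaction, counts the per-row delete records that show up in the log, and rolls back. Running a TRUNCATE instead would leave no LOP_DELETE_ROWS entries, only page deallocation records.

        BEGIN TRAN;

        -- each deleted row produces its own LOP_DELETE_ROWS record in the transaction log
        DELETE TOP (10) FROM AdventureWorks.Production.Product1;

        SELECT COUNT(*) AS DeletedRowLogRecords
        FROM sys.fn_dblog(NULL, NULL)
        WHERE Operation = 'LOP_DELETE_ROWS'
          AND AllocUnitName LIKE '%Production.Product1%';

        ROLLBACK; -- demonstration only, put the rows back

    With that out of the way, back to finding the pages themselves.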
    If we dive deep into the sys.fn_dblog output we can see that once you truncate a table, all the pages in all the intervals are deallocated, and this shows up in the PFS page transaction log entries as deallocation of pages. For every 8 pages in the same extent there is one PFS page row in the transaction log. This row holds information about all 8 pages in CSV format, which means we can get to this data with some parsing. A great help for parsing this stuff is Peter Debetta’s handy function dbo.HexStrToVarBin, which converts a hexadecimal string into a varbinary value that can easily be converted to an integer, thus giving us a readable page number. The shortened (columns removed) sys.fn_dblog output for a PFS page with CSV data for 1 extent (8 data pages) looks like this:

        -- [Page ID] is displayed in hex format.
        -- To convert it to a readable int we'll use the dbo.HexStrToVarBin function found at
        -- http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx
        -- This function must be installed in the master database
        SELECT Context, AllocUnitName, [Page ID], Description
        FROM sys.fn_dblog(NULL, NULL)
        WHERE [Current LSN] = '00000031:00000a46:007d'

    The pages at the end marked with 0x00--> are pages that are allocated in the extent but are not part of the table. We can inspect the raw content of each data page with a DBCC PAGE command:

        -- we need this trace flag to redirect output to the query window.
        DBCC TRACEON (3604);
        -- WITH TABLERESULTS gives us data in table format instead of message format
        -- we use format option 3 because it's the easiest to read and manipulate further on
        DBCC PAGE (AdventureWorks, 1, 613, 3) WITH TABLERESULTS

    Since the DBCC PAGE output can be quite extensive I won’t put it here. You can see an example of it in the link at the beginning of this section.

    Getting deleted data back

    When we run a delete statement, every row to be deleted is marked as a ghost record. A background process periodically cleans up those rows. A huge misconception is that the data is actually removed. It’s not. Only the pointers to the rows are removed, while the data itself is still on the data page. We just can’t access it by normal means. To get those pointers back we need to restore every deleted page using the RESTORE PAGE option mentioned above. This restore must be done from a full backup, followed by any differential and log backups that you may have. This is necessary to bring the pages up to the same point in time as the rest of the data. However, the restore doesn’t magically connect the restored page back to the original table. It simply replaces the current page with the one from the backup. After the restore we use DBCC PAGE to read data directly from all the data pages and insert that data into a temporary table. To finish the RESTORE PAGE procedure we finally have to take a tail log backup (a simple backup of the transaction log) and restore it back. We can then insert the data from the temporary table into our original table by hand.

    Getting truncated data back

    When we run a truncate, the truncated data pages aren’t touched at all. Even the pointers to rows stay unchanged. Because of this, getting data back from a truncated table is simple: we just have to find out which pages belonged to our table and use DBCC PAGE to read data off of them. No restore is necessary. It turns out that the extra trouble we have finding the truncated data pages is offset by not having to do a RESTORE PAGE procedure.

    Stop stalling… show me The Code!
This is the code for getting back deleted and truncated data back. It’s commented in all the right places so don’t be afraid to take a closer look. Make sure you have a full backup before trying this out. Also I suggest that the last step of backing and restoring the tail log is performed by hand. USE masterGOIF OBJECT_ID('dbo.HexStrToVarBin') IS NULL RAISERROR ('No dbo.HexStrToVarBin installed. Go to http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx and install it in master database' , 18, 1) SET NOCOUNT ONBEGIN TRY DECLARE @dbName VARCHAR(1000), @schemaName VARCHAR(1000), @tableName VARCHAR(1000), @fullBackupName VARCHAR(1000), @undeletedTableName VARCHAR(1000), @sql VARCHAR(MAX), @tableWasTruncated bit; /* THE FIRST LINE ARE OUR INPUT PARAMETERS In this case we're trying to recover Production.Product1 table in AdventureWorks database. My full backup of AdventureWorks database is at e:\AW.bak */ SELECT @dbName = 'AdventureWorks', @schemaName = 'Production', @tableName = 'Product1', @fullBackupName = 'e:\AW.bak', @undeletedTableName = '##' + @tableName + '_Undeleted', @tableWasTruncated = 0, -- copy the structure from original table to a temp table that we'll fill with restored data @sql = 'IF OBJECT_ID(''tempdb..' + @undeletedTableName + ''') IS NOT NULL DROP TABLE ' + @undeletedTableName + ' SELECT *' + ' INTO ' + @undeletedTableName + ' FROM [' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']' + ' WHERE 1 = 0' EXEC (@sql) IF OBJECT_ID('tempdb..#PagesToRestore') IS NOT NULL DROP TABLE #PagesToRestore /* FIND DATA PAGES WE NEED TO RESTORE*/ CREATE TABLE #PagesToRestore ([ID] INT IDENTITY(1,1), [FileID] INT, [PageID] INT, [SQLtoExec] VARCHAR(1000)) -- DBCC PACE statement to run later RAISERROR ('Looking for deleted pages...', 10, 1) -- use T-LOG direct read to get deleted data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) EXEC('USE [' + @dbName + '];SELECT FileID, PageID, ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), ' + 'CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageIDFROM sys.fn_dblog(NULL, NULL)WHERE AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'' ' + 'AND Context IN (''LCX_MARK_AS_GHOST'', ''LCX_HEAP'') AND Operation in (''LOP_DELETE_ROWS''))t');SELECT *FROM #PagesToRestore -- if upper EXEC returns 0 rows it means the table was truncated so find truncated pages IF (SELECT COUNT(*) FROM #PagesToRestore) = 0 BEGIN RAISERROR ('No deleted pages found. Looking for truncated pages...', 10, 1) -- use T-LOG read to get truncated data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) -- dark magic happens here -- because truncation simply deallocates pages we have to find out which pages were deallocated. -- we can find this out by looking at the PFS page row's Description column. -- for every deallocated extent the Description has a CSV of 8 pages in that extent. -- then it's just a matter of parsing it. 
-- we also remove the pages in the extent that weren't allocated to the table itself -- marked with '0x00-->00' EXEC ('USE [' + @dbName + '];DECLARE @truncatedPages TABLE(DeallocatedPages VARCHAR(8000), IsMultipleDeallocs BIT);INSERT INTO @truncatedPagesSELECT REPLACE(REPLACE(Description, ''Deallocated '', ''Y''), ''0x00-->00 '', ''N'') + '';'' AS DeallocatedPages, CHARINDEX('';'', Description) AS IsMultipleDeallocsFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID, DescriptionFROM sys.fn_dblog(NULL, NULL)WHERE Context IN (''LCX_PFS'') AND Description LIKE ''Deallocated%'' AND AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'') t;SELECT FileID, PageID , ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT LEFT(PageAndFile, 1) as WasPageAllocatedToTable , SUBSTRING(PageAndFile, 2, CHARINDEX('':'', PageAndFile) - 2 ) as FileID , CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING(PageAndFile, CHARINDEX('':'', PageAndFile) + 1, LEN(PageAndFile))))) as PageIDFROM ( SELECT SUBSTRING(DeallocatedPages, delimPosStart, delimPosEnd - delimPosStart) as PageAndFile, IsMultipleDeallocs FROM ( SELECT *, CHARINDEX('';'', DeallocatedPages)*(N-1) + 1 AS delimPosStart, CHARINDEX('';'', DeallocatedPages)*N AS delimPosEnd FROM @truncatedPages t1 CROSS APPLY (SELECT TOP (case when t1.IsMultipleDeallocs = 1 then 8 else 1 end) ROW_NUMBER() OVER(ORDER BY number) as N FROM master..spt_values) t2 )t)t)tWHERE WasPageAllocatedToTable = ''Y''') SELECT @tableWasTruncated = 1 END DECLARE @lastID INT, @pagesCount INT SELECT @lastID = 1, @pagesCount = COUNT(*) FROM #PagesToRestore SELECT @sql = 'Number of pages to restore: ' + CONVERT(VARCHAR(10), @pagesCount) IF @pagesCount = 0 RAISERROR ('No data pages to restore.', 18, 1) ELSE RAISERROR (@sql, 10, 1) -- If the table was truncated we'll read the data directly from data pages without restoring from backup IF @tableWasTruncated = 0 BEGIN -- RESTORE DATA PAGES FROM FULL BACKUP IN BATCHES OF 200 WHILE @lastID <= @pagesCount BEGIN -- create CSV string of pages to restore SELECT @sql = STUFF((SELECT ',' + CONVERT(VARCHAR(100), FileID) + ':' + CONVERT(VARCHAR(100), PageID) FROM #PagesToRestore WHERE ID BETWEEN @lastID AND @lastID + 200 ORDER BY ID FOR XML PATH('')), 1, 1, '') SELECT @sql = 'RESTORE DATABASE [' + @dbName + '] PAGE = ''' + @sql + ''' FROM DISK = ''' + @fullBackupName + '''' RAISERROR ('Starting RESTORE command:' , 10, 1) WITH NOWAIT; RAISERROR (@sql , 10, 1) WITH NOWAIT; EXEC(@sql); RAISERROR ('Restore DONE' , 10, 1) WITH NOWAIT; SELECT @lastID = @lastID + 200 END /* If you have any differential or transaction log backups you should restore them here to bring the previously restored data pages up to date */ END DECLARE @dbccSinglePage TABLE ( [ParentObject] NVARCHAR(500), [Object] NVARCHAR(500), [Field] NVARCHAR(500), [VALUE] NVARCHAR(MAX) ) DECLARE @cols NVARCHAR(MAX), @paramDefinition NVARCHAR(500), @SQLtoExec VARCHAR(1000), @FileID VARCHAR(100), @PageID VARCHAR(100), @i INT = 1 -- Get deleted table columns from information_schema view -- Need sp_executeSQL because database name can't be passed in as variable SELECT @cols = 'select @cols = STUFF((SELECT '', ['' + COLUMN_NAME + '']''FROM ' + @dbName + '.INFORMATION_SCHEMA.COLUMNSWHERE TABLE_NAME = ''' + @tableName + ''' AND TABLE_SCHEMA = ''' + @schemaName + '''ORDER BY ORDINAL_POSITIONFOR XML 
PATH('''')), 1, 2, '''')', @paramDefinition = N'@cols nvarchar(max) OUTPUT' EXECUTE sp_executesql @cols, @paramDefinition, @cols = @cols OUTPUT -- Loop through all the restored data pages, -- read data from them and insert them into temp table -- which you can then insert into the orignial deleted table DECLARE dbccPageCursor CURSOR GLOBAL FORWARD_ONLY FOR SELECT [FileID], [PageID], [SQLtoExec] FROM #PagesToRestore ORDER BY [FileID], [PageID] OPEN dbccPageCursor; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; WHILE @@FETCH_STATUS = 0 BEGIN RAISERROR ('---------------------------------------------', 10, 1) WITH NOWAIT; SELECT @sql = 'Loop iteration: ' + CONVERT(VARCHAR(10), @i); RAISERROR (@sql, 10, 1) WITH NOWAIT; SELECT @sql = 'Running: ' + @SQLtoExec RAISERROR (@sql, 10, 1) WITH NOWAIT; -- if something goes wrong with DBCC execution or data gathering, skip it but print error BEGIN TRY INSERT INTO @dbccSinglePage EXEC (@SQLtoExec) -- make the data insert magic happen here IF (SELECT CONVERT(BIGINT, [VALUE]) FROM @dbccSinglePage WHERE [Field] LIKE '%Metadata: ObjectId%') = OBJECT_ID('['+@dbName+'].['+@schemaName +'].['+@tableName+']') BEGIN DELETE @dbccSinglePage WHERE NOT ([ParentObject] LIKE 'Slot % Offset %' AND [Object] LIKE 'Slot % Column %') SELECT @sql = 'USE tempdb; ' + 'IF (OBJECTPROPERTY(object_id(''' + @undeletedTableName + '''), ''TableHasIdentity'') = 1) ' + 'SET IDENTITY_INSERT ' + @undeletedTableName + ' ON; ' + 'INSERT INTO ' + @undeletedTableName + '(' + @cols + ') ' + STUFF((SELECT ' UNION ALL SELECT ' + STUFF((SELECT ', ' + CASE WHEN VALUE = '[NULL]' THEN 'NULL' ELSE '''' + [VALUE] + '''' END FROM ( -- the unicorn help here to correctly set ordinal numbers of columns in a data page -- it's turning STRING order into INT order (1,10,11,2,21 into 1,2,..10,11...21) SELECT [ParentObject], [Object], Field, VALUE, RIGHT('00000' + O1, 6) AS ParentObjectOrder, RIGHT('00000' + REVERSE(LEFT(O2, CHARINDEX(' ', O2)-1)), 6) AS ObjectOrder FROM ( SELECT [ParentObject], [Object], Field, VALUE, REPLACE(LEFT([ParentObject], CHARINDEX('Offset', [ParentObject])-1), 'Slot ', '') AS O1, REVERSE(LEFT([Object], CHARINDEX('Offset ', [Object])-2)) AS O2 FROM @dbccSinglePage WHERE t.ParentObject = ParentObject )t)t ORDER BY ParentObjectOrder, ObjectOrder FOR XML PATH('')), 1, 2, '') FROM @dbccSinglePage t GROUP BY ParentObject FOR XML PATH('') ), 1, 11, '') + ';' RAISERROR (@sql, 10, 1) WITH NOWAIT; EXEC (@sql) END END TRY BEGIN CATCH SELECT @sql = 'ERROR!!!' 
+ CHAR(10) + CHAR(13) + 'ErrorNumber: ' + ERROR_NUMBER() + '; ErrorMessage' + ERROR_MESSAGE() + CHAR(10) + CHAR(13) + 'FileID: ' + @FileID + '; PageID: ' + @PageID RAISERROR (@sql, 10, 1) WITH NOWAIT; END CATCH DELETE @dbccSinglePage SELECT @sql = 'Pages left to process: ' + CONVERT(VARCHAR(10), @pagesCount - @i) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13), @i = @i+1 RAISERROR (@sql, 10, 1) WITH NOWAIT; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; END CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; EXEC ('SELECT ''' + @undeletedTableName + ''' as TableName; SELECT * FROM ' + @undeletedTableName)END TRYBEGIN CATCH SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage IF CURSOR_STATUS ('global', 'dbccPageCursor') >= 0 BEGIN CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; ENDEND CATCH-- if the table was deleted we need to finish the restore page sequenceIF @tableWasTruncated = 0BEGIN -- take a log tail backup and then restore it to complete page restore process DECLARE @currentDate VARCHAR(30) SELECT @currentDate = CONVERT(VARCHAR(30), GETDATE(), 112) RAISERROR ('Starting Log Tail backup to c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail backup done.', 10, 1) WITH NOWAIT; RAISERROR ('Starting Log Tail restore from c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail restore done.', 10, 1) WITH NOWAIT;END-- The last step is manual. Insert data from our temporary table to the original deleted table The misconception here is that you can do a single table restore properly in SQL Server. You can't. But with little experimentation you can get pretty close to it. One way to possible remove a dependency on a backup to retrieve deleted pages is to quickly run a similar script to the upper one that gets data directly from data pages while the rows are still marked as ghost records. It could be done if we could beat the ghost record cleanup task.
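    For reference, the final manual step mentioned above (moving the recovered rows from the temporary table back into the original table) might look roughly like this. It is a sketch only: it assumes the example table from this post (Production.Product1, which inherits the ProductID identity column through SELECT ... INTO) and the global temp table name the script builds for it (##Product1_Undeleted), and it reuses the same INFORMATION_SCHEMA trick the script uses to build the column list.

        USE AdventureWorks;

        DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

        -- column list of the original table, in ordinal order
        SELECT @cols = STUFF((SELECT ', [' + COLUMN_NAME + ']'
                              FROM INFORMATION_SCHEMA.COLUMNS
                              WHERE TABLE_SCHEMA = 'Production' AND TABLE_NAME = 'Product1'
                              ORDER BY ORDINAL_POSITION
                              FOR XML PATH('')), 1, 2, '');

        -- copy the recovered rows back; IDENTITY_INSERT is needed because ProductID is an identity column
        SET @sql = N'SET IDENTITY_INSERT Production.Product1 ON;
                     INSERT INTO Production.Product1 (' + @cols + N')
                     SELECT ' + @cols + N' FROM ##Product1_Undeleted;
                     SET IDENTITY_INSERT Production.Product1 OFF;';

        EXEC sp_executesql @sql;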

    Read the article

  • Another Marketing Conference, part one – the best morning sessions.

    - by Roger Hart
    Yesterday I went to Another Marketing Conference. I honestly can’t tell if the title is just tipping over into smug, but in the balance of things that doesn’t matter, because it was a good conference. There was an enjoyable blend of theoretical and practical, and enough inter-disciplinary spread to keep my inner dilettante grinning from ear to ear. Sure, there was a bumpy bit in the middle, with two back-to-back sales pitches and a rather thin overview of the state of the web. But the signal:noise ratio at AMC2012 was impressively high. Here’s the first part of my write-up of the sessions. It’s a bit of a mammoth. It’s also a bit of a mash-up of what was said and what I thought about it. I’ll add links to the videos and slides from the sessions as they become available. Although it was in the morning session, I’ve not included Vanessa Northam’s session on the power of internal comms to build brand ambassadors. It’ll be in the next roundup, as this is already pushing 2.5k words. First, the important stuff. I was keeping a tally, and nobody said “synergy” or “leverage”. I did, however, hear the term “marketeers” six times. Shame on you – you know who you are. 1 – Branding in a post-digital world, Graham Hales This initially looked like being a sales presentation for Interbrand, but Graham pulled it out of the bag a few minutes in. He introduced a model for brand management that was essentially Plan >> Do >> Check >> Act, with Do and Check rolled up together, and went on to stress that this looks like on overall business management model for a reason. Brand has to be part of your overall business strategy and metrics if you’re going to care about it at all. This was the first iteration of what proved to be one of the event’s emergent themes: do it throughout the stack or don’t bother. Graham went on to remind us that brands, in so far as they are owned at all, are owned by and co-created with our customers. Advertising can offer a message to customers, but they provide the expression of a brand. This was a preface to talking about an increasingly chaotic marketplace, with increasingly hard-to-manage purchase processes. Services like Amazon reviews and TripAdvisor (four presenters would make this point) saturate customers with information, and give them a kind of vigilante power to comment on and define brands. Consequentially, they experience a number of “moments of deflection” in our sales funnels. Our control is lessened, and failure to engage can negatively-impact buying decisions increasingly poorly. The clearest example given was the failure of NatWest’s “caring bank” campaign, where staff in branches, customer support, and online presences didn’t align. A discontinuity of experience basically made the campaign worthless, and disgruntled customers talked about it loudly on social media. This in turn presented an opportunity to engage and show caring, but that wasn’t taken. What I took away was that brand (co)creation is ongoing and needs monitoring and metrics. But reciprocally, given you get what you measure, strategy and metrics must include brand if any kind of branding is to work at all. Campaigns and messages must permeate product and service design. What that doesn’t mean (and Graham didn’t say it did) is putting Marketing at the top of the pyramid, and having them bawl demands at Product Management, Support, and Development like an entitled toddler. It’s going to have to be collaborative, and session 6 on internal comms handled this really well. 
The main thing missing here was substantiating data, and the main question I found myself chewing on was: if we’re building brands collaboratively and in the open, what about the cultural politics of trolling? 2 – Challenging our core beliefs about human behaviour, Mark Earls This was definitely the best show of the day. It was also some of the best content. Mark talked us through nudging, behavioural economics, and some key misconceptions around decision making. Basically, people aren’t rational, they’re petty, reactive, emotional sacks of meat, and they’ll go where they’re led. Comforting stuff. Examples given were the spread of the London Riots and the “discovery” of the mountains of Kong, and the popularity of Susan Boyle, which, in turn made me think about Per Mollerup’s concept of “social wayshowing”. Mark boiled his thoughts down into four key points which I completely failed to write down word for word: People do, then think – Changing minds to change behaviour doesn’t work. Post-rationalization rules the day. See also: mere exposure effects. Spock < Kirk - Emotional/intuitive comes first, then we rationalize impulses. The non-thinking, emotive, reactive processes run much faster than the deliberative ones. People are not really rational decision makers, so  intervening with information may not be appropriate. Maximisers or satisficers? – Related to the last point. People do not consistently, rationally, maximise. When faced with an abundance of choice, they prefer to satisfice than evaluate, and will often follow social leads rather than think. Things tend to converge – Behaviour trends to a consensus normal. When faced with choices people overwhelmingly just do what they see others doing. Humans are extraordinarily good at mirroring behaviours and receiving influence. People “outsource the cognitive load” of choices to the crowd. Mark’s headline quote was probably “the real influence happens at the table next to you”. Reference examples, word of mouth, and social influence are tremendously important, and so talking about product experiences may be more important than talking about products. This reminded me of Kathy Sierra’s “creating bad-ass users” concept of designing to make people more awesome rather than products they like. If we can expose user-awesome, and make sharing easy, we can normalise the behaviours we want. If we normalize the behaviours we want, people should make and post-rationalize the buying decisions we want.  Where we need to be: “A bigger boy made me do it” Where we are: “a wizard did it and ran away” However, it’s worth bearing in mind that some purchasing decisions are personal and informed rather than social and reactive. There’s a quadrant diagram, in fact. What was really interesting, though, towards the end of the talk, was some advice for working out how social your products might be. The standard technology adoption lifecycle graph is essentially about social product diffusion. So this idea isn’t really new. Geoffrey Moore’s “chasm” idea may not strictly apply. However, his concepts of beachheads and reference segments are exactly what is required to normalize and thus enable purchase decisions (behaviour change). The final thing is that in only very few categories does a better product actually affect purchase decision. Where the choice is personal and informed, this is true. But where it’s personal and impulsive, or in any way social, “better” is trumped by popularity, endorsement, or “point of sale salience”. 
    UX, UCD, and e-commerce know this to be true. A better (and easier) experience will always beat “more features”. Easy to use, and easy to observe being used will beat “what the user says they want”. This made me think about the astounding stickiness of rational fallacies, “common sense” and the pathological willful simplifications of the media. Rational fallacies seem like they’re basically the heuristics we use for post-rationalization. If I were profoundly grimy and cynical, I’d suggest deploying a boat-load in our messaging, to see if they’re really as sticky and appealing as they look. 4 – Changing behaviour through communication, Stephen Donajgrodzki This was a fantastic follow-up to Mark’s session. Stephen basically talked us through some tactics used in public information/health comms that implement the kind of behavioural theory Mark introduced. The session was largely about how to get people to do (good) things they’re predisposed not to do, and how communication can (and can’t) make positive interventions. A couple of things stood out, in particular “implementation intentions” and how they can be linked to goals. For example, in order to get people to check and test their smoke alarms (a goal intention, rarely actualized), an information campaign will attempt to link this activity to the clocks going back or forward (a strong implementation intention, well-actualized). The talk reinforced the idea that making behaviour changes easy and visible normalizes them and makes them more likely to succeed. To do this, they have to be embodied throughout a product and service cycle. Experiential disconnects undermine the normalization. So campaigns, products, and customer interactions must be aligned. This is underscored by the second section of the presentation, which talked about interventions and pre-conditions for change. Taking the examples of drug addiction and stopping smoking, Stephen showed us a framework for attempting (and succeeding or failing in) behaviour change. He noted that when the change is something people fundamentally want to do, and that is easy, this gets a lot simpler. Coordinated, easily-observed environmental pressures create preconditions for change and build motivation (price, pub smoking ban, ad campaigns, friend quitting, declining social acceptability). A triggering event leads to a change attempt (getting a cold and panicking about how bad the cough is). Interventions can be made to enable an attempt (NHS services, public information, nicotine patches). If it succeeds – yay. If it fails, there’s strong negative reinforcement. Triggering events seem largely personal, but messaging can intervene in the creation of preconditions and in supporting decisions. Stephen talked more about systems of thinking and “bounded rationality”. The idea being that to enable change you need to break through “automatic” thinking into “reflective” thinking. Disruption and emotion are great tools for this, but that is only the start of the process. It occurs to me that a great deal of market research is focused on determining triggers rather than analysing necessary preconditions. Although they are presumably related. The final section talked about setting goals. Marketing goals are often seen as deriving directly from business goals. However, marketing may be unable to deliver on these directly where decision and behaviour-change processes are involved. In those cases, marketing and communication goals should be to create preconditions. They should also consider priming and norms. 
Content marketing and brand awareness are good first steps here, as brands can be heuristics in decision making for choice-saturated consumers, or those seeking education. 5 – The power of engaged communities and how to build them, Harriet Minter (the Guardian) The meat of this was that you need to let communities define and establish themselves, and be quick to react to their needs. Harriet had been in charge of building the Guardian’s community sites, and learned a lot about how they come together, stabilize  grow, and react. Crucially, they can’t be about sales or push messaging. A community is not just an audience. It’s essential to start with what this particular segment or tribe are interested in, then what they want to hear. Eventually you can consider – in light of this – what they might want to buy, but you can’t start with the product. A community won’t cohere around one you’re pushing. Her tips for community building were (again, sorry, not verbatim): Set goals Have some targets. Community building sounds vague and fluffy, but you can have (and adjust) concrete goals. Think like a start-up This is the “lean” stuff. Try things, fail quickly, respond. Don’t restrict platforms Let the audience choose them, and be aware of their differences. For example, LinkedIn is very different to Twitter. Track your stats Related to the first point. Keeping an eye on the numbers lets you respond. They should be qualified, however. If you want a community of enterprise decision makers, headcount alone may be a bad metric – have you got CIOs, or just people who want to get jobs by mingling with CIOs? Build brand advocates Do things to involve people and make them awesome, and they’ll cheer-lead for you. The last part really got my attention. Little bits of drive-by kindness go a long way. But more than that, genuinely helping people turns them into powerful advocates. Harriet gave an example of the Guardian engaging with an aspiring journalist on its Q&A forums. Through a series of serendipitous encounters he became a BBC producer, and now enthusiastically speaks up for the Guardian community sites. Cultivating many small, authentic, influential voices may have a better pay-off than schmoozing the big guys. This could be particularly important in the context of Mark and Stephen’s models of social, endorsement-led, and example-led decision making. There’s a lot here I haven’t covered, and it may be worth some follow-up on community building. Thoughts I was quite sceptical of nudge theory and behavioural economics. First off it sounds too good to be true, and second it sounds too sinister to permit. But I haven’t done the background reading. So I’m going to, and if it seems to hold real water, and if it’s possible to do it ethically (Stephen’s presentations suggests it may be) then it’s probably worth exploring. The message seemed to be: change what people do, and they’ll work out why afterwards. Moreover, the people around them will do it too. Make the things you want them to do extraordinarily easy and very, very visible. Normalize and support the decisions you want them to make, and they’ll make them. In practice this means not talking about the thing, but showing the user-awesome. Glib? Perhaps. But it feels worth considering. Also, if I ever run a marketing conference, I’m going to ban speakers from using examples from Apple. Quite apart from not being consistently generalizable, it’s becoming an irritating cliché.

    Read the article

  • SOLVED: Install MythTV & 11.10 on Lenovo S12 (Intel Atom) with wireless

    - by keepitsimpleengineer
    This is how I installed Ubuntu 11.10 and the MythTV client on my Lenovo S12 (Intel Atom) laptop and use it over WiFi (see additional notes at end). I did this because the upgrade from 11.04 bricked the laptop. Note that the partitions on the Lenovo standard disk were already in place for this installation. Also note that my LAN is set up for fixed IP addresses. Downloaded and burned the 11.10 x86 Desktop Ubuntu CD. Connected the power supply cord, LAN wire and the external DVD USB drive. Ran Windows XP and made sure performance level "Performance" was set and "Wireless" was enabled. Booted the S12 from CD. Disabled Networking from the icon on the upper left panel. Edited Connections… "Wired connection 1" → set IP address, accepted default netmask and set gateway. Also set the DNS server. Good idea to check "Connection Information" here to verify everything's O.K. Selected Install Ubuntu from the initial "Install" window. Verified the three items were checked (required disk space available, plugged into a power source, & connected to the Internet). Selected Download updates while installing and third party software. Hit Continue… At wireless, selected don't want to connect…WiFi…now. Continue… At Installation type, selected Something else. Continue… At the partition table, selected the ext4 Linux partition, set the mount point as "/", and marked it for formatting. Here I selected the main disk (/sda) for installing the boot manager. Continue… Selected or verified my Time zone. Continue… Selected my keyboard layout. Continue… Filled in the "Who are you" fields. Make sure "password is required to sign in" is checked. Continue… Chose a picture. Continue… I selected import no accounts. Continue… Wait as the Install creeps along. If your screen goes blank, tap the space bar – apparently the screen saver/power plan does this. There are several progress bars. The longest was "Installing system", and it was the next to the last one. The Installation Complete window appears, Restart Now… Wait as it stops. The screen blanks, then the message "…remove…media…close tray…press enter" appears. I just unplugged the USB DVD and hit enter… It was disheartening, but the screen turned Ubuntu purple-beige and nothing happened, so I held down the power key until it shut down, then pressed it again and the Grub boot screen appeared. Select Ubuntu… The screen went blank with the little flashing underscore cursor on it and the disk light would occasionally flash. I hit the enter key and eventually Ubuntu started. After a somewhat long time the Unity desktop appeared. 11.10, unlike earlier versions, retains the connection information. Check this by checking the network icon on the upper left applet panel. Here the touch-pad mouse quit working and I had to reboot. It takes an extremely long time to boot, sometimes requiring several power off/power on (cold boot) cycles. You can try to get the default network manager to work, but it might not; it didn't on mine for WiFi. Thanks to: Chris at URL, here's what to do… Disconnect your wired Internet connection. Input your wireless information into network manager. Open a terminal (unity dash, top of icon totem, open, and make sure the ruler&pen icon on the bottom is selected, 2nd from left; type in "terminal"). Might be a good idea to drag and drop the terminal icon to the terminal, it's easy to get rid of later. Click to open a terminal, and type in: sudo rmmod acer_wmi && echo "blacklist acer_wmi" >> /etc/modprobe.d/blacklist.conf and hit enter. Type in your password as asked. 
if you have correctly entered your WiFi information and you are near your AP, you should connect immediately if not, see the URL above ? you might need to replace "network manager" with "wicd" ? I did with 11.04. Update the new 11.10, in the upper left panel applet weird·gear icon is menu with a line about updating. It's the new way to invoke Update Manager. Your lenovo S12 (intel atom) should now run the new unity Ubuntu. Point your elbow at the ceiling and pat yourself on the back. Installing Mythbuntu Client 24.1 Open mythbuntu.org/repos (I urge you not to directly use Ubuntu Software Center for this) Install Mythbuntu Repos Save the file (in ~/Downloads, the default) Run the file ? it will update your repositories so that you will get the proper installation sources ? it will start Ubuntu Software Center to do this ? Click Install… You will need your password. Debconf window will open, select by making sure check mark is in the little box "Would you like to activate…". Forward… Which version? At the time of writing the current "Stable" version was 24.1, select 0.24.x… Forward… Read the message, then forward… Delete the downloaded file. Install synaptic (unity dash, top of icon totem, open, and make sure the ruler&pen icon on the bottom is selected, 2nd from left) type in "synaptic". Click on the synaptic icon. Ubuntu Software Center will open and allow you to install synaptic package manager. Open Synaptic (unity dash, top of icon totem, open, and make sure the ruler&pen icon on the bottom is selected, 2nd from left) type in "Synaptic". Might be a good idea to drag and drop the terminal icon to the terminal, it's easy to get rid of later. Run synaptic, read the intro, and close the intro window. Type in mythbuntu-control-centre in the Quick filter text box, and then select it "Mark for installation" by clicking on the box next to it's name. Marvel at the additional to be installed items, then select "?Mark"… At the top of the synaptic window click on the "? Apply" button. Marvel at the amount of stuff to be installed, the click on "Apply". When finished, close finished window and synaptic. Open mythbuntu-control-centre (unity dash, top of icon totem, open, and make sure the ruler&pen icon on the bottom is selected, 2nd from left) type in "mythbuntu". Might be a good idea to drag and drop the mythbuntu-control-centre icon to the terminal, it's easy to get rid of later. You can now configure and install the frontend. Go down the icon totem on the right side of the window and click as needed… System roles. ? No Backend, Desktop Frontend, and Ubuntu Desktop. Apply… & Apply changes… & Password… MySQL Configuration ? from backend ? Setup General Alt-N(ext) Alt-N(ext) Stetting Access Setup PIN code: ~~~~ Input Security key and click "Test Connection", if ?, then Apply… & Apply… {note: for some inexplicable reason, control centre hung on this, but when I restarted it, it was set properly} Graphics drivers, When I did this, only the Broadcom wireless driver showed up. I closed without doing anything. Services. I enabled SSH & Samba. Apply… & Apply… Repositories. Asked & Answered. MythExport. Pass, I believe it requires backend on the same system. Proprietary Codec Support. Check to enable, Apply… & Apply… System Updates. No action necessary, will be a part of the Ubuntu update mechanism. Themes and Artwork. For themes, I selected Enable/Update all. Apply… & Apply… Infrared & Startup behavior and Plugins. Defer until you know more. Close software centre. 
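    Before launching the frontend, an optional sanity check that is not from the original post: the frontend you are about to configure talks to the backend's MySQL database over the network, so if the connection test later fails it can be worth confirming, on the backend machine, that the database accepts connections from the laptop. This sketch assumes MythTV's usual defaults (a mythconverg database and a mythtv user); your backend may use different names, and the password should be the one shown on the backend's Database Configuration screen.

        -- run inside the mysql client on the backend machine
        -- show what the default MythTV user is currently allowed to do
        SHOW GRANTS FOR 'mythtv'@'%';

        -- if the laptop's address is not covered, allow it (the 192.168.% range and
        -- the 'mythtv' password here are only placeholders for a typical home LAN)
        GRANT ALL PRIVILEGES ON mythconverg.* TO 'mythtv'@'192.168.%' IDENTIFIED BY 'mythtv';
        FLUSH PRIVILEGES;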
Open mythTV (unity dash, top of icon totem, open, and make sure the ruler&pen icon on the bottom is selected, 2nd from left) type in "mythTV". Might be a good idea to drag and drop the mythTV icon to the terminal, it's easy to get rid of later. Incorrect Group Membership. Fix this by clicking "Yes"… Log out/end. Do this by clicking "Yes"… For my Lenovo S12, I had to manually restart Ubuntu - and still with the very long restart…/no start/cold boot/reboot/pressing the shift key required Open mythTV (unity dash, top of icon totem, open, and make sure the ruler&pen icon on the bottom is selected, 2nd from left) type in "mythTV". Might be a good idea to drag and drop the mythTV icon to the terminal, it's easy to get rid of later. Will open with Select country & language. Do so. then get message with "No", hit "Ok" and arrive at the data base Configuration 1/2 screen. You will need your brackend password, from backend ? Setup General Database Configuration 1/2 Password:~? Enter this Hit Alt-n to go to the next page. Select "Use custom id…", then enter a custom ID, I use the machine's name. Hit finish, and MythTV should start up with all default settings. For the lenovo S12, the first thing you want to do is to set Playback profiles to "Normal". From Setup TV Settings Playback Alt-N(ext) Alt-N(ext) Playback Profiles (3/8) : Change Current Video Playback Profile to "Normal". You can fiddle with this setting later. For the lenovo S12, the second thing is to get the sound going. From Setup General Alt-N(ext) Alt-N(ext) Alt-N(ext) Audio System: The top of the screen is a button title "Scan for audio devices", move the highlight there and press the Space bar. Then Tab down to Audio Output Device: and left-right arrow until "ALSA:hw:Card=Intel,DEV=0" is selected. Then Alt-N(ext) until "Finish". Now you should have sound. You should now have MythTV working nicely on the Lenovo S12 Notes about wireless: Running Lenovo S12 on wireless is demanding on both power and WiFi connection. Best results will be obtained when running on power and wired connection. I run my S12 on wireless, actually two serial connections with two access points, something that is not easy to achieve. Here Mythbuntu client-server (in den) <? wireless link 1 <?office LAN? wireless link 2 <? Lenovo S12 Ubuntu 11.10 The office LAN is fixed IP behind an Untangle firewall router. There is another MythTV client on Ubuntu 10.10 computer in the office (which has always worked well). ProblemMythbuntu\Win7 client hangs with frozen frames, short segment of audio repeating. Hardware Rosewill RNX-G300EX IEEE 802.11b/g PCI Wireless Card on client-server 2 Linksys WRT54GL wireless broadband routers on LAN for link1 and link 2 WRT54GL FirmwareDD-WRT v24-sp2(07/22/09) voip set up to act as an access point. Note? many people advised this was an unworkable scheme, and in probably most cases it will be. Solution? Set up DD-WRT with the following Wireless settings… Basic Channel: Different fixed channels at least 4 difference, I use 6 & 11 Basic Sensitivity Range (ACK timing): 50 MAC filter use filter: Enable, Selected Permit only clients listed to access… Requires adding MAC addresses in "Edit MAC Filter List" This causes the 54GL's to ignore any but the listed MAC address, down side, no "guest" capability. 
Advanced Basic rate: All Advanced CTS Protection Mode: Off Advanced Frame Burst: Enable Advanced Max associate clients: 4 for client link 2, 1 for client-server link 1 Advanced AP isolation: Enable Advanced Preamble: Short Advanced Afterburner: On Advanced Wireless GUI access: Off Advanced WMM support: Off Other settings: default for supplied firmware. Why I suspect this worked? The 54GL Access Points's with the firmware's setting are set to handle a multiple client, wide area situation. With these mods I reconfigured them for a small area, few client situation, disabling Advanced WMM probably the most important. In addition, the client mythtv when used all other users of its access point are turned off except for a Skype phone. Also, the client-server is set up to allow other connections though it's LAN connection, and these are used to connect the TV and disc players, not used when client is being used.

    Read the article

  • CodePlex Daily Summary for Tuesday, November 08, 2011

    CodePlex Daily Summary for Tuesday, November 08, 2011Popular ReleasesFacebook C# SDK: v5.3.2: This is a RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods uses graph api instead of legacy rest api. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ (experimental) added support for early preview for .NET 4.5 (binaries not distributed in codeplex nor nuget.org, will need to manually build from Facebook-Net45.sln) added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added ne...Delete Inactive TS Ports: List and delete the Inactive TS Ports: List and delete the Inactive TS Ports - The InactiveTSPortList.EXE accepts command line arguments The InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.ClosedXML - The easy way to OpenXML: ClosedXML 0.60.0: Added almost full support for auto filters (missing custom date filters). See examples Filter Values, Custom Filters Fixed issues 7016, 7391, 7388, 7389, 7198, 7196, 7194, 7186, 7067, 7115, 7144Microsoft Research Boogie: Nightly builds: This download category contains automatically released nightly builds, reflecting the current state of Boogie's development. We try to make sure each nightly build passes the test suite. If you suspect that was not the case, please try the previous nightly build to see if that really is the problem. Also, please see the installation instructions.DotSpatial: Release 821: Updated releaseGoogleMap Control: GoogleMap Control 6.0: Major design changes to the control in order to achieve better scalability and extensibility for the new features comming with GoogleMaps API. GoogleMap control switched to GoogleMaps API v3 and .NET 4.0. GoogleMap control is 100% ScriptControl now, it requires ScriptManager to be registered on the pages where and before it is used. Markers, polylines, polygons and directions were implemented as ExtenderControl, instead of being inner properties of GoogleMap control. Better perfomance. Better...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: V2.1: Version 2.1 (click on the right) this uses V4.0 of .net Version 2.1 adds the following features: (apologize if I forget some, added a lot of little things) Manual Lookup with TV or Movie (finally huh!), you can look up a movie or TV episode directly, you can right click on anythign, and choose manual lookup, then will allow you to type anything you want to look up and it will assign it to the file you right clicked. No Rename: a very popular request, this is an option you can set so that t...SubExtractor: Release 1020: Feature: added "baseline double quotes" character to selector box Feature: added option to save SRT files as ANSI (instead of previous UTF-8 only) Feature: made "Save Sup files to Source directory" apply to both Sup and Idx source files. Fix: removed SDH text (...) or [...] that is split over 2 lines Fix: better decision-making in when to prefix a line with a '-' because SDH was removedAcDown????? - Anime&Comic Downloader: AcDown????? v3.6.1: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? 
??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6.1?? ??.hlv...Track Folder Changes: Track Folder Changes 1.1: Fixed exception when right-clicking the root nodeMapWindow 4: MapWindow GIS v4.8.6 - Final release - 32Bit: This is the final release of MapWindow v4.8. It has 4.8.6 as version number. This version has been thoroughly tested. If you do get an exception send the exception to us. Don't forget to include your e-mail address. Use the forums at http://www.mapwindow.org/phorum/ for questions. Please consider donating a small portion of the money you have saved by having free GIS tools: http://www.mapwindow.org/pages/donate.php What’s New in 4.8.6 (Final release) · A few minor issues have been fixed Wha...Kinect Paint: Kinect Paint 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Kinect Mouse Cursor: Kinect Mouse Cursor 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Coding4Fun Kinect Toolkit: Coding4Fun Kinect Toolkit 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Async Executor: 1.0: Source code of the AsyncExecutorMedia Companion: MC 3.421b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Fix to show the season-specials.tbn when selecting an episode from season 00. Before, MC would try & load season00.tbn Fix for issue #197 - new show added by 'Manually Add Path' not being picked up. Also made non-visible the same thing in Root Folders...?????????? - ????????: All-In-One Code Framework ??? 2011-11-02: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??????,11??,?????20????Microsoft OneCode Sample,????6?Program Language Sample,2?Windows Base Sample,2?GDI+ Sample,4?Internet Explorer Sample?6?ASP.NET Sample。?????????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Program Language CSImageFullScreenSlideShow VBImageFullScreenSlideShow CSDynamicallyBuildLambdaExpressionWithFie...Python Tools for Visual Studio: 1.1 Alpha: We’re pleased to announce the release of Python Tools for Visual Studio 1.1 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python programming language. This release includes new core IDE features, a couple of new sample libraries for interacting with Kinect and Excel, and many bug fixes for issues reported since the release of 1.0. For the core IDE features we’ve added many new features which improve the basic edit...BExplorer (Better Explorer): Better Explorer 2.0.0.631 Alpha: Changelog: Added: Some new functions in ribbon Added: Possibility to choose displayed columns Added: Basic Search Fixed: Some bugs after navigation Fixed: Attempt to fix slow navigation and slow start Known issues: - BreadcrumbBar fails on some situations - Basic search does not work well in some situations Please if anyone finds bugs be kind and report them at the Issue Tracker! 
Thanks!DotNetNuke® Community Edition: 05.06.04: Major Highlights Fixed issue with upgrades on systems that had upgraded the Telerik library to 6.0.0 Fixed issue with Razor Host upgrade to 5.6.3 The logic for module administration checks contains incorrect logic in 1 place, opening the possibility of a user with edit permissions gaining access to functionality they should not have through a particularly crafted url Security FixesBrowsers support the ability to remember common strings such as usernames/addresses etc. Code was adde...New ProjectsAddThis for DotNetNuke: A simple AddThis module (plugin) for DotNetNukeAgnisExchange: <AGNIS>as A Growable Network Infor;ation System is developped in <C#>Blogger auto poster: Do you want to have a links blog? Share your favorites links in Google+ and then allow BAP(Blogger auto poster) to read its JSON feed and post its daily summary at your Blogger weblog automatically. CarbonEmissions: CarbonEmissionsCredivale: CredivaleCRM 2011 TreeView for Lookup: CRM 2011 WebResource Utility for showing Self-Joined Lookup data in TreeView form.Dafuscator: Dafuscator is a database data obfuscation system that allows you to tactically obfuscate or delete data out of your production database while leaving most of the data intact.DigDesDevSchoolHomeSaleSystem: This is study project. Done by Russian students. Use ASP.NET MVC.Dynamic Data Extensions: Dynamic Data Extensions add new features to the current ASP.Net 4 Dynamic Data versionEchosoft NoCrash From Scratch: Nocrash From scratch, a operating system designed (but not promised to be) almost invincible from viruses and %20 Windows-like using DoubleTime emulation to make it act like Windows 2000EdiProj: Just a PHP and Asp.Net project.experteer: decision theory class projectFlan Project: Fast-paced Action-RPG (2D Scrolling)GameXNA_Master: XNA MASETR GD 2011 Gyrf: SecretJogo Da Velha em CSharp: Jogo da velha feito para o curso de c#.Jogo das cores: Projeto da aula de extensão de C#Kookaburra: Kookaburra is a framework based on Windows PowerShell 2.0 that can be used for automating SharePoint 2010 administration. It contains a menu, several functions and a well defined structure for adding PowerShell scriptlets.LaunchWithParams: LaunchWithParams is a Windows Explorer add-on that allows you to launch an application executable with specific command-line parameters without having to open a command prompt.MerchantTribe - Shopping Cart and Ecommerce Platform in C# MVC ASP.NET: MerchantTribe is a free, open-source ASP.NET MVC ecommerce platform in C#. For more than a decade, we've been building and selling commercial .NET shopping cart software. Now, we're releasing an open-source project based on our award winning BV Commerce software. Thousands of companies use our shopping cart include Pebble Beach Resorts, DataPipe, TastyKake, MathBlaster and Chesapeake Energy. We need as many people as possible using MerchantTribe for it to thrive. We need Merchants, De...NaiveAlgorithms: Naive C# implementation of basic, general purpose algorithms, with an emphasis on readability, maintainability and correctness. While asymptotic complexity of algorithms is preserved, there is no premature optimisation performed to improve performance. NeonMikaWebserver: NeonMikaWebserver is easy to set up on your .Net MF device, as long as it has an ethernet port. It's espacially created for the Netduino, but I think it's no big work to port it to other plattforms. It provides the following functionalities: -XML Responses ... 
Just create an Instance of XMLResponse, give it an URI to listen for and fill a Hashtable with your response keys and values... Voila ;) -File Response ... No XML Response fitting your URL? Probably you want to watch a file sa...Next Action: The Next Action web application is a GTD task manager implemented with using MVC3, jQuery, Entity Framework. Orchard Comments Refactoring: Refactoring Orchard's comments moduleoutsource: outsourceScutex: Scutex, pronounced (sec-u-techs), is a 100% managed .Net Framework licensing platform for your applications. Scutex is a flexible licensing system allowing multiple licensing schemes allowing you the most control over how you licensing your products. Unlike any other licensing system on the market, Scutex provides 4 distinct licensing schemes, allowing you to protect your products at different levels using completely different licensing schemes, key types and protection systems. Using Scut...sigem2: Proyecto sigem2SSDL: An easy way to call and manage stored procedures in .NET.Trabalho C# - Datilografia: Software de datilografia produzido para o curso de extensão em C# da UFSCar Sorocaba.triliporra: Triliporra Euro 2012User Interface Toolkit: Ever dreamed about having one framework to write UI independently of your targetted client ? Now your dream has come true !vimrc: my vimrc fileVisuospatial Web Browsing using Kinect: The Kinect Web Browser project explores how users might use the Kinect sensor as a natural interface to browse the visual web. It's developed in C#.VNManager: VNManager or Version Number Manager is a simple Windows command line application which will update your Microsoft Visual Studio AssemblyInfo.cs files AssemblyVersion and AssemblyFileVersion attributes based on rules defined as comments on the same lines.WeddingAssistant: My site will be a ‘gift wish’ site for couples getting married. It will also be a place where people can come and look at ideas for hen and stag nights. Users log in and create a list of items they wish to receive for their wedding or look for ideas for hen and stag nights. The site will allow users to compile a list of items they have found on the internet from specific listed shops and order them in preference. The compiled list is then sent out to friends and relatives who can create a...WMS.NET: thewms is warehouse manager system! For the logistics industry! The team learning project! Built on the mvc3.0 and wcf ! Using jquery js framework as front endWorkflowRobot: Workflow based automation software.

    Read the article

  • CodePlex Daily Summary for Tuesday, October 29, 2013

    CodePlex Daily Summary for Tuesday, October 29, 2013Popular ReleasesCrowd CMS: Crowd CMS FREE - Official Release: This is the original source files for Crowd CMS Free (v1.0.0) and is the latest stable release which has been bug-tested and fixed.Event-Based Components AppBuilder: AB3.AppDesigner.57.12: Iteration 57.12 (Redesign): I'm still not satisfied with the WireDecorator because of it's complexity. Therefore I need some iterations to simplify this class. In this iteration I introduced a new WireArrowDecorator which contains everything that's needed to act with the arrow of a wire. Coming soon: Iteration 58: Add a new wire by mouse (without text editing) Iteration 59: Add a wire segment (without text editing) Iteration 60: Edit existing wire definition (without text editing) ...HandJS: Hand.js v1.1.3: New configuration: HANDJS.doNotProcessCSS to disable CSS scanning.Layered Architecture Solution Guidance (LASG): LASG 1.0.1.0 for Visual Studio 2012: PRE-REQUISITES Open GAX (Please install Oct 4, 2012 version) Microsoft® System CLR Types for Microsoft® SQL Server® 2012 Microsoft® SQL Server® 2012 Shared Management Objects Microsoft Enterprise Library 6.0 (for the generated code) Windows Azure SDK (for layered cloud applications) Silverlight 5 SDK (for Silverlight applications) THE RELEASE This release only works on Visual Studio 2012. Known Issue If you choose the Database project, the solution unfolding time will be slow....DirectX Tool Kit: October 2013: October 28, 2013 Updated for Visual Studio 2013 and Windows 8.1 SDK RTM Added DGSLEffect, DGSLEffectFactory, VertexPositionNormalTangentColorTexture, and VertexPositionNormalTangentColorTextureSkinning Model loading and effect factories support loading skinned models MakeSpriteFont now has a smooth vs. sharp antialiasing option: /sharp Model loading from CMOs now handles UV transforms for texture coordinates A number of small fixes for EffectFactory Minor code and project cleanup ...WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.8 binaries: Fixed issue with auto filter select all checkbox was getting checked automatically after reopening the filter.ExtJS based ASP.NET Controls: FineUI v4.0beta1: +2013-10-28 v4.0 beta1 +?????Collapsed???????????????。 -????:window/group_panel.aspx??,???????,???????,?????????。 +??????SelectedNodeIDArray???????????????。 -????:tree/checkbox/tree_checkall.aspx??,?????,?????,????????????。 -??TimerPicker???????(????、????ing)。 -??????????????????????(???)。 -?????????????,??type=text/css(??~`)。 -MsgTarget???MessageTarget,???None。 -FormOffsetRight?????20px??5px。 -?Web.config?PageManager??FormLabelAlign???。 -ToolbarPosition??Left/Right。 -??Web.conf...CODE Framework: 4.0.31028.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.ResX Resource Manager: 1.0.0.26: 1.0.0.26 WI1141: Improve message when an XML file fails to load. Ignore projects that fail to load. 1.0.0.25 WI1133: Improve error message 1.0.0.24 WI1121: Improve error messages 1.0.0.23 WI1098: VS Crash after changing filter 1.0.0.22 WI1091: Context menu for copy resource key. WI1090: Enable multi-line paste. 1.0.0.21 WI1060: ResX Manager crashes after update. 1.0.0.20 WI958: Reference something in DGX, so we don't need to load the assembly dynamically. 
WI1055: Add the possibility to ma...VidCoder: 1.5.10 Beta: Broke out all the encoder-specific passthrough options into their own dropdown. This should make what they do a bit more clear and clean up the codec list a bit. Updated HandBrake core to SVN 5855.Indent Guides for Visual Studio: Indent Guides v14: ImportantThis release has a separate download for Visual Studio 2010. The first link is for VS 2012 and later. Version History Changed in v14 Improved performance when scrolling and editing Fixed potential crash when Resharper is installed Fixed highlight of guides split around pragmas in C++/C# Restored VS 2010 support as a separate download Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not...ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.3 (mvc5): version 3.5.3 - support for mvc5 version 3.5.2 - fix for setting single value to multivalue controls - datepicker min max date offset fix - html encoding for keys fix - enable Column.ClientFormatFunc to be a function call that will return a function version 3.5.1 ========================== - fixed html attributes rendering - fixed loading animation rendering - css improvements version 3.5 ========================== - autosize for all popups ( can be turned off by calling in js...Media Companion: Media Companion MC3.585b: IMDB plot scraping Fixed. New* Movie - Rename Folder using Movie Set, option to move ignored articles to end of Movie Set, only for folder renaming. Fixed* Media Companion - Fixed if using profiles, config files would blown up in size due to some settings duplicating. * Ignore Article of An was cutting of last character of movie title. * If Rescraping title, sort title changed depending on 'Move article to end of Sort Title' setting. * Movie - If changing Poster source order, list would beco...MoreTerra (Terraria World Viewer): MoreTerra 1.11.4: Release 1.11.4 =========== = Compatibility = =========== Updated to add the new tiles/walls in 1.2.1PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.0.7: This is a bug fix release, containing some important fixes! Fixed issue where Session 0 was not detected correctly, resulting in issues when attempting to display a UI when none was allowed Fixed Installation Prompt and Installation Restart Prompt appearing when deploy mode was non-interactive or silent Fixed issue where defer prompt is displayed after force closing multiple applications Fixed issue executing blocked app execution dialog from UNC path (executed instead from local tempo...BlackJumboDog: Ver5.9.7: 2013.10.24 Ver5.9.7 (1)FTP???????、2?????????????shift-jis????????????? (2)????HTTP????、???????POST??????????????????CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.1.0.34322 Alpha 4: This experimental release of the CtrlAltStudio Viewer includes the following significant features: Oculus Rift support. Stereoscopic 3D display support. Based on Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-1-0-34322-alpha-4 Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or sup...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 32 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. 
(#7940) This release has been tested with Visual Studio 2008, 2010, 2012 and 2013, using TortoiseSVN 1.6, 1.7 and 1.8. It should also still work with Visual Studio 2005, but I couldn't find anyone to test it in VS2005. Build 32 (beta) changelogNew: Added Visual Studio 2013 support New: Added Visual Studio 2012 support New: Added SVN 1.8 support New: Added 'Ch...ABCat: ABCat v.2.0.1a: ?????????? ???????? ? ?????????? ?????? ???? ??? Win7. ????????? ?????? ????????? ?? ???????. ????? ?????, ???? ????? ???????? ????????? ?????????? ????????? "?? ??????? ????? ???????????? ?????????? ??????...", ?? ?????????? ??????? ? ?????????? ?????? Microsoft SQL Ce ?? ????????? ??????: http://www.microsoft.com/en-us/download/details.aspx?id=17876. ???????? ?????? x64 ??? x86 ? ??????????? ?? ?????? ???????????? ???????. ??? ??????? ????????? ?? ?????????? ?????? Entity Framework, ? ???? ...patterns & practices: Data Access Guidance: Data Access Guidance 2013: This is the 2013 release of Data Access Guidance. The documentation for this RI is also available on MSDN: Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence: http://msdn.microsoft.com/en-us/library/dn271399.aspxNew Projects7COM0152 - Web 2.0 Community Website: This is a project to create a User Content Cenerated community website for the Web Application Development module at the University of Hertfordshire.Airport System: Telerik Academy Teamwork - Airport SystemAppointments and ToDo Manager: Javascript Single Page ApplicationBlack NBT: A library to manage NBT (Named Binary Tag) files at format 19133. This project inclue a test client for the library. It's written en C# for .Net 4.5browser4gp: Applicazione che consente agli sviluppatori di pagine web di comunicare con il sistema operativo sottostante.BS Flow: It's a web project that using bootstrap to show instagram image flow.coLearning .NET Class - Team 3 v.2: .NET CRM tool. Grow your business one contact at a time with an expertly crafted API for generating leads and a simple web interface for managing contacts.Collections2: The Collection2 library allows you to create 2 way dictionaries. Ex: mydictionary[1] == "One", mydictionary["One"] == 1. Database Application: Load MySQL DB to SQL Server Unzip and import excel sales reports to SQL Server Generate PDF from SQL ServerDesign Pattern in C#: design pattern implementations in C#DropLogin&DropUser: Mass drop logins and users from many sql servers ??????? ??????? ?????? ????? SQL Server ? ????????????? ?? ???? ???? ??????.ElencySolutions.MultiProperty.CMS7: This project is simply a EPiServer CMS 7 re-write of the original code found here: https://episerveresmp.codeplex.com/File System Explorer: This application will drill into the Windows File System like Windows File Explorer but is more geared to working with multiple directories at a time. GriffinTeam: GriffinTeamjean1028jabbrchang: dfaLibDxp: Set of libraries for manipulating document-oriented databases for FSharp (F#)Library System: Telerik ASP.NET Web Forms Exam - Library SystemMaruko Toolbox: A Video-prosessing GUIMugen MVVM Toolkit ReSharper plugin: ReSharper plugin for Mugen MVVM Toolkit.RESX Utils: This is a tool to aid RESX resources management. 
It can detect and remove unused resources, compare RESX files and detect duplicates and discrepancies.Scorpicore2: follows banknotes and understanding their historySharePoint Logs Viewer: Develop & Maintain your SharePoint application quickly by using "SharePoint Logs Viewer"SPCloudMigrator: This project aims to simplify the migration from on-premise sharepoint version to sharepoint online.SSIS Connector for Sharepoint Online: SharePoint Online Source & Destination DataFlow Components to fetch data from SPOnline & push data in SPOnline.Transfer data from SP Online List to DBMS or FileTMS - Testing Management System: Una empresa de desarrollo de software, desea contar con una aplicación Web que le permita manejar los errores de sus productos. Esta aplicación permitirá a losWMI Inventory Client: WMI Inventory Client ?????????????? ?????????????? ?? WMIWPF MDI Metro: Metro Dark Theme for WPF MDI Project on codeplex.Zigbee playground: This is a project for experimenting with Zigbee wireless mesh networking.

    Read the article

  • Adjust timezone of an AVM Fritz!Box 7390

It's been a while since I purchased an AVM Fritz!Box 7390, but since I'm using this 'PABX' here in Mauritius, I'm not really happy about the wrong time in the logs or on the connected handsets. Lately, I had some spare time to address this issue, and the following article describes how to adjust the timezone settings in general. The original idea came from an FAQ found in c't 21/11 (for a 7270, written in German), but I added a couple of things based on other resources online. The following tutorial may be valid for other models, too. Use your common sense and think before you act. Brief introduction to AVM Fritz!Box devices The Fritz!Box series from AVM has been around for more than a decade, and those little 'red boxes' offer a high level of versatility for your small office or home. High-speed connections, secure WLAN and convenient telephony make a home network out of any network. Whether it's a computer, tablet or smartphone, any device can be connected to the FRITZ!Box. And best of all, installation is so simple that users will be online in a matter of minutes. If you want peace of mind in your small network, then a Fritz!Box is the easiest way to achieve that. I'm using my box primarily as a WiFi access point, VoIP gateway and media server, but only because it came in second after my Linux system. Limitations in the administrative Web UI Unfortunately, there is no way to adjust the timezone settings in the Web UI at all - not even in Expert mode. I assume that this is part of the 'simplification' provided by AVM's design team. That's okay, as long as you reside in Central Europe and the implicit time handling is correct for your location. Adjusting the timezone I ordered my device from Amazon Germany some time ago, and honestly I wasn't bothered too much by the pre-configured (fixed) timezone setting - CET or CEST depending on daylight saving. But you know, it's that kind of splinter at the back of your head that keeps nagging and bothering you indirectly. So, finally I sat down yesterday evening and did some quick research on how to change the timezone. Even though there are a number of results, I read the FAQ from the c't magazine first, as I consider it a trusted and safe source of information. Of course, it is most important to avoid 'bricking' your device. You've been warned - No support Tinkering with the configuration of any AVM device seems to be a violation of their official support terms. So, be warned and continue only if you're sure about what you are going to do. The following solutions are 'as-is'; they worked flawlessly on my box but may cause an issue in your case. Don't blame me... Solution 1 - Backup, modify and restore That's the approach described in the c't article and in a couple of other forum postings I found online, mainly from Australia. Log in to the administrative Web UI, navigate to 'System => Einstellungen sichern' (System => Backup configuration) and store your current configuration in a local file on your machine. Despite some online postings, it is not necessary to specify a password in order to secure or encrypt your backup. IMHO, this only adds another unnecessary layer of complexity to the process. Anyway, next you should create another copy of your settings and keep it unmodified. That's our safety net to restore the current settings in case we have to issue a factory reset on the box.
Now, open the configuration file with an advanced text editor that can handle Unix line endings properly - Windows Notepad doesn't do the job, but WordPad or Notepad++ do. Personally, I don't care and simply use geany, gedit or nano on Linux. In total there are 3 modifications that we have to apply to the configuration file - one new line and two adjustments. First, we have to add an instruction near the top of the file that overrides the device's internal checksum validation. Without this line, your settings won't be accepted. Caution: The directives are case-sensitive and your outcome should read something like this:
**** FRITZ!Box Fon WLAN 7390 CONFIGURATION EXPORT
Password=$$$$<ignore>
FirmwareVersion=84.05.52
CONFIG_INSTALL_TYPE=iks_16MB_xilinx_4eth_2ab_isdn_nt_te_pots_wlan_usb_host_dect_64415
OEM=avm
Country=049
Language=de
NoChecks=yes
**** CFGFILE:ar7.cfg
/*
 * /var/flash/ar7.cfg
 * Mon Jul 29 10:49:18 2013
 */
ar7cfg {...
Then search for the expression 'timezone' and you should find a section like this one (~ line 1113):
timezone_manual {
        enabled = no;
        offset = 0;
        dst_enabled = no;
        TZ_string = "";
        name = "";
}
We would like to handle the timezone setting manually in our device, and therefore we have to enable it and set the proper value for Mauritius. The configuration block should look like this afterwards:
timezone_manual {
        enabled = yes;
        offset = 0;
        dst_enabled = no;
        TZ_string = "MUT-4";
        name = "";
}
We specify the designation and the offset in hours of the timezone we would like to have. Caution: The offset indicates the value one has to add to the local time to arrive at UTC. More details are described in the Explanation of TZ strings. Mauritius has GMT+4, which means that we have to subtract 4 hours from the local time to get UTC. Finally, we restore the modified configuration file via the administrative Web UI under 'System => Einstellungen sichern => Wiederherstellen' (System => Backup configuration => Restore). This triggers a reboot of the device, so please be patient and wait until the Web UI displays the login dialog again. Good luck! Solution 2 - Telnet A more elegant - read: technically more interesting - way to adjust configuration settings in your Fritz!Box is to access it directly through Telnet. By default AVM disables that protocol channel, and you have to enable it with a connected telephone. In order to activate the telnet service, dial the following combination:
#96*7*
#96*8* (to disable telnet again after work has been completed)
If you're using an AVM handset like the Fritz!Fon, then you will receive a confirmation message on the display like so: telnetd ein Next, depending on your favourite operating system, you either launch a Command Prompt in Windows or a terminal in Linux, get your admin password ready, and connect to your box like so:
$ telnet fritz.box
Trying 192.168.1.1...
Connected to fritz.box.
Escape character is '^]'.
password:
BusyBox v1.19.3 (2012-10-12 14:52:09 CEST) built-in shell (ash)
Enter 'help' for a list of built-in commands.
ermittle die aktuelle TTY
tty is "/dev/pts/0"
Console Ausgaben auf dieses Terminal umgelenkt
#
That's it, you are connected and we can continue to change the configuration manually. In order to adjust the timezone setting we have to open the ar7.cfg file. As we are now operating in a specialised environment, we only have limited capabilities at hand. One of those is a reduced version of vi - nvi.
Let's open a second browser window with the fine manual page of nvi and start to edit our configuration file:
# nvi /var/flash/ar7.cfg
In our configuration file, we have to navigate to the timezone directives. The easiest way is to search for the expression 'timezone' by typing in the following:
/timezone    (press Enter/Return)
Now, we should see the same lines as in the backed-up version:
timezone_manual {
        enabled = no;
        offset = 0;
        dst_enabled = no;
        TZ_string = "";
        name = "";
}
And of course, we apply the same changes as described in the previous section:
timezone_manual {
        enabled = yes;
        offset = 0;
        dst_enabled = no;
        TZ_string = "MUT-4";
        name = "";
}
Finally, we have to write our changes back to the file and apply the new settings.
:wq    (press Enter/Return)
# ar7cfgchanged
That's it! Close the telnet session by pressing Ctrl+] and entering 'quit'. Additional ideas... There are a couple more possibilities to enhance and extend the usability of a Fritz!Box. There are lots of resources available on the net, but I'd like to name a few here. Especially for Linux users it is essential to be able to connect to any device remotely in a safe and secure way, and installing an SSH server on the box would be a first step to improve this situation - it would also let you avoid running telnet at all. Sometimes there might be problems with your VoIP connections; feel free to adjust the codec settings and connection handling, too. I guess you'll get the idea... The only frontiers are in your mind.
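One final note on the TZ_string from Solution 1: if the minus sign looks backwards at first glance, the tiny C# sketch below (purely illustrative - nothing like this runs on the box, and the dates are made up) shows the convention that local time plus the offset written in the string equals UTC, which is why UTC+4 Mauritius ends up as "MUT-4".
using System;

class TzSignConvention
{
    static void Main()
    {
        // POSIX-style TZ convention: local time + offset-from-string = UTC.
        // Mauritius is UTC+4, hence the string "MUT-4".
        var localMauritius = new DateTime(2013, 7, 29, 14, 0, 0); // 14:00 local time
        var offsetFromTzString = -4;                              // the "-4" in "MUT-4"
        var utc = localMauritius.AddHours(offsetFromTzString);
        Console.WriteLine(utc); // prints 10:00, i.e. four hours earlier = UTC
    }
}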

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever. When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire. You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure. The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism. So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal. 
Mine looks like this: public interface ICacheProvider {     void Add(string key, object item, int duration);     T Get<T>(string key) where T : class;     void Remove(string key); } Now we’ll create two implementations of this interface… one for Azure cache, one for HttpRuntime: public class AzureCacheProvider : ICacheProvider {     public AzureCacheProvider()     {         _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to      }         private readonly DataCache _cache;     public void Add(string key, object item, int duration)     {         _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));     }     public T Get<T>(string key) where T : class     {         return _cache.Get(key) as T;     }     public void Remove(string key)     {         _cache.Remove(key);     } } public class LocalCacheProvider : ICacheProvider {     public LocalCacheProvider()     {         _cache = HttpRuntime.Cache;     }     private readonly System.Web.Caching.Cache _cache;     public void Add(string key, object item, int duration)     {         _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration), System.Web.Caching.Cache.NoSlidingExpiration);     }     public T Get<T>(string key) where T : class     {         return _cache[key] as T;     }     public void Remove(string key)     {         _cache.Remove(key);     } } Feel free to expand these to use whatever cache features you want. I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject call is a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this: ObjectFactory.Initialize(x =>             {                 x.Scan(scan =>                         {                             scan.AssembliesFromApplicationBaseDirectory();                             scan.WithDefaultConventions();                         });                 if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)                     x.For<ICacheProvider>().Use<AzureCacheProvider>();                 else                     x.For<ICacheProvider>().Use<LocalCacheProvider>();             }); If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider, otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository. 
Let’s say your repository class looks like this: public class MyRepo : IMyRepo {     public MyRepo(ICacheProvider cacheProvider)     {         _context = new MyDataContext();         _cache = cacheProvider;     }     private readonly MyDataContext _context;     private readonly ICacheProvider _cache;     public SomeType Get(int someTypeID)     {         var key = "somename-" + someTypeID;         var cachedObject = _cache.Get<SomeType>(key);         if (cachedObject != null)         {             _context.SomeTypes.Attach(cachedObject);             return cachedObject;         }         var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);         _cache.Add(key, someType, 60000);         return someType;     } ... // more stuff to update, delete or whatever, being sure to remove // from cache when you do so  When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is, Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-“ plus the ID of the object, and then remove it from cache, in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won’t find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database. So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, for example, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime. That’s something a lot of people debate about using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.

    Read the article

  • Content Management for WebCenter Installation Guide

    - by Gary Niu
Overview As we know, there are two ways to install Content Management for WebCenter. One way is to install it with the WebCenter installer wizard; the other way is to install it using its own installer. This guide covers the latter. For SSO purposes, I also describe how to configure an OID identity store for Content Management for WebCenter. Content Management for WebCenter (10.1.3.5.1) Oracle Enterprise Linux R5U4 Basic Installation -bash-3.2$ ./setup.sh Please select your locale from the list.           1. Chinese-Simplified           2. Chinese-Traditional           3. Deutsch          *4. English-US           5. English-UK           6. Español           7. Français           8. Italiano           9. Japanese          10. Korean          11. Nederlands          12. Português-Brazil Choice? Throughout the install, when entering a text value, you can press Enter to accept the default that appears between square brackets ([]). When selecting from a list, you can select the choice followed by an asterisk by pressing Enter. Select installation type from the list.         *1. Install new server          2. Update a server Choice? Content Server Installation Directory Please enter the full pathname to the installation directory. Content Server Core Folder [/oracle/ucm/server]:/opt/oracle/ucm/server Create Directory         *1. yes          2. no Choice? Java virtual machine         *1. Sun Java 1.5.0_11 JDK          2. Specify a custom Java virtual machine Choice? Installing with Java version 1.5.0_11. Enter the location of the native file repository. This directory contains the native files checked in by contributors. Content Server Native Vault Folder [/opt/oracle/ucm/server/vault/]: Create Directory         *1. yes          2. no Choice? Enter the location of the web-viewable file repository. This directory contains files that can be accessed through the web server. Content Server Weblayout Folder [/opt/oracle/ucm/server/weblayout/]: Create Directory         *1. yes          2. no Choice? This server can be configured to manage its own authentication or to allow another master to act as an authentication proxy. Configure this server as a master or proxied server.         *1. Configure as a master server.          2. Configure as server proxied by a local master server. Choice? During installation, an admin server can be installed and configured to manage this server. If there is already an admin server on this system, you can have the installer configure it to administrate this server instead. Select admin server configuration.         *1. Install an admin server to manage this server.          2. Configure an existing admin server to manage this server.          3. Don't configure an admin server. Choice? Enter the location of an executable to start your web browser. This browser will be used to display the online help. Web Browser Path [/usr/bin/firefox]: Content Server System locale           1. Chinese-Simplified           2. Chinese-Traditional           3. Deutsch          *4. English-US           5. English-UK           6. Español           7. Français           8. Italiano           9. Japanese          10. Korean          11. Nederlands          12. Português-Brazil Choice? Please select the region for your timezone from the list.         *1. Use the timezone setting for your operating system          2. Pacific          3. America          4. Atlantic          5. Europe          6. Africa          7. Asia          8. Indian          9. Australia Choice?
Please enter the port number that will be used to connect to the Content Server. This port must be otherwise unused. Content Server Port [4444]: Please enter the port number that will be used to connect to the Admin Server. This port must be otherwise unused. Admin Server Port [4440]: Enter a security filter for the server port. Hosts which are allowed to communicate directly with the server port may access any resources managed by the server. Insure that hosts which need access are included in the filter. See the installation guide for more details. Incoming connection address filter [127.0.0.1]:*.*.*.* *** Content Server URL Prefix The URL prefix specified here is used when generating HTML pages that refer to the contents of the weblayout directory within the installation. This prefix must be mapped in the web server Additional Document Directories section of the Content Management administration menu to the physical location of the weblayout directory. For example, "/idc/" would be used in your installation to refer to the URL http://ucm.company.com/idc which would be mapped in the web server to the physical location /oracle/ucm/server/weblayout. Web Server Relative Root [/idc/]: Enter the name of the local mail server. The server will contact this system to deliver email. Company Mail Server [mail]: Enter the e-mail address for the system administrator. Administrator E-Mail Address [sysadmin@mail]: *** Web Server Address Many generated HTML pages refer to the web server you are using. The address specified here will be used when generating those pages. The address should include the host and domain name in most cases. If your webserver is running on a port other than 80, append a colon and the port number. Examples: www.company.com, ucm.company.com:90 Web Server HTTP Address [yekki]:yekki.cn.oracle.com:7777 Enter the name for this instance. This name should be unique across your entire enterprise. It may not contain characters other than letters, numbers, and underscores. Server Instance Name [idc]: Enter a short label for this instance. This label is used on web pages to identify this instance. It should be less than 12 characters long. Server Instance Label [idc]: Enter a long description for this instance. Server Description [Content Server idc]: Web Server         *1. Apache          2. Sun ONE          3. Configure manually Choice? Please select a database from the list below to use with the Content Server. Content Server Database         *1. Oracle          2. Microsoft SQL Server 2005          3. Microsoft SQL Server 2000          4. Sybase          5. DB2          6. Custom JDBC settings          7. Skip database configuration Choice? Manually configure JDBC settings for this database          1. yes         *2. no Choice? Oracle Server Hostname [localhost]: Oracle Listener Port Number [1521]: *** Database User ID The user name is used to log into the database used by the content server. Oracle User [user]:YEKKI_OCSERVER *** Database Password The password is used to log into the database used by the content server. Oracle Password []:oracle Oracle Instance Name [ORACLE]:orcl Configure the JVM to find the JDBC driver in a specific jar file          1. yes         *2. no Choice? The installer can attempt to create the database tables or you can manually create them. If you choose to manually create the tables, you should create them now. Attempt to create database tables          1. yes         *2. no Choice? Select components to install.          1. 
ContentFolios: Collect related items in folios          2. Folders_g: Organize content into hierarchical folders          3. LinkManager8: Hypertext link management support          4. OracleTextSearch: External Oracle 11g database as search indexer support          5. ThreadedDiscussions: Threaded discussion management Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: 1,2,3,4,5         *1. ContentFolios: Collect related items in folios         *2. Folders_g: Organize content into hierarchical folders         *3. LinkManager8: Hypertext link management support         *4. OracleTextSearch: External Oracle 11g database as search indexer support         *5. ThreadedDiscussions: Threaded discussion management Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: F Checking configuration. . . Configuration OK. Review install settings. . . Content Server Core Folder: /opt/oracle/ucm/server Java virtual machine: Sun Java 1.5.0_11 JDK Content Server Native Vault Folder: /opt/oracle/ucm/server/vault/ Content Server Weblayout Folder: /opt/oracle/ucm/server/weblayout/ Proxy authentication through another server: no Install admin server: yes Web Browser Path: /usr/bin/firefox Content Server System locale: English-US Content Server Port: 4444 Admin Server Port: 4440 Incoming connection address filter: *.*.*.* Web Server Relative Root: /idc/ Company Mail Server: mail Administrator E-Mail Address: sysadmin@mail Web Server HTTP Address: yekki.cn.oracle.com:7777 Server Instance Name: idc Server Instance Label: idc Server Description: Content Server idc Web Server: Apache Content Server Database: Oracle Manually configure JDBC settings for this database: false Oracle Server Hostname: localhost Oracle Listener Port Number: 1521 Oracle User: YEKKI_OCSERVER Oracle Password: 6GP1gBgzSyKa4JW10U8UqqPznr/lzkNn/Ojf6M8GJ8I= Oracle Instance Name: orcl Configure the JVM to find the JDBC driver in a specific jar file: false Attempt to create database tables: no Components: ContentFolios,Folders_g,LinkManager8,OracleTextSearch,ThreadedDiscussions Proceed with install         *1. Proceed          2. Change configuration          3. Recheck the configuration          4. Abort installation Choice? Finished install type Install with warnings at 4/2/10 12:32 AM. Run Scripts -bash-3.2$ ./wc_contentserverconfig.sh /opt/oracle/ucm/server /mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/CS10gR35UpdateBundle.zip' Service 'DELETE_DOC' Extended Service 'DELETE_BYREV_REVISION' Extended Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/ContentAccess/ContentAccess-linux.zip' (internal)      04.02 00:40:38.019      main    updateDocMetaDefinitionV11: adding decimal column Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/Folders_g.zip' Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/FusionLibraries.zip' Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/JpsUserProvider.zip' Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/WcConfigure.zip' Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential WARNING: A password credential is expected; instead found . 
Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore WARNING: The credential with map JPS and key ldap.credential does not exist. Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential WARNING: A password credential is expected; instead found . Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore WARNING: The credential with map JPS and key ldap.credential does not exist. Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential WARNING: A password credential is expected; instead found . Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore WARNING: The credential with map JPS and key ldap.credential does not exist. Restart Content Server to apply updates. Configuring Apache Web Server append the following lines at httpd.conf: include "/opt/oracle/ucm/server/data/users/apache22/apache.conf" Configuring the Identity Store( Optional ) 1.  Stop Oracle Content Server and the Admin Server 2.  Update the Oracle Content Server's JPS configuration file, jps-config.xml: a. add a service instance <serviceInstance provider="idstore.ldap.provider" name="idstore.oid"> <property name="subscriber.name" value="dc=cn,dc=oracle,dc=com"></property> <property name="idstore.type" value="OID"></property> <property name="security.principal.key" value="ldap.credential"></property> <property name="security.principal.alias" value="JPS"></property> <property name="ldap.url" value="ldap://yekki.cn.oracle.com:3060"></property> <extendedProperty> <name>user.search.bases</name> <values> <value>cn=users,dc=cn,dc=oracle,dc=com</value> </values> </extendedProperty> <extendedProperty> <name>group.search.bases</name> <values> <value>cn=groups,dc=cn,dc=oracle,dc=com</value> </values> </extendedProperty> <property name="username.attr" value="uid"></property> <property name="user.login.attr" value="uid"></property> <property name="groupname.attr" value="cn"></property> </serviceInstance> b. Ensure that the <jpsContext> entry in the jps-config.xml file refers to the new serviceInstance, that is, idstore.oid and not idstore.ldap: <jpsContext name="default"> <serviceInstanceRef ref="idstore.oid"/> 3. Run the new script to setup the credentials for idstore.oid in the credential store: cd CONTENT_SERVER_HOME/custom/FusionLibraries/tools -bash-3.2$ ./run_credtool.sh Buildfile: ./../tools/credtool.xml     [input] skipping input as property action has already been set.     [input] Alias: [JPS]     [input] Key: [ldap.credential]     [input] User Name: cn=orcladmin     [input] Password: welcome1     [input] JPS Config: [/opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml] manage-creds:      [echo] @@@ Help: run 'ant manage-creds' command to see the detailed usage      [java] Using default context in /opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml file for credential store.      [java] Credential store location : /opt/oracle/ucm/server/config      [java] Credential with map JPS key ldap.credential stored successfully!      [java]      [java]      [java]     Credential for map JPS and key ldap.credential is:      [java]             PasswordCredential name : cn=orcladmin      [java]             PasswordCredential password : welcome1 BUILD SUCCESSFUL Total time: 1 minute 27 seconds Testing 1. 
access http://yekki.cn.oracle.com:7777/idc 2. log in with an OID user, for example: orcladmin/welcome1 3. make sure your JpsUserProvider status is "good"

    Read the article

  • Entity Framework Code-First, OData & Windows Phone Client

    - by Jon Galloway
    Entity Framework Code-First is the coolest thing since sliced bread, Windows  Phone is the hottest thing since Tickle-Me-Elmo and OData is just too great to ignore. As part of the Full Stack project, we wanted to put them together, which turns out to be pretty easy… once you know how.   EF Code-First CTP5 is available now and there should be very few breaking changes in the release edition, which is due early in 2011.  Note: EF Code-First evolved rapidly and many of the existing documents and blog posts which were written with earlier versions, may now be obsolete or at least misleading.   Code-First? With traditional Entity Framework you start with a database and from that you generate “entities” – classes that bridge between the relational database and your object oriented program. With Code-First (Magic-Unicorn) (see Hanselman’s write up and this later write up by Scott Guthrie) the Entity Framework looks at classes you created and says “if I had created these classes, the database would have to have looked like this…” and creates the database for you! By deriving your entity collections from DbSet and exposing them via a class that derives from DbContext, you "turn on" database backing for your POCO with a minimum of code and no hidden designer or configuration files. POCO == Plain Old CLR Objects Your entity objects can be used throughout your applications - in web applications, console applications, Silverlight and Windows Phone applications, etc. In our case, we'll want to read and update data from a Windows Phone client application, so we'll expose the entities through a DataService and hook the Windows Phone client application to that data via proxies.  Piece of Pie.  Easy as cake. The Demo Architecture To see this at work, we’ll create an ASP.NET/MVC application which will act as the host for our Data Service.  We’ll create an incredibly simple data layer using EF Code-First on top of SQLCE4 and we’ll expose the data in a WCF Data Service using the oData protocol.  Our Windows Phone 7 client will instantiate  the data context via a URI and load the data asynchronously. Setting up the Server project with MVC 3, EF Code First, and SQL CE 4 Create a new application of type ASP.NET MVC 3 and name it DeadSimpleServer.  We need to add the latest SQLCE4 and Entity Framework Code First CTP's to our project. Fortunately, NuGet makes that really easy. Open the Package Manager Console (View / Other Windows / Package Manager Console) and type in "Install-Package EFCodeFirst.SqlServerCompact" at the PM> command prompt. Since NuGet handles dependencies for you, you'll see that it installs everything you need to use Entity Framework Code First in your project. PM> install-package EFCodeFirst.SqlServerCompact 'SQLCE (= 4.0.8435.1)' not installed. Attempting to retrieve dependency from source... Done 'EFCodeFirst (= 0.8)' not installed. Attempting to retrieve dependency from source... Done 'WebActivator (= 1.0.0.0)' not installed. Attempting to retrieve dependency from source... Done You are downloading SQLCE from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. 
Successfully installed 'SQLCE 4.0.8435.1' You are downloading EFCodeFirst from Microsoft, the license agreement to which is available at http://go.microsoft.com/fwlink/?LinkID=206497. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. Successfully installed 'EFCodeFirst 0.8' Successfully installed 'WebActivator 1.0.0.0' You are downloading EFCodeFirst.SqlServerCompact from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. Successfully installed 'EFCodeFirst.SqlServerCompact 0.8' Successfully added 'SQLCE 4.0.8435.1' to EfCodeFirst-CTP5 Successfully added 'EFCodeFirst 0.8' to EfCodeFirst-CTP5 Successfully added 'WebActivator 1.0.0.0' to EfCodeFirst-CTP5 Successfully added 'EFCodeFirst.SqlServerCompact 0.8' to EfCodeFirst-CTP5 Note: We're using SQLCE 4 with Entity Framework here because they work really well together from a development scenario, but you can of course use Entity Framework Code First with other databases supported by Entity framework. Creating The Model using EF Code First Now we can create our model class. Right-click the Models folder and select Add/Class. Name the Class Person.cs and add the following code: using System.Data.Entity; namespace DeadSimpleServer.Models { public class Person { public int ID { get; set; } public string Name { get; set; } } public class PersonContext : DbContext { public DbSet<Person> People { get; set; } } } Notice that the entity class Person has no special interfaces or base class. There's nothing special needed to make it work - it's just a POCO. The context we'll use to access the entities in the application is called PersonContext, but you could name it anything you wanted. The important thing is that it inherits DbContext and contains one or more DbSet which holds our entity collections. Adding Seed Data We need some testing data to expose from our service. The simplest way to get that into our database is to modify the CreateCeDatabaseIfNotExists class in AppStart_SQLCEEntityFramework.cs by adding some seed data to the Seed method: protected virtual void Seed( TContext context ) { var personContext = context as PersonContext; personContext.People.Add( new Person { ID = 1, Name = "George Washington" } ); personContext.People.Add( new Person { ID = 2, Name = "John Adams" } ); personContext.People.Add( new Person { ID = 3, Name = "Thomas Jefferson" } ); personContext.SaveChanges(); } The CreateCeDatabaseIfNotExists class name is pretty self-explanatory - when our DbContext is accessed and the database isn't found, a new one will be created and populated with the data in the Seed method. 
There's one more step to make that work - we need to uncomment a line in the Start method at the top of the AppStart_SQLCEEntityFramework class and set the context name, as shown here, public static class AppStart_SQLCEEntityFramework { public static void Start() { DbDatabase.DefaultConnectionFactory = new SqlCeConnectionFactory("System.Data.SqlServerCe.4.0"); // Sets the default database initialization code for working with Sql Server Compact databases // Uncomment this line and replace CONTEXT_NAME with the name of your DbContext if you are // using your DbContext to create and manage your database DbDatabase.SetInitializer(new CreateCeDatabaseIfNotExists<PersonContext>()); } } Now our database and entity framework are set up, so we can expose data via WCF Data Services. Note: This is a bare-bones implementation with no administration screens. If you'd like to see how those are added, check out The Full Stack screencast series. Creating the oData Service using WCF Data Services Add a new WCF Data Service to the project (right-click the project / Add New Item / Web / WCF Data Service). We'll be exposing all the data as read/write. Remember to reconfigure to control and minimize access as appropriate for your own application. Open the code behind for your service. In our case, the service was called PersonTestDataService.svc, so the code-behind class file is PersonTestDataService.svc.cs. using System.Data.Services; using System.Data.Services.Common; using System.ServiceModel; using DeadSimpleServer.Models; namespace DeadSimpleServer { [ServiceBehavior( IncludeExceptionDetailInFaults = true )] public class PersonTestDataService : DataService<PersonContext> { // This method is called only once to initialize service-wide policies. public static void InitializeService( DataServiceConfiguration config ) { config.SetEntitySetAccessRule( "*", EntitySetRights.All ); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; config.UseVerboseErrors = true; } } } We're enabling a few additional settings to make it easier to debug if you run into trouble. The ServiceBehavior attribute is set to include exception details in faults, and we're using verbose errors. You can remove both of these when your service is working, as your public production service shouldn't be revealing exception information.
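As a side note on the "reconfigure to control and minimize access" warning above, one hedged possibility (my own sketch, not from the article) is to replace the wildcard rule with per-entity-set rules - for example, making the People set read-only over the wire if your client never needs to save changes back:
public static void InitializeService( DataServiceConfiguration config )
{
    // allow queries against People, but no inserts/updates/deletes through the service
    config.SetEntitySetAccessRule( "People", EntitySetRights.AllRead );
    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
}
Keep the broader rights if the phone client does need to write data, as in the scenario below.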
You can view the output of the service by running the application and browsing to http://localhost:[portnumber]/PersonTestDataService.svc/: <service xml:base="http://localhost:49786/PersonTestDataService.svc/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:app="http://www.w3.org/2007/app" xmlns="http://www.w3.org/2007/app"> <workspace> <atom:title>Default</atom:title> <collection href="People"> <atom:title>People</atom:title> </collection> </workspace> </service> This indicates that the service exposes one collection, which is accessible by browsing to http://localhost:[portnumber]/PersonTestDataService.svc/People <?xml version="1.0" encoding="iso-8859-1" standalone="yes"?> <feed xml:base=http://localhost:49786/PersonTestDataService.svc/ xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <title type="text">People</title> <id>http://localhost:49786/PersonTestDataService.svc/People</id> <updated>2010-12-29T01:01:50Z</updated> <link rel="self" title="People" href="People" /> <entry> <id>http://localhost:49786/PersonTestDataService.svc/People(1)</id> <title type="text"></title> <updated>2010-12-29T01:01:50Z</updated> <author> <name /> </author> <link rel="edit" title="Person" href="People(1)" /> <category term="DeadSimpleServer.Models.Person" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:ID m:type="Edm.Int32">1</d:ID> <d:Name>George Washington</d:Name> </m:properties> </content> </entry> <entry> ... </entry> </feed> Let's recap what we've done so far. But enough with services and XML - let's get this into our Windows Phone client application. Creating the DataServiceContext for the Client Use the latest DataSvcUtil.exe from http://odata.codeplex.com. As of today, that's in this download: http://odata.codeplex.com/releases/view/54698 You need to run it with a few options: /uri - This will point to the service URI. In this case, it's http://localhost:59342/PersonTestDataService.svc  Pick up the port number from your running server (e.g., the server formerly known as Cassini). /out - This is the DataServiceContext class that will be generated. You can name it whatever you'd like. /Version - should be set to 2.0 /DataServiceCollection - Include this flag to generate collections derived from the DataServiceCollection base, which brings in all the ObservableCollection goodness that handles your INotifyPropertyChanged events for you. Here's the console session from when we ran it: <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged"> Next, to keep things simple, change the Binding on the two TextBlocks within the DataTemplate to Name and ID, <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel Margin="0,0,0,17" Width="432"> <TextBlock Text="{Binding Name}" TextWrapping="Wrap" Style="{StaticResource PhoneTextExtraLargeStyle}" /> <TextBlock Text="{Binding ID}" TextWrapping="Wrap" Margin="12,-6,12,0" Style="{StaticResource PhoneTextSubtleStyle}" /> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> Getting The Context In the code-behind you’ll first declare a member variable to hold the context from the Entity Framework. This is named using convention over configuration. 
The db type is Person and the context is of type PersonContext, You initialize it by providing the URI, in this case using the URL obtained from the Cassini web server, PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) ); Create a second member variable of type DataServiceCollection<Person> but do not initialize it, DataServiceCollection<Person> people; In the constructor you’ll initialize the DataServiceCollection using the PersonContext, public MainPage() { InitializeComponent(); people = new DataServiceCollection<Person>( context ); Finally, you’ll load the people collection using the LoadAsync method, passing in the fully specified URI for the People collection in the web service, people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) ); Note that this method runs asynchronously and when it is finished the people  collection is already populated. Thus, since we didn’t need or want to override any of the behavior we don’t implement the LoadCompleted. You can use the LoadCompleted event if you need to do any other UI updates, but you don't need to. The final code is as shown below: using System; using System.Data.Services.Client; using System.Windows; using System.Windows.Controls; using DeadSimpleServer.Models; using Microsoft.Phone.Controls; namespace WindowsPhoneODataTest { public partial class MainPage : PhoneApplicationPage { PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) ); DataServiceCollection<Person> people; // Constructor public MainPage() { InitializeComponent(); // Set the data context of the listbox control to the sample data // DataContext = App.ViewModel; people = new DataServiceCollection<Person>( context ); people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) ); DataContext = people; this.Loaded += new RoutedEventHandler( MainPage_Loaded ); } // Handle selection changed on ListBox private void MainListBox_SelectionChanged( object sender, SelectionChangedEventArgs e ) { // If selected index is -1 (no selection) do nothing if ( MainListBox.SelectedIndex == -1 ) return; // Navigate to the new page NavigationService.Navigate( new Uri( "/DetailsPage.xaml?selectedItem=" + MainListBox.SelectedIndex, UriKind.Relative ) ); // Reset selected index to -1 (no selection) MainListBox.SelectedIndex = -1; } // Load data for the ViewModel Items private void MainPage_Loaded( object sender, RoutedEventArgs e ) { if ( !App.ViewModel.IsDataLoaded ) { App.ViewModel.LoadData(); } } } } With people populated we can set it as the DataContext and run the application; you’ll find that the Name and ID are displayed in the list on the Mainpage. Here's how the pieces in the client fit together: Complete source code available here
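The page above only reads data. As a rough sketch of the write side (my own addition, not part of the original walkthrough - AddToPeople is the set-specific helper that DataSvcUtil generates for the People entity set, and the ID/Name values are made up), saving a new Person from the phone could look something like this:
var newPerson = new Person { ID = 4, Name = "James Madison" };
context.AddToPeople( newPerson );
context.BeginSaveChanges( result =>
{
    // complete the asynchronous call; the callback runs on a background thread,
    // so dispatch back to the UI thread before touching any controls
    var response = context.EndSaveChanges( result );
}, null );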

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index. This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter whether you've been following the 10g or 11g guide, as this next article is common to both.
Contents: Why this is the most important part; Understanding the classification and standard rights model; Identifying business use cases; Creating an effective IRM classification model; One single classification across the entire business; A context for each and every possible granular use case; What makes a good context?; Deciding on the use of roles in the context; Reviewing the features and security for context roles; Summary.
Why this is the most important part... Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, having an IRM technology easily protect your most sensitive information without interfering with your users' existing daily workflows, and being able to scale IRM across the entire business, requires thought about how confidential documents are created, used and distributed. This article gives you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.
Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment. K.I.S.S. (Keep It Simple, Stupid): reduce complexity in the rights/classification model. Oracle IRM lets you change access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.
Before getting into the detail, I must pay homage to Martin White, Vice President of client services at SealedMedia, the company Oracle acquired and which created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple-to-use IRM deployments.
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article) and then after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and if the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further. Creating an effective IRM classification model Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. First task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity, the question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum... One single classification across the entire business Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in thinking about considering this. Document security classification decisions are simple. You only have one context to chose from! User provisioning is simple, just make sure everyone has a role in the only context in the business. Administration is very low, if you assign rights to groups from the business user repository you probably never have to touch IRM administration again. There are however some obvious downsides to this model.All users in have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is. You cannot delegate control of different documents to different parts of the business, this may not satisfy your regulatory requirements for the separation and delegation of duties. Changing a users role affects every single document ever secured. Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. 
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, imagine if you could ensure that only everyone you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file. A context for each and every possible granular use case Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge amount of contexts. For example, lets just look at the resulting contexts for one engineering group. Q1FY2010 Restricted Internal - Engineering Group 1 - Research Q1FY2010 Restricted Internal - Engineering Group 1 - Design Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing Q1FY2010 Restricted External- Engineering Group 1 - Research Q1FY2010 Restricted External - Engineering Group 1 - Design Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing Q1FY2010 Confidential Internal - Engineering Group 1 - Research Q1FY2010 Confidential Internal - Engineering Group 1 - Design Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing Q1FY2010 Confidential External - Engineering Group 1 - Research Q1FY2010 Confidential External - Engineering Group 1 - Design Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret Internal - Engineering Group 1 - Research Q1FY2010 Top Secret Internal - Engineering Group 1 - Design Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret External - Engineering Group 1 - Research Q1FY2010 Top Secret External - Engineering Group 1 - Design Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing Now multiply the above by 6 for each engineering group, 18 contexts. You are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model? You can satisfy very granular rights requirements, for example only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis. Your business may have very complex rights requirements and mapping this directly to IRM may be an obvious exercise. The disadvantages of such a classification model are significant...Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to reside over each year. From an end users perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts. 
They may be able to print content from one but not another, be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology. Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights. Hard to understand who can do what with what. Imagine being the VP of engineering and as part of an internal security audit you are asked the question, "What rights to researchers have to our top secret information?". In this complex model the answer is not simple, it would depend on many roles in many contexts. Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts does not give fine enough granularity. What makes a good context? Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information, they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which help you define a context. First ask these questions about a set of documentsWhat is the topic? Who are legitimate contributors on this topic? Who are the authorized readership? If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors, therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one, it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business. Deciding on the use of roles in the context Once you have decided on that first initial use case and a context to create let's look at the details you need to decide upon. 
For each context, identify; Administrative rolesBusiness owner, the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are the usually the person with the most at risk when sensitive information is lost. Point of contact, the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator. Context administrator, the person who will enact the decisions of the Business Owner. Sometimes the point of contact, sometimes a trusted IT person. Document related rolesContributors, the people who create and edit documents in this context. Reviewers, the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See later discussion on Published-work and Work-in-Progress) Readers, the people who read documents from this context. Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM. Reviewing the features and security for context roles At this point we have decided on a classification of information, understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First think why are you protecting the documents in the first place? It is to prevent the loss of leaking of information to the wrong people. To control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point, with IRM you can erect many barriers to prevent access to content yet too many restrictions and authorized users will often find ways to circumvent using the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content, remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information, and sales can only access sales data. Act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community. 
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology which simplifies the process of assigning users the correct business roles. The big print issue... Printing is often an issue of contention, users love to print but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding about whether to give print rights. Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy making it easier to trace who printed it. Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk. Print activity is audited, therefore you can monitor and react to users abusing print rights. Summary In summary it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group is secured and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introducing the complexity to deliver them at the right level. In another article i'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads so that often it's not that critical to test load performance. This post is not meant to make a point  or even come to a conclusion which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API ASP.NET MVC WebForms ASP.NET WebPages ASMX AJAX Services  (couldn't get AJAX/JSON to run on IIS8 ) WCF Rest Raw ASP.NET HttpHandlers It's quite a mixed bag, of course and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity more than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did is test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simple does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression:<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
Here's the simple HelloWorld.cshtml page:Hello World @DateTime.Now WCF REST WCF REST was the token REST implementation for ASP.NET before WebAPI and the inbetween step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class: [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } } WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client. ASP.NET AJAX (ASMX Services) I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012  removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box). ASP.NET MVC Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller: public class MvcPerformanceController : Controller { public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } } ASP.NET WebAPI Next up is WebAPI which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response: public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } } Testing Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line  looks like this: ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this:   It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of varience. You might also want to do an IISRESET to clear the Web Server. 
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers, provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test with 100,000 iterations and a load factor of 20 concurrent connections, doing an IISReset before starting and a short warm-up run for API and MVC to make sure startup cost was mitigated. Here is the batch file I used for the test:
IISRESET
REM make sure you add
REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
REM to your path so ab.exe can be found
REM Warm up
ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld
ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt
I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests but the values used here are the medians. Part of this has to do with the fact that I ran the tests on my local machine - results would probably be more consistent running the load test from a separate machine hitting across the network. I ran these tests locally on my laptop, which is a Dell XPS with a quad core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive, on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you would run these tests from a separate machine hitting the local machine. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here should be representative.
Results
Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start, the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All they really do is give you an idea of the raw throughput for the technology from the time of the request to reaching the endpoint and returning minimal text data back to the client, which indicates full round trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling.
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served. Another test - JSON Data Service Results The second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since that doesn't make a ton of sense to return data from the them (OTOH, returning text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this: public class Person { public Person() { Id = 10; Name = "Rick"; Entered = DateTime.Now; } public int Id { get; set; } public string Name { get; set; } public DateTime Entered { get; set; } } Here are the updated handler classes that use Person: Handler public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { var action = context.Request.QueryString["action"]; if (action == "json") JsonRequest(context); else TextRequest(context); } public void TextRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public void JsonRequest(HttpContext context) { var json = JsonConvert.SerializeObject(new Person(), Formatting.None); context.Response.ContentType = "application/json"; context.Response.Write(json); } public bool IsReusable { get { return true; } } } This code adds a little logic to check for a action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response. WCF REST   [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } [OperationContract] [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)] public Person HelloWorldJson() { // Add your operation implementation here return new Person(); } } For WCF REST all I have to do is add a method with the Person result type.   ASP.NET MVC public class MvcPerformanceController : Controller { // // GET: /MvcPerformance/ public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } public JsonResult HelloWorldJson() { return Json(new Person(), JsonRequestBehavior.AllowGet); } } For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer. ASP.NET WebAPI public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. 
Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } [HttpGet] public Person HelloWorldJson() { return new Person(); } [HttpGet] public HttpResponseMessage HelloWorldJson2() { var response = new HttpResponseMessage(HttpStatusCode.OK); response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter); return response; } } Testing and Results To run these data requests I used the following ab.exe commands:REM JSON RESPONSES ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results:   The performance for each technology drops a little bit except for WebAPI which is up quite a bit! From this test it appears that WebAPI is actually significantly better performing returning a JSON response, rather than a plain string response. Snag with Apache Benchmark and 'Length Failures' I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows performance improved significantly from with JSON results from 5580 to 6530 or so which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Check out this report: Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached and runiisning the ab.exe test by using the -X switch: ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson which showed that indeed all requests where returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example: these two results: {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"} {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"} are different in length for the number which results in 68 and 69 bytes respectively. The same URL produces different result lengths which is what ab.exe reports. I didn't notice at first bit the same is happening when running the ASHX handler with JSON.NET result since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though. Another interesting Side Note: Perf drops over Time As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabalize somewhere around 5900 req/sec occasionally jumping lower. 
For these tests this is why I did the IISRESET and warm-up for individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. In subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - and I'm curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…
Summary
As I stated at the outset, these tests were informal, done to satisfy my curiosity rather than to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the highest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it's also about the suitability for the job and the ease of implementation. The strengths of each technology will make up for any minor performance difference we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Often old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great. To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my tweets late last week regarding WebAPI's slow responses: those were with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized code? This stresses another point: Don't take a single test as the final gospel and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediate that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…
Resources
Source Code on GitHub
Apache HTTP Server Project (ab.exe is part of the binary distribution)
© Rick Strahl, West Wind Technologies, 2005-2012

    Read the article

  • Metro: Using Templates

    - by Stephen.Walther
    The goal of this blog post is to describe how templates work in the WinJS library. In particular, you learn how to use a template to display both a single item and an array of items. You also learn how to load a template from an external file. Why use Templates? Imagine that you want to display a list of products in a page. The following code is bad: var products = [ { name: "Tesla", price: 80000 }, { name: "VW Rabbit", price: 200 }, { name: "BMW", price: 60000 } ]; var productsHTML = ""; for (var i = 0; i < products.length; i++) { productsHTML += "<h1>Product Details</h1>" + "<div>Product Name: " + products[i].name + "</div>" + "<div>Product Price: " + products[i].price + "</div>"; } document.getElementById("productContainer").innerHTML = productsHTML; In the code above, an array of products is displayed by creating a for..next loop which loops through each element in the array. A string which represents a list of products is built through concatenation. The code above is a designer’s nightmare. You cannot modify the appearance of the list of products without modifying the JavaScript code. A much better approach is to use a template like this: <div id="productTemplate"> <h1>Product Details</h1> <div> Product Name: <span data-win-bind="innerText:name"></span> </div> <div> Product Price: <span data-win-bind="innerText:price"></span> </div> </div> A template is simply a fragment of HTML that contains placeholders. Instead of displaying a list of products by concatenating together a string, you can render a template for each product. Creating a Simple Template Let’s start by using a template to render a single product. The following HTML page contains a template and a placeholder for rendering the template: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <!-- Product Template --> <div id="productTemplate"> <h1>Product Details</h1> <div> Product Name: <span data-win-bind="innerText:name"></span> </div> <div> Product Price: <span data-win-bind="innerText:price"></span> </div> </div> <!-- Place where Product Template is Rendered --> <div id="productContainer"></div> </body> </html> In the page above, the template is defined in a DIV element with the id productTemplate. The contents of the productTemplate are not displayed when the page is opened in the browser. The contents of a template are automatically hidden when you convert the productTemplate into a template in your JavaScript code. Notice that the template uses data-win-bind attributes to display the product name and price properties. You can use both data-win-bind and data-win-bindsource attributes within a template. To learn more about these attributes, see my earlier blog post on WinJS data binding: http://stephenwalther.com/blog/archive/2012/02/26/windows-web-applications-declarative-data-binding.aspx The page above also includes a DIV element named productContainer. The rendered template is added to this element. 
Here’s the code for the default.js script which creates and renders the template: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var product = { name: "Tesla", price: 80000 }; var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate")); productTemplate.render(product, document.getElementById("productContainer")); } }; app.start(); })(); In the code above, a single product object is created with the following line of code: var product = { name: "Tesla", price: 80000 }; Next, the productTemplate element from the page is converted into an actual WinJS template with the following line of code: var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate")); The template is rendered to the templateContainer element with the following line of code: productTemplate.render(product, document.getElementById("productContainer")); The result of this work is that the product details are displayed: Notice that you do not need to call WinJS.Binding.processAll(). The Template render() method takes care of the binding for you. Displaying an Array in a Template If you want to display an array of products using a template then you simply need to create a for..next loop and iterate through the array calling the Template render() method for each element. (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var products = [ { name: "Tesla", price: 80000 }, { name: "VW Rabbit", price: 200 }, { name: "BMW", price: 60000 } ]; var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate")); var productContainer = document.getElementById("productContainer"); var i, product; for (i = 0; i < products.length; i++) { product = products[i]; productTemplate.render(product, productContainer); } } }; app.start(); })(); After each product in the array is rendered with the template, the result is appended to the productContainer element. No changes need to be made to the HTML page discussed in the previous section to display an array of products instead of a single product. The same product template can be used in both scenarios. Rendering an HTML TABLE with a Template When using the WinJS library, you create a template by creating an HTML element in your page. One drawback to this approach of creating templates is that your templates are part of your HTML page. In order for your HTML page to validate, the HTML within your templates must also validate. This means, for example, that you cannot enclose a single HTML table row within a template. The following HTML is invalid because you cannot place a TR element directly within the body of an HTML document:   <!-- Product Template --> <tr> <td data-win-bind="innerText:name"></td> <td data-win-bind="innerText:price"></td> </tr> This template won’t validate because, in a valid HTML5 document, a TR element must appear within a THEAD or TBODY element. Instead, you must create the entire TABLE element in the template. 
The following HTML page illustrates how you can create a template which contains a TR element: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <!-- Product Template --> <div id="productTemplate"> <table> <tbody> <tr> <td data-win-bind="innerText:name"></td> <td data-win-bind="innerText:price"></td> </tr> </tbody> </table> </div> <!-- Place where Product Template is Rendered --> <table> <thead> <tr> <th>Name</th><th>Price</th> </tr> </thead> <tbody id="productContainer"> </tbody> </table> </body> </html>   In the HTML page above, the product template includes TABLE and TBODY elements: <!-- Product Template --> <div id="productTemplate"> <table> <tbody> <tr> <td data-win-bind="innerText:name"></td> <td data-win-bind="innerText:price"></td> </tr> </tbody> </table> </div> We discard these elements when we render the template. The only reason that we include the TABLE and THEAD elements in the template is to make the HTML page validate as valid HTML5 markup. Notice that the productContainer (the target of the template) in the page above is a TBODY element. We want to add the rows rendered by the template to the TBODY element in the page. The productTemplate is rendered in the default.js file: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var products = [ { name: "Tesla", price: 80000 }, { name: "VW Rabbit", price: 200 }, { name: "BMW", price: 60000 } ]; var productTemplate = new WinJS.Binding.Template(document.getElementById("productTemplate")); var productContainer = document.getElementById("productContainer"); var i, product, row; for (i = 0; i < products.length; i++) { product = products[i]; productTemplate.render(product).then(function (result) { row = WinJS.Utilities.query("tr", result).get(0); productContainer.appendChild(row); }); } } }; app.start(); })(); When the product template is rendered, the TR element is extracted from the rendered template by using the WinJS.Utilities.query() method. Next, only the TR element is added to the productContainer: productTemplate.render(product).then(function (result) { row = WinJS.Utilities.query("tr", result).get(0); productContainer.appendChild(row); }); I discuss the WinJS.Utilities.query() method in depth in a previous blog entry: http://stephenwalther.com/blog/archive/2012/02/23/windows-web-applications-query-selectors.aspx When everything gets rendered, the products are displayed in an HTML table: You can see the actual HTML rendered by looking at the Visual Studio DOM Explorer window:   Loading an External Template Instead of embedding a template in an HTML page, you can place your template in an external HTML file. It makes sense to create a template in an external file when you need to use the same template in multiple pages. For example, you might need to use the same product template in multiple pages in your application. The following HTML page does not contain a template. 
It only contains a container that will act as a target for the rendered template: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <!-- Place where Product Template is Rendered --> <div id="productContainer"></div> </body> </html> The template is contained in a separate file located at the path /templates/productTemplate.html:   Here’s the contents of the productTemplate.html file: <!-- Product Template --> <div id="productTemplate"> <h1>Product Details</h1> <div> Product Name: <span data-win-bind="innerText:name"></span> </div> <div> Product Price: <span data-win-bind="innerText:price"></span> </div> </div> Notice that the template file only contains the template and not the standard opening and closing HTML elements. It is an HTML fragment. If you prefer, you can include all of the standard opening and closing HTML elements in your external template – these elements get stripped away automatically: <html> <head><title>product template</title></head> <body> <!-- Product Template --> <div id="productTemplate"> <h1>Product Details</h1> <div> Product Name: <span data-win-bind="innerText:name"></span> </div> <div> Product Price: <span data-win-bind="innerText:price"></span> </div> </div> </body> </html> Either approach – using a fragment or using a full HTML document  — works fine. Finally, the following default.js file loads the external template, renders the template for each product, and appends the result to the product container: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var products = [ { name: "Tesla", price: 80000 }, { name: "VW Rabbit", price: 200 }, { name: "BMW", price: 60000 } ]; var productTemplate = new WinJS.Binding.Template(null, { href: "/templates/productTemplate.html" }); var productContainer = document.getElementById("productContainer"); var i, product, row; for (i = 0; i < products.length; i++) { product = products[i]; productTemplate.render(product, productContainer); } } }; app.start(); })(); The path to the external template is passed to the constructor for the Template class as one of the options: var productTemplate = new WinJS.Binding.Template(null, {href:"/templates/productTemplate.html"}); When a template is contained in a page then you use the first parameter of the WinJS.Binding.Template constructor to represent the template – instead of null, you pass the element which contains the template. When a template is located in an external file, you pass the href for the file as part of the second parameter for the WinJS.Binding.Template constructor. Summary The goal of this blog entry was to describe how you can use WinJS templates to render either a single item or an array of items to a page. We also explored two advanced topics. You learned how to render an HTML table by extracting the TR element from a template. You also learned how to place a template in an external file.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes. 
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust. 
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level. 
Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax. 
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away. 
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of existing plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would made merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one. 
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions were discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies: IIS Developer Express IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set providing much better compatibility between development and live deployment scenarios. SQL Server Compact 4.0 Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new-it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. 
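To make the drop-in model concrete, here is a minimal sketch (not from the article) of querying a local SQL Server Compact 4.0 database file with plain ADO.NET. The file name Products.sdf and the table and column names are invented for illustration; the code assumes the System.Data.SqlServerCe assembly is referenced, for example from the application's bin folder as described above.

using System;
using System.Data.SqlServerCe;

class SqlCeSketch
{
    static void Main()
    {
        // Hypothetical file-based database deployed alongside the application.
        const string connectionString = @"Data Source=|DataDirectory|\Products.sdf";

        using (var connection = new SqlCeConnection(connectionString))
        {
            connection.Open();

            // Plain SQL against the local .sdf file -- no server installation involved.
            using (var command = new SqlCeCommand(
                "SELECT Name, Price FROM Products WHERE Price > @minPrice", connection))
            {
                command.Parameters.AddWithValue("@minPrice", 100);

                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: {1}", reader["Name"], reader["Price"]);
                    }
                }
            }
        }
    }
}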
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature. ASP.NET Razor View Engine What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like along with some comment annotations: <!DOCTYPE html> <html>     <head>         <title></title>     </head>     <body>     <h1>Razor Test</h1>          <!-- simple expressions -->     @DateTime.Now     <hr />     <!-- method expressions -->     @DateTime.Now.ToString("T")          <!-- code blocks -->     @{         List<string> names = new List<string>();         names.Add("Rick");         names.Add("Markus");         names.Add("Claudio");         names.Add("Kevin");     }          <!-- structured block statements -->     <ul>     @foreach(string name in names){             <li>@name</li>     }     </ul>           <!-- Conditional code -->        @if(true) {                        <!-- Literal Text embedding in code -->        <text>         true        </text>;    }    else    {        <!-- Literal Text embedding in code -->       <text>       false       </text>;    }    </body> </html> Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: -Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC. It’s not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content. 
The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>) and also easier to understand aesthetically what’s happening in the markup code. Razor also is a better fit for Microsoft’s vision of ASP.NET MVC: It’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script or meta driven output generators for many types of applications from code/screen generators, to simple form letters to data merging applications with user customizability. For me personally this is very useful side effect and who knows maybe Microsoft will actually standardize they’re scripting engines (die T4 die!) on this engine. Razor also better fits the “view-based” approach where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor will become the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages. WebMatrix Development Environment To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via a custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP Editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with Wizards taking you through installation of tools, dependencies, and configuration of the database as needed. 
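If it helps to picture the compilation model described above, the following is a rough conceptual sketch, not the actual generated source (the real generated class derives from Microsoft.WebPages.WebPage), of how a Razor page's literal markup and @-expressions end up as plain method calls that write to the response stream:

using System;
using System.IO;

// Conceptual stand-in for a compiled Razor page: literal markup becomes
// literal writes, @-expressions become writes of the evaluated value.
class CompiledPageSketch
{
    public void Execute(TextWriter response)
    {
        response.Write("<h1>Razor Test</h1>");       // literal markup
        response.Write(DateTime.Now);                // @DateTime.Now
        response.Write("<hr />");                    // literal markup
        response.Write(DateTime.Now.ToString("T"));  // @DateTime.Now.ToString("T")
    }
}

class Program
{
    static void Main()
    {
        new CompiledPageSketch().Execute(Console.Out);
    }
}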
WebMatrix leverages the Web Platform installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool also can handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step. Simplified Database Access The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods: @{       var db = Database.OpenFile("FirstApp.sdf");     string sql = "select * from customers where Id > @0"; } <ul> @foreach(var row in db.Query(sql,1)){         <li>@row.FirstName @row.LastName</li> } </ul> The query function takes a SQL statement plus any number of positional (@0,@1 etc.) SQL parameters by simple values. The result is returned as a collection of rows which in turn have a row object with dynamic properties for each of the columns giving easy (though untyped) access to each of the fields. Likewise Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non .NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why was something not included from the beginning in .NET and Microsoft made developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax that uses simple, static, data object methods to perform simple data tasks with one line of code are common in scripting languages and are a good match for folks working in PHP/Python, etc. Seems like Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies as required. The good the bad the obnoxious - It’s still .NET The beauty (or curse depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder. 
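Because the code in a page is still ordinary .NET, nothing stops you from bypassing the Database helper entirely. The following is a minimal standalone sketch of the same kind of query written with plain ADO.NET against SQL Server; the connection string and the Customers table with Id, FirstName and LastName columns are assumptions chosen to mirror the helper example above.

using System;
using System.Data.SqlClient;

class AdoNetSketch
{
    static void Main()
    {
        // Invented connection string -- point it at whatever SQL Server instance you use.
        const string connectionString =
            @"Data Source=.\SQLEXPRESS;Initial Catalog=FirstApp;Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Parameterized equivalent of: select * from customers where Id > @0
            using (var command = new SqlCommand(
                "SELECT FirstName, LastName FROM Customers WHERE Id > @id", connection))
            {
                command.Parameters.AddWithValue("@id", 1);

                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0} {1}", reader["FirstName"], reader["LastName"]);
                    }
                }
            }
        }
    }
}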
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML Helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy-there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference Who needs WebMatrix? Uhm… good Question Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other simpler platforms like PHP or Python which are catering to the down and dirty developer. Microsoft will be hard pressed to win those folks-and other hardcore PHP developers-back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC style helpers Microsoft provides are a good step in the right direction, but I suspect it’s not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers is trying to make .NET more accessible but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you still are going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer the learning curve isn’t made any easier by WebMatrix it’s just been shifted a tad bit further along in your development endeavor when you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting and actually I’ve heard a lot of discussion from various developers who’ve been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than necessarily following the latest and greatest practices which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life changing issues of development, rather than the bread and butter scenarios that many developers are interested in to get their work accomplished. 
And that in the end may be WebMatrix’s main raison d'être: To bring some focus back at Microsoft that simpler and more high level solutions are actually needed to appeal to the non-high end developers as well as providing the necessary tools for the high end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware with required functionality missing especially debugging and Intellisense, but also general editor support. It’s not clear whether these are because the product is still in an early alpha release or whether it’s simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support and WebMatrix just has that left out feeling to it. If anything WebMatrix’s technology pieces (which are really independent of the WebMatrix product) are what are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios and SQL Compact 4.0 seems to address a lot of concerns that people have had and have complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology though seems to be the Razor view engine for its light weight implementation and it’s decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC which can now have a cleaner view model that isn’t inconsistent due to the baggage of non-implemented WebForms features that don’t work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t give you anything that you want. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later there’s little reason to look at WebMatrix for tooling. It’s good to see that Microsoft is giving some thought about the ease of use of .NET as a platform For so many years, we’ve been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step - or several steps - back and take another look and realize just how far we’ve come. 
WebMatrix is one of those reminders and one that likely will result in some positive changes on the platform as a whole. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET   IIS7  

    Read the article

  • Oracle 12cR1: "What-If" Evaluation of a crsctl Command with Oracle Clusterware

    - by grantunez-Oracle
    In its new 12cR1 release, Oracle added a small new feature to Oracle Clusterware, but being small does not mean it is not very useful. In earlier versions, if we wanted to know what would happen when running a command with the crsctl tool, we had to try it in a test environment, because if we did not know exactly what the command did, running it against production was very risky. Oracle Clusterware 12cR1 introduces "What-If" command evaluation in that same tool, crsctl eval, which lets us see what would happen if a command were executed, without actually executing it. First, let's look at which resources are currently up:  [oracle@oel6-112-rac1 ~]$ crsctl stat res -t--------------------------------------------------------------------------------Name           Target  State        Server                   State details       --------------------------------------------------------------------------------Local Resources--------------------------------------------------------------------------------ora.ASMNET1LSNR_ASM.lsnr               ONLINE  ONLINE       oel6-112-rac1            STABLE               ONLINE  ONLINE       oel6-112-rac2            STABLEora.DATA.dg               ONLINE  ONLINE       oel6-112-rac1            STABLE               ONLINE  ONLINE       oel6-112-rac2            STABLEora.LISTENER.lsnr               ONLINE  ONLINE       oel6-112-rac1            STABLE               ONLINE  ONLINE       oel6-112-rac2            STABLEora.net1.network               ONLINE  ONLINE       oel6-112-rac1            STABLE               ONLINE  ONLINE       oel6-112-rac2            STABLEora.ons               ONLINE  ONLINE       oel6-112-rac1            STABLE               ONLINE  ONLINE       oel6-112-rac2            STABLEora.proxy_advm               ONLINE  ONLINE       oel6-112-rac1            STABLE               ONLINE  OFFLINE      oel6-112-rac2            CLEANINGora.LISTENER_SCAN1.lsnr      1        ONLINE  ONLINE       oel6-112-rac2            STABLEora.LISTENER_SCAN2.lsnr      1        ONLINE  ONLINE       oel6-112-rac1            STABLEora.LISTENER_SCAN3.lsnr      1        ONLINE  ONLINE       oel6-112-rac1            STABLEora.MGMTLSNR      1        ONLINE  ONLINE       oel6-112-rac1            169.254.247.50 192.168.1.111,STABLEora.asm      1        ONLINE  ONLINE       oel6-112-rac1            STABLE      2        ONLINE  ONLINE       oel6-112-rac2            STABLE      3        OFFLINE OFFLINE                               STABLEora.cvu      1        ONLINE  ONLINE       oel6-112-rac1            STABLEora.gns      1        ONLINE  ONLINE       oel6-112-rac1            STABLEora.gns.vip      1        ONLINE  ONLINE       oel6-112-rac1            STABLEora.mgmtdb      1        ONLINE  ONLINE       oel6-112-rac1            Open,STABLEora.oc4j      1        ONLINE  ONLINE       oel6-112-rac1            STABLEora.oel6-112-rac1.vip      1        ONLINE  ONLINE       oel6-112-rac1            STABLEora.oel6-112-rac2.vip      1        ONLINE  ONLINE       oel6-112-rac2            STABLEora.orcl.db      1        OFFLINE OFFLINE      oel6-112-rac2            Instance Shutdown,STABLE       2        ONLINE  ONLINE       oel6-112-rac1            Open,STABLEora.scan1.vip      1        ONLINE  ONLINE       oel6-112-rac2            STABLEora.scan2.vip      1        ONLINE  ONLINE       oel6-112-rac1            STABLEora.scan3.vip      1        ONLINE  ONLINE       oel6-112-rac1            STABLE
Now let's evaluate what would happen if, for example, the ASM resource were to fail on our node:  [oracle@oel6-112-rac1 ~]$ crsctl eval fail resource ora.asm Stage Group 1: -------------------------------------------------------------------------------- Stage Number Required Action --------------------------------------------------------------------------------      1    N Create new group (Stage Group = 2)    Y Resource 'ora.asm' (1/1) will be in state [ONLINE|INTERMEDIATE] on server [oel6-112-rac1]    Y Resource 'ora.asm' (2/1) will be in state [ONLINE|INTERMEDIATE] on server [oel6-112-rac2] -------------------------------------------------------------------------------- Stage Group 2: -------------------------------------------------------------------------------- Stage Number Required Action --------------------------------------------------------------------------------      1    N Resource 'ora.proxy_advm' (oel6-112-rac2) will be in state [ONLINE|INTERMEDIATE] on server [oel6-112-rac2] --------------------------------------------------------------------------------  As we will see next, things are different if we decide to stop the resource; in this case we have to force it, because this resource cannot be stopped without the "-f" option:  [oracle@oel6-112-rac1 ~]$ crsctl eval stop resource ora.asm Stage Group 1: -------------------------------------------------------------------------------- Stage Number Required Action --------------------------------------------------------------------------------      1    N Error code [222] for entity [ora.asm]. Message is [CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.DATA.dg', but the force option was not specified]. -------------------------------------------------------------------------------- [oracle@oel6-112-rac1 ~]$ crsctl eval stop resource ora.asm -f Stage Group 1: -------------------------------------------------------------------------------- Stage Number Required Action --------------------------------------------------------------------------------      1    Y Resource 'ora.DATA.dg' (oel6-112-rac1) will be in state [OFFLINE]    Y Resource 'ora.DATA.dg' (oel6-112-rac2) will be in state [OFFLINE]    Y Resource 'ora.orcl.db' (2/1) will be in state [OFFLINE]    Y Resource 'ora.proxy_advm' (oel6-112-rac1) will be in state [OFFLINE]      2    Y Resource 'ora.asm' (1/1) will be in state [OFFLINE]    Y Resource 'ora.asm' (2/1) will be in state [OFFLINE] --------------------------------------------------------------------------------  As you can see, this is a small new feature, but it is quite useful for evaluating any of your crsctl commands without impacting any of your resources, letting you gauge the effect a command will have before you actually run it. You can find more information at: Using the eval command

    Read the article

  • How to shoot yourself in the foot (DO NOT Read in the office)

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/21/how-to-shoot-yourself-in-the-foot-do-not-read.aspxLet me make it absolutely clear - the following is:merely collated by your Geek from http://www.codeproject.com/Lounge.aspx?msg=3917012#xx3917012xxvery, very very funny so you read it in the presence of others at your own riskso here is the list - you have been warned!C You shoot yourself in the foot.   C++ You accidently create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical assistance is impossible since you can't tell which are bitwise copies and which are just pointing at others and saying "That's me, over there."   FORTRAN You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue anyway because you have no exception-handling facility.   Modula-2 After realizing that you can't actually accomplish anything in this language, you shoot yourself in the head.   COBOL USEing a COLT 45 HANDGUN, AIM gun at LEG.FOOT, THEN place ARM.HAND.FINGER on HANDGUN.TRIGGER and SQUEEZE. THEN return HANDGUN to HOLSTER. CHECK whether shoelace needs to be retied.   Lisp You shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds...   BASIC Shoot yourself in the foot with a water pistol. On big systems, continue until entire lower body is waterlogged.   Forth Foot yourself in the shoot.   APL You shoot yourself in the foot; then spend all day figuring out how to do it in fewer characters.   Pascal The compiler won't let you shoot yourself in the foot.   Snobol If you succeed, shoot yourself in the left foot. If you fail, shoot yourself in the right foot.   HyperTalk Put the first bullet of the gun into foot left of leg of you. Answer the result.   Prolog You tell your program you want to be shot in the foot. The program figures out how to do it, but the syntax doesn't allow it to explain.   370 JCL You send your foot down to MIS with a 4000-page document explaining how you want it to be shot. Three years later, your foot comes back deep-fried.   FORTRAN-77 You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue anyway because you still can't do exception-processing.   Modula-2 (alternative) You perform a shooting on what might be currently a foot with what might be currently a bullet shot by what might currently be a gun.   BASIC (compiled) You shoot yourself in the foot with a BB using a SCUD missile launcher.   Visual Basic You'll really only appear to have shot yourself in the foot, but you'll have so much fun doing it that you won't care.   Forth (alternative) BULLET DUP3 * GUN LOAD FOOT AIM TRIGGER PULL BANG! EMIT DEAD IF DROP ROT THEN (This takes about five bytes of memory, executes in two to ten clock cycles on any processor and can be used to replace any existing function of the language as well as in any future words). (Welcome to bottom up programming - where you, too, can perform compiler pre-processing instead of writing code)   APL (alternative) You hear a gunshot and there's a hole in your foot, but you don't remember enough linear algebra to understand what happened. or @#&^$%&%^ foot   Pascal (alternative) Same as Modula-2 except that the bullet is not the right type for the gun and your hand is blown off.   
Snobol (alternative) You grab your foot with your hand, then rewrite your hand to be a bullet. The act of shooting the original foot then changes your hand/bullet into yet another foot (a left foot).   Prolog (alternative) You attempt to shoot yourself in the foot, but the bullet, failing to find its mark, backtracks to the gun, which then explodes in your face.   COMAL You attempt to shoot yourself in the foot with a water pistol, but the bore is clogged, and the pressure build-up blows apart both the pistol and your hand. or draw_pistol aim_at_foot(left) pull_trigger hop(swearing)   Scheme As Lisp, but none of the other appendages are aware of this happening.   Algol You shoot yourself in the foot with a musket. The musket is aesthetically fascinating and the wound baffles the adolescent medic in the emergency room.   Ada If you are dumb enough to actually use this language, the United States Department of Defense will kidnap you, stand you up in front of a firing squad and tell the soldiers, "Shoot at the feet." or The Department of Defense shoots you in the foot after offering you a blindfold and a last cigarette. or After correctly packaging your foot, you attempt to concurrently load the gun, pull the trigger, scream and shoot yourself in the foot. When you try, however, you discover that your foot is of the wrong type. or After correctly packing your foot, you attempt to concurrently load the gun, pull the trigger, scream, and confidently aim at your foot knowing it is safe. However the cordite in the round does an Unchecked Conversion, fires and shoots you in the foot anyway.   Eiffel   You create a GUN object, two FOOT objects and a BULLET object. The GUN passes both the FOOT objects a reference to the BULLET. The FOOT objects increment their hole counts and forget about the BULLET. A little demon then drives a garbage truck over your feet and grabs the bullet (both of it) on the way. Smalltalk You spend so much time playing with the graphics and windowing system that your boss shoots you in the foot, takes away your workstation and makes you develop in COBOL on a character terminal. or You send the message shoot to gun, with selectors bullet and myFoot. A window pops up saying Gunpowder doesNotUnderstand: spark. After several fruitless hours spent browsing the methods for Trigger, FiringPin and IdealGas, you take the easy way out and create ShotFoot, a subclass of Foot with an additional instance variable bulletHole. Object Oriented Pascal You perform a shooting on what might currently be a foot with what might currently be a bullet fired from what might currently be a gun.   PL/I You consume all available system resources, including all the offline bullets. The Data Processing & Payroll Department doubles its size, triples its budget, acquires four new mainframes and drops the original one on your foot. Postscript foot bullets 6 locate loadgun aim gun shoot showpage or It takes the bullet ten minutes to travel from the gun to your foot, by which time you're long since gone out to lunch. The text comes out great, though.   PERL You stab yourself in the foot repeatedly with an incredibly large and very heavy Swiss Army knife. or You pick up the gun and begin to load it. The gun and your foot begin to grow to huge proportions and the world around you slows down, until the gun fires. It makes a tiny hole, which you don't feel. Assembly Language You crash the OS and overwrite the root disk. The system administrator arrives and shoots you in the foot. 
After a moment of contemplation, the administrator shoots himself in the foot and then hops around the room rabidly shooting at everyone in sight. or You try to shoot yourself in the foot only to discover you must first reinvent the gun, the bullet, and your foot.or The bullet travels to your foot instantly, but it took you three weeks to load the round and aim the gun.   BCPL You shoot yourself somewhere in the leg -- you can't get any finer resolution than that. Concurrent Euclid You shoot yourself in somebody else's foot.   Motif You spend days writing a UIL description of your foot, the trajectory, the bullet and the intricate scrollwork on the ivory handles of the gun. When you finally get around to pulling the trigger, the gun jams.   Powerbuilder While attempting to load the gun you discover that the LoadGun system function is buggy; as a work around you tape the bullet to the outside of the gun and unsuccessfully attempt to fire it with a nail. In frustration you club your foot with the butt of the gun and explain to your client that this approximates the functionality of shooting yourself in the foot and that the next version of Powerbuilder will fix it.   Standard ML By the time you get your code to typecheck, you're using a shoot to foot yourself in the gun.   MUMPS You shoot 583149 AK-47 teflon-tipped, hollow-point, armour-piercing bullets into even-numbered toes on odd-numbered feet of everyone in the building -- with one line of code. Three weeks later you shoot yourself in the head rather than try to modify that line.   Java You locate the Gun class, but discover that the Bullet class is abstract, so you extend it and write the missing part of the implementation. Then you implement the ShootAble interface for your foot, and recompile the Foot class. The interface lets the bullet call the doDamage method on the Foot, so the Foot can damage itself in the most effective way. Now you run the program, and call the doShoot method on the instance of the Gun class. First the Gun creates an instance of Bullet, which calls the doFire method on the Gun. The Gun calls the hit(Bullet) method on the Foot, and the instance of Bullet is passed to the Foot. But this causes an IllegalHitByBullet exception to be thrown, and you die.   Unix You shoot yourself in the foot or % ls foot.c foot.h foot.o toe.c toe.o % rm * .o rm: .o: No such file or directory % ls %   370 JCL (alternative) You shoot yourself in the head just thinking about it.   DOS JCL You first find the building you're in in the phone book, then find your office number in the corporate phone book. Then you have to write this down, then describe, in cubits, your exact location, in relation to the door (right hand side thereof). Then you need to write down the location of the gun (loading it is a proprietary utility), then you load it, and the COBOL program, and run them, and, with luck, it may be run tonight.   VMS   $ MOUNT/DENSITY=.45/LABEL=BULLET/MESSAGE="BYE" BULLET::BULLET$GUN SYS$BULLET $ SET GUN/LOAD/SAFETY=OFF/SIGHT=NONE/HAND=LEFT/CHAMBER=1/ACTION=AUTOMATIC/ LOG/ALL/FULL SYS$GUN_3$DUA3:[000000]GUN.GNU $ SHOOT/LOG/AUTO SYS$GUN SYS$SYSTEM:[FOOT]FOOT.FOOT   %DCL-W-ACTIMAGE, error activating image GUN -CLI-E-IMGNAME, image file $3$DUA240:[GUN]GUN.EXE;1 -IMGACT-F-NOTNATIVE, image is not an OpenVMS Alpha AXP image or %SYS-F-FTSHT, foot shot (fifty lines of traceback omitted) sh,csh, etc You can't remember the syntax for anything, so you spend five hours reading manual pages, then your foot falls asleep. 
You shoot the computer and switch to C.   Apple System 7 Double click the gun icon and a window giving a selection for guns, target areas, plus balloon help with medical remedies, and assorted sound effects. Click "shoot" button and a small bomb appears with note "Error of Type 1 has occurred."   Windows 3.1 Double click the gun icon and wait. Eventually a window opens giving a selection for guns, target areas, plus balloon help with medical remedies, and assorted sound effects. Click "shoot" button and a small box appears with note "Unable to open Shoot.dll, check that path is correct."   Windows 95 Your gun is not compatible with this OS and you must buy an upgrade and install it before you can continue. Then you will be informed that you don't have enough memory.   CP/M I remember when shooting yourself in the foot with a BB gun was a big deal.   DOS You finally found the gun, but can't locate the file with the foot for the life of you.   MSDOS You shoot yourself in the foot, but can unshoot yourself with add-on software.   Access You try to point the gun at your foot, but it shoots holes in all your Borland distribution diskettes instead.   Paradox Not only can you shoot yourself in the foot, your users can too.   dBase You squeeze the trigger, but the bullet moves so slowly that by the time your foot feels the pain, you've forgotten why you shot yourself anyway. or You buy a gun. Bullets are only available from another company and are promised to work so you buy them. Then you find out that the next version of the gun is the one scheduled to actually shoot bullets.   DBase IV, V1.0 You pull the trigger, but it turns out that the gun was a poorly designed hand grenade and the whole building blows up.   SQL You cut your foot off, send it out to a service bureau and when it returns, it has a hole in it but will no longer fit the attachment at the end of your leg. or Insert into Foot Select Bullet >From Gun.Hand Where Chamber = 'LOADED' And Trigger = 'PULLED'   Clipper You grab a bullet, get ready to insert it in the gun so that you can shoot yourself in the foot and discover that the gun that the bullets fits has not yet been built, but should be arriving in the mail _REAL_SOON_NOW_. Oracle The menus for coding foot_shooting have not been implemented yet and you can't do foot shooting in SQL.   English You put your foot in your mouth, then bite it off. (For those who don't know, English is a McDonnell Douglas/PICK query language which allegedly requires 110% of system resources to run happily.) Revelation [an implementation of the PICK Operating System] You'll be able to shoot yourself in the foot just as soon as you figure out what all these bullets are for.   FlagShip Starting at the top of your head, you aim the gun at yourself repeatedly until, half an hour later, the gun is finally pointing at your foot and you pull the trigger. A new foot with a hole in it appears but you can't work out how to get rid of the old one and your gun doesn't work anymore.   FidoNet You put your foot in your mouth, then echo it internationally.   PicoSpan [a UNIX-based computer conferencing system] You can't shoot yourself in the foot because you're not a host. or (host variation) Whenever you shoot yourself in the foot, someone opens a topic in policy about it.   Internet You put your foot in your mouth, shoot it, then spam the bullet so that everybody gets shot in the foot.   troff rmtroff -ms -Hdrwp | lpr -Pwp2 & .*place bullet in footer .B .NR FT +3i .in 4 .bu Shoot! 
.br .sp .in -4 .br .bp NR HD -2i .*   Genetic Algorithms You create 10,000 strings describing the best way to shoot yourself in the foot. By the time the program produces the optimal solution, humans have evolved wings and the problem is moot.   CSP (Communicating Sequential Processes) You only fail to shoot everything that isn't your foot.   MS-SQL Server MS-SQL Server’s gun comes pre-loaded with an unlimited supply of Teflon coated bullets, and it only has two discernible features: the muzzle and the trigger. If that wasn't enough, MS-SQL Server also puts the gun in your hand, applies local anesthetic to the skin of your forefinger and stitches it to the gun's trigger. Meanwhile, another process has set up a spinal block to numb your lower body. It will then proceeded to surgically remove your foot, cryogenically freeze it for preservation, and attach it to the muzzle of the gun so that no matter where you aim, you will shoot your foot. In order to avoid shooting yourself in the foot, you need to unstitch your trigger finger, remove your foot from the muzzle of the gun, and have it surgically reattached. Then you probably want to get some crutches and go out to buy a book on SQL Server Performance Tuning.   Sybase Sybase's gun requires assembly, and you need to go out and purchase your own clip and bullets to load the gun. Assembly is complicated by the fact that Sybase has hidden the gun behind a big stack of reference manuals, but it hasn't told you where that stack is. While you were off finding the gun, assembling it, buying bullets, etc., Sybase was also busy surgically removing your foot and cryogenically freezing it for preservation. Instead of attaching it to the muzzle of the gun, though, it packed your foot on dry ice and sent it UPS-Ground to an unnamed hookah bar somewhere in the middle east. In order to shoot your foot, you must modify your gun with a GPS system for targeting and hire some guy named "Indy" to find the hookah bar and wire the coordinates back to you. By this time, you've probably become so daunted at the tasks stand between you and shooting your foot that you hire a guy who's read all the books on Sybase to help you shoot your foot. If you're lucky, he'll be smart enough both to find your foot and to stop you from shooting it.   Magic software You spend 1 week looking up the correct syntax for GUN. When you find it, you realise that GUN will not let you shoot in your own foot. It will allow you to shoot almost anything but your foot. You then decide to build your own gun. You can't use the standard barrel since this will only allow for standard bullets, which will not fire if the barrel is pointed at your foot. After four weeks, you have created your own custom gun. It blows up in your hand without warning, because you failed to initialise the safety catch and it doesn't know whether the initial state is "0", 0, NULL, "ZERO", 0.0, 0,0, "0.0", or "0,00". You fix the problem with your remaining hand by nesting 12 safety catches, and then decide to build the gun without safety catch. You then shoot the management and retire to a happy life where you code in languages that will allow you to shoot your foot in under 10 days.FirefoxLets you shoot yourself in as many feet as you'd like, while using multiple great addons! IEA moving target in terms of standard ammunition size and doesn't always work properly with non-Microsoft ammunition, so sometimes you shoot something other than your foot. However, it's the corporate world's standard foot-shooting apparatus. 
Hackers seem to enjoy rigging websites up to trigger cascading foot-shooting failures. Windows 98 About the same as Windows 95 in terms of overall bullet capacity and triggering mechanisms. Includes updated DirectShot API. A new version was released later on to support USB guns, Windows 98 SE.WPF:You get your baseball glove and a ball and you head out to your backyard, where you throw balls to your pitchback. Then your unkempt-haired-cargo-shorts-and-sandals-with-white-socks-wearing neighbor uses XAML to sculpt your arm into a gun, the ball into a bullet and the pitchback into your foot. By now, however, only the neighbor can get it to work and he's only around from 6:30 PM - 3:30 AM. LOGO: You very carefully lay out the trajectory of the bullet. Then you start the gun, which fires very slowly. You walk precisely to the point where the bullet will travel and wait, but just before it gets to you, your class time is up and one of the other kids has already used the system to hack into Sony's PS3 network. Flash: Someone has designed a beautiful-looking gun that anyone can shoot their feet with for free. It weighs six hundred pounds. All kinds of people are shooting themselves in the feet, and sending the link to everyone else so that they can too. That is, except for the criminals, who are all stealing iOS devices that the gun won't work with.APL: Its (mostly) all greek to me. Lisp: Place ((gun in ((hand sight (foot then shoot))))) (Lots of Insipid Stupid Parentheses)Apple OS/X and iOS Once a year, Steve Jobs returns from sick leave to tell millions of unwavering fans how they will be able to shoot themselves in the foot differently this year. They retweet and blog about it ad nauseam, and wait in line to be the first to experience "shoot different".Windows ME Usually fails, even at shooting you in the foot. Yo dawg, I heard you like shooting yourself in the foot. So I put a gun in your gun, so you can shoot yourself in the foot while you shoot yourself in the foot. (Okay, I'm not especially proud of this joke.) Windows 2000 Now you really do have to log in, before you are allowed to shoot yourself in the foot.Windows XPYou thought you learned your lesson: Don't use Windows ME. Then, along came this new creature, built on top of Windows NT! So you spend the next couple days installing antivirus software, patches and service packs, just so you can get that driver to install, and then proceed to shoot yourself in the foot. Windows Vista Newer! Glossier! Shootier! Windows 7 The bullets come out a lot smoother. Active Directory Each bullet now has an attached Bullet Identifier, and can be uniquely identified. Policies can be applied to dictate fragmentation, and the gun will occasionally have a confusing delay after the trigger has been pulled. PythonYou try to use import foot; foot.shoot() only to realize that's only available in 3.0, to which you can't yet upgrade from 2.7 because of all those extension libs lacking support. Solaris Shoots best when used on SPARC hardware, but still runs the trigger GUI under Java. After weeks of learning the appropriate STOP command to prevent the trigger from automatically being pressed on boot, you think you've got it under control. Then the one time you ever use dtrace, it hits a bug that fires the gun. MySQL The feature that allows you to shoot yourself in the foot has been in development for about 6 years, and they are adding it into the next version, which is coming out REAL SOON NOW, promise! 
But you can always check it out of source control and try it yourself (just not in any environment where data integrity is important because it will probably explode.) PostgreSQLAllows you to have a smug look on your face while you shoot yourself in the foot, because those MySQL guys STILL don't have that feature. NoSQL Barrel? Who needs a barrel? Just put the bullet on your foot, and strike it with a hammer. See? It's so much simpler and more efficient that way. You can even strike multiple bullets in one swing if you swing with a good enough arc, because hammers are easy to use. Getting them to synchronize is a little difficult, though.Eclipse There are about a dozen different packages for shooting yourself in the foot, with weird interdependencies on outdated components. Once you finally navigate the morass and get one installed, you then have something to look at while you shoot yourself in the foot with that package: You can watch the screen redraw.Outlook Makes it really easy to let everyone know you shot yourself in the foot!Shooting yourself in the foot using delegates.You really need to shoot yourself in the foot but you hate firearms (you don't want any dependency on the specifics of shooting) so you delegate it to somebody else. You don't care how it is done as long is shooting your foot. You can do it asynchronously in case you know you may faint so you are called back/slapped in the face by your shooter/friend (or background worker) when everything is done.C#You prepare the gun and the bullet, carefully modeling all of the physics of a bullet traveling through a foot. Just before you're about to pull the trigger, you stumble on System.Windows.BodyParts.Foot.ShootAt(System.Windows.Firearms.IGun gun) in the extended framework, realize you just wasted the entire afternoon, and shoot yourself in the head.PHP<?phprequire("foot_safety_check.php");?><!DOCTYPE HTML><html><head> <!--Lower!--><title>Shooting me in the foot</title></head> <body> <!--LOWER!!!--><leg> <!--OK, I made this one up...--><footer><?php echo (dungSift($_SERVER['HTTP_USER_AGENT'], "ie"))?("Your foot is safe, but you might want to wear a hard hat!"):("<div class=\"shot\">BANG!</div>"); ?></footer></leg> </body> </html>


  • Ubuntu missing from the Grub menu

    - by varevarao
    Recently I've had some audio issues with Ubuntu (using precise), and in the process of trying to resolve that I ran a dist-upgrade. Everything went just fine, and the sound seemed good, until I rebooted my machine for the first time since the dist-upgrade. All I see now in the Grub menu at startup is memtest86+, another memtest variant, and Windows 7. It's not showing any of the linux kernels that Ubuntu is running on. I am attaching my bootinfoscript: Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info November 20th 2012] ============================= Boot Info Summary: =============================== => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos6)/boot/grub on this drive. sda1: __________________________________________________________________________ File system: vfat Boot sector type: Dell Utility: FAT16 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda2: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda3: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: sda4: __________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: sda5: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: According to the info in the boot sector, sda5 starts at sector 2048. Operating System: Boot files: sda6: __________________________________________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99-2.00) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda6 and looks at sector 220046240 of the same hard drive for core.img. core.img is at this location and looks for (,msdos6)/boot/grub on this drive. 
Operating System: Ubuntu 12.04.1 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sda7: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 63 273,104 273,042 de Dell Utility /dev/sda2 * 274,432 19,406,847 19,132,416 7 NTFS / exFAT / HPFS /dev/sda3 19,406,848 218,274,364 198,867,517 7 NTFS / exFAT / HPFS /dev/sda4 218,275,838 625,139,711 406,863,874 f W95 Extended (LBA) /dev/sda5 328,630,272 625,139,711 296,509,440 7 NTFS / exFAT / HPFS /dev/sda6 218,275,840 324,030,463 105,754,624 83 Linux /dev/sda7 324,032,512 328,626,175 4,593,664 82 Linux swap / Solaris "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 07DA-0512 vfat DellUtility /dev/sda2 8834146034145392 ntfs RECOVERY /dev/sda3 48E2189DE21890F4 ntfs OS /dev/sda5 BC2A44C02A447982 ntfs Varshneya /dev/sda6 34731459-4b0f-46ac-a9bf-cb360a2c947c ext4 /dev/sda7 dcb9ce9b-799a-4c65-b008-887b01775670 swap /dev/sr0 iso9660 Ubuntu 12.04.1 LTS i386 ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /rofs squashfs (ro,noatime) /dev/sda6 /mnt ext4 (rw) /dev/sr0 /cdrom iso9660 (ro,noatime) =========================== sda6/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 34731459-4b0f-46ac-a9bf-cb360a2c947c if loadfont /boot/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 34731459-4b0f-46ac-a9bf-cb360a2c947c set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; 
then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 34731459-4b0f-46ac-a9bf-cb360a2c947c linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 34731459-4b0f-46ac-a9bf-cb360a2c947c linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 8834146034145392 chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda6/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda6 during installation UUID=34731459-4b0f-46ac-a9bf-cb360a2c947c / ext4 errors=remount-ro,user_xattr 0 1 # swap was on /dev/sda7 during installation UUID=dcb9ce9b-799a-4c65-b008-887b01775670 none swap sw 0 0 -------------------------------------------------------------------------------- =================== sda6: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 104.851909637 = 112.583880704 boot/grub/core.img 1 121.191410065 = 130.128285696 boot/grub/grub.cfg 1 ======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown BootLoader on sda4 00000000 eb 0f 2a 5d f4 b7 75 f2 e9 56 12 b8 50 b4 79 ec |..*]..u..V..P.y.| 00000010 89 91 ca c3 16 40 31 d0 ae c4 53 3d c7 dd d7 98 |[email protected]=....| 00000020 bd a4 f2 a4 e8 ab fc ea 36 30 1b 34 cf 8a 28 30 |........60.4..(0| 00000030 43 95 6c 31 3e 76 93 58 84 37 99 c3 ae 3a 88 a3 |C.l1>v.X.7...:..| 00000040 c2 a6 36 2a f8 e0 e1 03 91 8d a1 50 cd ad b0 b5 |..6*.......P....| 00000050 ad 69 3a 49 63 1f 4a 33 97 6e 0c 71 bf 7d bd 35 |.i:Ic.J3.n.q.}.5| 00000060 86 c5 17 93 b4 9f e5 af e0 c4 6f f4 6f f9 4b dd |..........o.o.K.| 00000070 14 39 e2 9e b9 36 ca b1 56 5b d9 b1 66 2c 05 b2 |.9...6..V[..f,..| 00000080 5d 5b 99 c0 db e6 81 27 ab c2 e1 55 00 ac 0b 2c |][.....'...U...,| 00000090 24 d3 8e 54 b0 3d ab 58 e4 23 fc 3a 79 93 fb 5e |$..T.=.X.#.:y..^| 000000a0 94 5a 3a c2 16 4e 56 cb 1b 7f 7e b3 4c 38 ca 5b |.Z:..NV...~.L8.[| 000000b0 ca ab c1 2c 2a 64 e7 77 fe 2a ba ee 08 33 b5 9b |...,*d.w.*...3..| 000000c0 d0 c2 b4 a8 fc 73 4f 01 fd 03 61 75 eb 6d 1a 74 |.....sO...au.m.t| 000000d0 5f 79 31 7f ed e6 f5 99 21 36 16 ed 25 d9 6d 2b |_y1.....!6..%.m+| 000000e0 5f f4 42 b8 9d 01 89 10 fe df a4 98 e7 ab ab ea |_.B.............| 000000f0 1d 1c 44 e1 49 d9 19 c9 ab f5 41 eb 4a 32 c2 39 |..D.I.....A.J2.9| 00000100 87 57 f6 f6 f3 b5 4d 17 72 f2 b1 16 19 aa ec 24 |.W....M.r......$| 00000110 39 bd e3 b1 68 b3 b0 7f fa 2a 3a 2e 99 ed db 8a |9...h....*:.....| 00000120 f8 61 b4 ef 9d 7d 85 95 ed ad eb 9e 71 f4 27 d3 |.a...}......q.'.| 00000130 f3 04 8b 8a 69 98 02 72 df e1 f9 83 27 5b 01 4c |....i..r....'[.L| 00000140 d4 9a b9 3b db ca 1e 40 35 db 6f c1 52 c0 7f 27 |...;[email protected]..'| 00000150 8a 1d bc 34 89 24 b6 e3 fd ec a1 2a e5 9e d1 8f |...4.$.....*....| 00000160 77 e0 d5 52 c0 4c c4 38 38 3c 28 19 bf 20 f0 03 |w..R.L.88<(.. ..| 00000170 38 a4 b1 b5 ed 6a b8 f7 a9 7b 65 b1 7b 64 4a 33 |8....j...{e.{dJ3| 00000180 66 1a 60 29 38 1d 5b 52 40 31 de a5 0c 0f cc 6f |f.`)8.[[email protected]| 00000190 dd 31 6d 3d f0 2a 32 85 67 66 ca 4f 02 aa 0d 30 |.1m=.*2.gf.O...0| 000001a0 66 c9 b2 33 c2 4b 8a fa 3c 7b 52 02 00 88 8e cf |f..3.K..<{R.....| 000001b0 67 1e d4 20 49 1d 1a b8 71 ad c2 d4 37 9d 00 fe |g.. 
I...q...7...| 000001c0 ff ff 07 fe ff ff 02 e0 93 06 00 60 ac 11 00 fe |...........`....| 000001d0 ff ff 05 fe ff ff 01 00 00 00 01 b0 4d 06 00 00 |............M...| 000001e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.| 00000200 ADDITIONAL INFORMATION : =================== log of boot-repair 2012-11-24__09h45 =================== boot-repair version : 3.195~ppa2~precise boot-sav version : 3.195~ppa2~precise glade2script version : 3.2.2~ppa45~precise boot-sav-extra version : 3.195~ppa2~precise boot-repair is executed in live-session (Ubuntu 12.04.1 LTS, precise, Ubuntu, i686) CPU op-mode(s): 32-bit, 64-bit file=/cdrom/preseed/ubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- =================== os-prober: /dev/sda2:Windows 7 (loader):Windows:chain /dev/sda6:Ubuntu 12.04.1 LTS (12.04):Ubuntu:linux =================== blkid: /dev/sda1: SEC_TYPE="msdos" LABEL="DellUtility" UUID="07DA-0512" TYPE="vfat" /dev/sda2: LABEL="RECOVERY" UUID="8834146034145392" TYPE="ntfs" /dev/sda3: LABEL="OS" UUID="48E2189DE21890F4" TYPE="ntfs" /dev/sda5: LABEL="Varshneya" UUID="BC2A44C02A447982" TYPE="ntfs" /dev/loop0: TYPE="squashfs" /dev/sda6: UUID="34731459-4b0f-46ac-a9bf-cb360a2c947c" TYPE="ext4" /dev/sda7: UUID="dcb9ce9b-799a-4c65-b008-887b01775670" TYPE="swap" /dev/sr0: LABEL="Ubuntu 12.04.1 LTS i386" TYPE="iso9660" 1 disks with OS, 2 OS : 1 Linux, 0 MacOS, 1 Windows, 0 unknown type OS. Windows not detected by os-prober on sda3. Warning: extended partition does not start at a cylinder boundary. DOS and Linux will interpret the contents differently. =================== /mnt/etc/default/grub : # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) 
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" =================== /mnt/etc/grub.d/ : drwxr-xr-x 2 root root 4096 Nov 22 16:15 grub.d total 56 -rwxr-xr-x 1 root root 6743 Sep 12 20:19 00_header -rwxr-xr-x 1 root root 5522 Sep 12 20:05 05_debian_theme -rwxr-xr-x 1 root root 7407 Sep 12 20:19 10_linux -rwxr-xr-x 1 root root 6335 Sep 12 20:19 20_linux_xen -rwxr-xr-x 1 root root 1588 Sep 24 2010 20_memtest86+ -rwxr-xr-x 1 root root 7603 Sep 12 20:19 30_os-prober -rwxr-xr-x 1 root root 214 Sep 12 20:19 40_custom -rwxr-xr-x 1 root root 95 Sep 12 20:19 41_custom -rw-r--r-- 1 root root 483 Sep 12 20:19 README =================== No kernel in /mnt/boot: grub memtest86+.bin memtest86+_multiboot.bin =================== UEFI/Legacy mode: This live-session is not EFI-compatible. SecureBoot maybe enabled. =================== PARTITIONS & DISKS: sda1 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, not-far, /mnt/boot-sav/sda1. sda2 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, is-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, bootmgr, is-winboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, not-far, /mnt/boot-sav/sda2. sda3 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, is-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, haswinload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda3. sda5 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda5. sda6 : sda, not-sepboot, grubenv-ok grub2, grub-pc, update-grub, 64, no-kernel, is-os, not--efi--part, fstab-without-boot, fstab-without-efi, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, apt-get, grub-install, with--usr, fstab-without-usr, not-sep-usr, standard, farbios, /mnt. 
sda : not-GPT, BIOSboot-not-needed, has-no-EFIpart, not-usb, has-os, 63 sectors * 512 bytes =================== parted -l: Model: ATA ST9320423AS (scsi) Disk /dev/sda: 320GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 140MB 140MB primary fat16 diag 2 141MB 9936MB 9796MB primary ntfs boot 3 9936MB 112GB 102GB primary ntfs 4 112GB 320GB 208GB extended lba 6 112GB 166GB 54.1GB logical ext4 7 166GB 168GB 2352MB logical linux-swap(v1) 5 168GB 320GB 152GB logical ntfs Model: HL-DT-ST DVD+-RW GA31N (scsi) Disk /dev/sr0: 4700MB Sector size (logical/physical): 2048B/2048B Partition Table: msdos Number Start End Size Type File system Flags 1 131kB 2916MB 2916MB primary boot, hidden =================== parted -lm: BYT; /dev/sda:320GB:scsi:512:512:msdos:ATA ST9320423AS; 1:32.3kB:140MB:140MB:fat16::diag; 2:141MB:9936MB:9796MB:ntfs::boot; 3:9936MB:112GB:102GB:ntfs::; 4:112GB:320GB:208GB:::lba; 6:112GB:166GB:54.1GB:ext4::; 7:166GB:168GB:2352MB:linux-swap(v1)::; 5:168GB:320GB:152GB:ntfs::; BYT; /dev/sr0:4700MB:scsi:2048:2048:msdos:HL-DT-ST DVD+-RW GA31N; 1:131kB:2916MB:2916MB:::boot, hidden; =================== mount: /cow on / type overlayfs (rw) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) /dev/sr0 on /cdrom type iso9660 (ro,noatime) /dev/loop0 on /rofs type squashfs (ro,noatime) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) tmpfs on /tmp type tmpfs (rw,nosuid,nodev) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) gvfs-fuse-daemon on /home/ubuntu/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=ubuntu) /dev/sda6 on /mnt type ext4 (rw) /dev on /mnt/dev type none (rw,bind) /proc on /mnt/proc type none (rw,bind) /sys on /mnt/sys type none (rw,bind) /usr on /mnt/usr type none (rw,bind) /dev/sda1 on /mnt/boot-sav/sda1 type vfat (rw) /dev/sda2 on /mnt/boot-sav/sda2 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda3 on /mnt/boot-sav/sda3 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda5 on /mnt/boot-sav/sda5 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) =================== ls: /sys/block/sda (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro sda1 sda2 sda3 sda4 sda5 sda6 sda7 size slaves stat subsystem trace uevent /sys/block/sr0 (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro size slaves stat subsystem trace uevent /dev (filtered): autofs block bsg btrfs-control bus cdrom cdrw char console core cpu cpu_dma_latency disk dri dvd dvdrw ecryptfs fb0 fd full fuse fw0 hidraw0 hpet input kmsg log mapper mcelog mei mem net network_latency network_throughput null oldmem port ppp psaux ptmx pts random rfkill rtc rtc0 sda sda1 sda2 sda3 sda4 sda5 sda6 sda7 sg0 sg1 shm snapshot snd sr0 stderr stdin stdout uinput urandom usbmon0 usbmon1 usbmon2 v4l vga_arbiter video0 zero ls /dev/mapper: control =================== df -Th: Filesystem Type Size Used Avail Use% Mounted on /cow overlayfs 
1.9G 113M 1.8G 6% / udev devtmpfs 1.9G 12K 1.9G 1% /dev tmpfs tmpfs 777M 872K 776M 1% /run /dev/sr0 iso9660 696M 696M 0 100% /cdrom /dev/loop0 squashfs 667M 667M 0 100% /rofs tmpfs tmpfs 1.9G 20K 1.9G 1% /tmp none tmpfs 5.0M 0 5.0M 0% /run/lock none tmpfs 1.9G 176K 1.9G 1% /run/shm /dev/sda6 ext4 51G 27G 22G 56% /mnt /dev/sda1 vfat 134M 9.1M 125M 7% /mnt/boot-sav/sda1 /dev/sda2 fuseblk 9.2G 5.6G 3.6G 61% /mnt/boot-sav/sda2 /dev/sda3 fuseblk 95G 80G 16G 84% /mnt/boot-sav/sda3 /dev/sda5 fuseblk 142G 130G 12G 92% /mnt/boot-sav/sda5 =================== fdisk -l: Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xb8000000 Device Boot Start End Blocks Id System /dev/sda1 63 273104 136521 de Dell Utility /dev/sda2 * 274432 19406847 9566208 7 HPFS/NTFS/exFAT /dev/sda3 19406848 218274364 99433758+ 7 HPFS/NTFS/exFAT /dev/sda4 218275838 625139711 203431937 f W95 Ext'd (LBA) /dev/sda5 328630272 625139711 148254720 7 HPFS/NTFS/exFAT /dev/sda6 218275840 324030463 52877312 83 Linux /dev/sda7 324032512 328626175 2296832 82 Linux swap / Solaris Partition table entries are not in disk order =================== Repair blockers 64bits detected. Please use this software in a 64bits session. (Please use Ubuntu-Secure-Remix-64bits (www.sourceforge.net/p/ubuntu-secured) which contains a 64bits-compatible version of this software.) This will enable this feature. =================== Final advice in case of recommended repair The boot files of [Ubuntu 12.04.1 LTS] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, >200MB, start of the disk). This can be performed via tools such as gParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair]. (https://help.ubuntu.com/community/BootPartition) =================== Default settings Recommended-Repair This setting would reinstall the grub2 of sda6 into the MBR of sda, using the following options: kernel-purge Additional repair would be performed: unhide-bootmenu-10s fix-windows-boot =================== Settings chosen by the user Boot-Info This setting will not act on the MBR. No change has been performed on your computer. See you soon! pastebinit packages needed dpkg-preconfigure: unable to re-open stdin: No such file or directory pastebin.com ko (), using paste.ubuntu Please report this message to [email protected] Any help would be great, I'm really missing Ubuntu (hate being stuck in the Windows world).
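    A likely clue sits in the report itself: "No kernel in /mnt/boot" means no linux-image package survived the dist-upgrade, so grub-mkconfig has no Ubuntu entries left to generate. Below is a minimal sketch of one recovery path, assuming a 64-bit live session (the report flags the installed system as 64-bit while this live session is 32-bit) and the partition layout shown above (/dev/sda6 is the Ubuntu root, GRUB lives in the MBR of /dev/sda); the device names should be verified before running anything.
    # hedged sketch: reinstall a kernel and regenerate grub.cfg from a chroot
    sudo mount /dev/sda6 /mnt
    for d in /dev /dev/pts /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
    sudo chroot /mnt apt-get update
    sudo chroot /mnt apt-get install --reinstall linux-image-generic   # or the exact kernel package for precise
    sudo chroot /mnt update-grub
    sudo chroot /mnt grub-install /dev/sda
    for d in /sys /proc /dev/pts /dev; do sudo umount "/mnt$d"; done
    sudo umount /mnt
    After a reboot, the 10_linux script run by update-grub should find the reinstalled kernel and restore the Ubuntu entries alongside the memtest86+ and Windows 7 ones.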


  • Retrieving JSON Array

    - by Rahul Varma
    Hi, I am trying to retrieve the values from the following url: http://rentopoly.com/ajax.php?query=Bo. I want to get the values of all the suggestions to be displayed in a list view one by one. This is how i want to do... public class AlertsAdd { public ArrayList<JSONObject> retrieveJSONArray(String urlString) { String result = queryRESTurl(urlString); ArrayList<JSONObject> ALERTS = new ArrayList<JSONObject>(); if (result != null) { try { JSONObject json = new JSONObject(result); JSONArray alertsArray = json.getJSONArray("suggestions"); for (int a = 0; a < alertsArray.length(); a++) { JSONObject alertitem = alertsArray.getJSONObject(a); ALERTS.add(alertitem); } return ALERTS; } catch (JSONException e) { Log.e("JSON", "There was an error parsing the JSON", e); } } JSONObject myObject = new JSONObject(); try { myObject.put("suggestions",myObject.getJSONArray("suggestions")); ALERTS.add(myObject); } catch (JSONException e1) { Log.e("JSON", "There was an error creating the JSONObject", e1); } return ALERTS; } private String queryRESTurl(String url) { // URLConnection connection; HttpClient httpclient = new DefaultHttpClient(); HttpGet httpget = new HttpGet(url); HttpResponse response; try { response = httpclient.execute(httpget); HttpEntity entity = response.getEntity(); if (entity != null) { InputStream instream = entity.getContent(); String result = convertStreamToString(instream); instream.close(); return result; } } catch (ClientProtocolException e) { Log.e("REST", "There was a protocol based error", e); } catch (IOException e) { Log.e("REST", "There was an IO Stream related error", e); } return null; } /** * To convert the InputStream to String we use the * BufferedReader.readLine() method. We iterate until the BufferedReader * return null which means there's no more data to read. Each line will * appended to a StringBuilder and returned as String. */ private String convertStreamToString(InputStream is) { BufferedReader reader = new BufferedReader(new InputStreamReader(is)); StringBuilder sb = new StringBuilder(); String line = null; try { while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } } catch (IOException e) { e.printStackTrace(); } finally { try { is.close(); } catch (IOException e) { e.printStackTrace(); } } return sb.toString(); } } Here's the adapter code... public class AlertsAdapter extends ArrayAdapter<JSONObject> { public AlertsAdapter(Activity activity, List<JSONObject> alerts) { super(activity, 0, alerts); } @Override public View getView(int position, View convertView, ViewGroup parent) { Activity activity = (Activity) getContext(); LayoutInflater inflater = activity.getLayoutInflater(); View rowView = inflater.inflate(R.layout.list_text, null); JSONObject imageAndText = getItem(position); TextView textView = (TextView) rowView.findViewById(R.id.last_build_stat); try { textView.setText((String)imageAndText.get("suggestions")); } catch (JSONException e) { textView.setText("JSON Exception"); } return rowView; } } Here's the logcat... 04-30 13:09:46.656: INFO/ActivityManager(584): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.WorldToyota/.Alerts } 04-30 13:09:50.417: ERROR/JSON(924): There was an error parsing the JSON 04-30 13:09:50.417: ERROR/JSON(924): org.json.JSONException: JSONArray[0] is not a JSONObject. 
04-30 13:09:50.417: ERROR/JSON(924): at org.json.JSONArray.getJSONObject(JSONArray.java:268) 04-30 13:09:50.417: ERROR/JSON(924): at com.WorldToyota.AlertsAdd.retrieveJSONArray(AlertsAdd.java:30) 04-30 13:09:50.417: ERROR/JSON(924): at com.WorldToyota.Alerts.onCreate(Alerts.java:20) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.ActivityThread.access$2100(ActivityThread.java:116) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794) 04-30 13:09:50.417: ERROR/JSON(924): at android.os.Handler.dispatchMessage(Handler.java:99) 04-30 13:09:50.417: ERROR/JSON(924): at android.os.Looper.loop(Looper.java:123) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.ActivityThread.main(ActivityThread.java:4203) 04-30 13:09:50.417: ERROR/JSON(924): at java.lang.reflect.Method.invokeNative(Native Method) 04-30 13:09:50.417: ERROR/JSON(924): at java.lang.reflect.Method.invoke(Method.java:521) 04-30 13:09:50.417: ERROR/JSON(924): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 04-30 13:09:50.417: ERROR/JSON(924): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 04-30 13:09:50.417: ERROR/JSON(924): at dalvik.system.NativeStart.main(Native Method) 04-30 13:09:50.688: ERROR/JSON(924): There was an error creating the JSONObject 04-30 13:09:50.688: ERROR/JSON(924): org.json.JSONException: JSONObject["suggestions"] not found. 04-30 13:09:50.688: ERROR/JSON(924): at org.json.JSONObject.get(JSONObject.java:287) 04-30 13:09:50.688: ERROR/JSON(924): at org.json.JSONObject.getJSONArray(JSONObject.java:362) 04-30 13:09:50.688: ERROR/JSON(924): at com.WorldToyota.AlertsAdd.retrieveJSONArray(AlertsAdd.java:41) 04-30 13:09:50.688: ERROR/JSON(924): at com.WorldToyota.Alerts.onCreate(Alerts.java:20) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.ActivityThread.access$2100(ActivityThread.java:116) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794) 04-30 13:09:50.688: ERROR/JSON(924): at android.os.Handler.dispatchMessage(Handler.java:99) 04-30 13:09:50.688: ERROR/JSON(924): at android.os.Looper.loop(Looper.java:123) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.ActivityThread.main(ActivityThread.java:4203) 04-30 13:09:50.688: ERROR/JSON(924): at java.lang.reflect.Method.invokeNative(Native Method) 04-30 13:09:50.688: ERROR/JSON(924): at java.lang.reflect.Method.invoke(Method.java:521) 04-30 13:09:50.688: ERROR/JSON(924): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 04-30 13:09:50.688: ERROR/JSON(924): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 04-30 13:09:50.688: ERROR/JSON(924): at dalvik.system.NativeStart.main(Native Method) Plz help me parsing this script and displaying the values in list format....
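    The logcat pins down the actual failure: "JSONArray[0] is not a JSONObject". The suggestions array returned by ajax.php appears to hold plain strings rather than nested objects, so getJSONObject(a) can never succeed. Below is a minimal sketch of the parsing step under that assumption; the class and method names are illustrative, not from the original code.
    // hedged sketch: read each "suggestions" entry as a String
    import java.util.ArrayList;
    import org.json.JSONArray;
    import org.json.JSONException;
    import org.json.JSONObject;

    public class SuggestionParser {
        // expects JSON shaped roughly like {"query":"Bo","suggestions":["...","..."]}
        public static ArrayList<String> parseSuggestions(String jsonText) throws JSONException {
            ArrayList<String> suggestions = new ArrayList<String>();
            JSONObject json = new JSONObject(jsonText);
            JSONArray array = json.getJSONArray("suggestions");
            for (int i = 0; i < array.length(); i++) {
                suggestions.add(array.getString(i));   // getString, not getJSONObject
            }
            return suggestions;
        }
    }
    With a plain list of strings in hand, the adapter no longer needs to call imageAndText.get("suggestions") per row; a stock ArrayAdapter<String> (or the existing custom adapter retyped to String) can render the list directly.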


  • actionscript3: reflect-class applied on rotationY

    - by algro
    Hi, I'm using a class which applies a visual reflection-effect to defined movieclips. I use a reflection-class from here: link to source. It works like a charm except when I apply a rotation to the movieclip. In my case the reflection is still visible but only a part of it. What am I doing wrong? How could I pass/include the rotation to the Reflection-Class ? Thanks in advance! This is how you apply the Reflection Class to your movieclip: var ref_mc:MovieClip = new MoviClip(); addChild(ref_mc); var r1:Reflect = new Reflect({mc:ref_mc, alpha:50, ratio:50,distance:0, updateTime:0,reflectionDropoff:1}); Now I apply a rotation to my movieclip: ref_mc.rotationY = 30; And Here the Reflect-Class: package com.pixelfumes.reflect{ import flash.display.MovieClip; import flash.display.DisplayObject; import flash.display.BitmapData; import flash.display.Bitmap; import flash.geom.Matrix; import flash.display.GradientType; import flash.display.SpreadMethod; import flash.utils.setInterval; import flash.utils.clearInterval; public class Reflect extends MovieClip{ //Created By Ben Pritchard of Pixelfumes 2007 //Thanks to Mim, Jasper, Jason Merrill and all the others who //have contributed to the improvement of this class //static var for the version of this class private static var VERSION:String = "4.0"; //reference to the movie clip we are reflecting private var mc:MovieClip; //the BitmapData object that will hold a visual copy of the mc private var mcBMP:BitmapData; //the BitmapData object that will hold the reflected image private var reflectionBMP:Bitmap; //the clip that will act as out gradient mask private var gradientMask_mc:MovieClip; //how often the reflection should update (if it is video or animated) private var updateInt:Number; //the size the reflection is allowed to reflect within private var bounds:Object; //the distance the reflection is vertically from the mc private var distance:Number = 0; function Reflect(args:Object){ /*the args object passes in the following variables /we set the values of our internal vars to math the args*/ //the clip being reflected mc = args.mc; //the alpha level of the reflection clip var alpha:Number = args.alpha/100; //the ratio opaque color used in the gradient mask var ratio:Number = args.ratio; //update time interval var updateTime:Number = args.updateTime; //the distance at which the reflection visually drops off at var reflectionDropoff:Number = args.reflectionDropoff; //the distance the reflection starts from the bottom of the mc var distance:Number = args.distance; //store width and height of the clip var mcHeight = mc.height; var mcWidth = mc.width; //store the bounds of the reflection bounds = new Object(); bounds.width = mcWidth; bounds.height = mcHeight; //create the BitmapData that will hold a snapshot of the movie clip mcBMP = new BitmapData(bounds.width, bounds.height, true, 0xFFFFFF); mcBMP.draw(mc); //create the BitmapData the will hold the reflection reflectionBMP = new Bitmap(mcBMP); //flip the reflection upside down reflectionBMP.scaleY = -1; //move the reflection to the bottom of the movie clip reflectionBMP.y = (bounds.height*2) + distance; //add the reflection to the movie clip's Display Stack var reflectionBMPRef:DisplayObject = mc.addChild(reflectionBMP); reflectionBMPRef.name = "reflectionBMP"; //add a blank movie clip to hold our gradient mask var gradientMaskRef:DisplayObject = mc.addChild(new MovieClip()); gradientMaskRef.name = "gradientMask_mc"; //get a reference to the movie clip - cast the DisplayObject that is returned as a 
MovieClip gradientMask_mc = mc.getChildByName("gradientMask_mc") as MovieClip; //set the values for the gradient fill var fillType:String = GradientType.LINEAR; var colors:Array = [0xFFFFFF, 0xFFFFFF]; var alphas:Array = [alpha, 0]; var ratios:Array = [0, ratio]; var spreadMethod:String = SpreadMethod.PAD; //create the Matrix and create the gradient box var matr:Matrix = new Matrix(); //set the height of the Matrix used for the gradient mask var matrixHeight:Number; if (reflectionDropoff<=0) { matrixHeight = bounds.height; } else { matrixHeight = bounds.height/reflectionDropoff; } matr.createGradientBox(bounds.width, matrixHeight, (90/180)*Math.PI, 0, 0); //create the gradient fill gradientMask_mc.graphics.beginGradientFill(fillType, colors, alphas, ratios, matr, spreadMethod); gradientMask_mc.graphics.drawRect(0,0,bounds.width,bounds.height); //position the mask over the reflection clip gradientMask_mc.y = mc.getChildByName("reflectionBMP").y - mc.getChildByName("reflectionBMP").height; //cache clip as a bitmap so that the gradient mask will function gradientMask_mc.cacheAsBitmap = true; mc.getChildByName("reflectionBMP").cacheAsBitmap = true; //set the mask for the reflection as the gradient mask mc.getChildByName("reflectionBMP").mask = gradientMask_mc; //if we are updating the reflection for a video or animation do so here if(updateTime > -1){ updateInt = setInterval(update, updateTime, mc); } } public function setBounds(w:Number,h:Number):void{ //allows the user to set the area that the reflection is allowed //this is useful for clips that move within themselves bounds.width = w; bounds.height = h; gradientMask_mc.width = bounds.width; redrawBMP(mc); } public function redrawBMP(mc:MovieClip):void { // redraws the bitmap reflection - Mim Gamiet [2006] mcBMP.dispose(); mcBMP = new BitmapData(bounds.width, bounds.height, true, 0xFFFFFF); mcBMP.draw(mc); } private function update(mc):void { //updates the reflection to visually match the movie clip mcBMP = new BitmapData(bounds.width, bounds.height, true, 0xFFFFFF); mcBMP.draw(mc); reflectionBMP.bitmapData = mcBMP; } public function destroy():void{ //provides a method to remove the reflection mc.removeChild(mc.getChildByName("reflectionBMP")); reflectionBMP = null; mcBMP.dispose(); clearInterval(updateInt); mc.removeChild(mc.getChildByName("gradientMask_mc")); } } }
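    One way to read the symptom: Reflect takes its snapshot with BitmapData.draw(mc) against the clip's untransformed width and height, and both the reflection bitmap and its cacheAsBitmap gradient mask are added as children of that same clip, so applying rotationY directly to ref_mc disturbs the bitmap/mask pair and only part of the reflection stays visible. A common workaround is sketched below; this is an assumption, not something from the original post: rotate a wrapper clip instead, so the clip and its generated reflection are turned together as one unit.
    // hedged sketch: apply the 3D rotation to a container, not to the reflected clip
    // (assumes the same import of com.pixelfumes.reflect.Reflect as above)
    var wrapper:MovieClip = new MovieClip();
    addChild(wrapper);

    var ref_mc:MovieClip = new MovieClip();   // note: MovieClip (the snippet above has a typo, "new MoviClip()")
    wrapper.addChild(ref_mc);

    var r1:Reflect = new Reflect({mc:ref_mc, alpha:50, ratio:50, distance:0,
                                  updateTime:0, reflectionDropoff:1});

    wrapper.rotationY = 30;   // the reflection follows, since it is a child of ref_mc inside the rotated wrapper
    If the clip's contents change after rotating, calling r1.redrawBMP(ref_mc) keeps the snapshot current, using the method the class already provides.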


  • ASP.NET Creating a Rich Repeater, DataBind wiping out custom added controls...

    - by tonyellard
    So...I had this clever idea that I'd create my own Repeater control that implements paging and sorting by inheriting from Repeater and extending it's capabilities. I found some information and bits and pieces on how to go about this and everything seemed ok... I created a WebControlLibrary to house my custom controls. Along with the enriched repeater, I created a composite control that would act as the "pager bar", having forward, back and page selection. My pager bar works 100% on it's own, properly firing a paged changed event when the user interacts with it. The rich repeater databinds without issue, but when the databind fires (when I call base.databind()), the control collection is cleared out and my pager bars are removed. This screws up the viewstate for the pager bars making them unable to fire their events properly or maintain their state. I've tried adding the controls back to the collection after base.databind() fires, but that doesn't solve the issue. I start to get very strange results including problems with altering the hierarchy of the control tree (resolved by adding [ViewStateModeById]). Before I go back to the drawing board and create a second composite control which contains a repeater and the pager bars (so that the repeater isn't responsible for the pager bars viewstate) are there any thoughts about how to resolve the issue? In the interest of share and share alike, the code for the repeater itself is below, the pagerbars aren't as significant as the issue is really the maintaining of state for any additional child controls. (forgive the roughness of some of the code...it's still a work in progress) using System; using System.Collections.Generic; using System.ComponentModel; using System.Text; using System.Data; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; [ViewStateModeById] public class SortablePagedRepeater : Repeater, INamingContainer { private SuperRepeaterPagerBar topBar = new SuperRepeaterPagerBar(); private SuperRepeaterPagerBar btmBar = new SuperRepeaterPagerBar(); protected override void OnInit(EventArgs e) { Page.RegisterRequiresControlState(this); InitializeControls(); base.OnInit(e); EnsureChildControls(); } protected void InitializeControls() { topBar.ID = this.ID + "__topPagerBar"; topBar.NumberOfPages = this._currentProperties.numOfPages; topBar.CurrentPage = this.CurrentPageNumber; topBar.PageChanged += new SuperRepeaterPagerBar.PageChangedEventHandler(PageChanged); btmBar.ID = this.ID + "__btmPagerBar"; btmBar.NumberOfPages = this._currentProperties.numOfPages; btmBar.CurrentPage = this.CurrentPageNumber; btmBar.PageChanged += new SuperRepeaterPagerBar.PageChangedEventHandler(PageChanged); } protected override void CreateChildControls() { EnsureDataBound(); this.Controls.Add(topBar); this.Controls.Add(btmBar); //base.CreateChildControls(); } private void PageChanged(object sender, int newPage) { this.CurrentPageNumber = newPage; } public override void DataBind() { //pageDataSource(); //DataBind removes all controls from control collection... base.DataBind(); Controls.Add(topBar); Controls.Add(btmBar); } private void pageDataSource() { //Create paged data source PagedDataSource pds = new PagedDataSource(); pds.PageSize = this.ItemsPerPage; pds.AllowPaging = true; // first get a PagedDataSource going and perform sort if possible... 
if (base.DataSource is System.Collections.IEnumerable) { pds.DataSource = (System.Collections.IEnumerable)base.DataSource; } else if (base.DataSource is System.Data.DataView) { DataView data = (DataView)DataSource; if (this.SortBy != null && data.Table.Columns.Contains(this.SortBy)) { data.Sort = this.SortBy; } pds.DataSource = data.Table.Rows; } else if (base.DataSource is System.Data.DataTable) { DataTable data = (DataTable)DataSource; if (this.SortBy != null && data.Columns.Contains(this.SortBy)) { data.DefaultView.Sort = this.SortBy; } pds.DataSource = data.DefaultView; } else if (base.DataSource is System.Data.DataSet) { DataSet data = (DataSet)DataSource; if (base.DataMember != null && data.Tables.Contains(base.DataMember)) { if (this.SortBy != null && data.Tables[base.DataMember].Columns.Contains(this.SortBy)) { data.Tables[base.DataMember].DefaultView.Sort = this.SortBy; } pds.DataSource = data.Tables[base.DataMember].DefaultView; } else if (data.Tables.Count > 0) { if (this.SortBy != null && data.Tables[0].Columns.Contains(this.SortBy)) { data.Tables[0].DefaultView.Sort = this.SortBy; } pds.DataSource = data.Tables[0].DefaultView; } else { throw new Exception("DataSet doesn't have any tables."); } } else if (base.DataSource == null) { // don't do anything? } else { throw new Exception("DataSource must be of type System.Collections.IEnumerable. The DataSource you provided is of type " + base.DataSource.GetType().ToString()); } if (pds != null && base.DataSource != null) { //Make sure that the page doesn't exceed the maximum number of pages //available if (this.CurrentPageNumber >= pds.PageCount) { this.CurrentPageNumber = pds.PageCount - 1; } //Set up paging values... btmBar.CurrentPage = topBar.CurrentPage = pds.CurrentPageIndex = this.CurrentPageNumber; this._currentProperties.numOfPages = btmBar.NumberOfPages = topBar.NumberOfPages = pds.PageCount; base.DataSource = pds; } } public override object DataSource { get { return base.DataSource; } set { //init(); //reset paging/sorting values since we've potentially changed data sources. base.DataSource = value; pageDataSource(); } } protected override void Render(HtmlTextWriter writer) { topBar.RenderControl(writer); base.Render(writer); btmBar.RenderControl(writer); } [Serializable] protected struct CurrentProperties { public int pageNum; public int itemsPerPage; public int numOfPages; public string sortBy; public bool sortDir; } protected CurrentProperties _currentProperties = new CurrentProperties(); protected override object SaveControlState() { return this._currentProperties; } protected override void LoadControlState(object savedState) { this._currentProperties = (CurrentProperties)savedState; } [Category("Status")] [Browsable(true)] [NotifyParentProperty(true)] [DefaultValue("")] [Localizable(false)] public string SortBy { get { return this._currentProperties.sortBy; } set { //If sorting by the same column, swap the sort direction. 
if (this._currentProperties.sortBy == value) { this.SortAscending = !this.SortAscending; } else { this.SortAscending = true; } this._currentProperties.sortBy = value; } } [Category("Status")] [Browsable(true)] [NotifyParentProperty(true)] [DefaultValue(true)] [Localizable(false)] public bool SortAscending { get { return this._currentProperties.sortDir; } set { this._currentProperties.sortDir = value; } } [Category("Status")] [Browsable(true)] [NotifyParentProperty(true)] [DefaultValue(25)] [Localizable(false)] public int ItemsPerPage { get { return this._currentProperties.itemsPerPage; } set { this._currentProperties.itemsPerPage = value; } } [Category("Status")] [Browsable(true)] [NotifyParentProperty(true)] [DefaultValue(1)] [Localizable(false)] public int CurrentPageNumber { get { return this._currentProperties.pageNum; } set { this._currentProperties.pageNum = value; pageDataSource(); } } }
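    One direction worth sketching before rewriting everything is the "wrapper" composite control mentioned above. The outline below is only a rough sketch, not a drop-in replacement: SuperRepeaterPagerBar is the pager bar from the question, while the wrapper class and member names are made up here. The point is that when the Repeater is a child of the wrapper rather than the parent of the pager bars, base.DataBind() clearing the Repeater's own Controls collection can no longer throw away the bars or their viewstate.

    using System.Web.UI;
    using System.Web.UI.WebControls;

    // Hypothetical wrapper: the pager bars live next to the Repeater, not inside it.
    public class PagedRepeaterPanel : CompositeControl
    {
        private readonly Repeater repeater = new Repeater();
        private readonly SuperRepeaterPagerBar topBar = new SuperRepeaterPagerBar();
        private readonly SuperRepeaterPagerBar btmBar = new SuperRepeaterPagerBar();

        // Expose the inner repeater so callers can set ItemTemplate, DataSource, and so on.
        public Repeater InnerRepeater { get { EnsureChildControls(); return repeater; } }

        protected override void CreateChildControls()
        {
            Controls.Clear();
            topBar.ID = "topPagerBar";
            repeater.ID = "innerRepeater";
            btmBar.ID = "btmPagerBar";
            Controls.Add(topBar);
            Controls.Add(repeater);   // DataBind() on this child only clears *its* children
            Controls.Add(btmBar);
        }
    }

    The paging and sorting logic from the question would then move into the wrapper (or stay on a plain Repeater fed a PagedDataSource), with the bars' control state untouched by the bind.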

    Read the article

  • Moving items from one tableView to another tableView with extras

    - by Totumus Maximus
    Let's say I have 2 UITableViews next to each other on an iPad in landscape mode. Now I want to move multiple items from one tableView to the other. They are allowed to be inserted at the bottom of the other tableView. Both have multiSelection activated. Now the movement itself is no problem with normal cells. But in my program each cell has an object which contains the consolidationState of the cell. There are 4 states a cell can have: Basic, Holding, Parent, Child. Basic = an ordinary cell. Holding = a cell which contains multiple children but which won't be shown in this state. Parent = a cell which contains multiple children, which are shown directly below this cell. Child = a cell created by the Parent cell. The object in each cell also has some array which contains its children. The object also holds a quantityValue, which is displayed on the cell itself. Now the movement gets tricky. Holding and Parent cells can't move at all. Basic cells can move freely. Child cells can move freely, but what happens next depends on how many Child cells are left in the Parent: the Parent will change or be deleted altogether. If a Parent cell has more than 1 Child cell left it will stay a Parent cell. Otherwise the Parent has only one or no Child cells left and is useless; it will then be deleted. The items that are moved will always be of the same state. They will all be Basic cells. This is how I programmed the movement: *First I determine which of the tableViews is the sender and which is the receiver. *Second I ask for all indexPathsForSelectedRows and sort them from highest row to lowest. *Then I build the data to be transferred. This I do by looping through the selectedRows and asking for their objects from the sender's listOfItems. *Once I have saved all the data I need, I delete all the items from the sender TableView. This is why I sorted the selectedRows, so I can start at the highest indexPath.row and delete without screwing up the other indexPaths. *When I loop through the selectedRows I check whether I found a cell with state Basic or Child. *If it's a Basic cell I do nothing and just delete the cell. (This works fine with all Basic cells.) *If it's a Child cell I go and check its Parent cell immediately. Since all Child cells are directly below their Parent cell, and only that Parent's Child cells are below it, I can safely take the path of the selected Child cell and move upwards to find its Parent cell. When this Parent cell is found (this will always happen, no exceptions) it has to change accordingly. *The Parent cell will either be deleted or the object inside will have its quantity and children reduced. *After the Parent cell has changed accordingly the Child cell is deleted, similarly to the Basic cells. *After the deletion of the cells the receiver tableView will build new indexPaths so the movedObjects will have a place to go. *I then insert the objects into the listOfItems of the receiver TableView. The code works in the following ways: Only Basic cells are moved. Basic cells and just 1 child for each parent are moved. A single Basic/Child cell is moved. The code doesn't work when: I select more than 1 or all children of some parent cell. The problem happens somewhere in updating the parent cells. I'm staring blindly at the code now, so maybe a fresh look will help fix things. Any help will be appreciated. Here is the method that should do the movement: -(void)moveSelectedItems { UITableView *senderTableView = //retrieves the table with the data here. 
UITableView *receiverTableView = //retrieves the table which gets the data here. NSArray *selectedRows = senderTableView.indexPathsForSelectedRows; //sort selected rows from lowest indexPath.row to highest selectedRows = [selectedRows sortedArrayUsingSelector:@selector(compare:)]; //build up target rows (all objects to be moved) NSMutableArray *targetRows = [[NSMutableArray alloc] init]; for (int i = 0; i<selectedRows.count; i++) { NSIndexPath *path = [selectedRows objectAtIndex:i]; [targetRows addObject:[senderTableView.listOfItems objectAtIndex:path.row]]; } //delete rows at active for (int i = selectedRows.count-1; i >= 0; i--) { NSIndexPath *path = [selectedRows objectAtIndex:i]; //check what item you are deleting. act upon the status. Parent and Holding cells can't be selected so only check for Basic and Child MyCellObject *item = [senderTableView.listOfItems objectAtIndex:path.row]; if (item.consolidatedState == ConsolidationTypeChild) { for (int j = path.row; j >= 0; j--) { MyCellObject *consolidatedItem = [senderTableView.listOfItems objectAtIndex:j]; if (consolidatedItem.consolidatedState == ConsolidationTypeParent) { //copy the consolidated item but with 1 less quantity MyCellObject *newItem = [consolidatedItem copyWithOneLessQuantity]; //creates a copy of the object with 1 less quantity. if (newItem.quantity > 1) { newItem.consolidatedState = ConsolidationTypeParent; [senderTableView.listOfItems replaceObjectAtIndex:j withObject:newItem]; } else if (newItem.quantity == 1) { newItem.consolidatedState = ConsolidationTypeBasic; [senderTableView.listOfItems removeObjectAtIndex:j]; MyCellObject *child = [senderTableView.listOfItems objectAtIndex:j+1]; child.consolidatedState = ConsolidationTypeBasic; [senderTableView.listOfItems replaceObjectAtIndex:j+1 withObject:child]; } else { [senderTableView.listOfItems removeObject:consolidatedItem]; } [senderTableView reloadData]; } } } [senderTableView.listOfItems removeObjectAtIndex:path.row]; } [senderTableView deleteRowsAtIndexPaths:selectedRows withRowAnimation:UITableViewRowAnimationTop]; //make new indexpaths for row animation NSMutableArray *newRows = [[NSMutableArray alloc] init]; for (int i = 0; i < targetRows.count; i++) { NSIndexPath *newPath = [NSIndexPath indexPathForRow:i+receiverTableView.listOfItems.count inSection:0]; [newRows addObject:newPath]; DLog(@"%i", i); //scroll to newest items [receiverTableView setContentOffset:CGPointMake(0, fmaxf(receiverTableView.contentSize.height - receiverTableView.frame.size.height, 0.0)) animated:YES]; } //add rows at target for (int i = 0; i < targetRows.count; i++) { MyCellObject *insertedItem = [targetRows objectAtIndex:i]; //all moved items will be brought into the standard (basic) consolidationType insertedItem.consolidatedState = ConsolidationTypeBasic; [receiverTableView.listOfItems insertObject:insertedItem atIndex:receiverTableView.listOfItems.count]; } [receiverTableView insertRowsAtIndexPaths:newRows withRowAnimation:UITableViewRowAnimationNone]; } If anyone has some fresh ideas of why the movement is bugging out let me know. If you feel like you need some extra information I'll be happy to add it. Again the problem is in the movement of ChildCells and updating the ParentCells properly. I could use some fresh looks and outsider ideas on this. Thanks in advance. *updated based on comments
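    Two things stand out in the delete loop above, offered as guesses rather than a verified diagnosis. First, the inner for (int j = path.row; j >= 0; j--) scan has no break, so once the owning Parent is found and updated the scan keeps walking upward and can also touch unrelated Parents higher in the list. Second, the Parent handling mutates listOfItems once per selected Child; when the Parent finally gets removed, the remaining selected index paths (and the j+1 child lookup) point at shifted positions. A hedged two-pass sketch of one way around that — count first, mutate once — using the names from the question (listOfItems, MyCellObject, ConsolidationTypeChild/Parent are the poster's; the counting set is made up):

    // Pass 1: before mutating anything, count how many selected rows belong to each Parent.
    NSCountedSet *childrenPerParent = [NSCountedSet set];
    for (NSIndexPath *path in selectedRows) {
        MyCellObject *item = [senderTableView.listOfItems objectAtIndex:path.row];
        if (item.consolidatedState != ConsolidationTypeChild) continue;
        for (NSInteger j = path.row; j >= 0; j--) {
            MyCellObject *candidate = [senderTableView.listOfItems objectAtIndex:j];
            if (candidate.consolidatedState == ConsolidationTypeParent) {
                [childrenPerParent addObject:[NSNumber numberWithInteger:j]];
                break; // stop at the nearest Parent instead of walking further up
            }
        }
    }
    // Pass 2: update or remove each Parent exactly once, using
    // [childrenPerParent countForObject:[NSNumber numberWithInteger:j]] to decide whether it keeps
    // enough children, and only then delete the selected rows (highest index first) in a single batch.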

    Read the article

  • getting SIGSEGV in std::_List_const_iterator<Exiv2::Exifdatum>::operator++ whilst using jni

    - by HJED
    Hi I'm using jni to access the exiv2 API in my Java project and I'm getting a SIGSEGV error in std::_List_const_iterator::operator++. I'm uncertain how to fix this error. I've tried using high -Xmx values as well as running on both jdk1.6.0 (server and cacao JVMs) and 1.7.0 (server JVM). gdb traceback: #0 0x00007fffa36f2363 in std::_List_const_iterator<Exiv2::Exifdatum>::operator++ (this=0x7ffff7fd3500) at /usr/include/c++/4.4/bits/stl_list.h:223 #1 0x00007fffa36f2310 in std::__distance<std::_List_const_iterator<Exiv2::Exifdatum> > (__first=..., __last=...) at /usr/include/c++/4.4/bits/stl_iterator_base_funcs.h:79 #2 0x00007fffa36f224d in std::distance<std::_List_const_iterator<Exiv2::Exifdatum> > (__first=..., __last=...) at /usr/include/c++/4.4/bits/stl_iterator_base_funcs.h:114 #3 0x00007fffa36f1f27 in std::list<Exiv2::Exifdatum, std::allocator<Exiv2::Exifdatum> >::size (this=0x7fffa4030910) at /usr/include/c++/4.4/bits/stl_list.h:805 #4 0x00007fffa36f1d50 in Exiv2::ExifData::count (this=0x7fffa4030910) at /usr/local/include/exiv2/exif.hpp:518 #5 0x00007fffa36f1d30 in Exiv2::ExifData::empty (this=0x7fffa4030910) at /usr/local/include/exiv2/exif.hpp:516 #6 0x00007fffa36f1763 in getVars (path=0x7fffa401d2f0 "/home/hjed/PC100001.JPG", env=0x6131c8, obj=0x7ffff7fd37a8) at src/main.cpp:146 #7 0x00007fffa36f19d8 in Java_photo_exiv2_Exiv2MetaDataStore_impl_1loadFromExiv (env=0x6131c8, obj=0x7ffff7fd37a8, path=0x7ffff7fd37a0, obj2=0x7ffff7fd3798) at src/main.cpp:160 #8 0x00007ffff21d9cc8 in ?? () #9 0x00000000fffffffe in ?? () #10 0x00007ffff7fd3740 in ?? () #11 0x0000000000613000 in ?? () #12 0x00007ffff7fd3738 in ?? () #13 0x00007fffaa1076e0 in ?? () #14 0x00007ffff7fd37a8 in ?? () #15 0x00007fffaa108d10 in ?? () #16 0x0000000000000000 in ?? () Java error: # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x00007fac11223363, pid=11905, tid=140378349111040 # # JRE version: 6.0_20-b20 # Java VM: OpenJDK 64-Bit Server VM (19.0-b09 mixed mode linux-amd64 ) # Derivative: IcedTea6 1.9.2 # Distribution: Ubuntu 10.10, package 6b20-1.9.2-0ubuntu2 # Problematic frame: # C [libExiff2-binding.so+0x4363] _ZNSt20_List_const_iteratorIN5Exiv29ExifdatumEEppEv+0xf # # If you would like to submit a bug report, please include # instructions how to reproduce the bug and visit: # https://bugs.launchpad.net/ubuntu/+source/openjdk-6/ # The crash happened outside the Java Virtual Machine in native code. # See problematic frame for where to report the bug. # --------------- T H R E A D --------------- Current thread (0x0000000000dbf000): JavaThread "main" [_thread_in_native, id=11909, stack(0x00007fac61920000,0x00007fac61a21000)] siginfo:si_signo=SIGSEGV: si_errno=0, si_code=128 (), si_addr=0x0000000000000000 Registers: ... 
Register to memory mapping: RAX=0x6c8948f0245c8948 0x6c8948f0245c8948 is pointing to unknown location RBX=0x00007fac0c042c00 0x00007fac0c042c00 is pointing to unknown location RCX=0x0000000000000000 0x0000000000000000 is pointing to unknown location RDX=0x6c8948f0245c8948 0x6c8948f0245c8948 is pointing to unknown location RSP=0x00007fac61a1f4e0 0x00007fac61a1f4e0 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE RBP=0x00007fac61a1f4e0 0x00007fac61a1f4e0 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE RSI=0x00007fac61a1f4f0 0x00007fac61a1f4f0 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE RDI=0x00007fac61a1f500 0x00007fac61a1f500 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE R8 =0x00007fac0c054630 0x00007fac0c054630 is pointing to unknown location R9 =0x00007fac61a1f358 0x00007fac61a1f358 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE R10=0x00007fac61a1f270 0x00007fac61a1f270 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE R11=0x00007fac11223354 0x00007fac11223354: _ZNSt20_List_const_iteratorIN5Exiv29ExifdatumEEppEv+0 in /home/hjed/libExiff2-binding.so at 0x00007fac1121f000 R12=0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE R13=0x00007fac13ad1be8 {method} - klass: {other class} R14=0x00007fac61a1f7a8 0x00007fac61a1f7a8 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE R15=0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE Top of Stack: (sp=0x00007fac61a1f4e0) ... Instructions: (pc=0x00007fac11223363) ... 
Stack: [0x00007fac61920000,0x00007fac61a21000], sp=0x00007fac61a1f4e0, free space=1021k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) C [libExiff2-binding.so+0x4363] _ZNSt20_List_const_iteratorIN5Exiv29ExifdatumEEppEv+0xf C [libExiff2-binding.so+0x4310] _ZSt10__distanceISt20_List_const_iteratorIN5Exiv29ExifdatumEEENSt15iterator_traitsIT_E15difference_typeES5_S5_St18input_iterator_tag+0x26 C [libExiff2-binding.so+0x424d] _ZSt8distanceISt20_List_const_iteratorIN5Exiv29ExifdatumEEENSt15iterator_traitsIT_E15difference_typeES5_S5_+0x36 C [libExiff2-binding.so+0x3f27] _ZNKSt4listIN5Exiv29ExifdatumESaIS1_EE4sizeEv+0x33 C [libExiff2-binding.so+0x3d50] _ZNK5Exiv28ExifData5countEv+0x18 C [libExiff2-binding.so+0x3d30] _ZNK5Exiv28ExifData5emptyEv+0x18 C [libExiff2-binding.so+0x3763] _Z7getVarsPKcP7JNIEnv_P8_jobject+0x3e3 C [libExiff2-binding.so+0x39d8] Java_photo_exiv2_Exiv2MetaDataStore_impl_1loadFromExiv+0x4b j photo.exiv2.Exiv2MetaDataStore.impl_loadFromExiv(Ljava/lang/String;Lphoto/exiv2/Exiv2MetaDataStore;)V+0 j photo.exiv2.Exiv2MetaDataStore.loadFromExiv2()V+9 j photo.exiv2.Exiv2MetaDataStore.loadData()V+1 j photo.exiv2.Exiv2MetaDataStore.<init>(Lphoto/ImageFile;)V+10 j photo.ImageFile.<init>(Ljava/lang/String;)V+11 j test.Main.main([Ljava/lang/String;)V+67 v ~StubRoutines::call_stub V [libjvm.so+0x428698] V [libjvm.so+0x4275c8] V [libjvm.so+0x432943] V [libjvm.so+0x447f91] C [java+0x3495] JavaMain+0xd75 Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) j photo.exiv2.Exiv2MetaDataStore.impl_loadFromExiv(Ljava/lang/String;Lphoto/exiv2/Exiv2MetaDataStore;)V+0 j photo.exiv2.Exiv2MetaDataStore.loadFromExiv2()V+9 j photo.exiv2.Exiv2MetaDataStore.loadData()V+1 j photo.exiv2.Exiv2MetaDataStore.<init>(Lphoto/ImageFile;)V+10 j photo.ImageFile.<init>(Ljava/lang/String;)V+11 j test.Main.main([Ljava/lang/String;)V+67 v ~StubRoutines::call_stub --------------- P R O C E S S --------------- Java Threads: ( => current thread ) 0x00007fac0c028000 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=11924, stack(0x00007fac11532000,0x00007fac11633000)] 0x00007fac0c025800 JavaThread "CompilerThread1" daemon [_thread_blocked, id=11923, stack(0x00007fac11633000,0x00007fac11734000)] 0x00007fac0c022000 JavaThread "CompilerThread0" daemon [_thread_blocked, id=11922, stack(0x00007fac11734000,0x00007fac11835000)] 0x00007fac0c01f800 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=11921, stack(0x00007fac11835000,0x00007fac11936000)] 0x00007fac0c001000 JavaThread "Finalizer" daemon [_thread_blocked, id=11920, stack(0x00007fac11e2d000,0x00007fac11f2e000)] 0x0000000000e36000 JavaThread "Reference Handler" daemon [_thread_blocked, id=11919, stack(0x00007fac11f2e000,0x00007fac1202f000)] =>0x0000000000dbf000 JavaThread "main" [_thread_in_native, id=11909, stack(0x00007fac61920000,0x00007fac61a21000)] Other Threads: 0x0000000000e2f800 VMThread [stack: 0x00007fac1202f000,0x00007fac12130000] [id=11918] 0x00007fac0c02b000 WatcherThread [stack: 0x00007fac11431000,0x00007fac11532000] [id=11925] ... 
Heap PSYoungGen total 18432K, used 632K [0x00007fac47210000, 0x00007fac486a0000, 0x00007fac5bc10000) eden space 15808K, 4% used [0x00007fac47210000,0x00007fac472ae188,0x00007fac48180000) from space 2624K, 0% used [0x00007fac48410000,0x00007fac48410000,0x00007fac486a0000) to space 2624K, 0% used [0x00007fac48180000,0x00007fac48180000,0x00007fac48410000) PSOldGen total 42240K, used 0K [0x00007fac1de10000, 0x00007fac20750000, 0x00007fac47210000) object space 42240K, 0% used [0x00007fac1de10000,0x00007fac1de10000,0x00007fac20750000) PSPermGen total 21248K, used 2831K [0x00007fac13810000, 0x00007fac14cd0000, 0x00007fac1de10000) object space 21248K, 13% used [0x00007fac13810000,0x00007fac13ad3d80,0x00007fac14cd0000) Dynamic libraries: ... VM Arguments: jvm_args: -Dfile.encoding=UTF-8 java_command: test.Main Launcher Type: SUN_STANDARD Environment Variables: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games USERNAME=hjed LD_LIBRARY_PATH=/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64 SHELL=/bin/bash DISPLAY=:0.0 Signal Handlers: ... --------------- S Y S T E M --------------- OS:Ubuntu 10.10 (maverick) uname:Linux 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 02:41:37 UTC 2010 x86_64 libc:glibc 2.12.1 NPTL 2.12.1 rlimit: STACK 8192k, CORE 0k, NPROC infinity, NOFILE 1024, AS infinity load average:0.27 0.31 0.30 /proc/meminfo: MemTotal: 4048200 kB MemFree: 106552 kB Buffers: 838212 kB Cached: 1172496 kB SwapCached: 0 kB Active: 1801316 kB Inactive: 1774880 kB Active(anon): 1224708 kB Inactive(anon): 355012 kB Active(file): 576608 kB Inactive(file): 1419868 kB Unevictable: 64 kB Mlocked: 64 kB SwapTotal: 7065596 kB SwapFree: 7065596 kB Dirty: 20 kB Writeback: 0 kB AnonPages: 1565608 kB Mapped: 213424 kB Shmem: 14216 kB Slab: 164812 kB SReclaimable: 102576 kB SUnreclaim: 62236 kB KernelStack: 4784 kB PageTables: 44908 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 9089696 kB Committed_AS: 3676872 kB VmallocTotal: 34359738367 kB VmallocUsed: 332952 kB VmallocChunk: 34359397884 kB HardwareCorrupted: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 48704 kB DirectMap2M: 4136960 kB CPU:total 8 (4 cores per cpu, 2 threads per core) family 6 model 26 stepping 5, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, ht Memory: 4k page, physical 4048200k(106552k free), swap 7065596k(7065596k free) vm_info: OpenJDK 64-Bit Server VM (19.0-b09) for linux-amd64 JRE (1.6.0_20-b20), built on Dec 10 2010 19:45:55 by "buildd" with gcc 4.4.5 main.cpp: jobject toJava(std::auto_ptr<Exiv2::Value> v, const char * type, JNIEnv * env) { jclass stringClass; jmethodID cid; jobject result; stringClass = env->FindClass("photo/exiv2/Value"); cid = env->GetMethodID(stringClass, "<init>", "(Ljava/lang/String;Ljava/lang/Object;)V"); jvalue val; if ((strcmp(type, "String") == 0) || (strcmp(type, "String") == 0)) { val.l = env->NewStringUTF(v->toString().c_str()); } else if (strcmp(type, "Short") == 0) { val.s = v->toLong(0); } else if (strcmp(type, "Long") == 0) { val.j = v->toLong(0); } result = env->NewObject(stringClass, cid, env->NewStringUTF(v->toString().c_str()), val); return result; } void inLoop(std::auto_ptr<MetadataContainer> md, JNIEnv * env, jmethodID mid, jobject obj) { jvalue values[2]; const char* key = md->key().c_str(); values[0].l = env->NewStringUTF(key); /** md->value().toString().c_str(); const char* 
value = md->typeName(); values[1].l = env->NewStringUTF(value); TODO: do type conversions */ //std::cout << md->typeName() << std::endl; /** const char* type = md->value().toString().c_str(); values[1].l = env->NewStringUTF(type);*/ values[1].l = toJava(md->getValue(), md->typeName(), env); env->CallVoidMethodA(obj, mid, values); } void getVars(const char* path, JNIEnv * env, jobject obj) { //Load image Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path); assert(image.get() != 0); image->readMetadata(); //load method jclass cls = env->GetObjectClass(obj); jmethodID mid = env->GetMethodID(cls, "exiv2_reciveElement", "(Ljava/lang/String;Lphoto/exiv2/Value;)V"); //Load IPTC data /**loadIPTC(image, path, env, obj, mid); loadEXIF(image, path, env, obj, mid);*/ Exiv2::IptcData &iptcData = image->iptcData(); if (mid != NULL) { //is there any IPTC data AND check that method exists if (iptcData.empty()) { std::string error(path); error += ": failed loading IPTC data, there may not be any data"; } else { Exiv2::IptcData::iterator end = iptcData.end(); for (Exiv2::IptcData::iterator md = iptcData.begin(); md != end; ++md) { std::auto_ptr<MetadataContainer> meta(new MetadataContainer(md)); inLoop(meta, env, mid, obj); } } Exiv2::ExifData &exifData = image->exifData(); //is there any Exif data AND check that method exists if (exifData.empty()) { //error occurs here (main.cpp:146) std::string error(path); error += ": failed loading Exif data, there may not be any data"; } else { Exiv2::ExifData::iterator end = exifData.end(); for (Exiv2::ExifData::iterator md = exifData.begin(); md != end; ++md) { std::auto_ptr<MetadataContainer> meta(new MetadataContainer(md)); inLoop(meta, env, mid, obj); } } } else { std::string error(path); error += ": failed to load method"; } } JNIEXPORT void JNICALL Java_photo_exiv2_Exiv2MetaDataStore_impl_1loadFromExiv(JNIEnv * env, jobject obj, jstring path, jobject obj2) { const char* path2 = env->GetStringUTFChars(path, NULL); getVars(path2, env, obj); env->ReleaseStringUTFChars(path, path2); } Thanks for any help, HJED EDIT This is the output when runing the jvm with the -cacao option: run: null:/usr/local/lib Error: Directory Olympus2 with 1536 entries considered invalid; not read. LOG: [0x00007ff005376700] We received a SIGSEGV and tried to handle it, but we were LOG: [0x00007ff005376700] unable to find a Java method at: LOG: [0x00007ff005376700] LOG: [0x00007ff005376700] PC=0x00007feffe4ee67d LOG: [0x00007ff005376700] LOG: [0x00007ff005376700] Dumping the current stacktrace: at photo.exiv2.Exiv2MetaDataStore.impl_loadFromExiv(Ljava/lang/String;Lphoto/exiv2/Exiv2MetaDataStore;)V(Native Method) at photo.exiv2.Exiv2MetaDataStore.loadFromExiv2()V(Exiv2MetaDataStore.java:38) at photo.exiv2.Exiv2MetaDataStore.loadData()V(Exiv2MetaDataStore.java:29) at photo.exiv2.MetaDataStore.<init>(Lphoto/ImageFile;)V(MetaDataStore.java:33) at photo.exiv2.Exiv2MetaDataStore.<init>(Lphoto/ImageFile;)V(Exiv2MetaDataStore.java:20) at photo.ImageFile.<init>(Ljava/lang/String;)V(ImageFile.java:22) at test.Main.main([Ljava/lang/String;)V(Main.java:28) LOG: [0x00007ff005376700] vm_abort: WARNING, port me to C++ and use os::abort() instead. LOG: [0x00007ff005376700] Exiting... 
LOG: [0x00007ff005376700] Backtrace (15 stack frames): LOG: [0x00007ff005376700] /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/cacao/libjvm.so(+0x4ff54) [0x7ff004306f54] LOG: [0x00007ff005376700] /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/cacao/libjvm.so(+0x5ac01) [0x7ff004311c01] LOG: [0x00007ff005376700] /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/cacao/libjvm.so(+0x66e9a) [0x7ff00431de9a] LOG: [0x00007ff005376700] /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/cacao/libjvm.so(+0x76408) [0x7ff00432d408] LOG: [0x00007ff005376700] /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/cacao/libjvm.so(+0x79a4c) [0x7ff004330a4c] LOG: [0x00007ff005376700] /lib/libpthread.so.0(+0xfb40) [0x7ff004d53b40] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZNSt20_List_const_iteratorIN5Exiv29ExifdatumEEppEv+0xf) [0x7feffe4ee67d] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZSt10__distanceISt20_List_const_iteratorIN5Exiv29ExifdatumEEENSt15iterator_traitsIT_E15difference_typeES5_S5_St18input_iterator_tag+0x26) [0x7feffe4ee62a] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZSt8distanceISt20_List_const_iteratorIN5Exiv29ExifdatumEEENSt15iterator_traitsIT_E15difference_typeES5_S5_+0x36) [0x7feffe4ee567] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZNKSt4listIN5Exiv29ExifdatumESaIS1_EE4sizeEv+0x33) [0x7feffe4ee22b] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZNK5Exiv28ExifData5countEv+0x18) [0x7feffe4ee054] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZNK5Exiv28ExifData5emptyEv+0x18) [0x7feffe4ee034] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_Z7getVarsPKcP7JNIEnv_P8_jobject+0x3d7) [0x7feffe4ed947] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(Java_photo_exiv2_Exiv2MetaDataStore_impl_1loadFromExiv+0x4b) [0x7feffe4edcdc] LOG: [0x00007ff005376700] [0x7feffe701ccd] Java Result: 134 BUILD SUCCESSFUL (total time: 0 seconds)
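    Something that may help narrow this down: the same Exiv2 calls can be driven from a tiny standalone C++ program with no JVM involved. The sketch below is purely diagnostic and assumes it is built against the same Exiv2 install that libExiff2-binding.so links against. If it crashes too, the problem is on the native side (for example a header/library mismatch between /usr/local/include/exiv2 and the library actually loaded); if it runs cleanly, the JNI glue and object lifetimes become the prime suspects.

    #include <exiv2/image.hpp>
    #include <exiv2/exif.hpp>
    #include <cassert>
    #include <iostream>

    int main(int argc, char** argv) {
        assert(argc > 1);
        // Same sequence of calls as getVars() in main.cpp, minus the JNI plumbing.
        Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(argv[1]);
        assert(image.get() != 0);
        image->readMetadata();

        Exiv2::ExifData& exifData = image->exifData();
        std::cout << "empty=" << exifData.empty()
                  << " count=" << exifData.count() << std::endl;   // count() is where the JNI build dies
        for (Exiv2::ExifData::iterator md = exifData.begin(); md != exifData.end(); ++md) {
            std::cout << md->key() << " = " << md->toString() << std::endl;
        }
        return 0;
    }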

    Read the article

  • Client no longer getting data from Web Service after introducing targetNamespace in XSD

    - by Laurence
    Sorry if there is way too much info in this post – there’s a load of story before I get to the actual problem. I thought I‘d include everything that might be relevant as I don’t have much clue what is wrong. I had a working web service and client (both written with VS 2008 in C#) for passing product data to an e-commerce site. The XSD started like this: <xs:schema id="Ecommerce" elementFormDefault="qualified" xmlns:mstns="http://tempuri.org/Ecommerce.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="eur"> <xs:complexType> <xs:sequence> <xs:element ref="sec" minOccurs="1" maxOccurs="1"/> </xs:sequence> etc Here’s a sample document sent from client to service: <eur xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T17:16:34.523" version="1.1"> <sec guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1"> <data /> </sec> </eur> Then, I had to give the service a targetNamespace. Actually I don’t know if I “had” to set it, but I added (to the same VS project) some code to act as a client to a completely unrelated service (which also had no namespace), and the project would not build until I gave my service a namespace. Now the XSD starts like this: <xs:schema id="Ecommerce" elementFormDefault="qualified" xmlns:mstns="http://tempuri.org/Ecommerce.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.company.com/ecommerce" xmlns:ecom="http://www. company.com/ecommerce"> <xs:element name="eur"> <xs:complexType> <xs:sequence> <xs:element ref="ecom:sec" minOccurs="1" maxOccurs="1" /> </xs:sequence> etc As you can see above I also updated all the xs:element ref attributes to give them the “ecom” prefix. Now the project builds again. I found the client needed some modification after this. The client uses a SQL stored procedure to generate the XML. This is then de-serialised into an object of the correct type for the service’s “get_data” method. The object’s type used to be “eur” but after updating the web reference to the service, it became “get_dataEur”. And sure enough the parent element in the XML had to be changed to “get_dataEur” to be accepted. Then bizarrely I also had to put the xmlns attribute containing my namespace on the “sec” element (the immediate child of the parent element) rather than the parent element. 
Here’s a sample document now sent from client to service: <get_dataEur xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:23:20.653" version="1.1"> <sec xmlns="http://www.company.com/ecommerce" guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1"> <data /> </sec> </get_dataEur> If in the service’s get_data method I then serialize the incoming object I see this (the parent element is “eur” and the xmlns attribute is on the parent element): <eur xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://www.company.com/ecommerce" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:23:20.653" version="1.1"> <sec guid="BFBACB3C-4C17-4786-ACCF-96BFDBF32DA5" company_name="Company" version="1.1"> <data /> </sec> </eur> The service then prepares a reply to go back to the client. The XML looks like this (the important data being sent back is the date_stamp attribute in the last_sent element): <eur xmlns="http://www.company.com/ecommerce" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:22:57.530" version="1.1"> <sec version="1.1" xmlns=""> <data> <last_sent date_stamp="2010-02-25T15:15:10.193" /> </data> </sec> </eur> Now finally, here’s the problem!!! The client does not see any data – all it sees is the parent element with nothing inside it. If I serialize the reply object in the client code it looks like this: <get_dataResponseEur xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" class="ECommerce_WebService" type="product" method="GetLastDateSent" chunk_no="1" total_chunks="1" date_stamp="2010-03-10T18:22:57.53" version="1.1" /> So, my questions are: why isn’t my client seeing the contents of the reply document? how do I fix it? why do I have to put the xmlns attribute on a child element rather than the parent element in the outgoing document? Here’s a bit more possibly relevant info: The client code (pre-namespace) called the service method like this: XmlSerializer serializer = new XmlSerializer(typeof(eur)); XmlReader reader = xml.CreateReader(); eur eur = (eur)serializer.Deserialize(reader); service.Credentials = new NetworkCredential(login, pwd); service.Url = url; rc = service.get_data(ref eur); After the namespace was added I had to change it to this: XmlSerializer serializer = new XmlSerializer(typeof(get_dataEur)); XmlReader reader = xml.CreateReader(); get_dataEur eur = (get_dataEur)serializer.Deserialize(reader); get_dataResponseEur eur1 = new get_dataResponseEur(); service.Credentials = new NetworkCredential(login, pwd); service.Url = url; rc = service.get_data(eur, out eur1);
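    One plausible reading of these symptoms (a guess, not a verified diagnosis): with a targetNamespace and elementFormDefault="qualified", every child element such as <sec> has to be serialized in http://www.company.com/ecommerce, yet the reply shown above carries <sec ... xmlns="">, i.e. no namespace at all. The client-side XmlSerializer then sees an element it does not recognise, silently ignores it, and the deserialized get_dataResponseEur comes back empty. Which namespace each element ends up in is controlled by the serialization attributes on the generated proxy types, roughly as in the sketch below (class and member names only mirror the question; the real WSDL-generated code will differ):

    using System.Xml.Schema;
    using System.Xml.Serialization;

    [XmlType(Namespace = "http://www.company.com/ecommerce")]
    public class sec
    {
        [XmlAttribute("guid")] public string Guid;
        [XmlAttribute("company_name")] public string CompanyName;
    }

    [XmlRoot("eur", Namespace = "http://www.company.com/ecommerce")]
    public class eur
    {
        // Form = Qualified keeps <sec> in the target namespace; if the attribute metadata and the
        // schema disagree, the element is written as <sec xmlns=""> and the other side drops it.
        [XmlElement("sec", Form = XmlSchemaForm.Qualified,
            Namespace = "http://www.company.com/ecommerce")]
        public sec Sec;
    }

    Comparing the attributes on the regenerated proxy classes (client and service) against the schema is one cheap way to see where the namespaces drift apart.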

    Read the article

  • Odd behavior when recursively building a return type for variadic functions

    - by Dennis Zickefoose
    This is probably going to be a really simple explanation, but I'm going to give as much backstory as possible in case I'm wrong. Advanced apologies for being so verbose. I'm using gcc4.5, and I realize the c++0x support is still somewhat experimental, but I'm going to act on the assumption that there's a non-bug related reason for the behavior I'm seeing. I'm experimenting with variadic function templates. The end goal was to build a cons-list out of std::pair. It wasn't meant to be a custom type, just a string of pair objects. The function that constructs the list would have to be in some way recursive, with the ultimate return value being dependent on the result of the recursive calls. As an added twist, successive parameters are added together before being inserted into the list. So if I pass [1, 2, 3, 4, 5, 6] the end result should be {1+2, {3+4, 5+6}}. My initial attempt was fairly naive. A function, Build, with two overloads. One took two identical parameters and simply returned their sum. The other took two parameters and a parameter pack. The return value was a pair consisting of the sum of the two set parameters, and the recursive call. In retrospect, this was obviously a flawed strategy, because the function isn't declared when I try to figure out its return type, so it has no choice but to resolve to the non-recursive version. That I understand. Where I got confused was the second iteration. I decided to make those functions static members of a template class. The function calls themselves are not parameterized, but instead the entire class is. My assumption was that when the recursive function attempts to generate its return type, it would instantiate a whole new version of the structure with its own static function, and everything would work itself out. The result was: "error: no matching function for call to BuildStruct<double, double, char, char>::Go(const char&, const char&)" The offending code: static auto Go(const Type& t0, const Type& t1, const Types&... rest) -> std::pair<Type, decltype(BuildStruct<Types...>::Go(rest...))> My confusion comes from the fact that the parameters to BuildStruct should always be the same types as the arguments sent to BuildStruct::Go, but in the error code Go is missing the initial two double parameters. What am I missing here? If my initial assumption about how the static functions would be chosen was incorrect, why is it trying to call the wrong function rather than just not finding a function at all? It seems to just be mixing types willy-nilly, and I just can't come up with an explanation as to why. If I add additional parameters to the initial call, it always burrows down to that last step before failing, so presumably the recursion itself is at least partially working. This is in direct contrast to the initial attempt, which always failed to find a function call right away. Ultimately, I've gotten past the problem, with a fairly elegant solution that hardly resembles either of the first two attempts. So I know how to do what I want to do. I'm looking for an explanation for the failure I saw. Full code to follow since I'm sure my verbal description was insufficient. First some boilerplate, if you feel compelled to execute the code and see it for yourself. Then the initial attempt, which failed reasonably, then the second attempt, which did not. 
#include <iostream> using std::cout; using std::endl; #include <utility> template<typename T1, typename T2> std::ostream& operator <<(std::ostream& str, const std::pair<T1, T2>& p) { return str << "[" << p.first << ", " << p.second << "]"; } //Insert code here int main() { Execute(5, 6, 4.3, 2.2, 'c', 'd'); Execute(5, 6, 4.3, 2.2); Execute(5, 6); return 0; } Non-struct solution: template<typename Type> Type BuildFunction(const Type& t0, const Type& t1) { return t0 + t1; } template<typename Type, typename... Rest> auto BuildFunction(const Type& t0, const Type& t1, const Rest&... rest) -> std::pair<Type, decltype(BuildFunction(rest...))> { return std::pair<Type, decltype(BuildFunction(rest...))> (t0 + t1, BuildFunction(rest...)); } template<typename... Types> void Execute(const Types&... t) { cout << BuildFunction(t...) << endl; } Resulting errors: test.cpp: In function 'void Execute(const Types& ...) [with Types = {int, int, double, double, char, char}]': test.cpp:33:35: instantiated from here test.cpp:28:3: error: no matching function for call to 'BuildFunction(const int&, const int&, const double&, const double&, const char&, const char&)' Struct solution: template<typename... Types> struct BuildStruct; template<typename Type> struct BuildStruct<Type, Type> { static Type Go(const Type& t0, const Type& t1) { return t0 + t1; } }; template<typename Type, typename... Types> struct BuildStruct<Type, Type, Types...> { static auto Go(const Type& t0, const Type& t1, const Types&... rest) -> std::pair<Type, decltype(BuildStruct<Types...>::Go(rest...))> { return std::pair<Type, decltype(BuildStruct<Types...>::Go(rest...))> (t0 + t1, BuildStruct<Types...>::Go(rest...)); } }; template<typename... Types> void Execute(const Types&... t) { cout << BuildStruct<Types...>::Go(t...) << endl; } Resulting errors: test.cpp: In instantiation of 'BuildStruct<int, int, double, double, char, char>': test.cpp:33:3: instantiated from 'void Execute(const Types& ...) [with Types = {int, int, double, double, char, char}]' test.cpp:38:41: instantiated from here test.cpp:24:15: error: no matching function for call to 'BuildStruct<double, double, char, char>::Go(const char&, const char&)' test.cpp:24:15: note: candidate is: static std::pair<Type, decltype (BuildStruct<Types ...>::Go(BuildStruct<Type, Type, Types ...>::Go::rest ...))> BuildStruct<Type, Type, Types ...>::Go(const Type&, const Type&, const Types& ...) [with Type = double, Types = {char, char}, decltype (BuildStruct<Types ...>::Go(BuildStruct<Type, Type, Types ...>::Go::rest ...)) = char] test.cpp: In function 'void Execute(const Types& ...) [with Types = {int, int, double, double, char, char}]': test.cpp:38:41: instantiated from here test.cpp:33:3: error: 'Go' is not a member of 'BuildStruct<int, int, double, double, char, char>'
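    For completeness, one shape the working solution can take (a sketch only, not necessarily the code the poster ended up with): compute the return type with a small recursive trait first, so neither function ever has to name itself — or a not-yet-declared sibling — inside its own declaration.

    #include <utility>

    template<typename... Types> struct BuildResult;                 // primary template, left undefined

    template<typename Type>
    struct BuildResult<Type, Type> { typedef Type type; };          // base case: two values collapse into one

    template<typename Type, typename... Rest>
    struct BuildResult<Type, Type, Rest...> {                       // recursive case: pair of the sum and the rest
        typedef std::pair<Type, typename BuildResult<Rest...>::type> type;
    };

    template<typename Type>
    Type Build(const Type& t0, const Type& t1) { return t0 + t1; }

    template<typename Type, typename... Rest>
    typename BuildResult<Type, Type, Rest...>::type
    Build(const Type& t0, const Type& t1, const Rest&... rest) {
        typedef typename BuildResult<Type, Type, Rest...>::type Result;
        return Result(t0 + t1, Build(rest...));                     // both overloads are declared by now
    }

    // Dropped into the boilerplate above in place of BuildFunction, Execute(5, 6, 4.3, 2.2, 'c', 'd')
    // yields a std::pair<int, std::pair<double, char>>, matching the {1+2, {3+4, 5+6}} layout described.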

    Read the article

  • Save object states in .data or attr - Performance vs CSS?

    - by Neysor
    In response to my answer yesterday about rotating an Image, Jamund told me to use .data() instead of .attr(). First I thought he was right, but then I thought about a bigger context... Is it always better to use .data() instead of .attr()? I looked in some other posts like what-is-better-data-or-attr or jquery-data-vs-attrdata. The answers were not satisfactory for me... So I moved on and edited the example by adding CSS. I thought it might be useful to apply a different style to each image when it rotates. My style was the following: .rp[data-rotate="0"] { border:10px solid #FF0000; } .rp[data-rotate="90"] { border:10px solid #00FF00; } .rp[data-rotate="180"] { border:10px solid #0000FF; } .rp[data-rotate="270"] { border:10px solid #00FF00; } Because design and coding are often separated, it could be a nice feature to handle this in CSS instead of adding this functionality into JavaScript. Also in my case the data-rotate is like a special state which the image currently has. So in my opinion it makes sense to represent it within the DOM. I also thought this could be a case where it is much better to save with .attr() than with .data(). Never mentioned before in one of the posts I read. But then I thought about performance. Which function is faster? I built my own test, as follows: <!DOCTYPE HTML> <html> <head> <title>test</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script> <script type="text/javascript"> function runfirst(dobj,dname){ console.log("runfirst "+dname); console.time(dname+"-attr"); for(i=0;i<10000;i++){ dobj.attr("data-test","a"+i); } console.timeEnd(dname+"-attr"); console.time(dname+"-data"); for(i=0;i<10000;i++){ dobj.data("data-test","a"+i); } console.timeEnd(dname+"-data"); } function runlast(dobj,dname){ console.log("runlast "+dname); console.time(dname+"-data"); for(i=0;i<10000;i++){ dobj.data("data-test","a"+i); } console.timeEnd(dname+"-data"); console.time(dname+"-attr"); for(i=0;i<10000;i++){ dobj.attr("data-test","a"+i); } console.timeEnd(dname+"-attr"); } $().ready(function() { runfirst($("#rp4"),"#rp4"); runfirst($("#rp3"),"#rp3"); runlast($("#rp2"),"#rp2"); runlast($("#rp1"),"#rp1"); }); </script> </head> <body> <div id="rp1">Testdiv 1</div> <div id="rp2" data-test="1">Testdiv 2</div> <div id="rp3">Testdiv 3</div> <div id="rp4" data-test="1">Testdiv 4</div> </body> </html> It should also show if there is a difference with a predefined data-test or not. One result was this: runfirst #rp4 #rp4-attr: 515ms #rp4-data: 268ms runfirst #rp3 #rp3-attr: 505ms #rp3-data: 264ms runlast #rp2 #rp2-data: 260ms #rp2-attr: 521ms runlast #rp1 #rp1-data: 284ms #rp1-attr: 525ms So the .attr() function always needed more time than the .data() function. This is an argument for .data(), I thought. Because performance is always an argument! Then I wanted to post my results here with some questions, and in the act of writing I compared with the questions Stack Overflow showed me (similar titles). And true enough, there was one interesting post about performance. I read it and ran their example. And now I am confused! That test showed that .data() is slower than .attr()!?!! Why is that so? First I thought it was because of a different jQuery library, so I edited it and saved the new one. But the result wasn't changing... So now my questions to you: Why are there some differences in the performance in these two examples? Would you prefer to use data- HTML5 attributes instead of data, if it represents a state? 
Although it wouldn't be needed at the time of coding? Why, or why not? Now, regarding performance: would performance be an argument for you to use .attr() instead of .data(), if it shows that .attr() is better? Although data is meant to be used for .data()? UPDATE 1: I did see that without overhead .data() is much faster. Misinterpreted the data :) But I'm more interested in my second question. :) Would you prefer to use data- HTML5 attributes instead of .data(), if it represents a state? Although it wouldn't be needed at the time of coding? Why, or why not? Are there some other reasons you can think of to use .attr() and not .data()? E.g. interoperability? Because .data() is jQuery-specific, while HTML attributes can be read by everything... UPDATE 2: As we see from T.J. Crowder's speed test in his answer, attr is much faster than data, which again confuses me :) But please! Performance is an argument, but not the highest! So please answer my other questions too!
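    For the CSS-driven styling described above, the practical difference is not only speed: jQuery's .data() writes to an internal cache and never touches the DOM, so the [data-rotate="..."] rules in the stylesheet never see the new state, while .attr() updates the attribute those selectors actually match on. A small sketch, assuming the .rp images and CSS from the rotation example:

    var $img = $('.rp').first();

    $img.data('rotate', 90);        // fast, but the element still carries data-rotate="0" in the DOM,
                                    // so the [data-rotate="90"] border rule does not kick in

    $img.attr('data-rotate', 90);   // slower per the timings above, but the stylesheet reacts immediately

    // one possible compromise: keep the working value in .data() and mirror it to the attribute
    // only when the CSS actually needs to respond to the state change
    $img.data('rotate', 180).attr('data-rotate', 180);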

    Read the article

< Previous Page | 116 117 118 119 120 121  | Next Page >