Search Results

Search found 23427 results on 938 pages for 'christopher done'.

  • Bargain Hunter Round Up – Kicking Off The E-Commerce Holiday Season

    - by Jeri Kelley
    Everyone has a different way to tackle holiday shopping – Black Friday, Small Business Saturday, Cyber Monday; some have it done months in advance, and others wait until the very last minute. For me, I’m not big into massive crowds, so online shopping to the rescue. Others thrive on the energy of being in the stores on the busiest shopping day of the year. With last weekend marking the official kick-off to the holiday season, I thought I’d provide a round-up of what’s trending. Online numbers are looking up: according to comScore, $16.4 billion has been spent online for the holiday season-to-date, marking a 16-percent increase versus the corresponding days last year. Thanksgiving Day – why wait until Black Friday or Cyber Monday: online shopping on Thanksgiving Day also increased, totaling $633 million in receipts, a 32 percent increase over Thanksgiving 2011. Black Friday – more than just in-store: bargain hunters spent $1.042 billion online the day after Thanksgiving, a 26 percent increase over last year's Black Friday, according to new figures released today by market analyst comScore. Cyber Monday week: Cyber Monday reached $1.465 billion in online spending, up 17 percent versus a year ago, representing the heaviest online spending day in history and the second day this season (in addition to Black Friday) to surpass $1 billion in sales. Cyber Monday is now being dubbed Cyber Week: “The annual event is increasingly becoming Cyber Week instead of a one-day event as retailers open their arms for Americans who prefer to avoid crowds and compare prices online.” But Cyber Monday continues its importance, driving a nearly 22% increase in year-over-year (YoY) online sales; Monday sales beat Sunday, the next highest day, by a margin of 26.7%. Mobile shopping continues to rise: ChannelAdvisor said mobile shopping made up 32% of all online spending over the Black Friday weekend. Mobile devices were a key part of the online shopping craziness that was November 26th, with sales from smartphones and tablets doubling this year (110% growth in tablets and 100% in smartphones), and mobile bar code scans on Black Friday increased 50 percent, according to a report from ScanLife. For more on how you can be ready for the holiday season, check out my blog post on commerce strategies for the holidays.

    Read the article

  • Ask the Readers: Do You Use the Command Line?

    - by Asian Angel
    Most people have heard of it but not everyone is familiar or comfortable with how to use this bastion of geekdom. This week we would like to know if you use the command line or not. The command line…the bastion of ultimate geekery in many people’s eyes. You often hear people referring to doing things using the command line, so there must be something to it, right? For some people using the command line is the best, most efficient, and easiest way to do things on their systems. These are the people that many of us wish we were like. Next you have those who are proficient at using the command line but do not rely on it for everything they do on their systems. Then there are people who know how to perform some tasks or hacks using the command line but may not be as comfortable or knowledgeable as they wish to be using it. Moving on you find those who are interested in learning how to use the command line and just need a small push to get started. Perhaps you feel too intimidated to learn it and just need the right opportunity to come along. And maybe you do not care one way or the other so long as you get done what you want to do on your system. Or you may prefer to simply use a graphical interface since that is quicker and easier for you (along with being familiar). You can find the whole range of people when it comes to using the command line… This week we would like to know if you use the command line or not. What command line category do you fit into? Power user? Casual usage? Totally lost? Let us know in the comments!

    Read the article

  • Microsoft, please help me diagnose TFS Administration permission issues!

    - by Martin Hinshelwood
    I recently had a fun time trying to debug a permission issue I ran into using TFS 2010’s TfsConfig. Update 5th March 2010 – In its style of true excellence, my company has added this rant to its “Suggestions for Better TFS”. <rant> I was trying to run the TfsConfig tool and I kept getting the message: “TF55038: You don't have sufficient privileges to run this tool. Contact your Team Foundation system administrator." This message made me think that it was something to do with the install permissions, as it is always recommended to use a single account to do every install of TFS. I did not install the original TFS on our network and my account was not used to do the TFS 2010 install. But I did do the upgrade from 2010 beta 2 to 2010 RC with my current account. So I proceeded to do some checking: Am I in the administrators group on the server? Figure: Yes, I am in the administrators group on the server. Am I in the Administration Console users list? Figure: Yes, I am in the Administration Console users list. Have I reapplied the permissions in the Administration Console users list, ticking all the options? Figure: Make sure you check all of the boxes if you want to have all the admin options. Figure: Yes, I have made sure that all my options are correct. Am I in the Team Foundation administrators group? Figure: Yes, I am in the Team Foundation Administrators group. Is my account explicitly SysAdmin on the Database server? Figure: Yes, I do have explicit SysAdmin on the database. Can you guess what the problem was? The command line window was not running as the administrator! As with most other applications there should be an explicit error message that states: "You are not currently running in administrator mode; please restart the command line with elevated privileges!" This would have saved me 30 minutes, although I agree that I should change my name to Muppet and just be done with it. </rant>   Technorati Tags: Visual Studio ALM,Administration,Team Foundation Server Admin Console,TFS Admin Console
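
    As an aside, the "run elevated" lesson generalizes: a script or tool wrapper can check for elevation up front instead of failing with a cryptic error. The snippet below is a minimal, hypothetical Python sketch of such a check on Windows; it is my own illustration and not part of TfsConfig or this post.

        # Hypothetical helper: refuse to continue unless the process is elevated.
        import ctypes
        import sys

        def is_elevated():
            try:
                # Windows-only API; returns nonzero when running as administrator.
                return bool(ctypes.windll.shell32.IsUserAnAdmin())
            except AttributeError:
                return False  # non-Windows platform, no UAC concept

        if __name__ == "__main__":
            if not is_elevated():
                sys.exit("You are not currently running in administrator mode; "
                         "please restart from an elevated command line.")
            print("Elevated - safe to run admin tooling from here.")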

    Read the article

  • SQL Server – Learning SQL Server Performance: Indexing Basics – Video

    - by pinaldave
    Today I remembered one of my older cartoons, created years ago for Indexing and Performance. Every single time Performance is discussed, Indexes are mentioned along with it. In recent times, data and application complexity is continuously growing. The demand for faster query response, performance, and scalability by organizations is increasing, and developers and DBAs now need to write efficient code to achieve this. DBA and Developers A DBA’s role is critical, because a production environment has to run 24×7; hence maintenance, troubleshooting, and quick resolutions are the need of the hour. The first baby step into any performance tuning exercise in SQL Server involves creating, analysing, and maintaining indexes. Though we have learnt indexing concepts from our college days, indexing implementation inside SQL Server can vary. Understanding this behaviour and designing our applications appropriately will make sure the application performs to its highest potential. Video Learning Vinod Kumar and I often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of the index; however, there is some minimum expertise one should gain if performance is one of the concerns. We decided to build a course which addresses just the practical aspects of performance. In this course, we explored some of these indexing fundamentals and elaborated on how SQL Server goes about using indexes. At the end of this course you will know the basic structure of indexes, practical insights into implementation, and maintenance tips and tricks revolving around indexes. Finally, we will introduce SQL Server 2012 columnstore indexes. We have refrained from discussing the internal storage structure of indexes but have taken a more practical, demo-oriented approach to explain these core concepts. Course Outline Here are the salient topics of the course. We have explained every single concept along with a practical demonstration, and additionally shared our personal scripts along with the same. Introduction Fundamentals of Indexing Index Fundamentals Index Fundamentals – Visual Representation Practical Indexing Implementation Techniques Primary Key Over Indexing Duplicate Index Clustered Index Unique Index Included Columns Filtered Index Disabled Index Index Maintenance and Defragmentation Introduction to Columnstore Index Indexing Practical Performance Tips and Tricks Index and Page Types Index and Non Deterministic Columns Index and SET Values Importance of Clustered Index Effect of Compression and Fillfactor Index and Functions Dynamic Management Views (DMV) – Fillfactor Table Scan, Index Scan and Index Seek Index and Order of Columns Final Checklist: Index and Performance Well, we believe we have done our part; now we are waiting for your comments and feedback. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video

    Read the article

  • USB packets - receive wrong data

    - by regorianer
    I have a little Python script which shows me the packets of an EnOcean device and does some events depending on the packet type. Unfortunately it doesn't work because I'm getting wrong packets. Parts of the Python script (using pySerial): ser = serial.Serial('/dev/ttyUSB1',57600,bytesize = serial.EIGHTBITS,timeout = 1, parity = serial.PARITY_NONE , rtscts = 0) print 'clearing buffer' s = ser.read(10000) print 'start read' while 1: s = ser.read(1) for character in s: sys.stdout.write(" %s" % character.encode('hex')) print 'end' ser.close() output baudrate 57600: e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00 e0 e0 e0 e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00 e0 e0 e0 output baudrate 9600: a5 5a 0b 05 10 00 00 00 00 15 c4 56 20 6f a5 5a 0b 05 00 00 00 00 00 15 c4 56 20 5f linux terminal baudrate 57600: $stty -F /dev/ttyUSB1 57600 $stty < /dev/ttyUSB1 speed 57600 baud; line = 0; eof = ^A; min = 0; time = 0; -brkint -icrnl -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke $while (true) do cat -A /dev/ttyUSB1 ; done > myfile $hexdump -C myfile 00000000 4d 2d 60 4d 2d 60 5e 40 4d 2d 60 5e 40 4d 2d 60 |M-M-^@M-^@M-| 00000010 4d 2d 60 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 4d 2d |M-M-M-M-^@M-| 00000020 60 4d 2d 60 5e 40 5e 40 5e 40 5e 40 5e 40 5e 40 |M-^@^@^@^@^@^@| 00000030 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 5e 40 5e |^@M-M-M-`^@^@^| 00000040 40 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 |@^@M-M-M-`| 0000004c linux terminal baudrate 9600: $hexdump -C myfile2 00000000 5e 40 5e 55 4d 2d 44 56 30 4d 2d 3f 5e 40 5e 40 |^@^UM-DV0M-?^@^@| 00000010 5e 55 4d 2d 44 56 20 5f |^UM-DV _| 00000018 the specification says: 0x55 sync byte (1st byte), 0xNNNN data length bytes (2 bytes), 0x07 opt length byte, 0x01 type byte, CRC, data, opt data and again CRC, but I'm not getting this packet structure. The output of the Python script differs from the one I get via the terminal. I also wrote the Python part in C, but the output is the same as with Python. A BSC-BoR USB Receiver/Sender is used as the USB receiver, and the EnOcean device is a simple button.
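
    For reference, here is a minimal Python 3 sketch (my own illustration, not the asker's script) of reading frames that follow the 0x55-sync structure quoted above: sync byte, two length bytes, optional-data length, packet type, then CRC, data, optional data and a final CRC. The framing is an assumption based on the specification excerpt in the question, and the CRC bytes are read but not verified.

        import serial  # pySerial

        def read_frame(ser):
            """Resynchronise on 0x55 and return (packet_type, data, optional_data)."""
            while True:
                b = ser.read(1)
                if not b:
                    return None              # timeout, no frame seen
                if b[0] == 0x55:             # sync byte from the quoted spec
                    break
            header = ser.read(4)             # data length (2), opt length (1), type (1)
            if len(header) < 4:
                return None
            data_len = (header[0] << 8) | header[1]
            opt_len, pkt_type = header[2], header[3]
            ser.read(1)                      # header CRC (not verified here)
            payload = ser.read(data_len + opt_len)
            ser.read(1)                      # data CRC (not verified here)
            return pkt_type, payload[:data_len], payload[data_len:]

        ser = serial.Serial('/dev/ttyUSB1', 57600, bytesize=serial.EIGHTBITS,
                            parity=serial.PARITY_NONE, rtscts=False, timeout=1)
        frame = read_frame(ser)
        if frame:
            pkt_type, data, opt = frame
            print("type=%02x data=%s opt=%s" % (pkt_type, data.hex(), opt.hex()))
        ser.close()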

    Read the article

  • Naming PowerPoint Components With A VSTO Add-In

    - by Tim Murphy
    Note: Cross posted from Coding The Document. Sometimes in order to work with Open XML we need a little help from other tools. In this post I am going to describe a fairly simple solution for marking up PowerPoint presentations so that they can be used as templates and processed using the Open XML SDK. Add-ins are tools which can be hard to find information on. I am going to up the obscurity by adding a Ribbon button. For my example I am using Visual Studio 2008 and creating a PowerPoint 2007 Add-in project. To that add a Ribbon Visual Designer. The new ribbon by default will show up on the Add-in tab. Add a button to the ribbon. Also add a WinForm to collect a new name for the object selected. Make sure to set the OK button’s DialogResult to OK. In the ribbon button click event add the following code. ObjectNameForm dialog = new ObjectNameForm(); Selection selection = Globals.ThisAddIn.Application.ActiveWindow.Selection;   dialog.objectName = selection.ShapeRange.Name;   if (dialog.ShowDialog() == DialogResult.OK) { selection.ShapeRange.Name = dialog.objectName; } This code will first read the current Name attribute of the Shape object. If the user clicks OK on the dialog it saves the string value back to the same place. Once it is done you can identify the control through Open XML via the NonVisualDrawingProperties objects. The only problem is that this object is a child of several different classes. This means that there isn’t just one way to retrieve the value. Below are a couple of pieces of code to identify the container that you have named. The first example is if you are naming placeholders in a layout slide. foreach(var slideMasterPart in slideMasterParts) { var layoutParts = slideMasterPart.SlideLayoutParts; foreach(SlideLayoutPart slideLayoutPart in layoutParts) { foreach (assmPresentation.Shape shape in slideLayoutPart.SlideLayout.CommonSlideData.ShapeTree.Descendants<assmPresentation.Shape>()) { var slideMasterProperties = from p in shape.Descendants<assmPresentation.NonVisualDrawingProperties>() where p.Name == TokenText.Text select p;   if (slideMasterProperties.Count() > 0) tokenFound = true; } } } The second example allows you to find charts that you have named with the add-in. foreach(var slidePart in slideParts) { foreach(assmPresentation.Shape slideShape in slidePart.Slide.CommonSlideData.ShapeTree.Descendants<assmPresentation.Shape>()) { var slideProperties = from g in slidePart.Slide.Descendants<GraphicFrame>() where g.NonVisualGraphicFrameProperties.NonVisualDrawingProperties.Name == TokenText.Text select g;   if(slideProperties.Count() > 0) { tokenFound = true; } } } Together, Open XML and VSTO add-ins make a powerful combination for creating a process to maintain a template and generate documents from it.
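
    As a rough way to double-check the names outside of VSTO and the Open XML SDK, the python-pptx library can be used to list the same underlying Name attribute on each shape. This is a different toolset than the one used in the post, shown purely as an illustrative assumption; the file name is hypothetical.

        from pptx import Presentation

        # List every shape name in a deck so you can confirm the add-in's renaming worked.
        prs = Presentation("template.pptx")   # hypothetical file name
        for slide_number, slide in enumerate(prs.slides, start=1):
            for shape in slide.shapes:
                # shape.name surfaces the Name attribute stored in the slide XML
                print(slide_number, shape.shape_type, shape.name)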

    Read the article

  • Manage Files Easier With Aero Snap in Windows 7

    - by Mysticgeek
    Before the days of Aero Snap you would need to arrange your windows in some weird way to see all of your files. Today we show you how to quickly use the Aero Snap feature to get it done in a few keystrokes in Windows 7. You can of course navigate the windows in Explorer to get them so you can see everything side by side, or use a free utility like Cubic Explorer.   Getting Explorer Windows Side by Side The process is actually simple but quite useful when looking for a large amount of data. Right-click the Windows Explorer icon on the taskbar and click Windows Explorer. Our first window opens up and you can certainly drag it over to the right or left side of the screen, but the quickest method we’re using is the “Windows Key+Right Arrow” key combo (make sure to hold the Windows key down). Now the window is nicely placed on the right side. Next we want to open the other window; simply right-click the Explorer icon again and click Windows Explorer.   Now we have our second window open, and all we need to do this time is use the Windows Key+Left Arrow combination. There we go! Now you should be able to browse your files a lot more simply than relying on the expanding tree method (as much). You can actually use this method to snap a window to all four corners of your screen if you don’t feel like dragging it. Once you play with Aero Snap more you may enjoy it, but if you still despise it, you can disable it too!

    Read the article

  • SQL SERVER – Difference Between DATETIME and DATETIME2

    - by pinaldave
    Yesterday I wrote a very quick blog post on SQL SERVER – Difference Between GETDATE and SYSDATETIME and I got a tremendous response to it. I suggest you read that blog post before continuing with this one. I had asked people to honestly take part and share their views about the above two system functions. There were a few emails as well as a few comments on the blog post asking how I came to know the difference between the two. The answer is real-world issues. I was called in for a performance tuning consultancy where one developer asked me a very strange question. Here is the situation he was facing. The system had a single table with two different datetime columns. One column was datelastmodified and the second column was datefirstmodified. One of the columns was DATETIME and the other was DATETIME2. The developer was populating both of them with SYSDATETIME. He was always thinking that the value inserted in the table would be the same. This table was only accessed by INSERT statements and there were no updates done to it in the application. One fine day he ran DISTINCT on both of these columns and was in for a surprise. He always thought that both of the columns would have the same data, but in fact they had very different data. He presented this scenario to me. I said this cannot be possible, but when I looked at the resultset, I had to agree with him. Here is the simple script generated to demonstrate the problem he was facing. This is just a sample of the original table. DECLARE @Intveral INT SET @Intveral = 10000 CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2) WHILE (@Intveral > 0) BEGIN INSERT #TimeTable (FirstDate, LastDate) VALUES (SYSDATETIME(), SYSDATETIME()) SET @Intveral = @Intveral - 1 END GO SELECT COUNT(DISTINCT FirstDate) D_GETDATE, COUNT(DISTINCT LastDate) D_SYSGETDATE FROM #TimeTable GO SELECT DISTINCT a.FirstDate, b.LastDate FROM #TimeTable a INNER JOIN #TimeTable b ON a.FirstDate = b.LastDate GO SELECT * FROM #TimeTable GO DROP TABLE #TimeTable GO Let us see the resultset. You can clearly see from the result that SYSDATETIME() does not populate the same value in both of the fields. In fact the value is either rounded down or rounded up in the field which is DATETIME. Even though we are populating the same value, the values are totally different in both columns, causing the SELF JOIN to fail and display different DISTINCT values. The best policy is: if you are using DATETIME use GETDATE(), and if you are using DATETIME2 use SYSDATETIME() to populate them with the current date and time to accurately address the precision. As DATETIME2 was introduced in SQL Server 2008, the above script will only work with SQL Server 2008 and later versions. I hope I have answered a few of the questions asked yesterday. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
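
    The same effect is easy to reproduce outside SQL Server. The sketch below is a Python analogy of my own (not part of the original demo): store one copy of a timestamp at reduced precision, roughly mimicking DATETIME's ~1/300-second rounding, and the equality comparison fails just like the self join above. The rounding function is illustrative only.

        from datetime import datetime

        def round_like_datetime(ts):
            # Rough stand-in for DATETIME's ~1/300 second precision (illustration only).
            ticks = round(ts.timestamp() * 300) / 300
            return datetime.fromtimestamp(ticks)

        now = datetime.now()                     # stand-in for SYSDATETIME()
        first_date = round_like_datetime(now)    # plays the role of the DATETIME column
        last_date = now                          # plays the role of the DATETIME2 column
        print(first_date == last_date)           # almost always False
        print(first_date, last_date)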

    Read the article

  • JavaScript Data Binding Frameworks

    - by dwahlin
    Data binding is where it’s at nowadays when it comes to building client-centric Web applications. Developers experienced with desktop frameworks like WPF or web frameworks like ASP.NET, Silverlight, or others are used to being able to take model objects containing data and bind them to UI controls quickly and easily. When moving to client-side Web development the data binding story hasn’t been great, since neither HTML nor JavaScript natively supports data binding. This means that you have to write code to place data in a control and write code to extract it. Although it’s certainly feasible to do it from scratch (many of us have done it this way for years), it’s definitely tedious and not exactly the best solution when it comes to maintenance and re-use. Over the last few years several different script libraries have been released to simplify the process of binding data to HTML controls. In fact, the subject of data binding is becoming so popular that it seems like a new script library is being released nearly every week. Many of the libraries provide MVC/MVVM pattern support in client-side JavaScript apps and some even integrate directly with server frameworks like Node.js. Here’s a quick list of a few of the available libraries that support data binding (if you like any others please add a comment and I’ll try to keep the list updated): AngularJS MVC framework for data binding (although it closely follows the MVVM pattern). Backbone.js MVC framework with support for models, key/value binding, custom events, and more. Derby Provides a real-time environment that runs in the browser and in Node.js. The library supports data binding and templates. Ember Provides support for templates that automatically update as data changes. JsViews Data binding framework that provides “interactive data-driven views built on top of JsRender templates”. jQXB Expression Binder Lightweight jQuery plugin that supports bi-directional data binding. KnockoutJS MVVM framework with robust support for data binding. For an excellent look at using KnockoutJS check out John Papa’s course on Pluralsight. Meteor End-to-end framework that uses Node.js on the server and provides support for data binding on the client. Simpli5 JavaScript framework that provides support for two-way data binding. WinRT with HTML5/JavaScript If you’re building Windows 8 applications using HTML5 and JavaScript there’s built-in support for data binding in the WinJS library.   I won’t have time to write about each of these frameworks, but in the next post I’m going to talk about my (current) favorite when it comes to client-side JavaScript data binding libraries, which is AngularJS. AngularJS provides an extremely clean way – in my opinion – to extend HTML syntax to support data binding while keeping model objects (the objects that hold the data) free from custom framework method calls or other weirdness. While I’m writing up the next post, feel free to visit the AngularJS developer guide if you’d like additional details about the API and want to get started using it.

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); The two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
    Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
    We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collocated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Friday Fun: Play 3D Rally Racing in Google Chrome

    - by Asian Angel
    Are you a racing fan in need of a short (or long) break from work? Then get ready to enjoy a mid-day speed boost with the 3D Rally Racing extension for Google Chrome. 3D Rally Racing in Action This is the opening screen for 3D Rally Racing. You can start game play, view current best times, and read through the instructions from here. The first thing that you should do is have a quick look at the instructions to help you get set up and started. Click on “Play” to start the process. Before you can go further you will need to choose a “User Name”. Once you have done that click “Select Track”… Note: The extension will retain your name for later use even if you close your browser. When you first start out you will only have access to two tracks…the others require reaching a certain score/level to unlock them. Once you select a track you will be taken to the next screen. After you have selected a track you will need to choose your car and car color. All that is left to do afterwards is click on “Go Race”. Note: You will be competing against three other vehicles in the race. Here is a look at the “Desert Race Track”… And a look at the “Snow Race Track”. This game moves quickly and it is easy to fall behind if you are not careful! You can have a lot of fun playing this game while you are waiting for the day to end. Conclusion If you love racing games and want a fun way to waste the rest of the afternoon at work, then you should definitely give 3D Rally Racing a try. Links Download the 3d Rally Racing extension (Google Chrome Extensions)

    Read the article

  • Oracle WebCenter Portal: Pagelet Producer – What’s New in 11.1.1.6.0 Release

    - by kellsey.ruppel
    Igor Plyakov, Sr. Principal Product Marketing Manager, is back to share what's new in Oracle WebCenter Portal: Pagelet Producer. In February 2012 Oracle released 11g Release 1 (11.1.1.6.0) for WebCenter Portal. Pagelet Producer (aka Ensemble), which came out with this release, added support for several new capabilities that are described in this post. As of the 11.1.1.5.0 release, the Pagelet Producer can expose WSRP and JPDK portlets as pagelets that can then be consumed in any portal or any third-party application that does not have a WSRP consumer. Now the Pagelet Producer team is working on simplifying the use of pagelets in WebCenter Sites. To expose WSRP portlets, a new Producer should be registered with Pagelet Producer, which can be done using Enterprise Manager, WLST or the Pagelet Producer Administration Console (for details see Section 25.9 of Administrator’s Guide for Oracle WebCenter Portal). If the producer requires authentication, Pagelet Producer allows you to select and use one of the standard WSS token profiles. After registration is finished a new resource is created and automatically populated with pagelets that represent the portlets associated with the WSRP endpoint. For the 11.1.1.6.0 release we completed extensive testing of consuming all WebCenter Services that are exposed as WSRP portlets by the E2.0 Producer and delivering them as pagelets to the WebCenter Interaction portal. In the Pagelet Producer 11.1.1.6.0 release we added an OpenSocial container that allows consuming gadgets from other OpenSocial containers, e.g. iGoogle, and exposing them as pagelets. You can also use Pagelet Producer to host OpenSocial gadgets that leverage the OpenSocial APIs it supports – People, Activities, Appdata and Pub-Sub features. Note that People and Activities expose the People Connections and Activity Stream from WebCenter Portal, i.e. to use these features Pagelet Producer requires a connection to the WebCenter Portal schema. Pub-Sub allows leveraging the OpenAJAX Hub API for inter-gadget communication. In addition to these major new additions in the Pagelet Producer 11.1.1.6.0 release, we also extended several functional modules: The Clipping module was extended to support clipping of multiple regions on a web resource page and then re-assembly of these separately clipped regions into a single pagelet. The auto-login feature can now be applied to web resources protected with Kerberos authentication; you will find this new functionality handy for consuming SharePoint web parts. The logging module now supports logging the full HTTP traffic between the Pagelet Producer and the proxied web resource. Lastly, like the rest of the WebCenter Portal stack, Pagelet Producer 11.1.1.6.0 can run on IBM WebSphere Application Server.

    Read the article

  • Desktop Fun: Triple Monitor Wallpaper Collection Series 1

    - by Asian Angel
Triple monitor setups provide spacious amounts of screen real-estate but can be extremely frustrating to find good wallpapers for. Today we present the first in a series of wallpaper collections to help decorate your triple monitor setup with lots of wallpaper goodness. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. Special Note: The screen resolution sizes available for each of these wallpapers have been included to help you match them up to your individual settings as easily as possible. All images shown here are thumbnail screenshots of the largest size available for download.
Available in the following resolutions: 3840*1024, 4096*1024, 4320*900, 4800*1200, 5040*1050, and 5760*1200.
Available in the following resolutions: 4800*1200.
Available in the following resolutions: 3840*960, 3840*1024, 4096*1024, 4320*900, and 4800*1200.
Available in the following resolutions: 3840*960, 3840*1024, 4096*1024, 4320*900, and 4800*1200.
Available in the following resolutions: 3840*960, 3840*1024, 4096*1024, 4320*900, 4800*1200, 5040*1050, and 5760*1200.
Available in the following resolutions: 3840*960, 3840*1024, 4096*1024, 4320*900, and 4800*1200.
Available in the following resolutions: 3840*960, 3840*1024, 4096*1024, 4320*900, 4800*1200, and 5040*1050.
Available in the following resolutions: 3840*960, 3840*1024, 4096*1024, 4320*900, 4800*1200, and 5040*1050.
Available in the following resolutions: 3840*960, 3840*1024, 4096*1024, 4320*900, and 4800*1200.
Available in the following resolutions: 3840*960, 3840*1024, 4096*1024, 4320*900, 4800*1200, and 5040*1050.
Available in the following resolutions: 3840*960, 3840*1024, 4096*1024, 4800*1200, and 5040*1050.
Available in the following resolutions: 3840*960, 3840*1024, 4096*1024, 4320*900, 4800*1200, 5040*1050, 5760*1200, and 7680*1600.
Available in the following resolutions: 3840*960, 3840*1024, 4096*1024, 4320*900, 4800*1200, 5040*1050, and 5760*1200.
Available in the following resolutions: 5760*1200.
Available in the following resolutions: 5760*1200.
More Triple Monitor Goodness:
Beautiful 3 Screen Multi-Monitor Space Wallpaper
Span the same wallpaper across multiple monitors or use a different wallpaper for each.
Dual Monitors: Use a Different Wallpaper on Each Desktop in Windows 7, Vista or XP
For more wallpapers be certain to see our great collections in the Desktop Fun section.

    Read the article

  • Game Design Dilemma

    - by Chris Williams
I'm working on a 2D tile-mapped RPG. I've actually made quite a fair amount of progress, but I'm at a point where I need to make a UI decision. I have the overland world completely mapped out, and I have several towns and special areas. I'm on the fence about how to integrate the two. Scenario 1: I have one ginormous map, where everything is the same scale. This means you can walk in and out of towns without having to load or wait and transition in any way. With everything the same scale, movement costs the same no matter where you are (in terms of time/turns/energy/hunger/whatever/etc...). The potential downside to this is that it could take quite a long time to get anywhere on foot. Scenario 2: I have an overland map, a set of town maps, overland tactical maps, dungeon maps & special area maps. The overland map is at a different scale than the other maps. This means that time/turns/energy/hunger/whatever/etc. is calculated at a different rate than on the other maps, which have a 1:1 scale. When entering a town, dungeon, or special area, or having a random encounter, you would effectively zoom in from the overland scale to the tactical scale. When you are done with combat, or exit a dungeon or town, it would zoom back out to the overland map. The downside to this is that at the zoomed-out scale, the overland map isn't all that big (comparatively) and you can traverse it fairly quickly (in real time, not game-world time). Options: 1) Go with Scenario 1, as is. 2) Go with Scenario 1 and introduce a slightly speedier version of overland travel, such as a horse. 3) Go with Scenario 1 and introduce "instant" travel, via portals or some kind of "click the big map" mechanism. This would only work with places you've already been, or somehow unlocked (perhaps via a quest). 4) Go with Scenario 2, as is. Thoughts, opinions, suggestions? Feedback appreciated.
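Since both scenarios hinge on how movement cost changes with map scale, here is a rough sketch of one way to model it. The post never names a language, so Java is an assumption on my part (chosen because the neighboring question in this set is Java-based), and the enum names and the 30-to-1 time ratio are invented purely for illustration.

/**
 * Illustrative sketch for Scenario 2: the same movement code runs on both the
 * zoomed-out overland map and the 1:1 tactical/town maps; only the scale
 * factor changes how much in-game time a step costs.
 */
public class MovementCost {

    public enum MapScale {
        OVERLAND(30),   // one overland tile step costs 30 minutes of game time (assumed)
        TACTICAL(1);    // one tactical/town tile step costs 1 minute (assumed)

        private final int minutesPerStep;

        MapScale(int minutesPerStep) {
            this.minutesPerStep = minutesPerStep;
        }

        public int minutesPerStep() {
            return minutesPerStep;
        }
    }

    /** Game-time minutes consumed by moving the given number of tiles at the given scale. */
    public static int minutesFor(MapScale scale, int tilesMoved) {
        return scale.minutesPerStep() * tilesMoved;
    }

    public static void main(String[] args) {
        // Ten tiles on the overland map feels like a long trek...
        System.out.println(minutesFor(MapScale.OVERLAND, 10) + " minutes overland");
        // ...while ten tiles inside a town passes almost no time at all.
        System.out.println(minutesFor(MapScale.TACTICAL, 10) + " minutes in town");
    }
}

The same pattern extends to hunger, fatigue, or random-encounter rolls: each map reports its scale, so the overland and tactical maps can share one movement routine instead of duplicating it.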

    Read the article

  • Disable Opera Thumbnail Previews on Windows 7 Taskbar

    - by Asian Angel
If you are one of the people who does not care for the Taskbar Thumbnail Previews in Windows 7, then we have a quick and easy way for you to turn them off in the Opera Browser. Before: Here is our Opera Browser with four tabs full of HTG Network goodness… Hovering the mouse over the Taskbar Icon gives a nice preview of each tab's content. Looking closer you can see the fanned edge on the Taskbar Icon indicating that there are multiple tabs open. This is all good, but what if you just want something simpler? Disabling the Previews: If you want to disable the Taskbar Thumbnail Previews in Opera, you will need to type opera:config in the Address Bar and press Enter. Once you have done that, you will see a condensed listing of all of Opera's preferences. There is one Preference Category that we need to look for…User Prefs. Note: While a Quick Find Search could be conducted for the entry that needs to be modified, we have chosen to show the full method here. After scrolling down and finding the User Prefs category you will need to expand the section. Notice the size of the scrollbar in comparison with the screenshot above…there is quite a lot that you can look at and finesse in Opera if desired. Scroll down until you find the Use Windows 7 Taskbar Thumbnails entry. Uncheck the box, but do not close the opera:config Tab yet…or your changes will not take effect. Scroll down once more until you reach the end of the User Prefs category and click Save. With this particular modification you will need to restart Opera after clicking OK. After restarting Opera, the Taskbar Icon and Taskbar Thumbnail Preview will revert to the minimal Windows 7 default as shown here. You can see Opera's Tab Bar in the thumbnail and the Taskbar Icon no longer has a "fanned edge". Conclusion: If you want to disable Opera's Taskbar Thumbnail Previews on your Windows 7 system, then this quick modification will help get it sorted out in just a few moments.

    Read the article

  • Effectively implementing a game view using java

    - by kdavis8
I am writing a 2D game in Java. The game mechanics are similar to the Pokémon Game Boy Advance series, e.g. FireRed, Ruby, Diamond and so on. I need a way to draw a huge map, maybe 5000 by 5000 pixels, and then load individual in-game sprites across the entirety of the map, like rendering a scene. Game sprites would be things like terrain objects, trees, rocks, bushes, also houses, castles, NPCs and so on. But I also need to implement some kind of camera view class that focuses on the player. The camera view class needs to follow the character's movements throughout the game map, but it also needs to clip the rest of the map away from the user's field of view, so that the user can only see an arbitrary proximity adjacent to the player's sprite. The proximity's range could be something like 500 pixels in every direction around the player's sprite. On top of this, I need to implement an independent resolution for the game world so that the game view will be uniform on all screen sizes and screen resolutions. I know that this does sound like a handful and may fall under the category of multiple questions, but the questions are all related and any advice would be very much appreciated. I don't need a full source code listing, but maybe some pointers to effective Java API classes that could make doing what I need to do a lot simpler. Also, any algorithmic/design advice would greatly benefit me as well. An example of what I am trying to do, in source code form, is below:

package myPackage;

/**
 * The purpose of GameView is to: render a scene using the Scene class, create a
 * clipping pane using the CameraView class, and finally instantiate a coordinate
 * grid using the Path class.
 *
 * Once all of these things have been done, GameView should then be
 * instantiated and used jointly with its helper classes. CameraView should be
 * used as the main drawing image; CameraView is the window to the game
 * world. Scene passes data constantly to CameraView so that the entire map flows
 * smoothly. Path uses the x and y coordinates from CameraView to construct
 * cells for path-finding algorithms.
 */
public class GameView {
    // Scene is a helper class to GameView. It renders the entire map to memory
    // for the camera view.
    Scene scene;

    // CameraView is a helper class to GameView. It clips the Scene into a
    // small image that follows the player's coordinates.
    CameraView camera;

    // Path is a helper class to GameView. It observes and calculates the
    // coordinates of the camera view and divides them into grids/cells for
    // path finding.
    Path path;

    // This represents the player and has a getSprite() method that will return
    // the current frame (column/row combination) of the passed sprite sheet.
    Sprite player;
}
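Since the question asks for pointers rather than a full listing, here is a minimal, hypothetical camera sketch built around plain java.awt rendering. The class name, fields, and sizes below are illustrative assumptions, not code from the original post.

import java.awt.Graphics2D;

/**
 * Minimal camera sketch: keeps the player centered and clips the visible
 * area to the bounds of a fixed-size world (assumed larger than the viewport).
 */
public class CameraView {
    private final int viewWidth;    // viewport width in world pixels
    private final int viewHeight;   // viewport height in world pixels
    private final int worldWidth;   // total map width in pixels (e.g. 5000)
    private final int worldHeight;  // total map height in pixels (e.g. 5000)
    private int camX;               // top-left corner of the viewport in world coordinates
    private int camY;

    public CameraView(int viewWidth, int viewHeight, int worldWidth, int worldHeight) {
        this.viewWidth = viewWidth;
        this.viewHeight = viewHeight;
        this.worldWidth = worldWidth;
        this.worldHeight = worldHeight;
    }

    /** Center the camera on the player, clamped so we never show space outside the map. */
    public void follow(int playerX, int playerY) {
        camX = clamp(playerX - viewWidth / 2, 0, worldWidth - viewWidth);
        camY = clamp(playerY - viewHeight / 2, 0, worldHeight - viewHeight);
    }

    /** Shift the graphics context so that world coordinates can be drawn directly. */
    public void apply(Graphics2D g) {
        g.translate(-camX, -camY);
    }

    /** True if a sprite at (x, y) with the given size overlaps the viewport and is worth drawing. */
    public boolean isVisible(int x, int y, int w, int h) {
        return x + w >= camX && x <= camX + viewWidth
            && y + h >= camY && y <= camY + viewHeight;
    }

    private static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }
}

In a Swing paint method you would call follow(player.getX(), player.getY()), then apply(g2d), and draw only the tiles and sprites for which isVisible(...) returns true. Resolution independence can be approximated by rendering the scene to a fixed-size BufferedImage and scaling that image to the actual window size.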

    Read the article

  • What’s new in IIS8, Perf, Indexing Service-Week 49

    - by OWScott
You can find this week’s video here. After some delays in the publishing process, week 49 is finally live. This week I'm taking Q&A from viewers, starting with what's new in IIS8, a question on enable32BitAppOnWin64, performance settings for ASP.NET, the ARR Helper, and Indexing Services. Starting this week, for the remaining four weeks of the 52-week series I'll be taking questions and answers from the viewers. Already a number of questions have come in. This week we look at five topics. Pre-topic: We take a look at the new features in IIS8. Last week Internet Information Services (IIS) 8 Beta was released to the public. This week's video touches on the upcoming features in the next version of IIS. Here's a link to the blog post which was mentioned in the video. Question 1: In a number of places (http://learn.iis.net/page.aspx/201/32-bit-mode-worker-processes/, http://channel9.msdn.com/Events/MIX/MIX08/T06), I've seen that enable32BitAppOnWin64 is recommended for performance reasons. I'm guessing it has to do with memory usage... but I never could find a detailed explanation of why this is recommended (even Microsoft books are vague on this topic - they just say do it, but provide no reason why it should be done). Do you have any insight into this? (Predrag Tomasevic) Question 2: Do you have any recommendations on modifying aspnet.config and machine.config to deliver better performance when it comes to "high number of concurrent connections"? I've implemented recommendations for modifying machine.config from this article (http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx - ASP.NET Process Configuration Optimization section)... but I would gladly listen to more recommendations if you have them. (Predrag Tomasevic) Question 3: Could you share more of your experience with the ARR Helper? I'm specifically interested in configuring the ARR Helper (for example - how to accept X-Forwarded-For only from certain IPs (proxies you trust)). (Predrag Tomasevic) Question 4: What is the replacement for Indexing Service to use in coding web search pages on a Windows 2008 R2 server? (Susan Williams) Here's the link that was mentioned: http://technet.microsoft.com/en-us/library/ee692804.aspx This is now week 49 of a 52-week series for the web pro. You can view past and future weeks here: http://dotnetslackers.com/projects/LearnIIS7/ You can find this week's video here.

    Read the article

  • ADNOC talks about 50x increase in performance

    - by KLaker
If you are still wondering how Exadata can revolutionise your business, then I would recommend watching this great video, which was recorded at this year's OpenWorld. First a little background... The Abu Dhabi National Oil Company for Distribution (ADNOC) is an integrated energy company that was founded in 1973. ADNOC Distribution markets and distributes petroleum products and services within the United Arab Emirates and internationally. As one of the largest and most innovative government-owned petroleum companies in the Arab Gulf, ADNOC Distribution is renowned and respected for the exceptional quality and reliability of its products and services. Its five corporate divisions include more than 200 filling stations (a number that is growing at 8% annually), more than 150 convenience stores, 10 vehicle inspection stations, as well as wholesale and retail sales of bulk fuel, gas, oil, diesel, and lubricants. ADNOC selected Oracle Exadata Database Machine after extensive research because it provided them with a single platform that can run mixed workloads in a single unified machine: "We chose Oracle Exadata Database Machine because it offered a fully integrated and highly engineered system that was ready to deploy. With our infrastructure running all the same technology, we can operate any type of Oracle Database without restrictions and be prepared for business growth," said Ali Abdul Aziz Al-Ali, IT division manager, ADNOC Distribution. "...we could consolidate our transaction processing and business intelligence onto one platform. Competing solutions are just not capable of doing that." - Awad Ahmed Ali El-Sidiq, Senior Database Administrator, ADNOC Distribution. In this new video Awad Ahmed Ali El-Sidiq, Senior DBA at ADNOC, talks about the impact that Exadata has had on his team and the whole business. ADNOC is using our engineered systems to drive and manage all their workloads: from transaction systems to payment systems to the data warehouse and BI environment. A true disk-to-dashboard revolution using Engineered Systems. This engineered approach is delivering a 50x improvement in performance, with some queries running 100x faster! The IT team has even revolutionised some of their data warehouse-related processes with the help of Exadata, and jobs that were taking over 4 hours now run in a few minutes. To watch the video, click on the image below, which will take you to our Oracle YouTube page (if the above link does not work, click here: http://www.youtube.com/watch?v=zcRpxc6u5Ic). Now that queries are running 100x faster and jobs are completing in minutes, not hours, what is next for the IT team at ADNOC? Like many of our customers, ADNOC is now looking to take advantage of big data to help them better align their business operations with customer behaviour and customer insights. To help deliver this next level of insight, the IT team is looking at the new features in Oracle Database 12c, such as the new in-memory feature, to deliver even more performance gains. The great news is that Awad Ahmed Ali El-Sidiq was awarded DBA of the Year - EMEA within our Data Warehouse Global Leaders programme, and you can see the badge for this award pop up at the start of the video. Well done to everyone at ADNOC, and thanks for spending the time with us at OOW to create this great video.

    Read the article

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
    Many things are new in Solaris 11, Autoinstall is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost, as the concepts are similar, and it's not all that difficult.  Just new. I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning. First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.   What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page. With that, building the repository is quick and simple: # zfs create -o mountpoint=/export/repo rpool/ai/repo # zfs create rpool/ai/repo/s11 # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt # rsync -aP /mnt/repo /export/repo/s11 # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@fcs # pkgrepo info -s /export/repo/sol11/repo PUBLISHER PACKAGES STATUS UPDATED solaris 4292 online 2012-03-12T20:47:15.378639Z That's all there's to it.  Let's make a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher: # pkg set-publisher -g file:///export/repo/sol11/repo solaris In case I'd want to access this repository through a (virtual) network, i'll now quickly activate the repository-service: # svccfg -s application/pkg/server \ setprop pkg/inst_root=/export/repo/sol11/repo # svccfg -s application/pkg/server setprop pkg/readonly=true # svcadm refresh application/pkg/server # svcadm enable application/pkg/server That's all you need - now point your browser to http://localhost/ to view your beautiful repository-server. Step 1 is done.  All of this, by the way, is nicely documented in the README file that's contained in the repository image. Of course, we already have updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3.  With ZFS, you can also get both: A full repository with all updates and at the same time incremental ones up to each of the updates: # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*' # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@sru4 # zfs set snapdir=visible rpool/ai/repo/sol11 # svcadm restart svc:/application/pkg/server:default The normal repository is now updated to SRU4.  Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs . If you like, you can also create another repository service for each update, running on a separate port. But now lets continue with the AI server.  Just a little bit of reading in the dokumentation makes it clear that we will need to run a DHCP server for this.  
Since I already have one active (for my SunRay installation) and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first: # zfs create -o mountpoint=/export/install rpool/ai/install # zfs create -o mountpoint=/zones rpool/zones # zonecfg -z ai-server zonecfg:ai-server> create create: Using system default template 'SYSdefault' zonecfg:ai-server> set zonepath=/zones/ai-server zonecfg:ai-server> add dataset zonecfg:ai-server:dataset> set name=rpool/ai/install zonecfg:ai-server:dataset> set alias=install zonecfg:ai-server:dataset> end zonecfg:ai-server> commit zonecfg:ai-server> exit # zoneadm -z ai-server install # zoneadm -z ai-server boot ; zlogin -C ai-server Give it a hostname and IP address at first boot, and there's the Zone.  For a publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now: #zlogin ai-server root@ai-server:~# pkg install install/installadm root@ai-server:~# installadm create-service -n x86-fcs -a i386 \ -s pkg://solaris/install-image/[email protected],5.11-0.175.0.0.0.2.1482 \ -d /export/install/fcs -i 192.168.2.20 -c 3 With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP-Server and anything else that might be missing.  Watch out for that DHCP server software.  In Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default". The ISC-server comes with several SMF-services. We at least need "svc:/network/dhcp/server:ipv4".  The command "installadm create-service" creates the installation-service. It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the installserver itself runs on a x86 machine.) The boot-environment for clients is created in /export/install/fcs and the DHCP-server is configured for 3 IP-addresses starting at 192.168.2.20.  This configuration is stored in a very human readable form in /etc/inet/dhcpd4.conf.  An AI-service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option. Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query it's configuration interactively at first boot.  This makes it very clear that an AI-server is really only a boot-server.  The true source of packets to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients. The configuration of a client is controlled by manifests and profiles.  The manifest controls which packets are installed and how the filesystems are layed out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.  
Hence, profiles are very similar to the old sysid.cfg file. The easiest way to get your hands on a manifest is to ask the AI server we just created to give us it's default one.  Then modify that to our liking and give it back to the installserver to use: root@ai-server:~# mkdir -p /export/install/configs/manifests root@ai-server:~# cd /export/install/configs/manifests root@ai-server:~# installadm export -n x86-fcs -m orig_default \ -o orig_default.xml root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml root@ai-server:~# vi s11-fcs.small.local.xml root@ai-server:~# more s11-fcs.small.local.xml <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install> <ai_instance name="S11 Small fcs local"> <target> <logical> <zpool name="rpool" is_root="true"> <filesystem name="export" mountpoint="/export"/> <filesystem name="export/home"/> <be name="solaris"/> </zpool> </logical> </target> <software type="IPS"> <destination> <image> <!-- Specify locales to install --> <facet set="false">facet.locale.*</facet> <facet set="true">facet.locale.de</facet> <facet set="true">facet.locale.de_DE</facet> <facet set="true">facet.locale.en</facet> <facet set="true">facet.locale.en_US</facet> </image> </destination> <source> <publisher name="solaris"> <origin name="http://192.168.2.12/"/> </publisher> </source> <!-- By default the latest build available, in the specified IPS repository, is installed. If another build is required, the build number has to be appended to the 'entire' package in the following form: <name>pkg:/[email protected]#</name> --> <software_data action="install"> <name>pkg:/[email protected],5.11-0.175.0.0.0.2.0</name> <name>pkg:/group/system/solaris-small-server</name> </software_data> </software> </ai_instance> </auto_install> root@ai-server:~# installadm create-manifest -n x86-fcs -d \ -f ./s11-fcs.small.local.xml root@ai-server:~# installadm list -m -n x86-fcs Manifest Status Criteria -------- ------ -------- S11 Small fcs local Default None orig_default Inactive None The major points in this new manifest are: Install "solaris-small-server" Install a few locales less than the default.  I'm not that fluid in French or Japanese... Use my own package service as publisher, running on IP address 192.168.2.12 Install the initial release of Solaris 11:  pkg:/[email protected],5.11-0.175.0.0.0.2.0 Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.  
The modular approach makes it easy to configure numerous clients later on: root@ai-server:~# mkdir -p /export/install/configs/profiles root@ai-server:~# cd /export/install/configs/profiles root@ai-server:~# sysconfig create-profile -o default.xml root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml root@ai-server:~# cp default.xml user.xml root@ai-server:~# vi general.xml mars.xml user.xml root@ai-server:~# more general.xml mars.xml user.xml :::::::::::::: general.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/timezone"> <instance enabled="true" name="default"> <property_group type="application" name="timezone"> <propval type="astring" name="localtime" value="Europe/Berlin"/> </property_group> </instance> </service> <service version="1" type="service" name="system/environment"> <instance enabled="true" name="init"> <property_group type="application" name="environment"> <propval type="astring" name="LANG" value="C"/> </property_group> </instance> </service> <service version="1" type="service" name="system/keymap"> <instance enabled="true" name="default"> <property_group type="system" name="keymap"> <propval type="astring" name="layout" value="US-English"/> </property_group> </instance> </service> <service version="1" type="service" name="system/console-login"> <instance enabled="true" name="default"> <property_group type="application" name="ttymon"> <propval type="astring" name="terminal_type" value="vt100"/> </property_group> </instance> </service> <service version="1" type="service" name="network/physical"> <instance enabled="true" name="default"> <property_group type="application" name="netcfg"> <propval type="astring" name="active_ncp" value="DefaultFixed"/> </property_group> </instance> </service> <service version="1" type="service" name="system/name-service/switch"> <property_group type="application" name="config"> <propval type="astring" name="default" value="files"/> <propval type="astring" name="host" value="files dns"/> <propval type="astring" name="printer" value="user files"/> </property_group> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="system/name-service/cache"> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="network/dns/client"> <property_group type="application" name="config"> <property type="net_address" name="nameserver"> <net_address_list> <value_node value="192.168.2.1"/> </net_address_list> </property> </property_group> <instance enabled="true" name="default"/> </service> </service_bundle> :::::::::::::: mars.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="network/install"> <instance enabled="true" name="default"> <property_group type="application" name="install_ipv4_interface"> <propval type="astring" name="address_type" value="static"/> <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/> <propval type="astring" name="name" value="net0/v4"/> <propval type="net_address_v4" name="default_route" value="192.168.2.1"/> </property_group> <property_group type="application" name="install_ipv6_interface"> <propval type="astring" name="stateful" value="yes"/> <propval type="astring" name="stateless" value="yes"/> <propval type="astring" 
name="address_type" value="addrconf"/> <propval type="astring" name="name" value="net0/v6"/> </property_group> </instance> </service> <service version="1" type="service" name="system/identity"> <instance enabled="true" name="node"> <property_group type="application" name="config"> <propval type="astring" name="nodename" value="mars"/> </property_group> </instance> </service> </service_bundle> :::::::::::::: user.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/config-user"> <instance enabled="true" name="default"> <property_group type="application" name="root_account"> <propval type="astring" name="login" value="root"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="role"/> </property_group> <property_group type="application" name="user_account"> <propval type="astring" name="login" value="stefan"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="normal"/> <propval type="astring" name="description" value="Stefan Hinker"/> <propval type="count" name="uid" value="12345"/> <propval type="count" name="gid" value="10"/> <propval type="astring" name="shell" value="/usr/bin/bash"/> <propval type="astring" name="roles" value="root"/> <propval type="astring" name="profiles" value="System Administrator"/> <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/> </property_group> </instance> </service> </service_bundle> root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \ -c ipv4=192.168.2.100 root@ai-server:~# installadm list -p Service Name Profile ------------ ------- x86-fcs general.xml mars.xml user.xml root@ai-server:~# installadm list -n x86-fcs -p Profile Criteria ------- -------- general.xml None mars.xml ipv4 = 192.168.2.100 user.xml None Here's the idea behind these files: "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same. "user.xml" only contains user definitions.  That is, a root password and a primary user.Both of these profiles will be valid for all clients (for now). "mars.xml" defines network settings for an individual client.  This profile is associated with an IP-Address.  For this to work, I'll have to tweak the DHCP-settings in the next step: root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs root@ai-server:~# vi /etc/inet/dhcpd4.conf root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf host 080027AA3DB1 { hardware ethernet 08:00:27:AA:3D:B1; fixed-address 192.168.2.100; filename "01080027AA3DB1"; } This completes the client preparations.  I manually added the IP-Address for mars to /etc/inet/dhcpd4.conf.  This is needed for the "mars.xml" profile.  Disabling arbitrary DHCP-replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the grub boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  
The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead. root@ai-server:~# cd /etc/netboot root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org root@ai-server:~# vi menu.lst.01080027AA3DB1 root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org 1,2c1,2 < default=1 < timeout=10 --- > default=0 > timeout=30 root@ai-server:~# more menu.lst.01080027AA3DB1 default=1 timeout=10 min_mem64=0 title Oracle Solaris 11 11/11 Text Installer and command line kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=htt p://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_addre ss=$serverIP:5555 module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive title Oracle Solaris 11 11/11 Automated Install kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,inst all_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,inst all_svc_address=$serverIP:5555,livemode=text module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive Now just boot the client off the network using PXE-boot.  For my demo purposes, that's a client from VirtualBox, of course.  That's all there's to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?

    Read the article

  • Open Data, Government and Transparency

    - by Tori Wieldt
A new track at TDC (The Developer's Conference in Sao Paulo, Brazil) is titled Open Data. It deals with open data, government and transparency. Saturday will be a "transparency hacker day" where developers are invited to create applications using open data from the Brazilian government. Alexandre Gomes, co-lead of the track, says: "I want to inspire developers to become 'civic hackers:' developers who create apps to make society better." It is a chance for developers to do well and do good. There are many opportunities for developers, including monitoring government expenditures and getting citizens involved via social networks. The open data movement is growing worldwide. One initiative, the Open Government Partnership, is working to make government data easier to find and access. Making this data easily available means that, with the right applications, it will be easier for people to make decisions and suggestions about government policies based on detailed information. Last April, the Open Government Partnership held its annual meeting in Brasilia, the capital of Brazil. It was a great success, showcasing the innovative work being done in open data by governments, civil societies and individuals around the world. For example, Bulgaria now publishes daily data on budget spending for all public institutions. Alexandre Gomes Explains Open Data. At TDC, the Open Data track will include a presentation of examples of successful open data projects, an introduction to the semantic web, how to handle big data sets, techniques of data visualization, and how to design APIs. The other track lead is Christian Moryah Miranda, a systems analyst for the Brazilian Government's Ministry of Planning. "The Brazilian government wholeheartedly supports this effort. In order to make our data available to the public, it forces us to be more consistent with our data across ministries, and that's a good step forward for us," he said. He explained that the government knows it cannot achieve everything it would like without help from the public. "It is not the government versus the people; rather, citizens are partners with the government, and together we can achieve great things!" Miranda exclaimed. Saturday at TDC will be a "transparency hacker day" where developers will be invited to create applications using open data from the Brazilian government. Attendees are invited to pitch their ideas, work in small groups, and present their projects at the end of the conference. "For example," Gomes said, "the Brazilian government just released the salaries of all government employees and I can't wait to see what developers can do with that." Resources: Open Government Partnership | U.S. Government Open Data Project | Brazilian Government Open Data Project | U.K. Government Open Data Project | 2012 International Open Government Data Conference

    Read the article

  • How To Disconnect Non-Mapped UNC Path “Drives” in Windows

    - by The Geek
Have you ever browsed over to another PC on your network using “network neighborhood”, and then connected to one of the file shares? Without a drive letter, how do you disconnect yourself once you’ve done so? Really confused as to what I’m talking about? Let’s walk through the process. First, imagine that you browse through and connect to a share, entering your username and password to gain access. The problem is that you stay connected, and there’s no visible way to disconnect yourself. If you try and shut down the other PC, you’ll receive a message that users are still connected. So let’s disconnect! Open up a command prompt, and then type in the following: net use This will give you a list of the connected drives, including the ones that aren’t actually mapped to a drive letter. To disconnect one of the connections, you can use the following command: net use /delete \\server\sharename For example, in this instance we’d disconnect like so: net use /delete \\192.168.1.205\root$ Now when you run the “net use” command again, you’ll see that you’ve been properly disconnected. If you wanted to actually connect to a share without mapping a drive letter, you can do the following: net use /user:Username \\server\sharename Password You could then just pop \\server\sharename into a Windows Explorer window and browse the files that way. Note that this technique should work exactly the same in any version of Windows.
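As an optional extra that is not part of the original tip: if you ever want to script this check instead of typing it, the rough Java sketch below shells out to net use and echoes any line containing a UNC path. The class name and the simple filtering are my own assumptions.

import java.io.BufferedReader;
import java.io.InputStreamReader;

/**
 * Illustrative only: runs "net use" on Windows and prints each connection
 * line, so you can spot non-mapped UNC connections before deciding which
 * ones to delete.
 */
public class ListNetConnections {
    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("net", "use").redirectErrorStream(true).start();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // UNC paths start with two backslashes, written as four in a Java string literal.
                if (line.contains("\\\\")) {
                    System.out.println(line);
                }
            }
        }
        p.waitFor();
    }
}

To disconnect a connection from the same program, add "/delete" and the UNC path as extra ProcessBuilder arguments, remembering that every backslash must be doubled in a Java string literal.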

    Read the article

  • ESB Toolkit 2.0 EndPointConfig (HTTPS with WCF-BasicHttp and the ESB Toolkit 2.0)

    - by Andy Morrison
    Earlier this week I had an ESB endpoint (Off-Ramp in ESB parlance) that I was sending to over http using WCF-BasicHttp.  I needed to switch the protocol to https: which I did by changing my UDDI Binding over to https:  No problem from a management perspective; however, when I tried to run the process I saw this exception: Event Type:                     Error Event Source:                BizTalk Server 2009 Event Category:            BizTalk Server 2009 Event ID:   5754 Date:                                    3/10/2010 Time:                                   2:58:23 PM User:                                    N/A Computer:                       XXXXXXXXX Description: A message sent to adapter "WCF-BasicHttp" on send port "SPDynamic.XXX.SR" with URI "https://XXXXXXXXX.com/XXXXXXX/whatever.asmx" is suspended.  Error details: System.ArgumentException: The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via    at System.ServiceModel.Channels.TransportChannelFactory`1.ValidateScheme(Uri via)    at System.ServiceModel.Channels.HttpChannelFactory.ValidateCreateChannelParameters(EndpointAddress remoteAddress, Uri via)    at System.ServiceModel.Channels.HttpChannelFactory.OnCreateChannel(EndpointAddress remoteAddress, Uri via)    at System.ServiceModel.Channels.ChannelFactoryBase`1.InternalCreateChannel(EndpointAddress address, Uri via)    at System.ServiceModel.Channels.ChannelFactoryBase`1.CreateChannel(EndpointAddress address, Uri via)    at System.ServiceModel.Channels.ServiceChannelFactory.ServiceChannelFactoryOverRequest.CreateInnerChannelBinder(EndpointAddress to, Uri via)    at System.ServiceModel.Channels.ServiceChannelFactory.CreateServiceChannel(EndpointAddress address, Uri via)    at System.ServiceModel.Channels.ServiceChannelFactory.CreateChannel(Type channelType, EndpointAddress address, Uri via)    at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via)    at System.ServiceModel.ChannelFactory`1.CreateChannel()    at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.GetChannel[TChannel](IBaseMessage bizTalkMessage, ChannelFactory`1& cachedFactory)    at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.SendMessage(IBaseMessage bizTalkMessage)  MessageId:  {1170F4ED-550F-4F7E-B0E0-1EE92A25AB10}  InstanceID: {1640C6C6-CA9C-4746-AEB0-584FDF7BB61E} I knew from a previous experience that I likely needed to set the SecurityMode setting for my Send Port.  But how do you do this for a Dynamic port (which I was using since this is an ESB solution)? Within the UDDI portal you have to add an additional Instance Info to your Binding named: EndPointConfig  Then you have to set its value to:  SecurityMode=Transport Like this:    The EndPointConfig is how the ESB Toolkit 2.0 provides extensibility for the various transports.  To see what the key-value pair options are for a given transport, open up an itinerary and change one of your resolvers to a “static” resolver by setting the “Resolver Implementation” to Static.  Then select a “Transport Name” ”, for instance to WCF-BasicHttp.  At this point you can then click on the “EndPoint Configuration” property for to see an adapter/ramp specific properties dialog (key-value pairs.)    Here’s the dialog that popped up for WCF-BasicHttp:   I simply set the SecurityMode to Transport.  Please note that you will get different properties within the window depending on the Transport Name you select for the resolver. 
When you are done with your settings, export the itinerary to disk and find that XML; then find that resolver's XML within the file. It will look like endpointConfig=SecurityMode=Transport in this case. Note that if you set additional properties, you will have additional key-value pairs after endpointConfig=. Copy that string and paste it into the UDDI portal as your Binding's EndPointConfig Instance Info value.

    Read the article

  • Podcast Show Notes: The Red Room Interview &ndash; Part 1

    - by Bob Rhubart
The latest OTN Arch2Arch podcast is Part 1 of a three-part series featuring a discussion of a broad range of SOA issues with three members of the small army of contributors to The Red Room Blog, now part of the OJam.biz site, the Australia-New Zealand outpost of the global Oracle community. The panelists for this program are: Sean Boiling - Sales Consulting Manager for Oracle Fusion Middleware (LinkedIn | Twitter | Blog); Richard Ward - SOA Channel Development Manager at Oracle (LinkedIn | Blog); Mervin Chiang - Consulting Principal at Leonardo Consulting (LinkedIn | Twitter | Blog). (You can also follow the Red Room itself on Twitter: @OracleRedRoom.) The genesis of this interview goes back to 2009 and the original Red Room blog, on which Sean, Richard, Mervin, and other Red Roomers published a 10-part series of posts that, taken together, form a kind of SOA best-practices guide, presented in an irreverent style that is rare in a lot of technical writing. It was on the basis of their expertise and irreverence that I wanted to get a few of the Red Room bloggers on an Arch2Arch podcast. Easier said than done. Trying to schedule a group interview with very busy people on the other side of the world (they're actually 15 hours in the future, relative to my location) is not a simple process. The conversations about getting some of the Red Room people on the program began in the summer of 2009. The interview finally happened at 5:30 PM EDT on Tuesday, March 30, 2010, which for the panelists, located in Australia, was 8:30 AM on Wednesday, March 31, 2010. I was waiting for dinner, and Sean, Richard, and Mervin were waiting for breakfast. But the call went off without a hitch, and the panelists carried on a great discussion of SOA issues. Listen to Part 1. Many thanks to Gareth Llewellyn for his help in putting this together. SOA Best Practices: Here's a complete list of the posts in the original 10-part Red Room series: SOA is Dead. Long Live SOA by Sean Boiling; Are you doing SOP's instead of SOA? by Saul Cunningham; All The President's SOA by Sean Boiling; SOA – Pay Now or Pay Dearly by Richard Ward; SOA where are the skills? by Richard Ward; Project Management Pitfalls within SOA by Anton Gouws; Viewing SOA as a project instead of an architecture by Saul Cunningham; Kiss and Tell by Sean Boiling; Failure to implement and adhere to SOA Governance by Mervin Chiang; Ten Out Of Ten by Sean Boiling. Part 2 of the Red Room Interview will be available next week, followed by Part 3, so stay tuned via RSS. Change in the Wind: Beginning with next week's program, the OTN Arch2Arch Podcast will be rechristened the OTN ArchBeat Podcast, to better align with this blog. The transformation will be painless – you won't feel a thing.

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Observation of the View – Part 2

    - by pinaldave
Earlier, I wrote an article about SQL SERVER – Index Created on View not Used Often – Observation of the View. I received an email from one of the readers, asking if there would be no problems when we create the Index on the base table. Well, we need to discuss this situation in two different cases. Before proceeding to the discussion, I strongly suggest you read my earlier articles. To avoid duplication, I am not going to repeat the code and explanation over here. In all the earlier cases, I have explained in detail how the Index created on the View is not utilized. SQL SERVER – Index Created on View not Used Often – Limitation of the View 12 SQL SERVER – Index Created on View not Used Often – Observation of the View SQL SERVER – Indexed View always Use Index on Table As per the earlier blog posts, so far we have done the following: Create a Table; Create a View; Create Index On View; Write SELECT with ORDER BY on View. However, the blog reader who emailed me suggests the following extension of the said logic: Create a Table; Create a View; Create Index On View; Write SELECT with ORDER BY on View; Create Index on the Base Table; Write SELECT with ORDER BY on View. After doing the last two steps, the question is "Will the query on the View utilize the Index on the View, or will it still use the Index of the base table?" Let us first run the Create example.
USE tempdb
GO
IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]'))
DROP VIEW [dbo].[SampleView]
GO
IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U'))
DROP TABLE [dbo].[mySampleTable]
GO
-- Create SampleTable
CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100))
INSERT INTO mySampleTable (ID1,ID2,SomeData)
SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name),
ROW_NUMBER() OVER (ORDER BY o2.name), o2.name
FROM sys.all_objects o1
CROSS JOIN sys.all_objects o2
GO
-- Create View
CREATE VIEW SampleView
WITH SCHEMABINDING
AS
SELECT ID1,ID2,SomeData
FROM dbo.mySampleTable
GO
-- Create Index on View
CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView]
( ID2 ASC )
GO
-- Select from view
SELECT ID1,ID2,SomeData
FROM SampleView
ORDER BY ID2
GO
-- Create Index on Original Table
-- On Column ID1
CREATE UNIQUE CLUSTERED INDEX [IX_OriginalTable] ON mySampleTable
( ID1 ASC )
GO
-- On Column ID2
CREATE UNIQUE NONCLUSTERED INDEX [IX_OriginalTable_ID2] ON mySampleTable
( ID2 )
GO
-- Select from view
SELECT ID1,ID2,SomeData
FROM SampleView
ORDER BY ID2
GO
Now let us see the execution plans for both of the SELECT statements. Before Index on Base Table (with Index on View): After Index on Base Table (with Index on View): Looking at both executions, it is very clear that, with or without the Index on the base table, the View is using its Index. Alright, I have written about 11 disadvantages of Views. Now I have written about one case where the View is using Indexes. Anybody who says that I am being harsh on Views can now say that I found one place where an Index on a View can be helpful. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

< Previous Page | 614 615 616 617 618 619 620 621 622 623 624 625  | Next Page >