Search Results

Search found 12585 results on 504 pages for 'vs 2013 preview'.


  • Cross-platform builds with OGRE3D via CMake. Any tips?

    - by frarees
    I've been trying to compile a simple project for both OSX and Windows platforms using OGRE3D, but I've hit some problems along the way. I'm using CMake to create my platform-specific project files (VS solution & Xcode project). Some problems I found are: OGRE3D source is distributed in two flavors, Windows sources and UNIX/OSX sources. On OSX, compiling the dependencies (freetype, FreeImage and especially OIS) is such a pain. I don't know how to handle precompiled dependencies (they exist for both Win & Mac). It may sound like a noob question, but I would appreciate some tips on this: resources, forum posts, anything. Does any "cross-platform base project for OGRE3D" exist on the net? It would be really helpful if someone who has already managed to do this could shed some light. Btw, I'm not basing the project on OGRE3D; it's just the biggest library I'm probably using, so I depend a lot on it. Thanks in advance!

    Read the article

  • Does a smaller screen suit you better than a large one?

    - by artaxerxe
    About 3 weeks ago I changed jobs. At my former workplace, I had 2 monitors for programming (GUI and core programming). Here, at my current workplace, the administrators gave me a laptop (15.6 inch) and offered to provide a wider screen in addition to the laptop's. I said that for now I want to try it as it is, with just the laptop's screen. I also want to mention that in my current job I'm not doing GUI development. My feeling so far (but it may be just a feeling) is that working on this single, not-so-wide screen, I'm not as weary after a full day's work as I was with 2 wide monitors. Does anyone have any recommendations on this? Do smaller screens (in my case, a 15.6-inch vs. 20-inch screens) affect your eyes less? If anyone has any knowledge about this, please feel free to share your opinion. P.S. I think this is a good site for this kind of question. Otherwise, please guide me on which StackExchange site I should post it. Thanks.

    Read the article

  • Google Analytics not tracking data correctly - an IP address issue?

    - by PaperThick
    I have developed a small site for a client, and the site has been placed inside an <iframe> on the client's site. The GA script I'm using looks like this:
      <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(
          ['_setAccount', 'UA-XXXXXXXX-2'],   // My company's GA account
          ['_trackPageview'],
          ['b._setAccount', 'UA-XXXXXXXX-1'], // Test GA account
          ['b._trackPageview'],
          ['th._setAccount', 'UA-XXXXXXX-3'],
          ['th._setDomainName', '.clientdomain.se'], // Client GA account
          ['th._trackPageview']
      );
      (function () {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
      })();
      </script>
      </head>
    As you can see, I report the GA pageviews to the client as well, and the GA script is tracking visitors and pageviews at both ends. But the problem is that on my client's side the visitor count is more than double what it is on my end (20,000 vs 5,000). At first I thought it was being duplicated at some point, but when I checked my Crazy Egg account I saw that it had tracked over 10,000 visits and then stopped tracking because that was the limit on my account. The page my site is on lives at an IP address (http://XXX.XXX.XX.X/campaign/) and not at a "valid URL". Could that be a reason why some of the visitors aren't being tracked? Thanks in advance.

    Read the article

  • Which metrics/checklist should be used to evaluate a whole software development team?

    - by adt
    The title might seem vague, so let me give you a little history to clarify the question. I have been hired as a consultant for a corporation's small development division (the company also owns a couple of software dev companies). My ex-manager runs a BI team, with report writers, analysts and developers. He asked me to evaluate the overall design, the software development process and the code quality. Here is what I found:
      - Lots of copy/paste code everywhere (no reuse)
      - Even though they have everything (TFS, VS Ultimate etc.), no build process, no CruiseControl.NET / TeamCity...
      - No unit tests
      - Web pages with 3,700 lines of code, lots of huge functions (which could be divided into smaller ones)
      - No naming convention in either the DB or the C# code
      - No 3rd-party or open source projects
      - No IoC
      - No separation of concerns
      - No code quality checks (NDepend, FxCop or anything)
      - No code reviews
      - No communication within the team
    They claim they wrote an application framework (6 months, 3 people), but I would hardly call it a framework (and of course no unit tests; there are some, but all commented out). The framework contains 14 projects, but some projects have 1 file with 20 lines of code. Honestly, people are fixing bugs all day (which will eventually produce more bugs), and they are kind of isolated from the community; some team members don't even know GitHub or Stack Overflow (they have probably landed there via Google without knowing what they were). So here is the question: is this list OK, or am I being picky? I don't have any grudge against them; I just want to be fair and honest, and I would like to hear your suggestions before I submit this list. And since the list will also be reviewed by the software division's manager, I don't want any heartbreak or anything like that. I would love to be able to point to lists like http://www.hanselman.com/altnetgeekcode/ but I can't make references. Thanks in advance.

    Read the article

  • New Workstation – Lenovo W530 Core i7 32GB 256GB SSD Win8Pro

    - by Brian Lanham
    So I pretty much have my new machine up and running full-time. I am still going to have to hit my old workstation for some things, but am more or less working on my new machine. It's really fast. And Bret was right: so far I'm not using all the RAM. 16 would have been enough, but as @CodeMonkeyJava says, "go big or go home". Windows 8 is…interesting. So far I still seem to do most of my work in the "Desktop". However, I like the Store concept and I like the Metro UX. Live tiles are also nice. I really like how I can switch between Desktop and Metro easily. Overall I think Microsoft has done a great job of combining the needed experience for touch and mouse. My overall Windows 8 (Experience Index) rating is 5.9 because of the video card; otherwise I'm hitting 7.8. The system boots from cold in about 11 seconds and performs a complete shutdown in 4.7 seconds. It wakes from sleep in less than 1 second. VS 2012 starts and restarts almost instantly; in fact, I find myself staring at the start page without realizing it. Build times don't seem dramatically different, but they are faster. I already feel reinvigorated for work with this new machine. I'm looking forward to the performance.

    Read the article

  • How often do you review fundamentals?

    - by mlnyc
    So I've been out of school for a year and a half now. In school, of course, we covered all the fundamentals: OS, databases, programming languages (i.e. syntax, binding rules, exception handling, recursion, etc.), and fundamental algorithms. The rest were more in-depth topics on things like NLP, data mining, etc. A year ago, if you had told me to write a quicksort, reverse a singly-linked list, or analyze the time complexity of a 'naive' algorithm vs. its dynamic programming counterpart, I would have been able to give you a decent and hopefully satisfying answer. But if you had asked me more real-world questions I might have been stumped (things like how you would handle logging for an application, the security differences between GET and POST, the differences between SQL Server and Oracle SQL, or anything I list on my resume as currently working with [jQuery questions, ColdFusion questions, ...], etc.). Now, I feel things are the opposite. I haven't written my own sort since graduating, and I don't really have to worry much about theoretical things that do not naturally fall out of the problems I am trying to solve. For example, I might give you a great SQL solution using an analytic function that would otherwise have stumped me, or write a cool web application using Angular or something, but ask me to write an algorithm for insertAfter(Element* elem) and I might not be able to do it in a reasonable time frame. I guess my question to the experienced programmers here is: how do you balance the need to learn and experiment with new technologies (fun!), work on personal projects (also fun!), solve real-world problems in a timeboxed environment (where I might reach for a library that does what I want rather than reinvent the wheel, so I can focus on the problem I am trying to solve) (work, basically), and refresh the old theoretical material which is still valid for interviews and such (can be a drag)? Do you review older material (such as famous algorithms, dynamic programming, Big-O analysis, locking implementations) regularly, or just when you need it? How much time do you dedicate to each in your 'deliberate practice', and do you have a to-do list of topics that you want to work on?
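    As a quick refresher on the kind of exercise mentioned above (the insertAfter example), here is a minimal C# sketch of inserting into a singly-linked list; the Node type and method names are hypothetical and not from the original post:
      using System;

      // Minimal singly-linked list node; a hypothetical type for illustration only.
      public sealed class Node<T>
      {
          public T Value;
          public Node<T> Next;
          public Node(T value) { Value = value; }
      }

      public static class LinkedListOps
      {
          // Inserts a new node holding 'value' immediately after 'elem'. O(1).
          public static Node<T> InsertAfter<T>(Node<T> elem, T value)
          {
              if (elem == null) throw new ArgumentNullException(nameof(elem));
              var inserted = new Node<T>(value) { Next = elem.Next };
              elem.Next = inserted;
              return inserted;
          }
      }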

    Read the article

  • Entry level engineer question regarding memory management

    - by Ealianis
    It has been a few months since I started my position as an entry-level software developer. Now that I am past some learning curves (e.g. the language, jargon, and syntax of VB and C#), I'm starting to focus on more esoteric topics so as to write better software. A simple question I presented to a fellow coworker was met with "I'm focusing on the wrong things." While I respect this coworker, I do disagree that this is a "wrong thing" to focus upon. Here is the code (in VB), followed by the question. Note: the function GenerateAlert() returns an Integer.
      Dim alertID As Integer = GenerateAlert()
      _errorDictionary.Add(argErrorID, New ErrorInfo(Now(), alertID))
    vs...
      _errorDictionary.Add(argErrorID, New ErrorInfo(Now(), GenerateAlert()))
    I originally wrote the latter and rewrote it with the "Dim alertID" so that someone else might find it easier to read. But here is my concern and question: if one writes this with the Dim alertID, it would in fact take up more memory (finite, but more), and if this method were called many times, could it lead to an issue? How will .NET handle the alertID variable? Outside of .NET, should one manually dispose of the object after use (near the end of the sub)? I want to ensure I become a knowledgeable programmer who does not rely solely upon garbage collection. Am I overthinking this? Am I focusing on the wrong things?
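    Just to illustrate the two forms being compared, here is a minimal C# sketch with hypothetical names mirroring the VB above (not from the original post). A local Integer/int is a value type that lives on the stack frame, so there is nothing to dispose, and both forms cost essentially the same:
      using System;
      using System.Collections.Generic;

      // Hypothetical stand-ins for the types in the question.
      sealed class ErrorInfo
      {
          public ErrorInfo(DateTime when, int alertId) { When = when; AlertId = alertId; }
          public DateTime When { get; }
          public int AlertId { get; }
      }

      static class AlertExample
      {
          static readonly Dictionary<int, ErrorInfo> _errorDictionary = new Dictionary<int, ErrorInfo>();

          static int GenerateAlert() => 42; // stand-in for the real call

          static void AddError(int argErrorId)
          {
              // Form 1: named local. 'alertId' is an int (a value type); it goes out of
              // scope when the method returns and needs no manual disposal.
              int alertId = GenerateAlert();
              _errorDictionary.Add(argErrorId, new ErrorInfo(DateTime.Now, alertId));

              // Form 2: inline call. The JIT produces essentially the same code;
              // the difference is readability, not memory.
              // _errorDictionary.Add(argErrorId, new ErrorInfo(DateTime.Now, GenerateAlert()));
          }
      }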

    Read the article

  • What makes a theme "Premium"?

    - by Sinthia V
    I have a lot of time invested in creating WordPress templates. I want to release combinations of these templates, along with different styles and fancy front pages, as "Premium WordPress Themes". What I need to know is: what does "Premium" mean? What do people expect of a GPL theme vs. a premium theme? Are there features that are considered required for a theme to be premium? Are there features that are in demand but considered "exceptional", i.e. not part of every premium theme? How can I tell the difference? I have heard tongue-in-cheek answers saying that any theme that makes money is premium, but I mean to ask what gives an outstanding theme its quality. Why is it worth more? I am technically able to do many things, but as a lone developer with a family to feed, I can't afford to spend time on features that no one cares about. I have to try to isolate the things that people want. This is serious food and rent to me. How can I get this kind of info so I can make my project successful?

    Read the article

  • Storing a pass-by-reference parameter as a pointer - Bad practice?

    - by Karl Nicoll
    I recently came across the following pattern in an API I've been forced to use:
      class SomeObject
      {
      public:
          // Constructor.
          SomeObject(bool copy = false);

          // Set a value.
          void SetValue(const ComplexType &value);

      private:
          bool m_copy;
          ComplexType *m_pComplexType;
          ComplexType m_complexType;
      };

      // ------------------------------------------------------------

      SomeObject::SomeObject(bool copy)
          : m_copy(copy)
      {
      }

      // ------------------------------------------------------------

      void SomeObject::SetValue(const ComplexType &value)
      {
          if (m_copy)
              m_complexType.assign(value);
          else
              m_pComplexType = const_cast<ComplexType *>(&value);
      }
    The background behind this pattern is that it is used to hold data prior to it being encoded and sent to a TCP socket. The copy weirdness is designed to make the class SomeObject efficient by only holding a pointer to the object until it needs to be encoded, but also provide the option to copy values if the lifetime of the SomeObject exceeds the lifetime of a ComplexType. However, consider the following:
      SomeObject SomeFunction()
      {
          ComplexType complexTypeInstance(1); // Create an instance of ComplexType.

          SomeObject encodeHelper;
          encodeHelper.SetValue(complexTypeInstance); // Okay.

          return encodeHelper; // Uh oh! complexTypeInstance has been destroyed, and
                               // now encoding will venture into the realm of undefined
                               // behaviour!
      }
    I tripped over this because I used the default constructor, and this resulted in messages being encoded as blank (through a fluke of undefined behaviour). It took an absolute age to pinpoint the cause! Anyway, is this a standard pattern for something like this? Are there any advantages to doing it this way vs overloading the SetValue method to accept a pointer that I'm missing? Thanks!

    Read the article

  • There's Not an App for That (Yet)

    - by Mark Hesse
    With an earlier-than-normal departure this morning to avoid the stalemate known as traffic congestion, I suddenly realized what I had failed to grab on my way out the door...  my company ID badge.  Unfortunately, at the time of my epiphany, I was far enough into commuter no-man's land where turning back would completely negate my early departure and increase my overall drive time exponentially.  Not being one to retrace my steps, I decided to press on. Upon arrival at the office and with an hour to go before a security guard would be on duty, I started thinking about the number of times I had forgotten my ID vs. the number of times I had forgotten my phone.  While rare on both accounts, my ID was most likely the missing artifact. I then wondered why there isn't an app for my smartphone that allows me to verify my credentials with my employer and then, provided with a secure token for the day, have the ability to access my building's card entry system.  On many levels, this seems much more secure than an ID card which can be lost, stolen or even forged and then used simply by tailgating into and around buildings at facilities where card scanning can generally be avoided.   As it turns out, another building on the campus has 24 x 7 guard coverage, so I was able to gain access in a relatively short time and secure a temporary ID badge.  Once inside and online, a quick internet search on the subject of smartphone badge access shows that efforts are underway to do exactly what I was thinking needed to be done. Having not spent any time studying about the technology, I discovered that it relies on Near Field Communications (NFC) enabled smartphones (of which, mine does not provide).  The only other option would require modifications to the security infrastructure to support alternative authentication technologies, such as barcode readers, which would be extremely costly to implement. For now, my best option is to put my corporate ID under my car keys... 

    Read the article

  • More Win 8 Education is Needed

    - by D'Arcy Lussier
    “My mail doesn’t work.” That’s what a colleague running Windows 8 said to me the day after he installed Windows 8 on his work laptop. “When I click my email, nothing comes up.” I took a look and realized what was going on – he was clicking the Windows 8 UI Mail app and assumed that it was somehow connected to his Outlook, which was installed as a desktop app. And this highlights a major educational challenge that Windows 8 will encounter – millions of users used to one style of interface are now being introduced to a new one that runs side-by-side with their desktop. At work we had an internal tech user group meeting, and we were showing new features of VS.NET 2012 and Windows 8. When we started talking about the difference between Windows 8 UI apps (AKA Windows Store apps) and desktop apps, people started asking some good questions:
      - Can we share a codebase between desktop and Windows Store apps?
      - What’s the difference between WinRT and .NET?
      - Why would we create a Windows Store app and not just a desktop app?
    Of course, people are looking at this from a traditional desktop point of view and not a tablet platform, which is really the market that Windows Store apps will shine on. Still, for developers who not only need to educate themselves but also educate their clients, we’re going to need a better understanding of Windows 8 to see it get real traction within the business/enterprise market. D

    Read the article

  • Emailing Service: To or Bcc?

    - by Shelakel
    I'm busy coding a reusable e-mail service for my company. The e-mail service will be doing quite a few things via injection through the strategy pattern (such as handling e-mail send-rate throttling, switching between SMTP and Amazon SES or Google App Engine e-mail clients when daily quotas are exceeded, and send-statistics tracking (mostly because it is necessary in order to stay within quotas), to name a few). Because e-mail sending will need to be throttled and other limitations exist (e.g. the max recipient quota on Amazon SES limiting recipients to 50 per send), the e-mails typically need to be broken up. From your experience, would it be better to send in bulk (multiple recipients per e-mail) or a single e-mail per recipient? The implication of the above is that to send to 1,000 recipients with a limit of 50 per send, you would send 20 e-mails using BCC in a newsletter scenario. When sending one e-mail per recipient, it would send 1,000 e-mails. E-mail sending is asynchronous (due to the inherent latency when sending, it's typically only possible to send 5 e-mails per second unless you are using multiple clients asynchronously). Edit: just for full disclosure, this service won't be used by or sold to spammers and will, as far as possible, automatically comply with national and international laws. Closed: thanks for all the valuable feedback. The concerns regarding compliance with laws, user experience (generic vs. personalized unsubscribe) and spam regulation via ISP blacklisting make To the preferred, and possibly the only, choice when sending system-generated e-mails to recipients.
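    As a rough illustration of the batching arithmetic described above, here is a minimal C# sketch that splits a recipient list into BCC batches of at most 50; the names and the commented-out send call are hypothetical and not from the original post:
      using System;
      using System.Collections.Generic;
      using System.Linq;

      static class BccBatcher
      {
          // Splits recipients into batches no larger than maxPerSend
          // (e.g. 50, the per-send recipient quota mentioned above).
          public static IEnumerable<List<string>> Batch(IReadOnlyList<string> recipients, int maxPerSend = 50)
          {
              for (int i = 0; i < recipients.Count; i += maxPerSend)
                  yield return recipients.Skip(i).Take(maxPerSend).ToList();
          }

          static void Main()
          {
              var recipients = Enumerable.Range(1, 1000).Select(n => $"user{n}@example.com").ToList();
              int sends = 0;
              foreach (var batch in Batch(recipients))
              {
                  // SendWithBcc(batch) would go here: one e-mail per batch of up to 50 BCC recipients.
                  sends++;
              }
              Console.WriteLine(sends); // 1000 recipients / 50 per send = 20 e-mails
          }
      }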

    Read the article

  • Database Insider - December 2012 issue

    - by Javier Puerta
    The December issue of the Database Insider newsletter is now available. (Full newsletter here)
      - Big Data: From Acquisition to Analysis. 2012 will likely be remembered as the year of big data, as a new generation of technologies enables organizations to acquire, organize, and analyze the exponentially growing and typically less-structured data generated from a variety of new sources. Oracle has produced a series of five short videos that offer a quick and compelling high-level introduction to big data. Read More
      - Total Cost of Ownership Comparison: Oracle Exadata vs. IBM P-Series. Read the research that found that over three years, the IBM hardware running Oracle Database cost 31 percent more in total cost of ownership than Oracle Exadata.
      - Webcast: Oracle Exadata Database Machine X3. Learn about Oracle's next-generation database machine, Oracle Exadata X3, which combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Available in an eight-rack configuration, it allows you to start small and grow.
      - Maximum Availability with Oracle GoldenGate. Discover how to eliminate not only unplanned downtime but also planned downtime resulting from database upgrades, migrations, and consolidation. Thursday, December 13, 19:00 CET / 6 p.m. UK.

    Read the article

  • Web hosting: deciding whether to pay for hosting or host your own?

    - by pllee
    Is there a guide out there on how to choose when to pay for web hosting vs. hosting your own? Assuming that root access is a must, I would like to compare things like cost, scalability and personal stress. Here is what I could come up with.
    Paying for web hosting:
      - Benefits: Much cheaper at a small scale (I assume anything under $50 a month would be cheaper than paying for the bandwidth of hosting it yourself). No stress in dealing with power outages, server restarts or the internet going down. For the most part, less busy work involved in setting up.
      - Negatives: Cost goes way up when higher specs are needed (for example, the monthly cost triples for the ability to use 8 GB of RAM that you could buy yourself for $90). This means you have to target a particular RAM usage and monitor it so your instance stays within the threshold. Root access is, for the most part, a premium. You may get tied into a vendor-specific deployment process.
    Hosting your own:
      - Positives: 100% control of specs and software. Once you get past paying for the bandwidth, you get much more bang for your buck by building your own machine.
      - Negatives: Doesn't make financial sense if bandwidth costs more than web hosting. Having to deal with power outages, server restarts or the internet going down.
    I think the best of both worlds would be a place that dealt with bandwidth, power outages and server restarts but where you provided your own server. Kind of like 24-hour day care for a server. Does anything like that exist?

    Read the article

  • Using jQuery, CKEditor, AJAX in ASP.NET MVC 2

    - by Ray Linder
    After banging my head for days on a “A potentially dangerous Request.Form value was detected" issue when post (ajax-ing) a form in ASP.NET MVC 2 on .NET 4.0 framework using jQuery and CKEditor, I found that when you use the following: Code Snippet $.ajax({     url: '/TheArea/Root/Add',     type: 'POST',     data: $("#form0Add").serialize(),     dataType: 'json',     //contentType: 'application/json; charset=utf-8',     beforeSend: function ()     {         pageNotify("NotifyMsgContentDiv", "MsgDefaultDiv", '<img src="/Content/images/content/icons/busy.gif" /> Adding post, please wait...', 300, "", true);         $("#btnAddSubmit").val("Please wait...").addClass("button-disabled").attr("disabled", "disabled");     },     success: function (data)     {         $("#btnAddSubmit").val("Add New Post").removeClass("button-disabled").removeAttr('disabled');         redirectToUrl("/Exhibitions");     },     error: function ()     {         pageNotify("NotifyMsgContentDiv", "MsgErrorDiv", '<img src="/Content/images/content/icons/cross.png" /> Could not add post. Please try again or contact your web administrator.', 6000, "normal");         $("#btnAddSubmit").val("Add New Post").removeClass("button-disabled").removeAttr('disabled');     } }); Notice the following: Code Snippet data: $("#form0Add").serialize(), You may run into the “A potentially dangerous Request.Form value was detected" issue with this. One of the requirements was NOT to disable ValidateRequest (ValidateRequest=”false”). For this project (and any other project) I felt it wasn’t necessary to disable ValidateRequest. Note: I’ve search for alternatives for the posting issue and everyone and their mothers continually suggested to disable ValidateRequest. That bothers me – a LOT. So, disabling ValidateRequest is totally out of the question (and always will be).  So I thought to modify how the “data: “ gets serialized. the ajax data fix was simple, add a .html(). YES!!! IT WORKS!!! No more “potentially dangerous” issue, ajax form posts (and does it beautifully)! So if you’re using jQuery to $.ajax() a form with CKEditor, remember to do: Code Snippet data: $("#form0Add").serialize().html(), or bad things will happen. Also, don’t forget to set Code Snippet config.htmlEncodeOutput = true; for the CKEditor config.js file (or equivalent). Example: Code Snippet CKEDITOR.editorConfig = function( config ) {     // Define changes to default configuration here. For example:     // config.language = 'fr';     config.uiColor = '#ccddff';     config.width = 640;     config.ignoreEmptyParagraph = true;     config.resize_enabled = false;     config.skin = 'kama';     config.enterMode = CKEDITOR.ENTER_BR;       config.toolbar = 'MyToolbar';     config.toolbar_MyToolbar =     [         ['Bold', 'Italic', 'Underline'],         ['JustifyLeft', 'JustifyCenter', 'JustifyRight', 'JustifyBlock', 'Font', 'FontSize', 'TextColor', 'BGColor'],         ['BulletedList', 'NumberedList', '-', 'Outdent', 'Indent'],         '/',         ['Scayt', '-', 'Cut', 'Copy', 'Paste', 'Find'],         ['Undo', 'Redo'],         ['Link', 'Unlink', 'Anchor', 'Image', 'Flash', 'HorizontalRule'],         ['Table'],         ['Preview', 'Source']     ];     config.htmlEncodeOutput = true; }; Happy coding!!! Tags: jQuery ASP.NET MVC 2 ASP.NET 4.0 AJAX

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • IIS 7’s Sneaky Secret to Get COM-InterOp to Run

    - by David Hoerster
    Originally posted on: http://geekswithblogs.net/DavidHoerster/archive/2013/06/17/iis-7rsquos-sneaky-secret-to-get-com-interop-to-run.aspx
    If you’re like me, you don’t really do a lot with COM components these days. For me, I’ve been ‘lucky’ enough to stay in the managed world for the past 6 or 7 years. Until last week. I’m running a project to upgrade a web interface to an older COM-based application. The old web interface is all classic ASP with lots of tables, in-line styles and a bunch of other late 90’s and early 2000’s goodies. So in addition to updating the UI to be more modern looking and responsive, I decided to give the server side an update, too. I built some COM-InterOp DLL’s (easily, through VS2012’s Add Reference feature…nothing new here) and built a test console app to make sure the COM DLL’s were actually built according to the COM spec. There’s a document management system I’m thinking of whose COM DLLs were not proper COM DLLs and crashed and burned every time .NET tried to call them through a COM-InterOp layer. Anyway, my test app worked like a champ and I felt confident that I could build a nice façade around the COM DLL’s, wrap some functionality internally and only expose to my users/clients what they really needed. So I did this, built some tests and also built a test web app to make sure everything worked great. It did. It ran fine in IIS Express via Visual Studio 2012, and the timings were very close to the pure classic ASP calls, so there wasn’t much overhead involved in going through the COM-InterOp layer. You know where this is going, don’t you? I deployed my test app to a DEV server running IIS 7.5. When I went to my first test page that called the COM-InterOp layer, I got this pretty message: Retrieving the COM class factory for component with CLSID {81C08CAE-1453-11D4-BEBC-00500457076D} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)). It worked as a console app and while running under IIS Express, so it must be permissions, right? I gave every account I could think of all sorts of COM+ rights and nothing, nada, zilch! Then I came across a question on Experts Exchange, and at the bottom of the page someone mentioned that the app pool should be set to allow 32-bit apps to run. Oh yeah, my machine is 64-bit; these COM DLL’s I’m using are old and are definitely 32-bit. I didn’t check for that and didn’t even think about it. But I went ahead and looked at the app pool that my web site was running under and what did I see? Yep: select your app pool in IIS 7.x, click on Advanced Settings and look for “Enable 32-bit Applications”. I set it to True and my test application suddenly worked. Hope this saves somebody out there from pulling out their hair.
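    As an aside, if you would rather script that app pool switch than click through Advanced Settings, a minimal C# sketch using the Microsoft.Web.Administration API might look like the following; the pool name is a placeholder, and the code must run elevated on the IIS box:
      using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll (ships with IIS 7.x)

      class Enable32Bit
      {
          static void Main()
          {
              using (var serverManager = new ServerManager())
              {
                  // "MyAppPool" is a placeholder; use the pool your site actually runs under.
                  ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];

                  // Same switch as "Enable 32-bit Applications" in Advanced Settings.
                  pool.Enable32BitAppOnWin64 = true;
                  serverManager.CommitChanges();
              }
          }
      }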

    Read the article

  • Print SSRS Report / PDF automatically from SQL Server agent or Windows Service

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/10/22/print-ssrs-report--pdf-from-sql-server-agent-or.aspx
    I have turned the Web upside-down to find a solution to this that uses the fewest components and the least maintenance possible to achieve automated printing of an SSRS report. The reason is that we do not have a full software development team to maintain an app, and we have to minimize the overhead for the support team. Here is my setup:
      - SQL Server 2008 R2 on Windows Server 2008 R2
      - PDF-format reports generated by SSRS report subscriptions to a Windows file share
      - Network printer
      - Coloured reports with logo and branding
    I have found and tested the following solutions to no avail:
      - Calling the Adobe Acrobat Reader exe: "C:\Program Files (x86)\Adobe\Reader 11.0\Reader\acroRd32.exe" /n /s /o /h /t "C:\temp\print.pdf" \\printserver\printername. Pros: very simple option. Cons: Adobe Acrobat Reader has to launch its GUI to send a job to a printer, so this option cannot be used when printing from a service.
      - Calling the Adobe Acrobat Reader exe as a process from a .NET console app. Pros: a bit harder than the above, but still a simple solution. Cons: same as above.
      - PowerShell script (Start-Process -FilePath "C:\temp\print.pdf" -Verb Print). Pros: very simple option. Cons: uses the default PDF client in quiet mode to print, but also requires an active session.
      - Foxit Reader. Pros: very simple option. Cons: requires a GUI, same as Adobe Acrobat Reader.
      - Using the Reporting Services Web service to run the report and stream it to an image object that is then passed to the printer. Quite complex; this is what we're trying to avoid.
    After pulling my hair out for two days, testing and evaluating the above solutions, I ended up learning more about printers (more than ever in my entire life) and how printer drivers work with PostScript. I then bumped into a PostScript interpreter called GhostScript (http://www.ghostscript.com/), and the solution started to get clearer and clearer. I managed to achieve a solution (maybe not the simplest, but efficient enough to achieve the least-maintenance, least-components goal) in 3 simple steps:
      1. Install GhostScript (http://www.ghostscript.com/download/) - this is an open-source PostScript and PDF interpreter. Printing directly using GhostScript only produces grayscale prints using the generic laserjet driver, unless you save as a BMP image and then interpret the colours using the image.
      2. Install GSView (http://pages.cs.wisc.edu/~ghost/gsview/) - this is a GhostScript add-on that makes it easier to print directly to a Windows printer. GSPrint automates the above PDF -> BMP -> printer driver chain.
      3. Run the GSPrint command from SQL Server Agent or a Windows Service: "C:\Program Files\Ghostgum\gsview\gsprint.exe" -color -landscape -all -printer "printername" "C:\temp\print.pdf". Command-line options are here: http://pages.cs.wisc.edu/~ghost/gsview/gsprint.htm
    Another lesson learned is that, since you are calling the script from the service account, it will not necessarily have the printer mapped in its Windows profile (if it even has one). The workaround is to add a local printer as you normally would and then map this printer to the network printer. Note that you may need to install the printer driver locally on the server. So, that's it! There are many ways to achieve a solution. The key thing is how you provide the smartest solution!
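    For step 3, if you do end up invoking gsprint from a small .NET wrapper (for example inside a Windows Service), a minimal C# sketch of shelling out to it could look like this; the paths and printer name are placeholders, not values from the original post:
      using System.Diagnostics;

      static class PdfPrinter
      {
          // Launches gsprint.exe to print a PDF in colour to a named Windows printer.
          // The gsprint path, PDF path and printer name below are placeholders.
          public static int PrintPdf(string pdfPath, string printerName)
          {
              var psi = new ProcessStartInfo
              {
                  FileName = @"C:\Program Files\Ghostgum\gsview\gsprint.exe",
                  Arguments = $"-color -landscape -all -printer \"{printerName}\" \"{pdfPath}\"",
                  UseShellExecute = false,
                  CreateNoWindow = true
              };

              using (var process = Process.Start(psi))
              {
                  process.WaitForExit();
                  return process.ExitCode; // non-zero usually indicates a print failure
              }
          }
      }

      // Example usage: PdfPrinter.PrintPdf(@"C:\temp\print.pdf", "printername");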

    Read the article

  • Translate Languages in IE 8 with Bing Translator

    - by Asian Angel
    Do you need side by side or hover language translations while browsing? Then join us as we look at the Bing Translator accelerator for Internet Explorer 8. Note: This article is geared towards those who may not have used this accelerator before or declined to "add it" when setting up IE 8.

    Using Bing Translator

    Once you have clicked on Add to Internet Explorer and confirmed the installation, your new accelerator is ready to use. For our example we chose a Norwegian news article. Right-click within the webpage to access the context menu entry for translating. Depending on the originating language, you may want to go ahead and set it manually before beginning the translation. The translation will be opened in a new tab. Note: The same entry can also be accessed through the All Accelerators listing.

    There are four settings available for viewing your translations: side by side, top/bottom, original with hover translation, and translation with hover original.

    First, the side by side view: when maximized, the window area is divided 50/50, and as you hover your mouse or scroll in one side the same action occurs simultaneously in the other side. The top/bottom view behaves the same way, with browser actions occurring simultaneously in both sections. The original with hover translation view is especially helpful if you are studying a new language and want to check your level of understanding of the original language. Finally, there is the translation with hover original view. Four different viewing options make it easy to find the one that best suits your needs.

    Conclusion

    If you need a convenient way to translate between languages in Internet Explorer 8, then the Bing Translator accelerator just might be what you have been looking for.

    Links: Add the Bing Translator accelerator to Internet Explorer 8

    Read the article

  • [MISC GEEKERY] Lucid Lynx to Come Loaded with Ubuntu One Music Store

    - by Vivek
    Ubuntu 10.04 (code name Lucid Lynx) will come loaded with the Ubuntu One music store. Rhythmbox will have the Ubuntu One music store integrated in it, and it will also allow users to download purchased music to their local machine.

    Ubuntu One Music Store

    Users will be able to access the Ubuntu One music store from the sidebar of Rhythmbox. The music store is a web page that opens in the Rhythmbox player, with albums listed on its home page. The Ubuntu One music store is powered by 7digital, a leading digital B2B media delivery company based in London and operating globally. Canonical, the company behind Ubuntu, has partnered with 7digital to bring the music store to its users, integrating it with Rhythmbox and its cloud storage service Ubuntu One, which was launched last year.

    The home screen of the Ubuntu One music store displays popular albums and functionality to browse and search. You can search for artists, tracks, albums, or a combination of all three. Users will also be able to browse the store alphabetically, or based on different music genres. Once you select a specific artist, all their available albums are arranged in a grid. Once an album is selected, you'll be able to download specific songs or the whole album, and you can preview different songs for 60 seconds.

    You'll be able to buy tracks using a credit card or with PayPal. The purchased tracks will be visible under Library \ Purchased from Ubuntu One. The downloaded tracks are also synced with your Ubuntu One account, which means that you'll be able to access your tracks from anywhere on the web. The default Ubuntu One account comes with 2 GB free storage; however, you can also purchase additional space if you need it.

    All the music is in mp3 format, which is not supported by default in Ubuntu. However, you can get mp3 playback functionality using the GStreamer multimedia framework.

    Conclusion

    All in all, the Ubuntu One music store is a positive move to enhance the user experience and also increase the popularity of Canonical by bringing Ubuntu closer to regular users. It would also allow Canonical to make some revenue in collaboration with 7digital.

    Ubuntu One Music Store Wiki

    Read the article

  • Silverlight Cream for April 02, 2010 -- #828

    - by Dave Campbell
    In this Issue: Phil Middlemiss, Robert Kozak, Kathleen Dollard, Avi Pilosof, Nokola, Jeff Wilcox, David Anson, Timmy Kokke, Tim Greenfield, and Josh Smith.

    Shoutout: SmartyP has additional info up on his WP7 Pivot app: Preview of My Current Windows Phone 7 Pivot Work

    From SilverlightCream.com:

    - A Chrome and Glass Theme - Part I: Phil Middlemiss is starting a tutorial series on building a new theme for Silverlight; in this first one we define some gradients and color resources... good stuff, Phil.
    - Intercepting INotifyPropertyChanged: This is Robert Kozak's first post on this blog, but it's a good one about INotifyPropertyChanged and MVVM, with a solution in the post and lots of code and discussion.
    - How do I Display Data of Complex Bound Criteria in Horizontal Lists in Silverlight?: Kathleen Dollard's latest article in Visual Studio magazine answers a question about displaying a list of complex bound criteria, including data, child data, and photos, horizontally one at a time. Very nice-looking result, and all the code.
    - Windows Phone: Frame/Page navigation and transitions using the TransitioningContentControl: Avi Pilosof discusses the built-in (boring) navigation on WP7, and then shows how to use the TransitioningContentControl from the Toolkit to apply transitions to the navigation.
    - EasyPainter: Cloud Turbulence and Particle Buzz: Nokola returns with a couple more effects for EasyPainter: Cloud Turbulence and Particle Buzz... check out the example screenshots, then go grab the code.
    - Property change notifications for multithreaded Silverlight applications: Jeff Wilcox discusses the need to raise change notifications on the UI thread in multi-threaded apps... great diagrams to see what's going on.
    - Tip: The default value of a DependencyProperty is shared by all instances of the class that registers it: David Anson has a tip up about setting the default value of a DependencyProperty, and the consequence that may have depending upon the type.
    - Building a "real" extension for Expression Blend: Timmy Kokke's code is WPF, but the subject is near and dear to us all; Timmy has a real-world Expression Blend extension up, a search for controls in the Objects and Timelines pane, and even if that doesn't interest you, it's the source to a Blend extension!
    - XPath support in Silverlight 4 + XPathPad: Tim Greenfield not only talks about XPath in SL4 RC, he has also produced a tool, XPathPad, and provided the source... if you've used XPath, you either are a higher thinker than me (not a big stretch), or you need this :)
    - Using a Service Locator to Work with MessageBoxes in an MVVM Application: Josh Smith posted about a question that comes up a lot: showing a messagebox from a ViewModel object. This might not work for custom message boxes or unit testing; this post covers the unit testing aspect.

    Stay in the 'Light!

    Read the article

  • OWB 11gR2 &ndash; Flexible and extensible

    - by David Allan
    The Oracle data integration extensibility capabilities are something I love; nothing is more frustrating than a tool or platform that is very constraining. I think extensibility and flexibility are invaluable capabilities in the data integration arena. I liked Uli Bethke's posting on some extensibility capabilities with ODI (see Nesting ODI Substitution Method Calls here); he has some useful guidance on making customizations to existing KMs, and it's nice to learn by example. I thought I'd illustrate the same capabilities with ODI's partner OWB for the OWB community. There is a whole new world of potential. The LKM/IKM/CKM/JKMs are the primary templates that are supported (plus the Oracle Target code template), so there is a lot of potential for customizing and extending the product in this release. Enough waffle...

    Diving in at the deep end from Uli's post: in OWB 11gR2 the table operator has a number of additional properties that let you annotate the column usage with ODI-like properties, such as the slowly changing usage, or for your own user-defined purpose as in Uli's post. Below you can see that for the target table SALES_TARGET we use the UD5 property; when the mapping is assigned the code template (knowledge module) that has been modified with Uli's change, we can do custom things such as creating indices. The code template used by the mapping has the additional step, which is basically the code from Uli's posting used directly; the ODI 10g substitution references are also supported from within OWB's runtime.

    Now, to see whether this does what we expect before we execute it, we can check out the generated code, similar to how traditional mapping generation and preview work. You do this by clicking on the 'Inspect Code' button on the execution unit's code template assignment. This creates another tab with the prefix 'Code - <mapping name>' where the generated code is put. Scrolling down, we find the last step with the indices being created. Looks good, so we are ready to deploy and execute.

    After executing the mapping we can use the 'Audit Information' panel (select the mapping in the designer tree and click on View/Audit Information). This gives us a view of the execution where we can drill into the tasks that were executed and inspect both the template and the generated code that was executed, plus any potential errors. Reflecting back on earlier versions of OWB, these were the kinds of features that were always highly desirable: getting under the hood of the code generation and tweaking bits and pieces. Fun and powerful stuff!

    We can step it up a bit here and explore some further ideas. The example below is a daisy-chained set of execution units where the intermediate table is a target of one unit and the source for another. We want that table to be a global temporary table, so we can tweak the templates. Back in the copy of SQL Control Append (for demo purposes), we modify the create target table step to make the table a global temporary table, with the option of on commit preserve rows. You can get a feel for some of the customizations and changes possible, providing some great flexibility and extensibility for the data integration tools.

    Read the article

  • What’s new in Silverlight 4 RC?

    - by pluginbaby
    I am here in Las Vegas for MIX10, where Scott Guthrie announced today the release of Silverlight 4 RC and the Visual Studio 2010 tools. You can now install VS2010 RC!!! As always, download links are here: www.silverlight.net

    He also said that the final version of Silverlight 4 will come next month (so April)! 4 months ago, I wrote a blog post on the new features of Silverlight 4 beta, so... what's new in the RC?

    Rich Text
    · RichTextArea renamed to RichTextBox
    · Text position and selection APIs
    · "Xaml" property for serializing text content
    · XAML clipboard format
    · FlowDirection support on Runs tag
    · "Format then type" support when dragging controls to the designer
    · Thai/Vietnamese/Indic support
    · UI Automation Text pattern

    Networking
    · UploadProgress support (Client stack)
    · Caching support (Client stack)
    · Sockets security restrictions removal (Elevated Trust)
    · Sockets policy file retrieval via HTTP
    · Accept-Language header

    Out of Browser (Elevated Trust)
    · XAP signing
    · Silent install and emulation mode
    · Custom window chrome
    · Better support for COM Automation
    · Cancellable shutdown event
    · Updated security dialogs

    Media
    · Pinned full-screen mode on secondary display
    · Webcam/Mic configuration preview
    · More descriptive MediaSourceStream errors
    · Content & Output protection updates
    · Updates to H.264 content protection (ClearNAL)
    · Digital Constraint Token
    · CGMS-A
    · Multicast
    · Graphics card driver validation & revocation

    Graphics and Printing
    · HW accelerated Perspective Transforms
    · Ability to query page size and printable area
    · Memory usage and perf improvements

    Data
    · Entity-level validation support of INotifyDataErrorInfo for DataGrid
    · XPath support for XML

    Parser
    · New architecture enables future innovation
    · Performance and stability improvements
    · XmlnsPrefix & XmlnsDefinition attributes
    · Support setting order-dependent properties

    Globalization & Localization
    · Support for 31 new languages
    · Arabic, Hebrew and Thai input on Mac
    · Indic support

    More…
    · Update to DeepZoom code base with HW acceleration
    · Support for Private mode browsing
    · Google Chrome support (Windows)
    · FrameworkElement.Unloaded event
    · HTML Hosting accessibility
    · IsoStore perf improvements
    · Native hosting perf improvements (e.g., Bing Toolbar)
    · Consistency with Silverlight for Mobile APIs and Tooling
    · SDK
      - System.Numerics.dll
      - Dynamic XAP support (MEF)
      - Frame/Navigation refresh support

    That's a lot! You will find more details at the following links:
    http://timheuer.com/blog/archive/2010/03/15/whats-new-in-silverlight-4-rc-mix10.aspx
    http://www.davidpoll.com/2010/03/15/new-in-the-silverlight-4-rc-xaml-features/

    Read the article

  • Sixeyed.Caching available now on NuGet and GitHub!

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/22/sixeyed.caching-available-now-on-nuget-and-github.aspx

    The good guys at Pluralsight have okayed me to publish my caching framework (as seen in Caching in the .NET Stack: Inside-Out) as an open-source library, and it's out now. You can get it here: Sixeyed.Caching source code on GitHub, and here: Sixeyed.Caching package v1.0.0 on NuGet. If you haven't seen the course, there's a preview here on YouTube: In-Process and Out-of-Process Caches, which gives a good flavour.

    The library is a wrapper around various cache providers, including the .NET MemoryCache, AppFabric cache, and memcached*. All the wrappers inherit from a base class which gives you a set of common functionality against all the cache implementations:

    - inherits OutputCacheProvider, so you can use your chosen cache provider as an ASP.NET output cache;
    - serialization and encryption, so you can configure whether you want your cache items serialized (XML, JSON or binary) and encrypted;
    - instrumentation, so you can optionally use performance counters to monitor cache attempts and hits, at a low level.

    The framework wraps up different caches into an ICache interface, and it lets you use a provider directly like this:

    Cache.Memory.Get<RefData>(refDataKey);

    - or with configuration to use the default cache provider:

    Cache.Default.Get<RefData>(refDataKey);

    The library uses Unity's interception framework to implement AOP caching, which you can use by flagging methods with the [Cache] attribute:

    [Cache]
    public RefData GetItem(string refDataKey)

    - and you can be more specific on the required cache behaviour:

    [Cache(CacheType=CacheType.Memory, Days=1)]
    public RefData GetItem(string refDataKey)

    - or really specific:

    [Cache(CacheType=CacheType.Disk, SerializationFormat=SerializationFormat.Json, Hours=2, Minutes=59)]
    public RefData GetItem(string refDataKey)

    Provided you get instances of classes with cacheable methods from the container, the attributed method results will be cached, and repeated calls will be fetched from the cache.

    You can also set a bunch of cache defaults in application config, like whether to use encryption and instrumentation, and whether the cache system is enabled at all:

    <sixeyed.caching enabled="true">
      <performanceCounters instrumentCacheTotalCounts="true" instrumentCacheTargetCounts="true" categoryNamePrefix="Sixeyed.Caching.Tests"/>
      <encryption enabled="true" key="1234567890abcdef1234567890abcdef" iv="1234567890abcdef"/> <!-- key must be 32 characters, IV must be 16 characters -->
    </sixeyed.caching>

    For AOP and methods flagged with the cache attribute, you can override the compile-time cache settings at runtime with more config (keyed by the class and method name):

    <sixeyed.caching enabled="true">
      <targets>
        <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheConfiguredInternal" enabled="false"/>
        <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheExpiresConfiguredInternal" seconds="1"/>
      </targets>
    </sixeyed.caching>

    It's released under the MIT license, so you can use it freely in your own apps and modify as required. I'll be adding more content to the GitHub wiki, which will be the main source of documentation, but for now there's an FAQ to get you started.

    * In the course the framework library also wraps NCache Express, but there's no public redistributable library that I can find, so it's not in Sixeyed.Caching.
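
    To see how those pieces fit together end to end, here's a rough sketch (not from the post). The RefData and RefDataRepository types, the Minutes value and the namespace are assumptions for illustration only; check the GitHub wiki and FAQ for the library's real types and the Unity container setup needed for the [Cache] attribute to be intercepted.

    // Hypothetical usage sketch - the types and namespace here are assumptions, not taken from the library docs.
    using System;
    using Sixeyed.Caching;

    public class RefData
    {
        public string Key { get; set; }
        public string Description { get; set; }
    }

    public class RefDataRepository
    {
        // When this class is resolved through the Unity container with interception enabled,
        // repeated calls with the same key should be served from the memory cache for 30 minutes.
        [Cache(CacheType = CacheType.Memory, Minutes = 30)]
        public virtual RefData GetItem(string refDataKey)
        {
            // Imagine an expensive database or web service call here.
            return new RefData { Key = refDataKey, Description = "loaded from source" };
        }
    }

    public static class CacheExample
    {
        public static void Run()
        {
            // Direct use of a provider, as shown in the post - no AOP involved.
            var item = Cache.Default.Get<RefData>("refdata.GB");
            Console.WriteLine(item == null ? "cache miss" : item.Description);
        }
    }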

    Read the article

  • Nginx and PHP Fundamentals

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/08/01/nginx-and-php-fundamentals.aspx

    Hot on the heels of my .NET caching course, I've had my first "fundamentals" course released on Pluralsight: Nginx and PHP Fundamentals. It's a practical look at two of the biggest technologies on the web – Nginx, which is the fastest growing HTTP server around (currently hosting 100+ million sites), and PHP, which powers more websites than any other server-side framework (currently 240+ million sites).

    The two technologies work well together, both are open-source and cross-platform, and both are lightweight and easy to get started with - you just need to download and unzip the runtimes, and with a text editor you can create and host dynamic websites. I've used PHP as a second (sometimes third) language since 2005, when I was brought cold into an established codebase to help improve performance, and Nginx to host tier 2 apps for the last couple of years. As with any training course, you learn new things as you produce it, and it was good to focus on a different stack from my commercial .NET world.

    In the course I start with a website in two parts – one which is just static content, and one which processes a user registration form using ASP.NET MVC, both running in IIS. Over four modules I migrate the app to Nginx and PHP:

    - Hosting Static Content in Nginx – how to deploy and configure Nginx for a basic website;
    - PHP Part 1: Basic Web Forms – installing PHP and an IDE, and building a simple form with server-side validation;
    - PHP Part 2: Packages and Integration – using PECL and Composer for packages to connect to Azure, AWS, Mongo and reCAPTCHA;
    - Hosting PHP in Nginx – configuring Nginx to host our PHP site.

    Along the way I run some performance stats with JMeter, and the headlines are that Nginx running on Linux outperforms IIS on Windows for static content, by 800 requests per second over 1000 concurrent requests, and that Linux+Nginx+PHP outperforms Windows+IIS+ASP.NET MVC by 700 requests per second with the same load. Of course, the headline stats don't tell the whole story, and when you add OpCode caching for PHP and the ASP.NET Output Cache, the results are very different.

    As Web architecture moves away from heavy server-side processing, to Single Page Apps with client-side frameworks like AngularJS and Knockout, I think there's an increasing need for high-performance, low-cost server technologies, and the combination of Nginx and PHP makes a compelling case.

    Read the article

< Previous Page | 387 388 389 390 391 392 393 394 395 396 397 398  | Next Page >