Search Results

Search found 1915 results on 77 pages for 'identical'.

  • Efficient mirroring of directories using hard links [closed]

    - by zoqaeski
    I'm backing up my music collection on to a number of NTFS-formatted external hard-drives; however, as I store my main collection in FLAC and have my library on my laptop as MP3s to save space, I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I backup my laptop's library to the "mp3s" directory, I want to only copy across MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, discnumber if applicable, etc, and I use a tagging editor to ensure consistency across all these locations. I'm also using a Linux computer, but keeping the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows. At the moment, I use the following command to perform the backups, but this is time-consuming due to the expensive nature of finding hard links. rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?
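
    Since all of the trees share an identical hierarchy, the hard-link decision can be made with a direct relative-path lookup rather than rsync's expensive --link-dest search. The following is a minimal C++17 sketch of that idea only, not a drop-in replacement: the paths, the .mp3-only filter and the program name are illustrative assumptions, and it skips rsync's update and attribute handling.

        // mirror_mp3s.cpp - hard-link MP3s already present in "music", copy the rest.
        // Build with a C++17 compiler, e.g. g++ -std=c++17 mirror_mp3s.cpp -o mirror_mp3s
        // (older GCC releases also need -lstdc++fs).
        #include <filesystem>
        #include <iostream>

        namespace fs = std::filesystem;

        int main(int argc, char* argv[]) {
            if (argc != 4) {
                std::cerr << "usage: mirror_mp3s <laptop_mp3_root> <music_root> <mp3s_root>\n";
                return 1;
            }
            const fs::path laptop = argv[1]; // laptop MP3 library (source)
            const fs::path music  = argv[2]; // backup "music" tree (any format)
            const fs::path mp3s   = argv[3]; // backup "mp3s" tree being populated

            for (const auto& entry : fs::recursive_directory_iterator(laptop)) {
                if (!entry.is_regular_file() || entry.path().extension() != ".mp3")
                    continue;
                // Identical hierarchies are assumed, so the relative path is the key.
                const fs::path rel  = fs::relative(entry.path(), laptop);
                const fs::path dest = mp3s / rel;
                if (fs::exists(dest))
                    continue;                                // already backed up
                fs::create_directories(dest.parent_path());
                const fs::path in_music = music / rel;
                if (fs::exists(in_music))
                    fs::create_hard_link(in_music, dest);    // reuse the copy in "music"
                else
                    fs::copy_file(entry.path(), dest);       // genuinely new file
            }
            return 0;
        }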

  • Cause of laptop only booting once every 3-10 times

    - by user16441
    My three-year-old Asus Eee laptop worked flawlessly in the past, but in the last week it has started behaving oddly. When I boot, one of three things happens. The screen remains totally black, absolutely nothing comes up, and I need to retry booting (happens 70% of the time). Or the laptop starts a regular boot, but somewhere during the boot process the screen goes black and I need to retry (20% of the time); the screen going black is more likely when I move the laptop around during boot. Or the laptop boots properly. Once booted, all is fine, but I'm afraid to turn it off and have been keeping the laptop running into the night. Additional details: all the normal lights come on when I boot, regardless of which scenario occurs, and there are no odd sounds or beeps in any scenario. I thought the SSD might be dying, but it does not support SMART and it is proving difficult to troubleshoot. I tried booting with and without the battery; the scenarios are identical. Before investigating the drive any further, I'd like to hear opinions on the cause of this problem. Is the drive the most likely culprit, or could there be another cause for these symptoms? Using Arch Linux.

  • 3 Server, is this a cluster scenario?

    - by HornedBeast
    Hello. At the moment I have one Ubuntu 9.10 server running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and email. I also have two other servers with exactly the same hardware and spec as the first, which use rsync to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. I also tend to find that if people are downloading a large amount from the file server, no one can access their email - especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all three servers running the same services, doing the same thing, in some sort of cluster with load balancing? In short, how can I get the best out of my three hardware servers? I'm not really sure where to begin looking, or how to go about a setup where all three servers are identical but perhaps one acts as the main load balancer. If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

  • Casting an object using 'as' returns null: myObject = newObject as MyObject; // null

    - by John Russell
    I am trying to create a custom object in AS3 to pass information to and from a server, which in this case will be Red5. In the below screenshots you will see that I am able to send a request for an object from as3, and receive it successfully from the java server. However, when I try to cast the received object to my defined objectType using 'as', it takes the value of null. It is my understanding that that when using "as" your checking to see if your variable is a member of the specified data type. If the variable is not, then null will be returned. This screenshot illustrates that I am have successfully received my object 'o' from red5 and I am just about to cast it to the (supposedly) identical datatype testObject of LobbyData: However, when testObject = o as LobbyData; runs, it returns null. :( Below you will see my specifications both on the java server and the as3 client. I am confident that both objects are identical in every way, but for some reason flash does not think so. I have been pulling my hair out for a long time, does anyone have any thoughts? AS3 Object: import flash.utils.IDataInput; import flash.utils.IDataOutput; import flash.utils.IExternalizable; import flash.net.registerClassAlias; [Bindable] [RemoteClass(alias = "myLobbyData.LobbyData")] public class LobbyData implements IExternalizable { private var sent:int; // java sentinel private var u:String; // red5 username private var sen:int; // another sentinel? private var ui:int; // fb uid private var fn:String; // fb name private var pic:String; // fb pic private var inb:Boolean; // is in the table? private var t:int; // table number private var s:int; // seat number public function setSent(sent:int):void { this.sent = sent; } public function getSent():int { return sent; } public function setU(u:String):void { this.u = u; } public function getU():String { return u; } public function setSen(sen:int):void { this.sen = sen; } public function getSen():int { return sen; } public function setUi(ui:int):void { this.ui = ui; } public function getUi():int { return ui; } public function setFn(fn:String):void { this.fn = fn; } public function getFn():String { return fn; } public function setPic(pic:String):void { this.pic = pic; } public function getPic():String { return pic; } public function setInb(inb:Boolean):void { this.inb = inb; } public function getInb():Boolean { return inb; } public function setT(t:int):void { this.t = t; } public function getT():int { return t; } public function setS(s:int):void { this.s = s; } public function getS():int { return s; } public function readExternal(input:IDataInput):void { sent = input.readInt(); u = input.readUTF(); sen = input.readInt(); ui = input.readInt(); fn = input.readUTF(); pic = input.readUTF(); inb = input.readBoolean(); t = input.readInt(); s = input.readInt(); } public function writeExternal(output:IDataOutput):void { output.writeInt(sent); output.writeUTF(u); output.writeInt(sen); output.writeInt(ui); output.writeUTF(fn); output.writeUTF(pic); output.writeBoolean(inb); output.writeInt(t); output.writeInt(s); } } Java Object: package myLobbyData; import org.red5.io.amf3.IDataInput; import org.red5.io.amf3.IDataOutput; import org.red5.io.amf3.IExternalizable; public class LobbyData implements IExternalizable { private static final long serialVersionUID = 115280920; private int sent; // java sentinel private String u; // red5 username private int sen; // another sentinel? 
private int ui; // fb uid private String fn; // fb name private String pic; // fb pic private Boolean inb; // is in the table? private int t; // table number private int s; // seat number public void setSent(int sent) { this.sent = sent; } public int getSent() { return sent; } public void setU(String u) { this.u = u; } public String getU() { return u; } public void setSen(int sen) { this.sen = sen; } public int getSen() { return sen; } public void setUi(int ui) { this.ui = ui; } public int getUi() { return ui; } public void setFn(String fn) { this.fn = fn; } public String getFn() { return fn; } public void setPic(String pic) { this.pic = pic; } public String getPic() { return pic; } public void setInb(Boolean inb) { this.inb = inb; } public Boolean getInb() { return inb; } public void setT(int t) { this.t = t; } public int getT() { return t; } public void setS(int s) { this.s = s; } public int getS() { return s; } @Override public void readExternal(IDataInput input) { sent = input.readInt(); u = input.readUTF(); sen = input.readInt(); ui = input.readInt(); fn = input.readUTF(); pic = input.readUTF(); inb = input.readBoolean(); t = input.readInt(); s = input.readInt(); } @Override public void writeExternal(IDataOutput output) { output.writeInt(sent); output.writeUTF(u); output.writeInt(sen); output.writeInt(ui); output.writeUTF(fn); output.writeUTF(pic); output.writeBoolean(inb); output.writeInt(t); output.writeInt(s); } } AS3 Client: public function refreshRoom(event:Event) { var resp:Responder=new Responder(handleResp,null); ncLobby.call("getLobbyData", resp, null); } public function handleResp(o:Object):void { var testObject:LobbyData=new LobbyData; testObject = o as LobbyData; trace(testObject); } Java Client public LobbyData getLobbyData(String param) { LobbyData lobbyData1 = new LobbyData(); lobbyData1.setSent(5); lobbyData1.setU("lawlcats"); lobbyData1.setSen(5); lobbyData1.setUi(5); lobbyData1.setFn("lulz"); lobbyData1.setPic("lulzagain"); lobbyData1.setInb(true); lobbyData1.setT(5); lobbyData1.setS(5); return lobbyData1; }

  • C++ non-member functions for nested template classes

    - by beldaz
    I have been writing several class templates that contain nested iterator classes, for which an equality comparison is required. As I believe is fairly typical, the comparison is performed with a non-member (and non-friend) operator== function. In doing so, my compiler (I'm using Mingw32 GCC 4.4 with flags -O3 -g -Wall) fails to find the function and I have run out of possible reasons. In the rather large block of code below there are three classes: a Base class, a Composed class that holds a Base object, and a Nested class identical to the Composed class except that it is nested within an Outer class. Non-member operator== functions are supplied for each. These classes are in templated and untemplated forms (in their own respective namespaces), with the latter equivalent to the former specialised for unsigned integers. In main, two identical objects for each class are compared. For the untemplated case there is no problem, but for the templated case the compiler fails to find operator==. What's going on? #include <iostream> namespace templated { template<typename T> class Base { T t_; public: explicit Base(const T& t) : t_(t) {} bool equal(const Base& x) const { return x.t_==t_; } }; template<typename T> bool operator==(const Base<T> &x, const Base<T> &y) { return x.equal(y); } template<typename T> class Composed { typedef Base<T> Base_; Base_ base_; public: explicit Composed(const T& t) : base_(t) {} bool equal(const Composed& x) const {return x.base_==base_;} }; template<typename T> bool operator==(const Composed<T> &x, const Composed<T> &y) { return x.equal(y); } template<typename T> class Outer { public: class Nested { typedef Base<T> Base_; Base_ base_; public: explicit Nested(const T& t) : base_(t) {} bool equal(const Nested& x) const {return x.base_==base_;} }; }; template<typename T> bool operator==(const typename Outer<T>::Nested &x, const typename Outer<T>::Nested &y) { return x.equal(y); } } // namespace templated namespace untemplated { class Base { unsigned int t_; public: explicit Base(const unsigned int& t) : t_(t) {} bool equal(const Base& x) const { return x.t_==t_; } }; bool operator==(const Base &x, const Base &y) { return x.equal(y); } class Composed { typedef Base Base_; Base_ base_; public: explicit Composed(const unsigned int& t) : base_(t) {} bool equal(const Composed& x) const {return x.base_==base_;} }; bool operator==(const Composed &x, const Composed &y) { return x.equal(y); } class Outer { public: class Nested { typedef Base Base_; Base_ base_; public: explicit Nested(const unsigned int& t) : base_(t) {} bool equal(const Nested& x) const {return x.base_==base_;} }; }; bool operator==(const Outer::Nested &x, const Outer::Nested &y) { return x.equal(y); } } // namespace untemplated int main() { using std::cout; unsigned int testVal=3; { // No templates first typedef untemplated::Base Base_t; Base_t a(testVal); Base_t b(testVal); cout << "a=b=" << testVal << "\n"; cout << "a==b ? " << (a==b ? "TRUE" : "FALSE") << "\n"; typedef untemplated::Composed Composed_t; Composed_t c(testVal); Composed_t d(testVal); cout << "c=d=" << testVal << "\n"; cout << "c==d ? " << (c==d ? "TRUE" : "FALSE") << "\n"; typedef untemplated::Outer::Nested Nested_t; Nested_t e(testVal); Nested_t f(testVal); cout << "e=f=" << testVal << "\n"; cout << "e==f ? " << (e==f ? "TRUE" : "FALSE") << "\n"; } { // Now with templates typedef templated::Base<unsigned int> Base_t; Base_t a(testVal); Base_t b(testVal); cout << "a=b=" << testVal << "\n"; cout << "a==b ? " << (a==b ? 
"TRUE" : "FALSE") << "\n"; typedef templated::Composed<unsigned int> Composed_t; Composed_t c(testVal); Composed_t d(testVal); cout << "c=d=" << testVal << "\n"; cout << "d==c ? " << (c==d ? "TRUE" : "FALSE") << "\n"; typedef templated::Outer<unsigned int>::Nested Nested_t; Nested_t e(testVal); Nested_t f(testVal); cout << "e=f=" << testVal << "\n"; cout << "e==f ? " << (e==f ? "TRUE" : "FALSE") << "\n"; // Above line causes compiler error: // error: no match for 'operator==' in 'e == f' } cout << std::endl; return 0; }

  • GPU Debugging with VS 11

    - by Daniel Moth
    With VS 11 Developer Preview we have invested tremendously in parallel debugging for both CPU (managed and native) and GPU debugging. I'll be doing a whole bunch of blog posts on those topics, and in this post I just wanted to get people started with GPU debugging, i.e. with debugging C++ AMP code. First I invite you to watch 6 minutes of a glimpse of the C++ AMP debugging experience though this video (ffw to minute 51:54, up until minute 59:16). Don't read the rest of this post, just go watch that video, ideally download the High Quality WMV. Summary GPU debugging essentially means debugging the lambda that you pass to the parallel_for_each call (plus any functions you call from the lambda, of course). CPU debugging means debugging all the code above and below the parallel_for_each call, i.e. all the code except the restrict(direct3d) lambda and the functions that it calls. With VS 11 you have to choose what debugger you want to use for a particular debugging session, CPU or GPU. So you can place breakpoints all over your code, then choose what debugger you want (CPU or GPU), and you'll only be able to hit breakpoints for the code type that the debugger engine understands – the remaining breakpoints will appear as unbound. If you want to hit the unbound breakpoints, you'd have to stop debugging, and start again with the other debugger. Sorry. We suck. We know. But once you are past that limitation, I think you'll find the experience truly rewarding – seriously! Switching debugger engines With the Developer Preview bits, one way to switch the debugger engine is through the project properties – see the screenshots that follow. This one is showing the CPU option selected, which is basically the default that you are all familiar with: This screenshot is showing the GPU option selected, by changing the debugger launcher (notice that this applies for both the local and remote case): You actually do not have to open the project properties just for switching the debugger engine, you can switch the selection from the toolbar in VS 11 Developer Preview too – see following screenshot (the effect is the same as if you opened the project properties and switched there) Breakpoint behavior Here are two screenshots, one showing a debugging session for CPU and the other a debugging session for GPU (notice the unbound breakpoints in each case) …and here is the GPU case (where we cannot bind the CPU breakpoints but can the GPU breakpoint, which is actually hit) Give C++ AMP debugging a try So to debug your C++ AMP code, pull down the drop down under the 'play' button to select the 'GPU C++ Direct3D Compute Debugger' menu option, then hit F5 (or the 'play' button itself). Then you can explore debugging by exploring the menus under the Debug and under the Debug->Windows menus. One way to do that exploration is through the C++ AMP debugging walkthrough on MSDN. Another way to explore the C++ AMP debugging experience, you can use the moth.cpp code file, which is what I used in my BUILD session debugger demo. Note that for my demo I was using the latest internal VS11 bits, so your experience with the Developer Preview bits won't be identical to what you saw me demonstrate, but it shouldn't be far off. Stay tuned for a lot more content on the parallel debugger in VS 11, both CPU and GPU, both managed and native. Comments about this post by Daniel Moth welcome at the original blog.
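
    For context, the code that runs under the GPU debugger is just the restrict-qualified lambda handed to parallel_for_each, plus whatever it calls. Here is a minimal C++ AMP kernel of that shape, written against the released C++ AMP syntax (restrict(amp)); the Developer Preview bits described in this post spell the restriction restrict(direct3d) instead, so that keyword will differ there.

        #include <amp.h>
        #include <iostream>
        #include <vector>

        using namespace concurrency;

        int main() {
            std::vector<int> v(16);
            for (int i = 0; i < 16; ++i) v[i] = i;

            array_view<int, 1> av(16, v);
            // GPU breakpoints bind inside this lambda; CPU breakpoints bind outside it.
            parallel_for_each(av.extent, [=](index<1> idx) restrict(amp) {
                av[idx] *= 2;
            });
            av.synchronize();   // copy results back before reading them on the CPU

            std::cout << v[5] << "\n"; // prints 10
            return 0;
        }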

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules so I'm posting here. My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed to be a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows: (10.2.2.1 would be my home WAN IP) .htaccess in root of domain1 RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC] RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301] .htaccess in staging subdomain on domain1: RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} ^staging.revolver.coop$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L] The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration as the visitor is potentially redirected twice, however I find it to be a more granular method of control as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read. (or re-interpret, because I'm always forgetting how I put rules together!) If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn, this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes) and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google). You can also extend the IP address detection - firstly by using wildcards ^10\.2\.2\..*: the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can specify specific ranges of IPs in a subnet or entire subnets if you wish. You can also use square brackets: ^10\.2\.[1-255]\.[120-140]; ^10\.2\.[1-9]?[0-9]\.; ^10\.2\.1[0-1][0-9]\. etc. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1.1.1.1|2.2.2.2|3.3.3.3)$, and you can of course use square brackets to substitute octets or single digits again. 
NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one otherwise mod_rewrite will interpret as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2... which is of course impossible! However as far as I'm aware this isn't necessary if you're using the ! negator to specify "and is not...". Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet's useful to pin up on your wall. Other referenced URLs: internetofficer.com/seo-tool/regex-tester/ fantomaster.com/faarticles/rewritingurls.txt internetofficer.com/seo-tool/regex-tester/ addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/

  • SharePoint 2010 Hosting :: SharePoint 2010 Custom Web Template

    - by mbridge
    SharePoint 2010 offers some changes and additions to the SharePoint 2007 approach. Site definitions and publishing providers remain largely the same, but site templates created from the SharePoint UI or SharePoint Designer are now saved to a .WSP file, the same solution deployment packaging file format used for deploying custom SharePoint solutions. Site Templates saved to a .WSP solution file can be imported into Visual Studio for additional customization. Introducing the WebTemplate Feature Element The WebTemplate element, introduced in SharePoint 2010, allows site templates to be defined and deployed as a Feature as part of a solution package. A WebTemplate element feature can be used to deploy site templates in either a Farm or Sandbox solution - without modification. If deployed as a Farm feature and solution, site templates will appear in the site collection provisioning page in Central Administration and can be used to provision new site collections, or within a Site Collection to create sub-sites. If deployed as a Site feature and Sandbox solution, site templates will appear within the site collection to support creating a root site or sub-sites. Creating a new WebTemplate Feature in Visual Studio 2010 In addition to supporting the ability to save and import Site Templates created from the SharePoint UI into Visual Studio for customization, it can also be used to create new site templates from scratch. In the following sample we will walk through how to create a new WebTemplate solution based on  a customized version of the out-of-box Blank Site. 1. Create a new Empty SharePoint Project in Visual Studio 2010. 2. Add a new Empty Element to the project. we like to create folders for each type of element in our solution, so in our sample, we have created a Web Templates folder, and then added the BLANKENT element. NOTE: The Elements folder MUST share the same name as the WebTemplate name property. 3. Open the empty Elements.xml and add the <WebTemplate /> element block. 4. Copy the default.aspx and ONET.XML files from the STS site definition location at 14\TEMPLATES\Site Templates\STS. We will customize the ONET.XML in the next section. Open the properties for each file and set the Deployment Type to ElementFile. This ensures the files are deployed with the Element when included in a Feature. 5. By default a new feature is added to the solution for you automatically when a new element is added to the solution. Rename and edit the feature as appropriate. Select Farm for the scope to deploy the WebTemplate to the entire farm, or Site for a sandboxed solution. Customize the ONET.XML At this point, you have a working WebTemplate solution that will deploy the identical site to the out-of-box Blank Site, however the ONET.XML supporting the STS site definition contains 3 configurations – essentially 3 separate site templates and can be simplified before customizing. In the following sample, we have trimmed the ONET.XML to the essentials for a single Site Template, and added references to the <SiteFeatures /> and <WebFeatures /> elements to include the SharePoint Standard and Enterprise features. We have left the top-level navigation bar, and the default page module intact, but removed all other extraneous markup.

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process. Implementation: When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows: Requirements Analysis Design and Development Implementation Testing Deployment to Production Maintenance In an on-premise environment, this often equates to the following process map: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to on-premise Testing servers. If no server capacity available, more resources procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. performed. Deployment to Production Server team involved to select platform and environments with available capacity. If no server capacity available, standard budgeting and procurement process followed. If no server capacity available, systems built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place. Maintenance Code changes evaluated and altered according to need. In a distributed computing environment like Windows Azure, the process maps a bit differently: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to Azure. Manual and automated functional, load, security, etc. performed. Deployment to Production Code deployed to Azure. Point in time backup and recovery plans configured and put into place.(HA/DR and automated backups already present in Azure fabric) Maintenance Code changes evaluated and altered according to need. This means that several steps can be removed or expedited. 
It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the “Azure Marketplace.” In effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt into their current code, possibly saving development time. Resources: Whitepaper download - What is ALM?  http://go.microsoft.com/?linkid=9743693  Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690  LiveMeeting Recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or tried to cache results in OSB, will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer’s blog. You can also find the WebLogic documentation here.As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB now supports service result caching for Business Bervices with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call results in invoking the back-end service to get the result. This result is then stored in the cache for future requests to access. I’m thinking that this caching functionality would be perfect for some sort of cross reference data that was refreshed nightly by batch. You can find the OSB Business Service documentation here.Result Caching in a dedicated JVMThis example demonstrates these new features by configuring a OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason why you may want to use a separate, dedicated JVM is that the result cache data could potentially be quite large and you may want to protect your OSB java heap.In this example, the client will call an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached and when called again, the respective results are retrieved from the cache rather than the external system.Step 1 – Set up your Coherence Server Via the OSB Administration Server Console, create your Coherence Server to be used as the results cache.Here are the configured Coherence Server arguments from the Server Start tab. Note that I’m using the default Cache Config and Override files in the domain.-Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml -Dtangosol.coherence.cluster=OSB-cluster -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dcom.sun.management.jmxremote Just incase you need it, here is my Coherence Server classpath:/app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar: /app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar: /app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jarBy default, OSB will try and create a local result cache instance. 
You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers:-Dtangosol.coherence.distributed.localstorage=false -DOSB.coherence.cluster=OSB-clusterIf you need more information on configuring a remote result cache, have a look at the configuration documentration under the heading Using an Out-of-Process Coherence Cache Server.Step 2 – Configure your Business Service Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable “Result Caching”. Additionally, you need to determine what the cache data will be keyed on. In the example below, I’m keying it on the unique Employee Id.The Results As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id. In this case, I had result caching disabled.You can see that this caused the back end Business Service (BS_GetEmployeeData) to be called for each request. Then after enabling result caching, I sent the same number of identical requests.You can now see the Business Service was only invoked once on the first request. All subsequent requests used the Results Cache.

  • Java Developer Days India Trip Report

    - by reza_rahman
    You are probably aware of Oracle's decision to discontinue the relatively resource intensive regional JavaOnes in favor of more Java Developer Days, virtual events and deeper involvement with independent conferences. In comparison to the regional JavaOnes, Java Developer Days are smaller, shorter (typically one full day), more focused (mostly Oracle speakers/topics) and more local (targeting cities). For those who have been around the Java ecosystem for a few years, they are basically the current incarnation of the highly popular and developer centric Sun Tech Days. October 21st through October 25th I spoke at Java Developer Days India. This was basically three separate but identical events in the cities of Pune (October 21st), Chennai (October 24th) and Bangalore (October 25th). For those with some familiarity with India, other than Hyderabad these cities are India's IT powerhouses. The events were basically focused on Java EE. I delivered five of the sessions (yes, you read that right), while my friend NetBeans Group Product Manager Ashwin Rao delivered three talks. Jagadish Ramu from the GlassFish team India helped me out in Bangalore by delivering two sessions. It was also a pleasure to introduce my co-contributor to the Cargo Tracker Java EE Blue Prints project Vijay Nair at Bangalore during the opening talk. I thought it was a great dynamic between Ashwin and I flipping between talking about the new features and demoing live code in NetBeans. The following were my sessions (source PDF and abstracts posted as usual on my SlideShare account): JavaEE.Next(): Java EE 7, 8, and Beyond Building Java HTML5/WebSocket Applications with JSR 356 What’s New in Java Message Service 2 JAX-RS 2: New and Noteworthy in the RESTful Web Services API Using NoSQL with JPA, EclipseLink and Java EE The event went well and was packed in all three cities. The Q&A was great and Indian developers were particularly generous with kind words :-). It seemed the event and our presence was appreciated in the truest sense which I must say is a rarity. The events were exhausting but very rewarding at the same time. As hectic as the three city trip was I tried to see at least some of the major sights (mostly at night) since this was my very first time to India. I think the slideshow below is a good representation of the riddle wrapped up in an enigma that is India (and the rest of the Indian sub-continent for that matter): Ironically enough what struck me the most during this trip is the woman pictured below - Shushma. My chauffeur, tour guide and friend for a day, she fluidly navigated the madness that is Mumbai traffic with skills that would make Evel Knievel blush while simultaneously pointing out sights and prompting me to take pictures (Mumbai was my stopover and gateway to/from India). In some ways she is probably the most potent symbol of the new India. When we parted ways I told her she should take solace in the fact she has won mostly without a fight a potentially hazardous battle her sisters across the Arabian sea are still fighting. I'm not sure she entirely understood the significance of what I told her. I hope that she did. I also had occasion to take a pretty cool local bus ride from Chennai to Bangalore instead of yet another boring flight. All in all I really enjoyed the trip to India and hope to return again soon. Jai Hind :-)!

  • Changes in licence in forked project what are my rights?

    - by Wes
    Hi I'm intrested in using the apparently now defunct app-mdi libray in a flex application for a paying customer. http://sourceforge.net/projects/appmdi/ It appears that the app-mdi project has been forked from flex-mdi and indeed the code has so much in common it would appear almost identical to the origional code. Now in the original source flex-mdi the following licence appears in the source code /* Copyright (c) 2007 FlexMDI Contributors. See: http://code.google.com/p/flexmdi/wiki/ProjectContributors Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ However in the app-mdi library on the same file the following licence appears. Copyright (c) 2010, TRUEAGILE All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of the TRUEAGILE nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ Now I've no problem with the licence except for the line. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. The copyright notice in its entireity makes no sense in binary material. Specifically talking about redistobutions in the binary form. 
Finally, the question is: what exactly has to be shown to web clients who access software that utilises this library? Also, is changing the licence in this manner actually allowed?

  • Mirroring git and mercurial repos the lazy way

    - by Greg Malcolm
    I maintain Python Koans on mirrored on both Github using git and Bitbucket using mercurial. I get pull requests from both repos but it turns out keeping the two repos in sync is pretty easy. Here is how it's done... Assuming I’m starting again on a clean laptop, first I clone both repos ~/git $ hg clone https://bitbucket.org/gregmalcolm/python_koans ~/git $ git clone [email protected]:gregmalcolm/python_koans.git python_koans2 The only thing that makes a folder a git or mercurial repository is the .hg folder in the root of python_koans and the .git folder in the root of python_koans2. So I just need to move the .git folder over into the python_koans folder I'm using for mercurial: ~/git $ rm -rf python_koans/.git ~/git $ mv python_koans2/.git python_koans ~/git $ ls -la python_koans total 48 drwxr-xr-x 11 greg staff 374 Mar 17 15:10 . drwxr-xr-x 62 greg staff 2108 Mar 17 14:58 .. drwxr-xr-x 12 greg staff 408 Mar 17 14:58 .git -rw-r--r-- 1 greg staff 34 Mar 17 14:54 .gitignore drwxr-xr-x 13 greg staff 442 Mar 17 14:54 .hg -rw-r--r-- 1 greg staff 48 Mar 17 14:54 .hgignore -rw-r--r-- 1 greg staff 365 Mar 17 14:54 Contributor Notes.txt -rw-r--r-- 1 greg staff 1082 Mar 17 14:54 MIT-LICENSE -rw-r--r-- 1 greg staff 5765 Mar 17 14:54 README.txt drwxr-xr-x 10 greg staff 340 Mar 17 14:54 python 2 drwxr-xr-x 10 greg staff 340 Mar 17 14:54 python 3 That’s about it! Now git and mercurial are tracking files in the same folder. Of course you will still need to set up your .gitignore to ignore mercurial’s dotfiles and .hgignore to ignore git’s dotfiles or there will be squabbling in the backseat. ~/git $ cd python_koans/ ~/git/python_koans $ cat .gitignore *.pyc *.swp .DS_Store answers .hg <-- Ignore mercurial ~/git/python_koans $ cat .hgignore syntax: glob *.pyc *.swp .DS_Store answers .git <-- Ignore git Because both my mirrors are both identical as far as tracked files are concerned I won’t yet see anything if I check statuses at this point: ~/git/python_koans $ git status # On branch master nothing to commit (working directory clean) ~/git/python_koans $ hg status ~/git/python_koans But how about if I accept a pull request from the bitbucket (mercuial) site? ~/git/python_koans $ hg status ~/git/python_koans $ git status # On branch master # Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. # # Changed but not updated: # (use "git add <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # modified: python 2/koans/about_decorating_with_classes.py # modified: python 2/koans/about_iteration.py # modified: python 2/koans/about_with_statements.py # modified: python 3/koans/about_decorating_with_classes.py # modified: python 3/koans/about_iteration.py # modified: python 3/koans/about_with_statements.py Mercurial doesn’t have any changes to track right now, but git has changes. Commit and push them up to github and balance is restored to the force: ~/git/python_koans $ git commit -am "Merge from bitbucket mirror: 'gpiancastelli - Fix for issue #21 and some other tweaks'" [master 79ca184] Merge from bitbucket mirror: 'gpiancastelli - Fix for issue #21 and some other tweaks' 6 files changed, 78 insertions(+), 63 deletions(-) ~/git/python_koans $ git push origin master Or just use hg-git? 
The GitHub developers have actually published a plugin for automatic mirroring (http://hg-git.github.com). I haven't used it because, at the time I tried it a couple of years ago, I was having problems getting all the parts to play nice with each other. It probably works fine now, though.

  • Knowledge Management Feedback

    - by Robert Schweighardt
    Did you know that you can provide feedback on Knowledge Management (KM) articles? It's nice to read a technical article that is well-written, the grammar and spelling are correct, the information is up to date, concise, to the point, easy to understand and it flows from one paragraph to another.  And though we always strive for a well-written article, it doesn't always come out that way. Knowledge Management articles are written by Oracle Support Engineers and we welcome your feedback.  Providing feedback helps to improve Oracle's Knowledge Base.  If you're reading a KM article and you have a comment, please let us know about it.  Maybe it's just to fix a spelling or grammatical error.  Maybe there's a broken link that needs to be fixed.  Maybe it's a suggestion to provide additional information.  Maybe the article contains incorrect information.  Maybe some information in the article is outdated.  Maybe something is not clear in the article.  Whatever it is, we want to hear about it.  We value your input! When you provide feedback it goes directly to the owner of the article.  The owner carefully reviews the comment and decides whether or not to implement it.  Most comments are implemented and we strive to implement them within a week!  For those comments that are not implemented, there is normally a good reason.  It may not be feasible to implement the suggestion or the suggestion may not be correct.  We don't take the decision lightly! So how do you provide feedback? Providing feedback on a KM article depends on whether you're a customer or an Oracle Employee. Customer 1. In the upper right hand corner of the article, click on the little +/- Rate this document icon: Note: The grayed out Comments (0) link will only show a number when there are open comments that are still being evaluated. 2. In the Article Rating window, complete as many of the following optional fields as you like and then click the Send Rating button: Rate the article as Excellent, Good or Poor Specify whether the article helped you or not Specify the ease of finding the article Provide whatever comments you have Employee The interface for Oracle Employees is a little bit different, there are more options. 1. The +/- Rate this document icon is also available to employees and is identical to what the customers have.  Please see Customer section above. 2. The Show document comments link shows all comments that have ever been submitted for the article 3. Employees have an additional way to submit a comment.  Click on the little + Add Comment icon: 4. Fill out the Add Comment fields and click the Add Comment button: We look forward to your feedback!

  • Monitor and Control Memory Usage in Google Chrome

    - by Asian Angel
    Do you want to know just how much memory Google Chrome and any installed extensions are using at a given moment? With just a few clicks you can see just what is going on under the hood of your browser. How Much Memory are the Extensions Using? Here is our test browser with a new tab and the Extensions Page open, five enabled extensions, and one disabled at the moment. You can access Chrome’s Task Manager using the Page Menu, going to Developer, and selecting Task manager… Or by right clicking on the Tab Bar and selecting Task manager. There is also a keyboard shortcut (Shift + Esc) available for the “keyboard ninjas”. Sitting idle as shown above here are the stats for our test browser. All of the extensions are sitting there eating memory even though some of them are not available/active for use on our new tab and Extensions Page. Not so good… If the default layout is not to your liking then you can easily modify the information that is available by right clicking and adding/removing extra columns as desired. For our example we added Shared Memory & Private Memory. Using the about:memory Page to View Memory Usage Want even more detail? Type about:memory into the Address Bar and press Enter. Note: You can also access this page by clicking on the Stats for nerds Link in the lower left corner of the Task Manager Window. Focusing on the four distinct areas you can see the exact version of Chrome that is currently installed on your system… View the Memory & Virtual Memory statistics for Chrome… Note: If you have other browsers running at the same time you can view statistics for them here too. See a list of the Processes currently running… And the Memory & Virtual Memory statistics for those processes. The Difference with the Extensions Disabled Just for fun we decided to disable all of the extension in our test browser… The Task Manager Window is looking rather empty now but the memory consumption has definitely seen an improvement. Comparing Memory Usage for Two Extensions with Similar Functions For our next step we decided to compare the memory usage for two extensions with similar functionality. This can be helpful if you are wanting to keep memory consumption trimmed down as much as possible when deciding between similar extensions. First up was Speed Dial”(see our review here). The stats for Speed Dial…quite a change from what was shown above (~3,000 – 6,000 K). Next up was Incredible StartPage (see our review here). Surprisingly both were nearly identical in the amount of memory being used. Purging Memory Perhaps you like the idea of being able to “purge” some of that excess memory consumption. With a simple command switch modification to Chrome’s shortcut(s) you can add a Purge Memory Button to the Task Manager Window as shown below.  Notice the amount of memory being consumed at the moment… Note: The tutorial for adding the command switch can be found here. One quick click and there is a noticeable drop in memory consumption. Conclusion We hope that our examples here will prove useful to you in managing the memory consumption in your own Google Chrome installation. If you have a computer with limited resources every little bit definitely helps out. 

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
    Recently I came across with an article on DB2 about using Union instead of OR. So I thought of carrying out a research on SQL Server on what scenarios UNION is optimal in and which scenarios OR would be best. I will analyze this with a few scenarios using samples taken  from the AdventureWorks database Sales.SalesOrderDetail table. Scenario 1: Selecting all columns So we are going to select all columns and you have a non-clustered index on the ProductID column. --Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR ProductID =709 OR ProductID =998 OR ProductID =875 OR ProductID =976 OR ProductID =874 --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874 So query 1 is using OR and the later is using UNION. Let us analyze the execution plans for these queries. Query 1 Query 2 As expected Query 1 will use Clustered Index Scan but Query 2, uses all sorts of things. In this case, since it is using multiple CPUs you might have CX_PACKET waits as well. Let’s look at the profiler results for these two queries: CPU Reads Duration Row Counts OR 78 1252 389 3854 UNION 250 7495 660 3854 You can see from the above table the UNION query is not performing well as the  OR query though both are retuning same no of rows (3854).These results indicate that, for the above scenario UNION should be used. Scenario 2: Non-Clustered and Clustered Index Columns only --Query 1 : OR SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR ProductID =709 OR ProductID =998 OR ProductID =875 OR ProductID =976 OR ProductID =874 GO --Query 2 : UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874 GO So this time, we will be selecting only index columns, which means these queries will avoid a data page lookup. As in the previous case we will analyze the execution plans: Query 1 Query 2 Again, Query 2 is more complex than Query 1. Let us look at the profile analysis: CPU Reads Duration Row Counts OR 0 24 208 3854 UNION 0 38 193 3854 In this analyzis, there is only slight difference between OR and UNION. Scenario 3: Selecting all columns for different fields Up to now, we were using only one column (ProductID) in the where clause.  What if we have two columns for where clauses and let us assume both are covered by non-clustered indexes? 
--Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%' --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber  LIKE 'D0B8%' Query 1 Query 2: As we can see, the query plan for the second query has improved. Let us see the profiler results. CPU Reads Duration Row Counts OR 47 1278 443 1228 UNION 31 1334 400 1228 So in this case too, there is little difference between OR and UNION. Scenario 4: Selecting Clustered index columns for different fields Now let us go only with clustered indexes: --Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%' --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber  LIKE 'D0B8%' Query 1 Query 2 Now both execution plans are almost identical except is an additional Stream Aggregate is used in the first query. This means UNION has advantage over OR in this scenario. Let us see profiler results for these queries again. CPU Reads Duration Row Counts OR 0 319 366 1228 UNION 0 50 193 1228 Now see the differences, in this scenario UNION has somewhat of an advantage over OR. Conclusion Using UNION or OR depends on the scenario you are faced with. So you need to do your analyzing before selecting the appropriate method. Also, above the four scenarios are not all an exhaustive list of scenarios, I selected those for the broad description purposes only.

    Read the article

  • std::map for storing static const Objects

    - by Sean M.
    I am making a game similar to Minecraft, and I am trying to find a way to keep a map of Block objects sorted by their id. This is almost identical to the way that Minecraft does it, in that they declare a bunch of static final Block objects and initialize them, and then the constructor of each block puts a reference to that block into whatever the Java equivalent of a std::map is, so there is a central place to get ids and the Blocks with those ids. The problem is that I am making my game in C++, and trying to do the exact same thing. In Block.h, I am declaring the Blocks like so:

    //Block.h
    public:
        static const Block Vacuum;
        static const Block Test;

    And in Block.cpp I am initializing them like so:

    //Block.cpp
    const Block Block::Vacuum = Block("Vacuum", 0, 0);
    const Block Block::Test = Block("Test", 1, 0);

    The block constructor looks like this:

    Block::Block(std::string name, uint16 id, uint8 tex)
    {
        //Check for repeat ids
        if (IdInUse(id))
        {
            fprintf(stderr, "Block id %u is already in use!", (uint32)id);
            throw std::runtime_error("You cannot reuse block ids!");
        }
        _id = id;

        //Check for repeat names
        if (NameInUse(name))
        {
            fprintf(stderr, "Block name %s is already in use!", name);
            throw std::runtime_error("You cannot reuse block names!");
        }
        _name = name;

        _tex = tex;
        //fprintf(stdout, "Using texture %u\n", _tex);
        _transparent = false;
        _solidity = 1.0f;

        idMap[id] = this;
        nameMap[name] = this;
    }

    And finally, the maps that I'm using to store references to Blocks in relation to their names and ids are declared as such:

    std::map<uint16, Block*> Block::idMap = std::map<uint16, Block*>();             //The map of block ids
    std::map<std::string, Block*> Block::nameMap = std::map<std::string, Block*>(); //The map of block names

    The problem comes when I try to get the Blocks in the maps using a method called const Block* GetBlock(uint16 id), where the last line is return idMap.at(id);. This line returns a Block with completely random values like _visibility = 0xcccc, as found out through debugging. So my question is, is there something wrong with the blocks being declared as const objects and then stored as pointers and accessed later on? The reason I can't store them as Block& is because that makes a copy of the Block when it is entered, so the block wouldn't have any of the attributes that could be set afterwards in the constructor of any child class, so I think I need to store them as a pointer. Any help is greatly appreciated, as I don't fully understand pointers yet. Just ask if you need to see any other parts of the code.
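    One thing worth checking here (this is a sketch of a general C++ idiom, not code from the original post): non-local statics are constructed in definition order within a translation unit and in an unspecified order across translation units, so if Block::Vacuum and Block::Test happen to be initialized before idMap and nameMap, their constructors write into maps that have not been constructed yet, which is undefined behaviour and can produce exactly this kind of garbage. The construct-on-first-use idiom sidesteps the ordering problem. The names BlockRegistry and GetBlock below are illustrative, and uint16/uint32 from the question are assumed to be typedefs for the <cstdint> fixed-width types.

    // Sketch only: a registry whose maps are guaranteed to exist before the
    // first Block registers itself, regardless of static initialization order.
    #include <cstdint>
    #include <map>
    #include <string>

    class Block;  // the Block class from the question is assumed

    class BlockRegistry
    {
    public:
        // The maps live inside functions, so they are constructed on first use,
        // i.e. before the first Block constructor touches them.
        static std::map<uint16_t, const Block*>& ById()
        {
            static std::map<uint16_t, const Block*> idMap;
            return idMap;
        }
        static std::map<std::string, const Block*>& ByName()
        {
            static std::map<std::string, const Block*> nameMap;
            return nameMap;
        }
    };

    // Inside Block's constructor, register through the accessors instead of
    // touching the map objects directly:
    //     BlockRegistry::ById()[id] = this;
    //     BlockRegistry::ByName()[name] = this;

    // Lookup: returns nullptr for an unknown id instead of throwing like at() does.
    const Block* GetBlock(uint16_t id)
    {
        auto& ids = BlockRegistry::ById();
        auto it = ids.find(id);
        return (it != ids.end()) ? it->second : nullptr;
    }

    Storing pointers to the const Block statics is fine in itself, since their addresses stay valid for the lifetime of the program; the initialization order of the statics is the usual suspect when registered entries come back as garbage.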

    Read the article

  • First Impressions of a MacBook (from a PC guy)

    - by dgreen
    Disclaimer: I've been a PC guy my entire working career. I'd probably characterize myself as a power user. Never afraid to bust out the console line. But working with a Mac is totally foreign to me. So for those Mac guys who are curious, this is how your world appears from the outside to a computer literate person :)

    My MacBook Air has arrived! And it's a thing of beauty. First, the specs: 13" MacBook Air, 2.0GHz Core i7 processor, upgraded to 8GB of RAM for an additional $100, SSD flash storage = 256GB. The plan is ultimately to use this baby for some iOS development but also some decent lifting in Windows with Visual Studio. Done a lot of reading, and between VMware Fusion, Parallels and Boot Camp... I'm going to go with VMware Fusion for $49.99.

    And now my impressions (please re-read disclaimer before proceeding!):
    I open the box and am trying to understand exactly how the MagSafe connector works (and how to disconnect it). Why does it have two socket outlet plugs? Who knows. I feel like Hansel in Zoolander. The files are "in" the computer.
    Stuck in my external hard drive (USB). So how do I get to the files? To the Googles! Argh... it can't read my external NTFS drive. FAT32 can't support files over 4GB... problematic since some of my existing VMware image files are much larger than 4GB. Didn't see this coming.
    Three year old loves iPhoto. Super easy to use. Don't even know what I'm doing but I've already (accidentally) discovered the image filtering options. Fun stuff.
    First thing I downloaded ever => Chrome. I need something to ground me, something familiar. My token, if you will (sorry, gratuitous Inception joke).
    Ok, I get it... Finder == Windows Explorer. But where is my hierarchical structure? I miss the tree :(
    On that note, yeah... how do I see what "path" my files reside in? I'm afraid to know the answer. You know what scares me more though... this notion of a smart folder. Feel like the godfather - just get the job done, I don't care how you handle it, I don't want to know... just get it done. What the hell is AirDrop?
    Mail... just worked. Still in shock that they have a free client for Yahoo mail (please no Yahoo jokes).
    Mail -> deleting a message takes 5 seconds. Have they heard of async?
    "Command" key instead of "Control" - ok, then what the $%&^! is the Control key for?
    "Aliases" == shortcuts, I think.
    I don't see the file system. And I'm scared. All these things I'm downloading... these .dmg files (bad name) - where are they going? Can't seem to delete them when they're done.
    Ugh... realized I need to buy a mini-to-VGA adaptor if I want to use my external monitor ($13 on eBay, $39 in the Apple store).
    Window docking is trickiest for me... this notion of detached windows with a menu bar at the top. I don't like this paradigm, it's confusing. But maybe that's because I've been using Windows for too long.
    Evernote and Dropbox desktop clients seem almost identical... a few quirks here and there I need to get used to.
    iTunes is still a bit gross. In a weird way it's actually worse on a Mac, if that's possible. This is not the MacBook's fault... this is a software design issue.
    Overall: the UI will take some getting used to. Can't decide if this represents the future and I'm stuck in the past... or this is the past and I've been spoiled by the future (which would be Windows... don't be hating, I happen to be very productive in Win7). So there you go - my 90 minute first impression of the MacBook universe.

    Read the article

  • Growing Talent

    The subtitle of Daniel Coyle's intriguing book The Talent Code is "Greatness Isn't Born. It's Grown. Here's How." The Talent Code proceeds to lay out a theory of how expertise can be cultivated through specific practices that encourage the growth of myelin in the brain. Myelin is a material that is produced and wraps around heavily used circuits in the brain, making them more efficient. Coyle uses an analogy that geeks will appreciate: when a circuit in the brain is used a lot (i.e. a specific action is repeated), the myelin insulates that circuit, increasing its bandwidth from telephone-over-copper to high speed broadband. This leads to the funny phenomenon of effortless expertise. Although highly skilled, the best players make it look easy. Coyle provides some biological backing for the long held theory that it takes 10,000 hours of practice to achieve mastery over a given subject. 10,000 hours or 10 years, as in "Teach Yourself Programming in Ten Years" and others. However, it is not just that more hours equals more mastery. The other factors that Coyle identifies include deep practice: practice which crucially involves drills that are challenging without being impossible. Another way to put it is that every day you spend doing only tasks you find monotonous and automatic, you are literally stagnating your brain's development! Perhaps Coyle's subtitle needs one more phrase: "Greatness Isn't Born. It's Grown. Here's How. And oh yeah, it's not easy." Challenging yourself, continuing to persist in the face of repeated failures, practicing every day is not easy. As consultants, we sell our expertise, so it makes sense that we plan projects so that people can play to their strengths. At the same time, an important part of our culture is constant improvement, challenging yourself to be better. And so the balancing act ensues. I just finished working on a proof of concept (POC) we did for a project we are bidding on. Completely time boxed, so our team naturally split responsibilities amongst ourselves according to who was better at what. I must have been pretty bad at the other components, as I found myself working on the user interface, not my usual strength. The POC had a website frontend, and one thing I do know is HTML. After starting out in pure ASP.NET WebForms, I got frustrated as time was ticking; I knew what I wanted in HTML, but I couldn't coax the right output out of the ASP.NET controls. I needed two or three elements on the screen that were identical in layout, with different content. With a backup plan in place (writing the HTML into the response by hand), I decided to challenge myself a bit and see what I could do in an hour or two using the Microsoft-submitted jQuery micro-templating JavaScript library. This risk paid off. I was able to quickly get the user interface up and running, responsive to the JSON data we were working with. I felt energized by the double win of getting the POC ready and learning something new. Opportunities specifically like this POC don't come around often, but the takeaway is that while it won't be easy, there are ways to generate your own opportunities to grow towards greatness.

    Read the article

  • How can I fix my keyboard layout?

    - by Scott Severance
    For a long time, I've had my keyboard configured to use the layout currently known as "English (international AltGr dead keys)." I like this layout because without any modifier keys, it's identical to the US English keyboard, but when I hold Right Alt I can get accented letters and other characters not available on a standard US English keyboard. In Oneiric, however, the layout is messed up. Right Alt+N produces "ñ" as expected. And another method works: Right Alt+`, E produces "è", also as expected. But there's no way to type "é", which is probably the accented letter I type the most. I expect Right Alt+A, E to do the trick. But instead of a dead key for the acute accent, it uses a method for combining characters to create the hybrid "´e". This hybrid looks like the proper "é" in some settings, but it isn't the same character and doesn't always work. (For example, in the text input box as I type this, it looks the same as the proper character, but when displayed on the site for all to see, it looks very wrong--at least on my machine.) Ditto for all other characters with an acute accent, though some are available directly as pre-composed characters: for example, Right Alt+I yields "í". How can I change the acute accent on the A key to a proper dead key? Perhaps the more general version of this is: how can I tweak my keyboard layout?

    Update: I just tested this on my other machine, also running Oneiric, but upgraded from previous versions. I have no problems with the second machine. The problem machine was a fresh install of Oneiric, but I kept my old $HOME when I did the fresh install.

    Clarification: Even if an answer doesn't address my specific examples, I would still accept it if it provided enough detail for me to find the layout and tweak it according to my needs.

    Major Update: After working through the information gained from Jim C's and Chascon's helpful replies, I've learned something new: the problem isn't with the layout itself, but with the fact that the selected layout isn't being applied. When I looked at the definition in /usr/share/X11/xkb/symbols/us of the layout I've been running for a long time, I found that the definition doesn't match what I get when I type. In addition, the keyboard layout dialog that's supposed to show the current layout looks different from the way the layout is defined in the file I mentioned, and matches what actually happens when I type. Following Jim C's suggestion, I created a new layout in /usr/share/X11/xkb/symbols/us containing some modifications to the layout I want. I can select my layout from the keyboard properties, and I can use it on the console following Chascon's post, but the layout I get when typing is unchanged. Apparently, there's a different layout defined somewhere that's overriding what I've set. Where is that layout hiding? This problem occurs in Unity (3D and 2D), but I was able to get the correct layout set in Xfce. In case it's relevant, this problem has occurred since I installed Oneiric fresh on this machine (though I preserved my $HOME); I don't recall whether this problem occurred before the reinstall. Also, in case it's relevant, I also run iBus so I can type Korean. I have a few difficulties with iBus, but I doubt they're related.

    Read the article

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly some info - I'm using DirectX 11 and C++, and I'm a fairly good programmer but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have reached a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals and tangents for rendering. Now, when I was using a simple vertex and pixel shader combination everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high detail model, but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain and then finally into the pixel shader for use in rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as texturing. Below is a copy of my hull shader that does not work correctly, because I think the way that I am passing the data through is incorrect. If anyone can help me out by suggesting an alternative way to get the required data into the pixel shader, or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!

    cbuffer TessellationBuffer
    {
        float tessellationAmount;
        float3 padding;
    };

    struct HullInputType
    {
        float3 position : POSITION;
        float2 tex : TEXCOORD0;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
        float3 binormal : BINORMAL;
        float2 tex2 : TEXCOORD1;
    };

    struct ConstantOutputType
    {
        float edges[3] : SV_TessFactor;
        float inside : SV_InsideTessFactor;
    };

    struct HullOutputType
    {
        float3 position : POSITION;
        float2 tex : TEXCOORD0;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
        float3 binormal : BINORMAL;
        float2 tex2 : TEXCOORD1;
        float4 depthPosition : TEXCOORD2;
    };

    ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
    {
        ConstantOutputType output;
        output.edges[0] = tessellationAmount;
        output.edges[1] = tessellationAmount;
        output.edges[2] = tessellationAmount;
        output.inside = tessellationAmount;
        return output;
    }

    [domain("tri")]
    [partitioning("integer")]
    [outputtopology("triangle_cw")]
    [outputcontrolpoints(3)]
    [patchconstantfunc("ColorPatchConstantFunction")]
    HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
    {
        HullOutputType output;
        output.position = patch[pointId].position;
        output.tex = patch[pointId].tex;
        output.tex2 = patch[pointId].tex2;
        output.normal = patch[pointId].normal;
        output.tangent = patch[pointId].tangent;
        output.binormal = patch[pointId].binormal;
        return output;
    }

    Edited to include the domain shader:

    [domain("tri")]
    PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
    {
        float3 vertexPosition;
        PixelInputType output;

        // Determine the position of the new vertex.
        vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position;

        output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
        output.position = mul(output.position, viewMatrix);
        output.position = mul(output.position, projectionMatrix);
        output.depthPosition = output.position;

        output.tex = patch[0].tex;
        output.tex2 = patch[0].tex2;
        output.normal = patch[0].normal;
        output.tangent = patch[0].tangent;
        output.binormal = patch[0].binormal;

        return output;
    }

    Read the article

  • vmware player xorg driver broke after ubuntu 12.04 LTS automatic system update on 06/07/2014

    - by user291828
    I am running Ubuntu 12.04 LTS as a virtual machine on a Windows 7 host using VMware Player 6.0.2. On Sat. 06/07/2014, Ubuntu performed an automatic system update as it does regularly; however, after reboot the display driver seems to be broken, which causes the screen to split into many identical panels. This pretty much rendered the VM unusable. Below is the log entry for that update from /var/log/apt/history.log. It appears that at least one of these updates has caused this problem. The exact same problem has been reported on the VMware forum as well; see https://communities.vmware.com/message/2388776#2388776. So far I have not been able to find a solution.

    Content in /var/log/apt/history.log:

    Start-Date: 2014-06-07 15:04:46
    Commandline: aptdaemon role='role-commit-packages' sender=':1.65'
    Install: linux-headers-3.2.0-64:amd64 (3.2.0-64.97), linux-image-3.2.0-64-generic:amd64 (3.2.0-64.97), linux-headers-3.2.0-64-generic:amd64 (3.2.0-64.97)
    Upgrade: iproute:amd64 (20111117-1ubuntu2.1, 20111117-1ubuntu2.3), libknewstuff3-4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdeclarative5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libnepomukquery4a:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libgnutls-openssl27:amd64 (2.12.14-5ubuntu3.7, 2.12.14-5ubuntu3.8), libthreadweaver4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdecore5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libnepomukutils4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libktexteditor4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), linux-generic:amd64 (3.2.0.63.75, 3.2.0.64.76), libkmediaplayer4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkrosscore4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libgnutls26:amd64 (2.12.14-5ubuntu3.7, 2.12.14-5ubuntu3.8), libgnutls26:i386 (2.12.14-5ubuntu3.7, 2.12.14-5ubuntu3.8), libsolid4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libnepomuk4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdnssd4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkparts4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), kdoctools:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libssl-dev:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), libssl-doc:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), libkidletime4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), linux-headers-generic:amd64 (3.2.0.63.75, 3.2.0.64.76), linux-image-generic:amd64 (3.2.0.63.75, 3.2.0.64.76), libkcmutils4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkfile4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkpty4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkntlm4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libplasma3:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkemoticons4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), kdelibs-bin:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdewebkit5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkjsembed4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkio5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkjsapi4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), openssl:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), kdelibs5-data:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), linux-libc-dev:amd64 (3.2.0-63.95, 3.2.0-64.97), libkde3support4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libknotifyconfig4:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), kdelibs5-plugins:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkhtml5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdeui5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libkdesu5:amd64 (4.8.5-0ubuntu0.2, 4.8.5-0ubuntu0.3), libssl1.0.0:amd64 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14), libssl1.0.0:i386 (1.0.1-4ubuntu5.13, 1.0.1-4ubuntu5.14)
    End-Date: 2014-06-07 15:09:08

    Read the article

  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains).

    Introduction

    SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify either the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain, and to avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments.

    Why split cores can matter

    Split core allocations can silently reduce performance because multiple domains with different address spaces and memory contents are sharing the core's Level 1 cache (L1$). This is called false cache sharing, since even identical memory addresses from different domains must point to different locations in RAM. The effect of this is increased contention for the cache and higher memory latency for each domain using that core. The degree of performance impact can be widely variable. For applications with very small memory working sets, and with I/O bound or low-CPU utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance, then this can have an important impact. This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down.

    What to do about it

    Split core situations are easily avoided. In most cases the logical domain constraint manager will avoid them without any administrative action, but they can be entirely prevented by doing one of the following:
    Assign virtual CPUs in multiples of 8 - the number of threads per core. For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary.
    Use the whole core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole core allocation is used, an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan.
    Don't obsess: if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it. If you have low utilization workloads being consolidated from older machines onto a current T-series, then there's no need to worry about this or to assign an entire core to domains that will never use that much capacity. In any case, make sure the most important domains have their own CPU cores, in particular the control domain and any I/O or service domain, and of course any important guests.

    Summary

    Split core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.

    Read the article

  • Need WIF Training?

    - by Your DisplayName here!
    I spend numerous hours every month answering questions about WIF and identity in general. This made me realize that this is still quite a complicated topic once you go beyond the standard fedutil stuff. My good friend Brock and I put together a two day training course about WIF that covers everything we think is important. The course includes extensive lab material where you take a standard application and apply all kinds of claims and federation techniques and technologies like WS-Federation, WS-Trust, session management, delegation, home realm discovery, multiple identity providers, Access Control Service, REST, SWT and OAuth. The lab also includes the latest version of the thinktecture identityserver, and you will learn how to use and customize it. If you are looking for an open enrollment style of training, have a look here. Or contact me directly! The course outline looks as follows:

    Day 1

    Intro to Claims-based Identity & the Windows Identity Foundation
    WIF introduces important concepts like conversion of security tokens and credentials to claims, claims transformation and claims-based authorization. In this module you will learn the basics of the WIF programming model and how WIF integrates into existing .NET code.

    Externalizing Authentication for Web Applications
    WIF includes support for the WS-Federation protocol. This protocol allows separating business and authentication logic into separate (distributed) applications. The authentication part is called an identity provider or, in more general terms, a security token service. This module looks at this scenario both from an application and an identity provider point of view, and walks you through the necessary concepts to centralize application login logic, both using a standard product like Active Directory Federation Services and using a custom token service built with WIF's API support.

    Externalizing Authentication for SOAP Services
    One big benefit of WIF is that it unifies the security programming model for ASP.NET and WCF. In the spirit of the preceding modules, we will have a look at how WIF integrates into the (SOAP) web service world. You will learn how to separate authentication into a separate service using the WS-Trust protocol and how WIF can simplify the WCF security model and extensibility API.

    Day 2

    Advanced Topics: Security Token Service Architecture, Delegation and Federation
    The preceding modules covered the 80/20 cases of WIF in combination with ASP.NET and WCF. In many scenarios this is just the tip of the iceberg. Especially when two business partners decide to federate, you usually have to deal with multiple token services and their implications in application design. Identity delegation is a feature that allows transporting the client identity over a chain of service invocations to make authorization decisions over multiple hops. In addition you will learn about the principal architecture of an STS, how to customize the one that comes with this training course, as well as how to build your own.

    Outsourcing Authentication: Windows Azure & the Azure AppFabric Access Control Service
    Microsoft provides a multi-tenant security token service as part of the Azure platform cloud offering. This is an interesting product because it allows you to outsource vital infrastructure services to a managed environment that guarantees uptime and scalability. Another advantage of the Access Control Service is that it allows easy integration of both the "enterprise" protocols like WS-* and "web identities" like LiveID, Google or Facebook into your applications. ACS acts as a protocol bridge in this case, where the application developer doesn't need to implement all these protocols but simply uses a service to make it happen.

    Claims & Federation for the Web and Mobile World
    The web & mobile world is also moving to a token- and claims-based model. While the mechanics are almost identical, other protocols and token types are used to achieve better HTTP (REST) and JavaScript integration for in-browser applications and small footprint devices. Patterns such as allowing third party applications to work with your data without having to disclose your credentials are also important concepts in these application types. The nice thing about WIF and its powerful base APIs and abstractions is that it can shield application logic from these details while you focus on implementing the actual application.

    HTH

    Read the article

  • Game Object Factory: Fixing Memory Leaks

    - by Bunkai.Satori
    Dear all, this is going to be tough: I have created a game object factory that generates objects of my choosing. However, I get memory leaks which I cannot fix. The memory leaks are generated by return new Object(); in the bottom part of the code sample:

    static BaseObject * CreateObjectFunc()
    {
        return new Object();
    }

    How and where should I delete the pointers? I wrote bool ReleaseClassTypes(). Although the factory works well, ReleaseClassTypes() does not fix the memory leaks.

    bool ReleaseClassTypes()
    {
        unsigned int nRecordCount = vFactories.size();
        for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++ )
        {
            // if the entry exists in the container and is valid, then delete it
            if( vFactories[nLoop] != NULL)
                delete vFactories[nLoop]();
        }
        return true;
    }

    Before taking a look at the code below, let me explain: my CGameObjectFactory stores pointers to functions that create a particular object type. The pointers are stored within the vFactories vector container. I have chosen this approach because I parse an object map file; I have object type IDs (integer values) which I need to translate into real objects. Because I have over 100 different object data types, I wished to avoid continuously traversing a very long switch() statement. Therefore, to create an object, I call vFactories[nEnumObjectTypeID] via CGameObjectFactory::create() to call the stored function that generates the desired object. The position of the appropriate function in vFactories is identical to nObjectTypeID, so I can use indexing to access the function. So the question remains: how to proceed with garbage collection and avoid the reported memory leaks?

    #ifndef GAMEOBJECTFACTORY_H_UNIPIXELS
    #define GAMEOBJECTFACTORY_H_UNIPIXELS

    //#include "MemoryManager.h"
    #include <vector>

    template <typename BaseObject>
    class CGameObjectFactory
    {
    public:
        // cleanup and release registered object data types
        bool ReleaseClassTypes()
        {
            unsigned int nRecordCount = vFactories.size();
            for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++ )
            {
                // if the entry exists in the container and is valid, then delete it
                if( vFactories[nLoop] != NULL)
                    delete vFactories[nLoop]();
            }
            return true;
        }

        // register new object data type
        template <typename Object>
        bool RegisterClassType(unsigned int nObjectIDParam )
        {
            if(vFactories.size() < nObjectIDParam)
                vFactories.resize(nObjectIDParam);
            vFactories[nObjectIDParam] = &CreateObjectFunc<Object>;
            return true;
        }

        // create new object by calling the pointer to the appropriate type function
        BaseObject* create(unsigned int nObjectIDParam) const
        {
            return vFactories[nObjectIDParam]();
        }

        // resize the vector array containing pointers to function calls
        bool resize(unsigned int nSizeParam)
        {
            vFactories.resize(nSizeParam);
            return true;
        }

    private:
        //DECLARE_HEAP;

        template <typename Object>
        static BaseObject * CreateObjectFunc()
        {
            return new Object();
        }

        typedef BaseObject*(*factory)();
        std::vector<factory> vFactories;
    };

    //DEFINE_HEAP_T(CGameObjectFactory, "Game Object Factory");

    #endif // GAMEOBJECTFACTORY_H_UNIPIXELS
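    One detail worth noting about the cleanup loop above: delete vFactories[nLoop](); first calls the factory function, which constructs a brand-new object, and then deletes that fresh object, so anything previously handed out by create() is never freed, and the factory keeps no record of those objects anyway. A common way around this is to make ownership explicit at the call site instead of trying to release objects from inside the factory. The following is only a sketch of that idea, under the assumption that callers own what they create; it is not the original poster's code, and it assumes BaseObject has a virtual destructor so deleting through the base pointer is safe.

    #include <memory>
    #include <vector>

    template <typename BaseObject>
    class CGameObjectFactory
    {
    public:
        // register new object data type
        template <typename Object>
        bool RegisterClassType(unsigned int nObjectIDParam)
        {
            // grow to nObjectIDParam + 1 so that the index itself is in range
            if (vFactories.size() <= nObjectIDParam)
                vFactories.resize(nObjectIDParam + 1);
            vFactories[nObjectIDParam] = &CreateObjectFunc<Object>;
            return true;
        }

        // ownership is transferred to the caller: when the unique_ptr is
        // destroyed, the object it owns is deleted automatically
        std::unique_ptr<BaseObject> create(unsigned int nObjectIDParam) const
        {
            return std::unique_ptr<BaseObject>(vFactories[nObjectIDParam]());
        }

    private:
        template <typename Object>
        static BaseObject* CreateObjectFunc()
        {
            return new Object();
        }

        typedef BaseObject* (*factory)();
        std::vector<factory> vFactories;  // owns only function pointers, nothing to delete
    };

    With this shape, a caller can keep its objects in a std::vector<std::unique_ptr<BaseObject>>; clearing that vector (or letting it go out of scope) releases everything, so no separate ReleaseClassTypes() pass over created objects is needed, and the factory's own cleanup reduces to clearing the function-pointer table, which owns no memory.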

    Read the article

< Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >