Search Results

Search found 22641 results on 906 pages for 'case insensitivity'.


  • GPU Debugging with VS 11

    - by Daniel Moth
    With VS 11 Developer Preview we have invested tremendously in parallel debugging for both CPU (managed and native) and GPU debugging. I'll be doing a whole bunch of blog posts on those topics, and in this post I just wanted to get people started with GPU debugging, i.e. with debugging C++ AMP code. First I invite you to watch 6 minutes of a glimpse of the C++ AMP debugging experience through this video (fast-forward to minute 51:54, up until minute 59:16). Don't read the rest of this post, just go watch that video; ideally download the High Quality WMV.

    Summary: GPU debugging essentially means debugging the lambda that you pass to the parallel_for_each call (plus any functions you call from the lambda, of course). CPU debugging means debugging all the code above and below the parallel_for_each call, i.e. all the code except the restrict(direct3d) lambda and the functions that it calls. With VS 11 you have to choose which debugger you want to use for a particular debugging session, CPU or GPU. So you can place breakpoints all over your code, then choose the debugger you want (CPU or GPU), and you'll only be able to hit breakpoints for the code type that the debugger engine understands – the remaining breakpoints will appear as unbound. If you want to hit the unbound breakpoints, you'd have to stop debugging and start again with the other debugger. Sorry. We suck. We know. But once you are past that limitation, I think you'll find the experience truly rewarding – seriously!

    Switching debugger engines: With the Developer Preview bits, one way to switch the debugger engine is through the project properties – see the screenshots that follow. The first shows the CPU option selected, which is basically the default you are all familiar with; the second shows the GPU option selected, by changing the debugger launcher (notice that this applies to both the local and remote case). You actually do not have to open the project properties just to switch the debugger engine; you can also switch the selection from the toolbar in VS 11 Developer Preview (the effect is the same as if you had opened the project properties and switched there).

    Breakpoint behavior: Here are two screenshots, one showing a debugging session for CPU and the other a debugging session for GPU (notice the unbound breakpoints in each case). In the GPU case we cannot bind the CPU breakpoints, but we can bind the GPU breakpoint, which is actually hit.

    Give C++ AMP debugging a try: To debug your C++ AMP code, pull down the drop-down under the 'play' button to select the 'GPU C++ Direct3D Compute Debugger' menu option, then hit F5 (or the 'play' button itself). Then you can explore debugging through the menus under Debug and Debug->Windows. One way to do that exploration is through the C++ AMP debugging walkthrough on MSDN. Another way to explore the C++ AMP debugging experience is to use the moth.cpp code file, which is what I used in my BUILD session debugger demo. Note that for my demo I was using the latest internal VS11 bits, so your experience with the Developer Preview bits won't be identical to what you saw me demonstrate, but it shouldn't be far off. Stay tuned for a lot more content on the parallel debugger in VS 11, both CPU and GPU, both managed and native. Comments about this post by Daniel Moth welcome at the original blog.
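    For readers who have not written any C++ AMP yet, here is a minimal sketch (mine, not from the post) of the kind of code the two debuggers split between them; the Developer Preview used restrict(direct3d), later builds renamed it restrict(amp), so adjust the keyword to your bits:

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        void square_all(std::vector<float>& data)
        {
            // wraps CPU data so the accelerator can see it (CPU debugger territory)
            array_view<float, 1> av((int)data.size(), data);

            // everything inside this lambda runs on the GPU; this is what the
            // 'GPU C++ Direct3D Compute Debugger' can step through
            parallel_for_each(av.extent, [=](index<1> idx) restrict(direct3d)
            {
                av[idx] = av[idx] * av[idx];   // set a GPU breakpoint here
            });

            av.synchronize();                  // back on the CPU side
        }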

    Read the article

  • Creating PDF documents dynamically using Umbraco and XSL-FO part 2

    - by Vizioz Limited
    Since my last post I have made a few modifications to the PDF generation, the main one being that the files are now dynamically renamed so that they reflect the name of the case study instead of all being called PDF.PDF, which was not a very helpful filename. I just wanted to get something live last week, so decided that something was better than nothing :)

    The issue with the filenames comes down to the way that the PDFs are being generated by using an alternative template in Umbraco. This means that all you need to do is add "/pdf" to the end of a case study URL and it will create a PDF version of the case study. The downside is that your browser will merrily download the file and save it as PDF.PDF, because that is the name of the last part of the URL.

    What you need to do is set the content-disposition header to be equal to the name you would like the file to use; Darren Ferguson mentioned this on the "Change the name of the PDF" forum post.

    We have used the same technique for downloading dynamically generated Excel files, so I thought it would be useful to create a small macro to set both this header and also the caching headers to prevent any caching issues. I think in the past we have experienced all possible issues, including various issues where IE behaves differently to other browsers when you are using SSL, and so the below code should work in all situations!

    The template for the PDF alternative template is very simple:

        <%@ Master Language="C#" MasterPageFile="~/umbraco/masterpages/default.master" AutoEventWireup="true" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolderDefault" runat="server">
            <umbraco:Macro Alias="PDFHeaders" runat="server"></umbraco:Macro>
            <umbraco:Macro xsl="FO-CaseStudy.xslt" Alias="PDFXSLFO" runat="server"></umbraco:Macro>
        </asp:Content>

    The following code snippet is the XSLT macro that simply creates our file name and then passes the file name into the helper function:

        <xsl:template match="/">
            <xsl:variable name="fileName">
                <xsl:text>Vizioz_</xsl:text>
                <xsl:value-of select="$currentPage/@nodeName" />
                <xsl:text>_case_study.pdf</xsl:text>
            </xsl:variable>
            <xsl:value-of select="Vizioz.Helper:AddDocumentDownloadHeaders('application/pdf', $fileName)"/>
        </xsl:template>

    And the following code is the helper function that clears the current response and adds all the appropriate headers:

        public static void AddDocumentDownloadHeaders(string contentType, string fileName)
        {
            HttpResponse response = HttpContext.Current.Response;
            HttpRequest request = HttpContext.Current.Request;

            response.Clear();
            response.ClearHeaders();

            if (request.IsSecureConnection && request.Browser.Browser == "IE")
            {
                // Don't use the caching headers if the browser is IE and it's a secure connection
                // see: http://support.microsoft.com/kb/323308
            }
            else
            {
                // force not using the cache
                response.AppendHeader("Cache-Control", "no-cache");
                response.AppendHeader("Cache-Control", "private");
                response.AppendHeader("Cache-Control", "no-store");
                response.AppendHeader("Cache-Control", "must-revalidate");
                response.AppendHeader("Cache-Control", "max-stale=0");
                response.AppendHeader("Cache-Control", "post-check=0");
                response.AppendHeader("Cache-Control", "pre-check=0");
                response.AppendHeader("Pragma", "no-cache");
                response.Cache.SetCacheability(HttpCacheability.NoCache);
                response.Cache.SetNoStore();
                response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
            }

            response.AppendHeader("Expires", DateTime.Now.AddMinutes(-1).ToLongDateString());
            response.AppendHeader("Keep-Alive", "timeout=3, max=993");
            response.AddHeader("content-disposition", "attachment; filename=\"" + fileName + "\"");
            response.ContentType = contentType;
        }

    I will write another blog post soon with some more details about XSL-FO and how to create the PDFs dynamically. Please do re-tweet if you find this interesting :)

    Read the article

  • JavaScript: this

    - by bdukes
    JavaScript is a language steeped in juxtaposition. It was made to "look like Java," yet is dynamic and classless. From this origin, we get the new operator and the this keyword. You are probably used to this referring to the current instance of a class, so what could it mean in a language without classes?

    In JavaScript, this refers to the object off of which a function is referenced when it is invoked (unless it is invoked via call or apply). What this means is that this is not bound to your function, and can change depending on how your function is invoked. It also means that this changes when declaring a function inside another function (i.e. each function has its own this), such as when writing a callback. Let's see some of this in action:

        var obj = {
            count: 0,
            increment: function () {
                this.count += 1;
            },
            logAfterTimeout: function () {
                setTimeout(function () {
                    console.log(this.count);
                }, 1);
            }
        };

        obj.increment();
        console.log(obj.count); // 1

        var increment = obj.increment;
        window.count = 'global count value: ';
        increment();
        console.log(obj.count); // 1
        console.log(window.count); // global count value: 1

        var newObj = {count: 50};
        increment.call(newObj);
        console.log(newObj.count); // 51

        obj.logAfterTimeout(); // global count value: 1

        obj.logAfterTimeout = function () {
            var proxiedFunction = $.proxy(function () {
                console.log(this.count);
            }, this);
            setTimeout(proxiedFunction, 1);
        };
        obj.logAfterTimeout(); // 1

        obj.logAfterTimeout = function () {
            var that = this;
            setTimeout(function () {
                console.log(that.count);
            }, 1);
        };
        obj.logAfterTimeout(); // 1

    The last couple of examples here demonstrate some methods for making sure you get the values you expect. The first time logAfterTimeout is redefined, we use jQuery.proxy to create a new function which has its this permanently set to the passed-in value (in this case, the current this). The second time logAfterTimeout is redefined, we save the value of this in a variable (named that in this case, also often named self) and use the new variable in place of this.

    Now, all of this is to clarify what's going on when you use this. However, it's pretty easy to avoid using this altogether in your code (especially in the way I've demonstrated above). Instead of using this.count all over the place, it would have been much easier if I'd made count a variable instead of a property, and then I wouldn't have to use this to refer to it.

        var obj = (function () {
            var count = 0;
            return {
                increment: function () {
                    count += 1;
                },
                logAfterTimeout: function () {
                    setTimeout(function () {
                        console.log(count);
                    }, 1);
                },
                getCount: function () {
                    return count;
                }
            };
        }());

    If you're writing your code in this way, the main place you'll run into issues with this is when handling DOM events (where this is the element on which the event occurred). In that case, just be careful when using a callback within that event handler, that you're not expecting this to still refer to the element (and use proxy or that/self if you need to refer to it).

    Finally, as demonstrated in the example, you can use call or apply on a function to set its this value. This isn't often needed, but you may also want to know that you can use apply to pass in an array of arguments to a function (e.g. console.log.apply(console, [1, 2, 3, 4])).
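    As a side note not in the original post: on browsers with ES5 support, the built-in Function.prototype.bind gives the same effect as jQuery.proxy, so the proxied version could also be sketched like this (assuming obj is the object from the examples above):

        obj.logAfterTimeout = function () {
            // bind returns a copy of the function with `this` permanently fixed,
            // so the callback still sees this.count inside setTimeout
            setTimeout(function () {
                console.log(this.count);
            }.bind(this), 1);
        };
        obj.logAfterTimeout(); // logs obj.count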

    Read the article

  • How To Clear An Alert - Part 2

    - by werner.de.gruyter
    There were some interesting comments and remarks on the original posting, so I decided to do a follow-up and address some of the issues that got raised... Handling Metric Errors First of all, there is a significant difference between an 'error' and an 'alert'. An 'alert' is the violation of a condition (a threshold) specified for a given metric. That means that the Agent is collecting and gathering the data for the metric, but there is a situation that requires the attention of an administrator. An 'error' on the other hand however, is a failure to collect metric data: The Agent is throwing the error because it cannot determine the value for the metric Whereas the 'alert' guarantees continuity of the metric data, an 'error' signals a big unknown. And the unknown aspect of all this is what makes an error a lot more serious than a regular alert: If you don't know what the current state of affairs is, there could be some serious issues brewing that nobody is aware of... The life-cycle of a Metric Error Clearing a metric error is pretty much the same workflow as a metric 'alert': The Agent signals the error after it failed to execute the metric The error is uploaded to the OMS/repository, where it becomes visible in the Console The error will remain active until the Agent is able to execute the metric successfully. Even though the metric is still getting scheduled and executed on a regular basis, the error will remain outstanding as long as the Agent is not capable of executing the metric correctly Knowing this, the way to fix the metric error should be obvious: Take the 'problem' away, and as soon as the metric is executed again (based on the frequency of the metric), the error will go away. The same tricks used to clear alerts can be used here too: Wait for the next scheduled execution. For those metrics that are executed regularly (like every 15 minutes or so), it's just a matter of waiting those minutes to see the updates. The 'Reevaluate Alert' button can be used to force a re-execution of the metric. In case a metric is executed once a day, this will be a better way to make sure that the underlying problem has been solved. And if it has been, the metric error will be removed, and the regular data points will be uploaded to the repository. And just in case you have to 'force' the issue a little: If you disable and re-enable a metric, it will get re-scheduled. And that means a new metric execution, and an update of the (hopefully) fixed problem. Database server-generated alerts and problem checkers There are various ways the Agent can collect metric data: Via a script or a SQL statement, reading a log file, getting a value from an SNMP OID or listening for SNMP traps or via the DBMS_SERVER_ALERTS mechanism of an Oracle database. For those alert which are generated by the database (like tablespace metrics for 10g and above databases), the Agent just 'waits' for the database to report any new findings. If the Agent has lost the current state of the server-side metrics (due to an incomplete recovery after a disaster, or after an improper use of the 'emctl clearstate' command), the Agent might be still aware of an alert that the database no longer has (or vice versa). The same goes for 'problem checker' alerts: Those metrics that only report data if there is a problem (like the 'invalid objects' metric) will also have a problem if the Agent state has been tampered with (again, the incomplete recovery, and after improper use of 'emctl clearstate' are the two main causes for this). 
The best way to deal with these kinds of mismatches is to simply disable and re-enable the metric again: the disabling will clear the state of the metric, and the re-enabling will force a re-execution of the metric, so the new and updated results can get uploaded to the repository. Starting with 10gR5, the Agent performs additional checks and verifications after each restart of the Agent and/or each state change of the database (shutdown/startup or failover in case of Data Guard) to catch these kinds of mismatches.

    Read the article

  • Customize Team Build 2010 – Part 12: How to debug my custom activities

    In the series the following parts have been published:

    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    Developers are "spoilt" persons who expect to have easy debugging experiences for every technique they work with, so they also expect it when developing custom activities for the build process template. This post describes how you can debug your custom activities without having to develop on the build server itself.

    Remote debugging prerequisites: The prerequisite for these steps is to install the Microsoft Visual Studio Remote Debugging Monitor. You can find information on how to install it at http://msdn.microsoft.com/en-us/library/bt727f1t.aspx. I chose the option to run the remote debugger on the build server from a file share.

    Debugging symbols prerequisites: To be able to start debugging, you need to have the pdb files on the build server together with the assembly. The pdb must have been built with Full Debug Info.

    Steps: In my setup I have a development machine and a build server. To set up the remote debugging, I performed the following steps:

    1. Locate on your development machine the folder C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger.
    2. Create a share for the Remote Debugger folder. Make sure that the share (and the folder) has the correct permissions so the user on the build server has access to the share.
    3. On the build server, go to the shared "Remote Debugger" folder.
    4. Start msvsmon.exe, which is located in the folder that represents the platform of the build server. This will open a winform application.
    5. Go back to your development machine and open the BuildProcess solution.
    6. Start the Attach to Process command (Ctrl+Alt+P).
    7. Type in the Qualifier the name of the build server. In my case the user account that started msvsmon is another user than the user on my development machine. In that case you have to type the qualifier in the format that is shown in the Remote Debugging Monitor (in my case LOCAL\Administrator@TFSLAB) and confirm it by pressing <Enter>.
    8. Since the build service is running with other credentials, check the option "Show processes from all users". Now the Attach to Process dialog shows the TFSBuildServiceHost process.
    9. Set the breakpoint in the activity you want to debug and kick off a build.

    Be aware that when you attach to the TFSBuildServiceHost you debug every single build that is run by this windows service, so make sure you don't debug the build server that is in production! You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
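    If setting up msvsmon is not an option, another trick sometimes used (not covered in this post, so treat it as a hedged sketch; it assumes Visual Studio is available on the build machine and the build service can show UI) is to drop a temporary Debugger.Launch() call into the custom activity, which prompts for a debugger at exactly that point in the build. Remove it again before the activity ships. The activity name below is hypothetical:

        using System.Activities;
        using System.Diagnostics;

        public sealed class MyCustomActivity : CodeActivity   // hypothetical activity
        {
            protected override void Execute(CodeActivityContext context)
            {
                // Only prompt for a debugger when one isn't already attached.
                if (!Debugger.IsAttached)
                {
                    Debugger.Launch();   // shows the "choose a debugger" dialog
                }

                // ... the real activity logic goes here ...
            }
        }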

    Read the article

  • Book Reviews: Art of Community and Eyetracking Web Usability

    - by ultan o'broin
    Holidays time offers a chance to catch up on some user experience and user assistance related material. So, two short book reviews (which I considered using my new Tumblr blog for. More about that another time) coming up. The Art of Community by Jono Bacon Excellent starting point for anyone wanting to get going in the community software (FLOSS, for example) space or understand how to set up, manage, and leverage the collective intelligence of communities for whatever ends. The book is a little too long in my opinion, and of course, usage of what Jono is recommending needs to be nuanced and adapted for enterprise applications space (hardly surprising there is a lot about Ubuntu, Lug Radio, and so on given Jono's interests). Shame there wasn't more information on international, non-English community considerations too. Still, some great ideas and insight into setting up and managing communities that I will leverage (watch out for the results on this blog, later in 2011). One section, on collaborative writing really jumped out. It reinforced the whole idea that to successful community initiatives are based on instigators knowing what makes the community tick in the first place. How about this for insight into user profiles for people who write community user assistance (OK then, "doc") and what tools they might use (in this case, we're talking about Jokosher): "Most people who write documentation for open source software projects would fall into the category of power user. They are technology enthusiasts who are not interested in the super-technical avenues of programming, but want to help out. Many of these people have good writing skills and a good knowledge of using the software, so the documentation fit is natural. With Jokosher we wanted to acknowledge this profile of user. As such, instead of focussing on complex text processing tools, we encouraged our documentation contributors to use a wiki." The book is available for free here, and well as being available from usual sources. Eyetracking Web Usability by Jakob Nielsen and Kara Prentice Another fine book by established experts. I have some field experience of eyetracking studies myself --in the user assistance for enterprise applications space--though Jakob and Kara concentrate on websites for their research here. I would caution how much about websites transfers easily to the applications space, especially enterprise applications, as claimed in the book too. However, Jakob and Kara do make the case very well that understanding design goals (for example, productivity improvement in the case of applications) and the context of the software use is critical. Executing a study using eyetracking technology requires that you know what you want to test, can set up realistic tasks for testing by representative testers, and then analyze the results. Be precise, as lots of data will be generated (I think the authors underplay the effort in analyzing data too). What I found disappointing was the lack of emphasis on eyetracking as only part of the usability solution. It's really for fine-tuning designs in my opinion, and should be used after other design reviews. I also wasn't that crazy about the level of disengagement between the qualitative and quantitative side of this kind of testing that the book indicated. I think it is useful to have testers verbalize their thoughts and for test engineers to prompt, intervene, or guide as necessary. 
More on cultural or international aspects to usability testing might have been included too (websites are available to everyone). To conclude, I enjoyed the book, took on board some key takeaways about methodologies and found the recommendations sensible and easy to follow (for example about Forms layouts). Applying enterprise applications requirements such as those relating to user profiles, design goals, and overall context of use in conjunction with what's in this book would be the way to go here. It also made me think of how interesting it would be to compare eyetracking findings between website and enterprise applications usage.

    Read the article

  • Welcome to the SOA &amp; E2.0 Partner Community Forum

    - by Jürgen Kress
    With more than 200 registrations the SOA & E2.0 Partner Community Forum is a huge success!   Conference program Is available online: http://tinyurl.com/soaforumagenda Agenda Tuesday March 15th 2011 12:15 Welcome & Introduction – Hans Blaas & Jürgen Kress, Oracle 12:30 Oracle Middleware Strategy and Information on Application Grid and Exalogic - Andrew Sutherland, Oracle 13:15 Managing Online Customer, Partner and Employee Engagement Oracle E2.0 Solutions - Andrew Gilboy, Oracle 14:00 Coffee Break 14:30 Partner SOA/ BPM Reference Case – Leon Smiers, Capgemini 15:15 Partner WebCenter/ UCM Reference Case – Vikram Setia, Infomentum 16.00 Break 16.30 SOA and BPM 11gR1 PS3 Update – David Shaffer 17:00 Why specialization is important for Partners – Nick Kritikos, Hans Blaas & Jürgen Kress 17:45 Social Event   Wednesday March 16th 2011 09.00 Welcome & Introduction Day II 09.15 Breakout sessions Round 1 SOA Suite 11g PS3 & OSB Importance of ADF & Jdeveloper SOA Security IDM WebCenter PS3, Whats New E2.0 Sales Plays 10.30 Break 10.45 Breakout sessions Round 2 WebCenter PS3, Whats New Applications Management Enterprise Manager and Amberpoint ADF/WebCenter 11g integration with BPM Suite 11g Importance of ADF & Jdeveloper JCAPS & OC4J migration opportunities for service business 12.00 Lunch 13.00 Breakout sessions Round 3 BPM 11g, Whats New Universal Content Management! 11g SOA Security IDM E2.0 Surrounding Products: ATG, Documaker, Primavera Middleware Industry Value Propositions & Sales Plays 14.30 Break 14.45 Fusion Applications, Rajan Krishnan, Oracle 15.30 SOA & E2.0 Summary & Closing, Hans Blaas & Jürgen Kress, Oracle 15.45 Finish & Departure 16:00 Bus departure   Capgemini Nederland BV Papendorpseweg 100 3500 GN Utrecht The Netherlands Tel: +31 30 689 00 00 For a detailed routedescription by car or public transport please visit: http://www.nl.capgemini.com/pdf/Papendorp_UK.pdf Hotel In case you have not booked your hotel yet, please make your own hotel reservation. You can book your hotel room at the 'Hotel Vianen' at a special rate, by using the Oracle booking code: DDG VIA-GF41422. One night package € 110,- for a single room, including breakfast. Kindly secure your hotel room as soon as possible. The number of rooms is limited! Hotel Vianen Prins Bernhardstraat 75 4132 XE Vianen [email protected] The Netherlands [email protected] Arrival on 14th of March and staying at Hotel Vianen. On 15th of March we have arranged a transfer from Hotel Vianen to the Capgemini Offices. The bus is parked in front of the hotel and will leave at 10.15AM (UTC/GMT+1). Logistics Pass with barcode At your arrival you will receive a pass with a barcode. This pass will give you access to the conference building and the different floors within the building. Please make sure to hand in your pass at the registration desk at the end of the day. Arrival by plane Transfer from Schiphol Airport to Capgemini on 15th of March will be arranged by Oracle. A hostess will be welcoming you at the Meeting Point at Schiphol Airport (this is a red and white large cubicle situated next to Delifrance) The buses will depart from Schiphol Airport at 09.00AM, 09.45AM and 10.30AM (UTC/GMT+1).     For future SOA Partner Community Forums  become a member for registration please visit www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Partner Community Forum,Community,SOA Partner Community,Utrecht 03.2011,OPN,Oracle,Jürgen Kress

    Read the article

  • Dynamic Class Inheritance For PHP

    - by VirtuosiMedia
    I have a situation where I think I might need dynamic class inheritance in PHP 5.3, but the idea doesn't sit well and I'm looking for a different design pattern to solve my problem if it's possible. Use Case I have a set of DB abstraction layer classes that dynamically compiles SQL queries, with one DAL class for each DB type (MySQL, MsSQL, Oracle, etc.). Each table in the database has its own class that extends the appropriate DAL class. The idea is that you interact with the table classes, but never directly use the DAL class. If you want to support a different DB type for your app, you don't need to rewrite any queries or even any code, you simply change a setting that swaps one DAL class out for another...and that's it. To give you a better idea of how this is used, you can take a look at the DAL class, the table classes, and how they are used on this StackExchange Code Review page. To really understand what I'm trying to do, please take a look at my implementation first before suggesting a solution. Issues The strategy that I had used previously was to have all of the DAL classes share the same class name. This eliminated autoloading, so I had to manually load the appropriate DAL class in a switch statement. However, this approach presents some problems for testing and documentation purposes, so I'd like to find a different way to solve the problem of loading the correct DAL class more elegantly. Update to clarify the issue The problem basically boils down to inconsistencies in the class name (pre-PHP 5.3) or class namespace (PHP 5.3) and its location in the directory structure. At this point, all of my DAL classes have the same name, DBObject, but reside in different folders, MySQL, Oracle, etc. My table classes all extend DBObject, but which DBObject they extend varies depending on which one has been loaded. Basically, I'm trying to have my cake and eat it too. The table classes act as a stable API and extend a dynamic backend, the DAL (DBObject) classes. It works great, but I outsmarted myself and because of the inconsistencies with the class names and their locations, I can't autoload the DBObject, which makes running unit tests and generating API docs impossible for the DBObject classes because the tests and docs rely on auto-loading. Just loading the appropriate DBObject into memory using a factory method won't work because there will be times when I need to load multiple DBObjects for testing. Because the classes currently share a name, this causes a class is already defined error. I can make exceptions for the DBObjects in my test code, obviously, but I'm looking for something a little less hacky as there may future instances where something similar would need to be done. Solutions? Worst case scenario, I can continue my current strategy, but I don't like it very much, especially as I'll soon be converting my code to PHP 5.3. I suspect that I can use some sort of dynamic inheritance via either namespaces (preferred) or a dynamic class extension, but I haven't been able to find good examples of this implemented in the wild. In your answers, please suggest either an alternate pattern that would work for this use case or an example of dynamic inheritance done right. Please assume PHP 5.3 with namespaced code. Any code examples are greatly encouraged. The preferred constraints for the solution are: DAL class can be autoloaded. DAL classes don't share the same exact same namespace, but share the same class name. 
As an example, I would prefer to use classes named DbObject that use namespaces like Vm\Db\MySql and Vm\Db\Oracle. Table classes don't have to be rewritten with a change in DB type. The appropriate DB type is determined via a single setting only. That setting is the only thing that should need to change to interchange DB types. Ideally, the setting check should occur only once per page load, but I'm flexible on that.
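One way to get both autoloading and a single switchable setting, sketched here as an assumption with hypothetical class and table names rather than as the poster's final design, is to give each driver its own namespace and bind a runtime alias to whichever driver the configuration names; the table classes extend the alias and never change when the DB type does:

    <?php
    // Hypothetical layout: Vm/Db/MySql/DbObject.php, Vm/Db/Oracle/DbObject.php, ...
    // Each driver-specific DbObject lives in its own namespace, so autoloading works.

    $dbType = 'MySql'; // the single setting, e.g. read from a config file

    // Bind Vm\Db\DbObject to the concrete driver once per request (PHP 5.3+).
    class_alias('Vm\\Db\\' . $dbType . '\\DbObject', 'Vm\\Db\\DbObject');

    // Table classes are written once and never change with the DB type.
    class UserTable extends \Vm\Db\DbObject
    {
        protected $table = 'users';
    }

Because each concrete class keeps its own real name, unit tests and doc generators can still load Vm\Db\MySql\DbObject and Vm\Db\Oracle\DbObject side by side; only the alias points at one of them.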

    Read the article

  • SQL SERVER – How to Compare the Schema of Two Databases with Schema Compare

    - by Pinal Dave
    Earlier I wrote about An Efficiency Tool to Compare and Synchronize SQL Server Databases and it was very much well received. Since the blog post I have received quite a many question that just like data how we can also compare schema and synchronize it. If you think about comparing the schema manually, it is almost impossible to do so. Table Schema has been just one of the concept but if you really want the all the schema of the database (triggers, views, stored procedure and everything else) it is just impossible task. If you are developer or database administrator who works in the production environment than you know that there are so many different occasions when we have to compare schema of the database. Before deploying any changes to the production server, I personally like to make note of the every single schema change and document it so in case of any issue , I can always go back and refer my documentation. As discussed earlier it is absolutely impossible to do this task without the help of third party tools. I personally use Devart Schema Compare for this task. This is an extremely easy tool. Let us see how it works. First I have two different databases – a) AdventureWorks2012 and b) AdventureWorks2012-V1. There are total three changes between these databases. Here is the list of the same. One of the table has additional column One of the table have new index One of the stored procedure is changed Now let see how dbForge Schema Compare works in this scenario. First open dbForge Schema Compare studio. Click on New Schema Comparison. It will bring you to following screen where we have to configure the database needed to configure. I have selected AdventureWorks2012 and AdventureWorks-V1 databases. In the next screen we can verify various options but for this demonstration we will keep it as it is. We will not change anything in schema mapping screen as in our case it is not required but generically if you are comparing across schema you may need this. This is the most important screen as on this screen we select which kind of object we want to compare. You can see the options which are available to select. The screen lets you select the objects from SQL Server 2000 to SQL Server 2012. Once you click on compare in previous screen it will bring you to this screen, which will essentially display the comparative difference between two of the databases which we had selected in earlier screen. As mentioned above there are three different changes in the database and the same has been listed over here. Two of the changes belongs to the tables and one changes belong to the procedure. Let us click each of them one by one to see what is the difference between them. In very first option we can see that there is an additional column in another database which did not exist earlier. In this example we can see that AdventureWorks2012 database have an additional index. Following example is very interesting as in this case, we have changed the definition of the stored procedure and the result pan contains the same. dbForget Schema Compare very effectively identify the changes in schema and lists them neatly to developers. Here is one more screen. This software not only compares the schema but also provides the options to update or drop them as per the choice. I think this is brilliant option. Well, I have been using schema compare for quite a while and have found it very useful. Here are few of the things which dbForge Schema Compare can do for developers and DBAs. 
Compare and synchronize SQL Server database schemas Compare schemas of live database and SQL Server backup Generate comparison reports in Excel and HTML formats Eliminate mistakes in schema changes propagation across environments Track production database changes and customizations Automate migration of schema changes using command line interface I suggest that you try out dbForge Schema Compare and let me know what you think of this product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL
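A full schema comparison (triggers, views, procedures and all) really does need a tool like this, but for a quick manual spot check of table columns you can fall back on the catalog views. A rough sketch of my own, not from the article, using the two databases mentioned above:

    -- Columns that exist in AdventureWorks2012 but not in AdventureWorks2012-V1
    SELECT c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME
    FROM AdventureWorks2012.INFORMATION_SCHEMA.COLUMNS AS c
    EXCEPT
    SELECT c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME
    FROM [AdventureWorks2012-V1].INFORMATION_SCHEMA.COLUMNS AS c;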

    Read the article

  • Non-standard installation (installing Linux from Linux)

    - by Evan Plaice
    So, here's my setup. I have one partition with the newest version installed, a second partition with an older version installed (as a backup just in case), a swap partition that both share, and a boot partition so the bootloader doesn't need to be setup after each upgrade. Partitions: sda1 ext3 /boot sda2 ext4 / (current version) sda3 ext4 / (old version) sda4 swap /swap sda5 ntfs (contains folders symbolically linked to /home on /) So far it has been a very good setup. I can create new boot loaders without screwing it up and adding my personal files into a new install is as simple as creating some symbolic links (the partition is NTFS in case I need to load windows on the system again). Here's the issue. I'd like to be able to drop the install into /distro on the current version and install a new version on / on the old version effectively replacing/upgrading it. The goal is to be able to just swap out new versions as they are released while maintaining redundancy in case I don't like th update. So far I have: downloaded the install.iso created a folder in /distro copied the install.iso into /distro extracted vmlinuz and initrd.lz into /distro Then I modified /boot/grub/menu.lst with the following entry: title Install Linux root (hd0,1) kernel /distro/vmlinuz initrd /distro/initrd.lz vmlinuz loads perfectly but it says it can't find initrd.lz on boot. I have also tried to uncompress the image with: unlzma < initrd.lz > initrd.img And, updating the menu.lst file to match; but that doesn't work either. I'm assuming that vmlinuz (linux kernel) loads, fires up the virtual filesystem by creating a ramdisk (initrd), mounts the iso, and launches the installer. Am I missing something here? Update: First, I wanted to say that the accepted answer would have been the best option if I was doing a normal Ubuntu install. Unfortunately, I was installing Linux Mint (which lacks the script needed to make debootstrap work. So the problem I with the above approach was, I was missing the command that vmlinuz (linux kernel) needed to execute to start boot into LiveCD mode. By looking in the /boot/grub/grub.cfg file I found what I was missing. Although this method will work, it requires that the installation files reside on their own partition. I took the easy route and used unetbootin to drop the LiveCD on a usb drive and booted from that. Like I said before. Debootstrap would have been the ideal solution here. Even though I couldn't use it I wrote down the steps it would've taken to use it. Step One: Format sda3 (the partition with the old copy of linux that's being overwritten) I used gparted to format it as ext4 from within the current linux install. How this is done varies based on what tools you prefer to use. Step Two: Mount the newly formatted partition (we'll call the mount ubuntu for simplicity) sudo mkdir /mnt/ubuntu sudo mount -o -loop /dev/sda3 /mnt/ubuntu Step Three: Get debootstrap sudo apt-get install debootstrap Step Four: Mount the install disk (replace ubuntu.iso with the name if your install disk) sudo mkdir /media/cdrom sudo mount -o loop ~/ubuntu.iso /media/cdrom Step Five: Install the OS using debootstrap (replace fiesty with the version you're installing and amd64 with your processor's architecture) sudo debootstrap --arch amd64 fiesty /mnt/ubuntu file:/media/cdrom The settings here varies. 
While I loaded debootstrap using an install ISO, you can also have debootstrap automatically download and install it with a repository link (while most of these repositories contain Debian versions, I'm still not clear as to whether Ubuntu has similar repositories). Here is a list of the Debian package repositories and their mirrors. This is how you'd deploy debootstrap if you were doing it directly from a repository:

    sudo debootstrap --arch amd64 squeeze /mnt/debian http://ftp.us.debian.org/debian

Here's the link that I primarily used to figure this out.

    Read the article

  • Deploying Data Mining Models using Model Export and Import

    - by [email protected]
    In this post, we'll take a look at how Oracle Data Mining facilitates model deployment. After building and testing models, a next step is often putting your data mining model into a production system -- referred to as model deployment. The ability to move data mining model(s) easily into a production system can greatly speed model deployment, and reduce the overall cost. Since Oracle Data Mining provides models as first class database objects, models can be manipulated using familiar database techniques and technology. For example, one or more models can be exported to a flat file, similar to a database table dump file (.dmp). This file can be moved to a different instance of Oracle Database EE, and then imported. All methods for exporting and importing models are based on Oracle Data Pump technology and found in the DBMS_DATA_MINING package. Before performing the actual export or import, a directory object must be created. A directory object is a logical name in the database for a physical directory on the host computer. Read/write access to a directory object is necessary to access the host computer file system from within Oracle Database. For our example, we'll work in the DMUSER schema. First, DMUSER requires the privilege to create any directory. This is often granted through the sysdba account. grant create any directory to dmuser; Now, DMUSER can create the directory object specifying the path where the exported model file (.dmp) should be placed. In this case, on a linux machine, we have the directory /scratch/oracle. CREATE OR REPLACE DIRECTORY dmdir AS '/scratch/oracle'; If you aren't sure of the exact name of the model or models to export, you can find the list of models using the following query: select model_name from user_mining_models; There are several options when exporting models. We can export a single model, multiple models, or all models in a schema using the following procedure calls: BEGIN   DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODEL.dmp','dmdir','name =''MY_DT_MODEL'''); END; BEGIN   DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODELS.dmp','dmdir',              'name IN (''MY_DT_MODEL'',''MY_KM_MODEL'')'); END; BEGIN   DBMS_DATA_MINING.EXPORT_MODEL ('ALL_DMUSER_MODELS.dmp','dmdir'); END; A .dmp file can be imported into another schema or database using the following procedure call, for example: BEGIN   DBMS_DATA_MINING.IMPORT_MODEL('MY_MODELS.dmp', 'dmdir'); END; As with models from any data mining tool, when moving a model from one environment to another, care needs to be taken to ensure the transformations that prepare the data for model building are matched (with appropriate parameters and statistics) in the system where the model is deployed. Oracle Data Mining provides automatic data preparation (ADP) and embedded data preparation (EDP) to reduce, or possibly eliminate, the need to explicitly transport transformations with the model. In the case of ADP, ODM automatically prepares the data and includes the necessary transformations in the model itself. In the case of EDP, users can associate their own transformations with attributes of a model. These transformations are automatically applied when applying the model to data, i.e., scoring. Exporting and importing a model with ADP or EDP results in these transformations being immediately available with the model in the production system.
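    After an import it is worth confirming that the models actually arrived in the target schema. A simple check (my addition, using the same catalog view the post already queries) is:

        -- Run in the destination schema after DBMS_DATA_MINING.IMPORT_MODEL completes
        SELECT model_name, mining_function, algorithm, creation_date
          FROM user_mining_models
         ORDER BY creation_date DESC;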

    Read the article

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
    Recently I came across an article on DB2 about using UNION instead of OR, so I thought of carrying out some research on SQL Server into which scenarios UNION is optimal in and which scenarios OR would be best. I will analyze this with a few scenarios using samples taken from the AdventureWorks database Sales.SalesOrderDetail table.

    Scenario 1: Selecting all columns. We are going to select all columns, and there is a non-clustered index on the ProductID column.

        --Query 1 : OR
        SELECT * FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998
           OR ProductID = 875 OR ProductID = 976 OR ProductID = 874

        --Query 2 : UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874

    Query 1 is using OR and the latter is using UNION. Let us analyze the execution plans for these queries (Query 1 plan, Query 2 plan). As expected, Query 1 will use a Clustered Index Scan, but Query 2 uses all sorts of things. In this case, since it is using multiple CPUs, you might have CXPACKET waits as well. Let's look at the profiler results for these two queries:

                 CPU   Reads   Duration   Row Count
        OR        78    1252        389        3854
        UNION    250    7495        660        3854

    You can see from the above table that the UNION query is not performing as well as the OR query, though both are returning the same number of rows (3854). These results indicate that, for the above scenario, OR should be used.

    Scenario 2: Non-clustered and clustered index columns only.

        --Query 1 : OR
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998
           OR ProductID = 875 OR ProductID = 976 OR ProductID = 874
        GO

        --Query 2 : UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874
        GO

    This time we will be selecting only index columns, which means these queries will avoid a data page lookup. As in the previous case we will analyze the execution plans (Query 1 plan, Query 2 plan). Again, Query 2 is more complex than Query 1. Let us look at the profiler analysis:

                 CPU   Reads   Duration   Row Count
        OR         0      24        208        3854
        UNION      0      38        193        3854

    In this analysis, there is only a slight difference between OR and UNION.

    Scenario 3: Selecting all columns for different fields. Up to now, we were using only one column (ProductID) in the WHERE clause. What if we have two columns in the WHERE clause, and let us assume both are covered by non-clustered indexes?

        --Query 1 : OR
        SELECT * FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

        --Query 2 : UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

    As we can see from the execution plans (Query 1 plan, Query 2 plan), the query plan for the second query has improved. Let us see the profiler results:

                 CPU   Reads   Duration   Row Count
        OR        47    1278        443        1228
        UNION     31    1334        400        1228

    So in this case too, there is little difference between OR and UNION.

    Scenario 4: Selecting clustered index columns for different fields. Now let us go only with clustered indexes:

        --Query 1 : OR
        SELECT * FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

        --Query 2 : UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

    Now both execution plans are almost identical, except that an additional Stream Aggregate is used in the first query. This means UNION has an advantage over OR in this scenario. Let us see the profiler results for these queries again:

                 CPU   Reads   Duration   Row Count
        OR         0     319        366        1228
        UNION      0      50        193        1228

    Now see the differences: in this scenario UNION has somewhat of an advantage over OR.

    Conclusion: Using UNION or OR depends on the scenario you are faced with, so you need to do your analysis before selecting the appropriate method. Also, the above four scenarios are not an exhaustive list of scenarios; I selected them for broad description purposes only.

    Read the article

  • C++ problem with assimp 3D model loader

    - by Brendan Webster
    In my game I have model loading functions for the Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a separate projection matrix. I have looked over my code over and over again, but I probably keep on missing the obvious reason why this is happening. Here is an image of my game: it's simply a 6-sided cube, but it's off big time! Here are my code snippets for rendering the cube to the screen:

        void C_MediaLoader::display(void)
        {
            float tmp;

            glTranslatef(0, 0, 0);

            // rotate it around the y axis
            glRotatef(angle, 0.f, 0.f, 1.f);
            glColor4f(1, 1, 1, 1);

            // scale the whole asset to fit into our view frustum
            tmp = scene_max.x - scene_min.x;
            tmp = aisgl_max(scene_max.y - scene_min.y, tmp);
            tmp = aisgl_max(scene_max.z - scene_min.z, tmp);
            tmp = (1.f / tmp);
            glScalef(tmp / 5, tmp / 5, tmp / 5);

            // center the model
            //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z );

            // if the display list has not been made yet, create a new one and
            // fill it with scene contents
            if (scene_list == 0)
            {
                scene_list = glGenLists(1);
                glNewList(scene_list, GL_COMPILE);
                // now begin at the root node of the imported data and traverse
                // the scenegraph by multiplying subsequent local transforms
                // together on GL's matrix stack.
                recursive_render(scene, scene->mRootNode);
                glEndList();
            }

            glCallList(scene_list);
        }

        void C_MediaLoader::recursive_render(const struct aiScene *sc, const struct aiNode* nd)
        {
            unsigned int i;
            unsigned int n = 0, t;
            struct aiMatrix4x4 m = nd->mTransformation;

            // update transform
            aiTransposeMatrix4(&m);
            glPushMatrix();
            glMultMatrixf((float*)&m);

            // draw all meshes assigned to this node
            for (; n < nd->mNumMeshes; ++n)
            {
                const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]];

                apply_material(sc->mMaterials[mesh->mMaterialIndex]);

                if (mesh->mNormals == NULL)
                {
                    glDisable(GL_LIGHTING);
                }
                else
                {
                    glEnable(GL_LIGHTING);
                }

                for (t = 0; t < mesh->mNumFaces; ++t)
                {
                    const struct aiFace* face = &mesh->mFaces[t];
                    GLenum face_mode;

                    switch (face->mNumIndices)
                    {
                        case 1: face_mode = GL_POINTS; break;
                        case 2: face_mode = GL_LINES; break;
                        case 3: face_mode = GL_TRIANGLES; break;
                        default: face_mode = GL_POLYGON; break;
                    }

                    glBegin(face_mode);
                    for (i = 0; i < face->mNumIndices; i++)
                    {
                        int index = face->mIndices[i];
                        if (mesh->mColors[0] != NULL)
                            glColor4fv((GLfloat*)&mesh->mColors[0][index]);
                        if (mesh->mNormals != NULL)
                            glNormal3fv(&mesh->mNormals[index].x);
                        glVertex3fv(&mesh->mVertices[index].x);
                    }
                    glEnd();
                }
            }

            // draw all children
            for (n = 0; n < nd->mNumChildren; ++n)
            {
                recursive_render(sc, nd->mChildren[n]);
            }

            glPopMatrix();
        }

    Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.

    Read the article

  • Oracle celebrates a successful Oracle CloudWorld in Bogotá

    - by yaldahhakim
    written by: Diana Tamayo Tovar

    Oracle CloudWorld Bogotá began with scattered showers, rain and strong winds, inviting Colombians to spend a whole day in the social, mobile and complete world of Oracle Cloud. The event took place on November 6th with 807 attendees, 15 media representatives and 65 partners, who gathered to share the business value of Cloud along with Oracle executives and Colombian market leaders. Line-of-business leaders in sales and marketing, customer service and support, HR and talent management, and finance and operations, shared their ideas with Colombian customers, giving them a chance to learn, discover and engage with the tools, trends and concepts of Cloud. The highlights of the event included the presence of keynote speakers such as Bob Evans, Chief Communications Officer, and a customer testimonial session with top business leaders from insurance, finances, retail, communications and health Colombian industries, who shared their innovation experiences and success stories on workforce empowerment, talent management, cloud security, social engagement and productivity, providing best case scenarios on how Oracle has helped them transform their business with technologies like cloud, social collaboration and mobile applications. The keynote session was preceded by a customer success story from one of the largest virtual network operators in the country, providing an interesting case study of mobile banking innovation and a great customer testimonial of the importance of cross industry strategies and cloud technology. The event provided five different tracks on the main trends of how companies communicate and engage with different audiences, providing a different perspective on the importance of empowering brands through their customers, trusting and investing in technology for growth, while Oracle University shared their knowledge with "Oracle Cloud Fundamentals", a training lesson regarding Java Cloud, Database Cloud and other Oracle Cloud product technologies and solutions. The rainy day scenario included sideshows of aerial acrobatics and speed painting performances to recreate the environment of modern and flexible Cloud Solutions in a colorful and creative way. Oracle CloudWorld Bogotá was a great opportunity to expose invalid cloud myths and the main concerns of the Colombian customers towards cloud, considering IDC Latin America studies stating that 93% of Colombian business leaders are interested in cloud but only 47% understand its business value. Spending a day in the cloud with 6 demoground stations, conference sessions, interesting case studies and customer testimonials will surely widen the endless market opportunities for Colombian customers, leaving them amazed with how Oracle Cloud works towards integration with other environments, non-Oracle applications, social media and mobile devices with bulletproof security infrastructure.

    Read the article

  • A tiny Utility to recycle an IIS Application Pool

    - by Rick Strahl
    In the last few weeks I've annoyingly been having problems with an area on my Web site. It's basically ancient articles that are using ASP classic pages, and for reasons unknown ASP classic locks up on these pages frequently. It's not an individual page, but ALL ASP classic pages lock up. Ah yes, gotta love old tech gone bad. It's not super critical since the content is really old, but still a hassle since it's linked content that still gets quite a bit of traffic. When it happens all ASP classic in that AppPool dies. I've been having a hard time tracking this one down - I suspect an errant COM object.

    I have a Web Monitor running on the server that's checking for failures, and while the monitor can detect the failures when the timeouts occur, I didn't have a good way to just restart that particular application pool. I started putzing around with PowerShell, but - as so often seems the case - I can never get the PowerShell syntax right - I just don't use it enough and have to dig out cheat sheets etc. In any case, after about 20 minutes of that I decided to just create a small .NET Console Application that does the trick instead, and in a few minutes I had this:

        using System;
        using System.Collections.Generic;
        using System.Text;
        using System.DirectoryServices;

        namespace RecycleApplicationPool
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string appPoolName = "DefaultAppPool";
                    string machineName = "LOCALHOST";

                    if (args.Length > 0)
                        appPoolName = args[0];
                    if (args.Length > 1)
                        machineName = args[1];

                    string error = null;
                    DirectoryEntry root = null;
                    try
                    {
                        Console.WriteLine("Restarting Application Pool " + appPoolName + " on " + machineName + "...");
                        root = new DirectoryEntry("IIS://" + machineName + "/W3SVC/AppPools/" + appPoolName);
                        Console.WriteLine(root.InvokeGet("Name"));
                        root.Invoke("Recycle");
                        Console.WriteLine("Application Pool recycling complete...");
                    }
                    catch (Exception ex)
                    {
                        error = "Error: Unable to access AppPool: " + ex.Message;
                    }

                    if (!string.IsNullOrEmpty(error))
                    {
                        Console.WriteLine(error);
                        return;
                    }
                }
            }
        }

    To run it you basically provide the name of the ApplicationPool and optionally a machine name if it's not on the local box.

        RecycleApplicationPool.exe "WestWindArticles"

    And off it goes. What's nice about AppPool recycling versus doing a full IISRESET is that it only affects the AppPool, and more importantly AppPool recycles happen in a staggered fashion - the existing instance isn't shut down immediately until requests finish, while a new instance is fired up to handle new requests. So, now I can easily plug this executable into my West Wind Web Monitor as an action to take when the site is not responding or timing out, which is a big improvement over hanging for an unspecified amount of time. I'm posting this fairly trivial bit of code just in case somebody (maybe myself a few months down the road) is searching for ApplicationPool recycling code. It's clearly trivial, but I've written batch files for this a bunch of times before, and actually having a small utility around without having to worry whether PowerShell is installed and configured right is actually an improvement. Next time I think about using PowerShell remind me that it's just easier to just build a small .NET Console app, 'k?
    :-)

    Resources: Download Executable and VS Project

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in IIS7, .NET, Windows
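    For comparison with the DirectoryServices-based utility above, here is a minimal sketch that performs the same recycle through the Microsoft.Web.Administration ServerManager API (IIS 7 and later). This is not part of the original utility: it assumes a reference to Microsoft.Web.Administration.dll and administrative rights, and the default pool name is a placeholder.

    using System;
    using Microsoft.Web.Administration;

    class RecycleSketch
    {
        static void Main(string[] args)
        {
            // Placeholder pool name; pass the real one as the first argument.
            string appPoolName = args.Length > 0 ? args[0] : "DefaultAppPool";

            using (var manager = new ServerManager())
            {
                ApplicationPool pool = manager.ApplicationPools[appPoolName];
                if (pool == null)
                {
                    Console.WriteLine("Application pool not found: " + appPoolName);
                    return;
                }

                // Recycle() performs an overlapped recycle: a new worker process
                // starts for new requests while the old one drains existing ones.
                pool.Recycle();
                Console.WriteLine("Recycled application pool: " + appPoolName);
            }
        }
    }

    The practical difference is that this route does not rely on the IIS 6 metabase compatibility components that the IIS:// directory path uses.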

    Read the article

  • SharePoint 2010 Hosting :: Setting Default Column Values on a Folder Programmatically

    - by mbridge
    The reason I write this post today is because my initial searches on the Internet provided me with nothing on the topic. I was hoping to find a reference to the SDK but I didn't have any luck. What I want to do is set a default column value on an existing folder so that new items in that folder automatically inherit that value. It's actually pretty easy to do once you know what the class is called in the API. I did some digging and discovered that class is MetadataDefaults. It can be found in Microsoft.Office.DocumentManagement.dll. Note: if you can't find it in the GAC, this DLL is in the 14/CONFIG/BIN folder and not the 14/ISAPI folder. Add a reference to this DLL in your project. In my case, I am building a console application, but you might put this in an event receiver or workflow. In my example today, I have simple custom folder and document content types. I have one shared site column called DocumentType. I have a document library with each of these content types registered. In my document library, I have a folder named Test and I want to set its default column values using code. Here is what it looks like. Start by getting a reference to the list in question. This assumes you already have an SPWeb object. In my case I have created it and it is called site.

    SPList customDocumentLibrary = site.Lists["CustomDocuments"];

    You then pass the SPList object to the MetadataDefaults constructor.

    MetadataDefaults columnDefaults = new MetadataDefaults(customDocumentLibrary);

    Now I just need to get my SPFolder object in question and pass it to the method SetFieldDefault. This takes an SPFolder object, a string with the name of the SPField to set the default on, and finally the value of the default (in my case "Memo").

    SPFolder testFolder = customDocumentLibrary.RootFolder.SubFolders["Test"];
    columnDefaults.SetFieldDefault(testFolder, "DocumentType", "Memo");

    You can set multiple defaults here. When you're done, you will need to call .Update().

    columnDefaults.Update();

    Here is what it all looks like together.

    using (SPSite siteCollection = new SPSite("http://sp2010/sites/ECMSource"))
    {
        using (SPWeb site = siteCollection.OpenWeb())
        {
            SPList customDocumentLibrary = site.Lists["CustomDocuments"];
            MetadataDefaults columnDefaults = new MetadataDefaults(customDocumentLibrary);
            SPFolder testFolder = customDocumentLibrary.RootFolder.SubFolders["Test"];
            columnDefaults.SetFieldDefault(testFolder, "DocumentType", "Memo");
            columnDefaults.Update();
        }
    }

    You can verify that your property was set correctly on the Change Default Column Values page in your list. This is something that I could see used a lot in an ItemEventReceiver attached to a folder to do metadata inheritance. Whenever the user changes the value of the folder's property, you could have it update the default. Your code might look something like columnDefaults.SetFieldDefault(properties.ListItem.Folder, "MyField", properties.ListItem[" This is a great way to keep the child items updated any time the value of a folder's property changes. I'm also wondering if this can be done via CAML. I tried saving a site template, but after importing I got an error on the default values page. I'll keep looking and let you know what I find out.
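    To illustrate the event receiver idea mentioned at the end of the post, here is a minimal, hedged sketch. The receiver class name is made up, and it assumes the receiver is attached to the document library and that the folder item itself carries the DocumentType value.

    using System;
    using Microsoft.Office.DocumentManagement;
    using Microsoft.SharePoint;

    // Hypothetical receiver: whenever a folder item is edited, copy its own
    // DocumentType value into the folder's default column value so that new
    // child items inherit it.
    public class FolderDefaultsReceiver : SPItemEventReceiver
    {
        public override void ItemUpdated(SPItemEventProperties properties)
        {
            base.ItemUpdated(properties);

            SPListItem item = properties.ListItem;
            if (item == null || item.Folder == null)
                return; // only react to folder items

            MetadataDefaults defaults = new MetadataDefaults(item.ParentList);
            defaults.SetFieldDefault(item.Folder, "DocumentType",
                Convert.ToString(item["DocumentType"]));
            defaults.Update();
        }
    }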

    Read the article

  • Twice as long and half as long

    - by PointsToShare
    We are in a project and we hit some snags. What's a snag? An activity that takes longer than expected. Actually it takes longer than the time assigned to it by an over-pressed PM who accepts an impossible timetable and tries his best to make it possible, but I digress (again!). So we have snags and we also have the opposite. Let's call these "cinches". The question is: how does a combination of snags and cinches affect the project timeline? Well, there is no simple answer. It depends on the project's dependencies as we see in the PERT chart. If all the snags are in the critical path and all the cinches are elsewhere then the cinches don't help at all. In fact any snag in the critical path will delay the project. Conversely, a cinch in the critical path will expedite it. A snag outside the critical path might be serious enough to even change the critical path. Thus without the PERT chart, we cannot really tell. Still there is a principle involved - time and speed are non-linear! Twice as long adds a full unit, half as long only takes ½ unit away. Let's just investigate a simple project. It consists of two activities - S and C - each estimated to take a week. Alas, S is a snag and really needs twice the time allotted and - a sigh of relief - C is a cinch and will take half the time allotted, so everything is hunky-dory, or is it? Even here the PERT chart is important. We have 2 cases:

    1: S depends on C (or vice versa), as when the two activities are assigned to one employee. Here the estimated time was 1 + 1 and the actual time was 2 + ½, so we are ½ week late, or 25% late.

    2: S and C are done in parallel. Here the estimated time was 1, but the actual time is 2 - we are a whole week, or 100%, late.

    Let's change the equation a little. S needs 1.5 and C needs .5, so in case 1 we have the loss fully compensated by the gain, but in case 2 we are still behind. There are cases where this really makes no difference. This is when the critical path is not affected and we have enough slack in the other paths to counteract the difference between its snags and cinches - let's call this difference DSC. So if the slack is greater than DSC the project will not suffer. Conclusion: there is no general rule about snags and cinches. We need to examine each case within its project. Still, as we saw in the 4 examples above, the snag is generally more powerful than the cinch. Long live Murphy! That's All Folks
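    As a quick numeric check of the two cases above (my own sketch, using the post's example durations, not part of the original), the serial and parallel arithmetic can be written out like this:

    using System;

    class SnagCinchSketch
    {
        static void Main()
        {
            // Estimated durations (weeks): S and C are each planned at 1 week.
            double estS = 1.0, estC = 1.0;
            // Actual durations: the snag S doubles, the cinch C halves.
            double actS = 2.0, actC = 0.5;

            // Case 1: S and C run serially (one employee) - durations add up.
            double serialEst = estS + estC;            // 2.0 weeks planned
            double serialAct = actS + actC;            // 2.5 weeks actual
            Console.WriteLine("Serial:   {0:P0} late", (serialAct - serialEst) / serialEst);   // 25%

            // Case 2: S and C run in parallel - the longer activity dominates.
            double parallelEst = Math.Max(estS, estC); // 1.0 week planned
            double parallelAct = Math.Max(actS, actC); // 2.0 weeks actual
            Console.WriteLine("Parallel: {0:P0} late", (parallelAct - parallelEst) / parallelEst); // 100%
        }
    }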

    Read the article

  • Optimization and Saving/Loading

    - by MrPlosion1243
    I'm developing a 2D tile based game and I have a few questions regarding it. First I would like to know if this is the correct way to structure my Tile class:

    namespace TileGame.Engine
    {
        public enum TileType
        {
            Air,
            Stone
        }

        class Tile
        {
            TileType type;
            bool collidable;

            static Tile air = new Tile(TileType.Air);
            static Tile stone = new Tile(TileType.Stone);

            public Tile(TileType type)
            {
                this.type = type;
                collidable = true;
            }
        }
    }

    With this method I just say world[y, x] = Tile.Stone and this seems right to me, but I'm not a very experienced coder and would like assistance. Now the reason I doubt this so much is because I like everything to be as optimized as possible and there is a major flaw in this that I need help overcoming. It has to do with saving and loading... well, more on loading actually. The way it's done relies on the principle of casting an enumeration into a byte, which gives you the corresponding number where it's declared in the enumeration. Each TileType is cast as a byte and written out to a file. So TileType.Air would appear as 0 and TileType.Stone would appear as 1 in the file (well, in byte form obviously). Loading in the file is a lot different though, because I can't just loop through all the bytes in the file, cast them as a TileType and assign it:

    for (int x = 0; x < size.X; x++)
    {
        for (int y = 0; y < size.Y; y++)
        {
            world[y, x].Type = (TileType)byteReader.ReadByte();
        }
    }

    This just won't work, presumably because I have to actually say world[y, x] = Tile.Stone as opposed to world[y, x].Type = TileType.Stone. In order to be able to say that I need a gigantic switch case statement (I only have 2 tiles but you could imagine what it would look like with hundreds):

    Tile tile;
    for (int x = 0; x < size.X; x++)
    {
        for (int y = 0; y < size.Y; y++)
        {
            switch (byteReader.ReadByte())
            {
                case 0:
                    tile = Tile.Air;
                    break;
                case 1:
                    tile = Tile.Stone;
                    break;
            }
            world[y, x] = tile;
        }
    }

    Now you can see how unoptimized this is and I don't know what to do. I would really just like to cast the byte as a TileType and use that, but as said before I have to say world[y, x] = Tile.whatever and TileType can't be used this way. So what should I do? I would imagine I need to restructure my Tile class to fit the requirements, but I don't know how I would do that. Please help! Thanks.
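    One common way to avoid the big switch statement, offered here as a hedged suggestion rather than as part of the original question, is to restructure Tile so the shared instances sit in a lookup table keyed by TileType:

    using System;
    using System.Collections.Generic;

    namespace TileGame.Engine
    {
        public enum TileType : byte
        {
            Air,
            Stone
        }

        class Tile
        {
            public TileType Type { get; private set; }

            // Shared, immutable instances - one per tile type.
            public static readonly Tile Air = new Tile(TileType.Air);
            public static readonly Tile Stone = new Tile(TileType.Stone);

            // Lookup table that replaces the switch statement when loading.
            static readonly Dictionary<TileType, Tile> byType =
                new Dictionary<TileType, Tile>
                {
                    { TileType.Air, Air },
                    { TileType.Stone, Stone }
                };

            Tile(TileType type)
            {
                Type = type;
            }

            public static Tile FromByte(byte value)
            {
                return byType[(TileType)value];
            }
        }
    }

    // The loading loop then becomes (assuming size, world and byteReader exist as in the question):
    // for (int x = 0; x < size.X; x++)
    //     for (int y = 0; y < size.Y; y++)
    //         world[y, x] = Tile.FromByte(byteReader.ReadByte());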

    Read the article

  • Terminal Colors Not Working

    - by Matt Fordham
    I am accessing an Ubuntu 10.04.2 LTS server via SSH from OSX. Recently the colors stopped working. I think it happened while I was installing/troubleshooting RVM, but I am not positive. In .bashrc I uncommeneted force_color_prompt=yes, and when I run env | grep TERM I get TERM=xterm-color. But still no colors. Any ideas? Thanks! Here is the output of cat .bashrc # ~/.bashrc: executed by bash(1) for non-login shells. # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc) # for examples # If not running interactively, don't do anything [ -z "$PS1" ] && return # don't put duplicate lines in the history. See bash(1) for more options # ... or force ignoredups and ignorespace HISTCONTROL=ignoredups:ignorespace # append to the history file, don't overwrite it shopt -s histappend # for setting history length see HISTSIZE and HISTFILESIZE in bash(1) HISTSIZE=1000 HISTFILESIZE=2000 # check the window size after each command and, if necessary, # update the values of LINES and COLUMNS. shopt -s checkwinsize # make less more friendly for non-text input files, see lesspipe(1) [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)" # set variable identifying the chroot you work in (used in the prompt below) if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then debian_chroot=$(cat /etc/debian_chroot) fi # set a fancy prompt (non-color, unless we know we "want" color) case "$TERM" in xterm-color) color_prompt=yes;; esac # uncomment for a colored prompt, if the terminal has the capability; turned # off by default to not distract the user: the focus in a terminal window # should be on the output of commands, not on the prompt force_color_prompt=yes if [ -n "$force_color_prompt" ]; then if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then # We have color support; assume it's compliant with Ecma-48 # (ISO/IEC-6429). (Lack of such support is extremely rare, and such # a case would tend to support setf rather than setaf.) color_prompt=yes else color_prompt= fi fi if [ "$color_prompt" = yes ]; then PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ ' else PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ ' fi unset color_prompt force_color_prompt # If this is an xterm set the title to user@host:dir case "$TERM" in xterm*|rxvt*) PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1" ;; *) ;; esac # enable color support of ls and also add handy aliases if [ -x /usr/bin/dircolors ]; then test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)" alias ls='ls --color=auto' alias dir='dir --color=auto' alias vdir='vdir --color=auto' alias grep='grep --color=auto' alias fgrep='fgrep --color=auto' alias egrep='egrep --color=auto' fi # some more ls aliases alias ll='ls -alF' alias la='ls -A' alias l='ls -CF' # Alias definitions. # You may want to put all your additions into a separate file like # ~/.bash_aliases, instead of adding them here directly. # See /usr/share/doc/bash-doc/examples in the bash-doc package. if [ -f ~/.bash_aliases ]; then . ~/.bash_aliases fi # enable programmable completion features (you don't need to enable # this, if it's already enabled in /etc/bash.bashrc and /etc/profile # sources /etc/bash.bashrc). if [ -f /etc/bash_completion ] && ! shopt -oq posix; then . /etc/bash_completion fi [[ -s "/usr/local/rvm/scripts/rvm" ]] && . "/usr/local/rvm/scripts/rvm"

    Read the article

  • 65536% Autogrowth!

    - by Tara Kizer
    Twice a year, we move our production systems to our disaster recovery site. Last Saturday night was one of those days. There are about 50 SQL Server databases to be moved to the DR site, which is done via database mirroring. It takes only a few seconds to fail over, but some databases have a bit more involved work such as setting up replication. Everything went relatively smoothly, but we encountered a weird bug on our most mission critical system. After everything was successfully failed over to the DR site, it was noticed that mirroring was in a suspended state on one of the databases. We thought we had run into a SQL Server 2005 bug that we had been encountering and were working with Microsoft on a fix. Microsoft did fix it in both SQL Server 2005 service pack 3 cumulative update package 13 and service pack 4 cumulative update package 2; however, SP3 CU13 and SP4 both recently failed on this system so we were not patched yet with the bug fix. As the suspended state was causing us issues with replication, we dropped mirroring. We then noticed we had 10MB of free disk space on the mount point where the principal's data files are stored. I knew something went amiss as this system should have at least 150GB free on that mount point. I immediately checked the main database's data file and was shocked to see an autogrowth size of 65536%. The data file autogrew right before mirroring went into the suspended state. 65536%!

    I didn't have a lot of time to research if this autogrowth problem was a known SQL Server bug, so I deferred that research to today. A quick Google search yielded no results, but emphasis on "quick". I checked our performance system, which was recently restored with a copy of the affected production database, and found the autogrowth setting to be 512MB. So this autogrowth bug was encountered sometime in the last two weeks. On February 26th, we had attempted to install SQL 2005 SP4 on production, however it had failed (PSS case open with Microsoft). I suspected that the SP4 failure was somehow related to this autogrowth bug, although that turned out not to be the case.

    I then tweeted (@TaraKizer) about this problem to see if the SQL Server community (#sqlhelp) had any insights. It seems several people have either heard of this bug or encountered it. Aaron Bertrand (blog|twitter) referred me to this Connect item. Our affected database originated on SQL Server 2000 and was upgraded to SQL Server 2005 in 2007. Back on SQL Server 2000, we were using the default file growth setting, which was a percentage. Sometime after the 2005 upgrade is when we changed it to 512MB. Our situation seemed to fit the bug Aaron referred to me, so now the question was whether Microsoft had fixed it yet.

    I received a reply to my tweet from Amit Banerjee (twitter) that it had been fixed in SP3 CU1 (KB958004). My affected system is SP3 CU8, so I was initially confused why we had encountered the bug. Because I don't read things fully, I had missed that there are additional steps you have to follow after applying the bug fix. Amit set me straight. Although you can read this information in the KB article, I will also copy it here in case you are as lazy as me and miss the most important section of it (although if you are as lazy as me, you won't have read this far down my blog post): This hotfix will prevent only future occurrences of this problem.
    For example, if you restore a database from SQL Server 2000 to a SQL Server 2005 instance that contains this hotfix, this problem will not occur. However, if you already have a database that is affected by this problem, you must follow these steps to resolve this problem manually:
    1. Apply this hotfix.
    2. Set the file growth settings for the affected files to percentage settings, and then set the settings back to megabyte settings.
    3. Take the database offline, and then bring it back online.
    4. Verify that the values of the is_percent_growth column are correct in the sys.database_files system table and in the sys.master_files system table (see the sketch after this list).
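    As a quick way to script the verification in step 4 (a sketch of my own, not from the KB article; the connection string and database name are placeholders), you can read growth and is_percent_growth from sys.database_files:

    using System;
    using System.Data.SqlClient;

    class GrowthCheckSketch
    {
        static void Main()
        {
            // Placeholder connection string - point it at the affected database.
            const string connectionString =
                "Server=localhost;Database=MyDatabase;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // growth is a number of 8 KB pages when is_percent_growth = 0,
                // and a percentage when is_percent_growth = 1.
                const string sql =
                    "SELECT name, growth, is_percent_growth FROM sys.database_files";

                using (var command = new SqlCommand(sql, connection))
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: growth={1}, is_percent_growth={2}",
                            reader.GetString(0), reader.GetInt32(1), reader.GetBoolean(2));
                    }
                }
            }
        }
    }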

    Read the article

  • What C++ coding standard do you use?

    - by gablin
    For some time now, I've been unable to settle on a coding standard and use it concistently between projects. When starting a new project, I tend to change some things around (add a space there, remove a space there, add a line break there, an extra indent there, change naming conventions, etc.). So I figured that I might provide a piece of sample code, in C++, and ask you to rewrite it to fit your standard of coding. Inspiration is always good, I say. ^^ So here goes: #ifndef _DERIVED_CLASS_H__ #define _DERIVED_CLASS_H__ /** * This is an example file used for sampling code layout. * * @author Firstname Surname */ #include <stdio> #include <string> #include <list> #include "BaseClass.h" #include "Stuff.h" /** * The DerivedClass is completely useless. It represents uselessness in all its * entirety. */ class DerivedClass : public BaseClass { //////////////////////////////////////////////////////////// // CONSTRUCTORS / DESTRUCTORS //////////////////////////////////////////////////////////// public: /** * Constructs a useless object with default settings. * * @param value * Is never used. * @throws Exception * If something goes awry. */ DerivedClass (const int value) : uselessSize_ (0) {} /** * Constructs a copy of a given useless object. * * @param object * Object to copy. * @throws OutOfMemoryException * If necessary data cannot be allocated. */ ItemList (const DerivedClass& object) {} /** * Destroys this useless object. */ ~ItemList (); //////////////////////////////////////////////////////////// // PUBLIC METHODS //////////////////////////////////////////////////////////// public: /** * Clones a given useless object. * * @param object * Object to copy. * @return This useless object. */ DerivedClass& operator= (const DerivedClass& object) { stuff_ = object.stuff_; uselessSize_ = object.uselessSize_; } /** * Does absolutely nothing. * * @param useless * Pointer to useless data. */ void doNothing (const int* useless) { if (useless == NULL) { return; } else { int womba = *useless; switch (womba) { case 0: cout << "This is output 0"; break; case 1: cout << "This is output 1"; break; case 2: cout << "This is output 2"; break; default: cout << "This is default output"; break; } } } /** * Does even less. */ void doEvenLess () { int mySecret = getSecret (); int gather = 0; for (int i = 0; i < mySecret; i++) { gather += 2; } } //////////////////////////////////////////////////////////// // PRIVATE METHODS //////////////////////////////////////////////////////////// private: /** * Gets the secret value of this useless object. * * @return A secret value. */ int getSecret () const { if ((RANDOM == 42) && (stuff_.size() > 0) || (1000000000000000000 > 0) && true) { return 420; } else if (RANDOM == -1) { return ((5 * 2) + (4 - 1)) / 2; } int timer = 100; bool stopThisMadness = false; while (!stopThisMadness) { do { timer--; } while (timer > 0); stopThisMadness = true; } } //////////////////////////////////////////////////////////// // FIELDS //////////////////////////////////////////////////////////// private: /** * Don't know what this is used for. */ static const int RANDOM = 42; /** * List of lists of stuff. */ std::list <Stuff> stuff_; /** * Specifies the size of this object's uselessness. */ size_t uselessSize_; }; #endif

    Read the article

  • MVC Portable Areas &ndash; Static Files as Embedded Resources

    - by Steve Michelotti
    This is the third post in a series related to build and deployment considerations as I’ve been exploring MVC Portable Areas: #1 – Using Web Application Project to build portable areas #2 – Conventions for deploying portable area static files #3 – Portable area static files as embedded resources In the last post, I walked through a convention for managing static files.  In this post I’ll discuss another approach to manage static files (e.g., images, css, js, etc.).  With this approach, you *also* compile the static files as embedded resources into the assembly similar to the *.aspx pages. Once again, you can set this to happen automatically by simply modifying your *.csproj file to include the desired extensions so you don’t have to remember every time you add a file: 1: <Target Name="BeforeBuild"> 2: <ItemGroup> 3: <EmbeddedResource Include="**\*.aspx;**\*.ascx;**\*.gif;**\*.css;**\*.js" /> 4: </ItemGroup> 5: </Target> We now need a reliable way to serve up these static files that are embedded in the assembly. There are a couple of ways to do this but one way is to simply create a Resource controller whose job is dedicated to doing this. 1: public class ResourceController : Controller 2: { 3: public ActionResult Index(string resourceName) 4: { 5: var contentType = GetContentType(resourceName); 6: var resourceStream = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName); 7:   8: return this.File(resourceStream, contentType); 9: return View(); 10: } 11:   12: private static string GetContentType(string resourceName) 13: { 14: var extention = resourceName.Substring(resourceName.LastIndexOf('.')).ToLower(); 15: switch (extention) 16: { 17: case ".gif": 18: return "image/gif"; 19: case ".js": 20: return "text/javascript"; 21: case ".css": 22: return "text/css"; 23: default: 24: return "text/html"; 25: } 26: } 27: } In order to use this controller, we need to make sure we’ve registered the route in our portable area registration (shown in lines 5-6): 1: public class WidgetAreaRegistration : PortableAreaRegistration 2: { 3: public override void RegisterArea(System.Web.Mvc.AreaRegistrationContext context, IApplicationBus bus) 4: { 5: context.MapRoute("ResourceRoute", "widget1/resource/{resourceName}", 6: new { controller = "Resource", action = "Index" }); 7:   8: context.MapRoute("Widget1", "widget1/{controller}/{action}", new 9: { 10: controller = "Home", 11: action = "Index" 12: }); 13:   14: RegisterTheViewsInTheEmbeddedViewEngine(GetType()); 15: } 16:   17: public override string AreaName 18: { 19: get { return "Widget1"; } 20: } 21: } In my previous post, we relied on a custom Url helper method to find the actual physical path to the static file like this: 1: <img src="<%: Url.AreaContent("/images/arrow.gif") %>" /> Hello World! However, since we are now embedding the files inside the assembly, we no longer have to worry about the physical path. We can change this line of code to this: 1: <img src="<%: Url.Resource("Widget1.images.arrow.gif") %>" /> Hello World! Note that I had to fully quality the resource name (with namespace and physical location) since that is how .NET assemblies store embedded resources. 
    I also created my own Url helper method called Resource which looks like this:

    1: public static string Resource(this UrlHelper urlHelper, string resourceName)
    2: {
    3:     var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];
    4:     return urlHelper.Action("Index", "Resource", new { resourceName = resourceName, area = areaName });
    5: }

    This method gives us the convenience of not having to know how to construct the URL, just allowing us to refer to the resource name. The resulting HTML for the image tag is:

    1: <img src="/widget1/resource/Widget1.images.arrow.gif" />

    so we can always request any image from the browser directly. This is almost analogous to the WebResource.axd file, but for MVC. What is interesting though is that we can encapsulate each one of these so that each area can have its own set of resources, and they are easily distinguished because the area name is the first segment of the route. This makes me wonder if something like this ResourceController should be baked into portable areas itself. I'm definitely interested if anyone has any opinions on it or has taken alternative approaches.

    Read the article

  • OEG11gR2 integration with OES11gR2 Authorization with condition

    - by pgoutin
    Introduction

    This OES use-case was originally defined by Subbu Devulapalli (http://accessmanagement.wordpress.com/). Based on this OES museum use-case, I have developed an OEG11gR2 policy able to deal with OES authorization with condition. From an OEG point of view, the way to deal with an OES condition is to provide some Environmental / Context Attributes along with the OES request.

    Museum Use-Case

    All paintings in the museum have security sensors; an alarm goes off when a person comes too close to a painting. The employee designated for maintenance needs to use their ID and disable the alarm before maintenance. You are the Security Administrator for the museum and you have been tasked with creating authorization policies to manage authorization for different paintings. Your first task is to understand how paintings are organized. Asking around, you are surprised to see that there is no formal process in place, so you need to start from scratch. The museum tracks the following attributes for each painting:
    1. Name of the work
    2. Painter
    3. Condition (good/poor)
    4. Cost

    You compile the list of paintings:

    Name of Painting   Painter             Paint Condition   Cost
    Mona Lisa          Leonardo da Vinci   Good              100
    Magi               Leonardo da Vinci   Poor              40
    Starry Night       Vincent Van Gogh    Poor              75
    Still Life         Vincent Van Gogh    Good              25

    Being a software geek who doesn't (yet) understand art, you feel that the price (or insurance price) of a painting is the most important criterion. So you feel that, based on years of experience, employees can be tasked with maintaining different paintings. You decide that paintings worth over 50 should only be handled by employees with over 20 years of experience, and employees with less than 10 years of experience should not handle any painting. Let us start with policy modeling. All paintings have a common set of attributes and actions, so it will be good to have them under a single Resource Type. Based on this resource type we will create the actual resources. So our high level model is:
    1) Resource Type: Painting, which has the action manage and the following four attributes: a) Name of the work, b) Painter, c) Condition (good/poor), d) Cost
    2) To keep things simple let's use the painting name for the Resource name (in the real world you would try to use some identifier which is unique, because in the future we may end up with more than one painting which has the same name)
    3) Create Resources based on the previous table
    4) Create an identity attribute Experience (Integer)
    5) Create the following authorization policies: a) Allow employees with over 20 years of experience to access all paintings, b) Allow employees with 10 – 20 years of experience to access paintings which cost less than 50, c) Deny access to all paintings for employees with less than 10 years of experience

    OES Authorization Configuration

    We need to create 2 authorization policies with specific conditions: a) Allow employees with over 20 years of experience to access all paintings, and b) Allow employees with 10 – 20 years of experience to access paintings which cost less than 50. We don't need an explicit policy for "Deny access to all paintings for employees with less than 10 years of experience", because Oracle Entitlements Server will automatically deny if there is no matching policy.

    OEG Policy

    The OEG policy and the 11g Authorization filter configuration are shown in the original post's screenshots. The ${PAINTING_NAME} and ${USER_EXPERIENCE} variables are initialized by the "Retrieve from the HTTP header" filters for testing purposes. That is to say, under Service Explorer, we need to provide the 2 attributes "Experience" & "Painting" to the OES 11g Authorization filter described above.

    Read the article

  • SOA Community Newsletter May 2014

    - by JuergenKress
    Registration for the Fusion Middleware Summer Camps 2014 is open – register ASAP for one of our bootcamps August 4th – 8th 2014 in Lisbon. Please read the details and prerequisites carefully before you register. We expect that, like in the past, the conference will be booked out soon! If you can't make it to Lisbon, attend our SOA Suite 11c free on-demand Bootcamp or Managing the Complexity of IoT online trainings. With more than 5000 customers, SOA Suite Achieves Significant Customer Adoption and Industry Recognition. Thanks to all our SOA Specialized partners for making our joint SOA customers successful! As a summary of the Industrial SOA series we published the Podcast Show Notes: SOA and Cloud - Where's This Relationship Going? Make sure you use the Oracle Demo Systems for your customer presentations. The demo systems are hosted by Oracle and include complete scenarios based on the latest Middleware version, like the new B2B SOA Suite Demo System! For local presentations without fast internet use the SOA/BPM 11.1.1.7.1 Virtual Machine and Case Management Sample. At our SOA Community Workspace (SOA Community membership required) you can get new IoT presentations for Location Based Offers for Banking & Whitepaper and online Webcast & Utility presentation. In this newsletter you will find many articles about OSB: OSB 11g – A Hands-on Tutorial & Using Split-Joins in OSB Services for parallel processing of messages & OSB, Service Callouts and OQL & Working with Oracle Security Token Service. Thanks for sharing all the additional SOA articles within the community: How to configure Oracle SOA/BPM task auto release & Controlling BPEL process flow at runtime & Upgrading to Oracle SOA Suite 11g PS6 (11.1.1.7)? Do this. & BPEL and BPM's performance monitoring using DMS & SOA 11g - Create RESTful Service In Oracle SOA & Wrong timezone causes TopLink warning in SOA suite. Highlight of the BPM and ACM section is the IDC BPM vendor report. The new bundle patch including the ACM UI is now available. If you want to learn more about ACM, get the ACM training material at our SOA Community Workspace (SOA Community membership required). A great demo for your next BPM presentation is the BPM iPad app. It's simple: Mobile BPM is Not An Option. It's a Necessity. Thanks for sharing all the additional BPM articles within the community: BPM update adds Case Management Web Interface and REST APIs & Implementing deadline functionality with Oracle Adaptive Case Management & BPM 11g Timeout Heuristics & Humantask Assignment: Names and Expressions Assignment via Rules. In our last section, Architecture, it is all about design. Usability is a key factor for customer satisfaction, so it is worth spending some time reading the Simplified User Experience Design Patterns eBook. A great blueprint for your project! See you in Lisbon! To read the newsletter please visit www.tinyurl.com/soaNewsMay2014 (OPN Account required). To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: newsletter,SOA Community newsletter,SOA Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Problem with Assimp 3D model loader

    - by Brendan Webster
    In my game I have model loading functions for Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a seperate projection matrix. I have looked over my code over and over again, but I probably keep on missing the obvious reason why this is happening. Here is an image of my game: It's simply a 6 sided cube, but it's off big time! Here are my code snippets for rendering the cube to the screen: void C_MediaLoader::display(void) { float tmp; glTranslatef(0,0,0); // rotate it around the y axis glRotatef(angle,0.f,0.f,1.f); glColor4f(1,1,1,1); // scale the whole asset to fit into our view frustum tmp = scene_max.x-scene_min.x; tmp = aisgl_max(scene_max.y - scene_min.y,tmp); tmp = aisgl_max(scene_max.z - scene_min.z,tmp); tmp = (1.f / tmp); glScalef(tmp/5, tmp/5, tmp/5); // center the model //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z ); // if the display list has not been made yet, create a new one and // fill it with scene contents if(scene_list == 0) { scene_list = glGenLists(1); glNewList(scene_list, GL_COMPILE); // now begin at the root node of the imported data and traverse // the scenegraph by multiplying subsequent local transforms // together on GL's matrix stack. recursive_render(scene, scene->mRootNode); glEndList(); } glCallList(scene_list); } void C_MediaLoader::recursive_render (const struct aiScene *sc, const struct aiNode* nd) { unsigned int i; unsigned int n = 0, t; struct aiMatrix4x4 m = nd->mTransformation; // update transform aiTransposeMatrix4(&m); glPushMatrix(); glMultMatrixf((float*)&m); // draw all meshes assigned to this node for (; n < nd->mNumMeshes; ++n) { const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]]; apply_material(sc->mMaterials[mesh->mMaterialIndex]); if(mesh->mNormals == NULL) { glDisable(GL_LIGHTING); } else { glEnable(GL_LIGHTING); } for (t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace* face = &mesh->mFaces[t]; GLenum face_mode; switch(face->mNumIndices) { case 1: face_mode = GL_POINTS; break; case 2: face_mode = GL_LINES; break; case 3: face_mode = GL_TRIANGLES; break; default: face_mode = GL_POLYGON; break; } glBegin(face_mode); for(i = 0; i < face->mNumIndices; i++) { int index = face->mIndices[i]; if(mesh->mColors[0] != NULL) glColor4fv((GLfloat*)&mesh->mColors[0][index]); if(mesh->mNormals != NULL) glNormal3fv(&mesh->mNormals[index].x); glVertex3fv(&mesh->mVertices[index].x); } glEnd(); } } // draw all children for (n = 0; n < nd->mNumChildren; ++n) { recursive_render(sc, nd->mChildren[n]); } glPopMatrix(); } Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.

    Read the article

< Previous Page | 91 92 93 94 95 96 97 98 99 100 101 102  | Next Page >