Search Results

Search found 71953 results on 2879 pages for 'work environment'.


  • Linux Mint 8 LXDE CE Review: LXDE Done Right

    The Linux Critic: "Ever since I got up close and personal with LXDE a few months ago, I've had my eye out for a distro that had a solid instance of LXDE as its default desktop environment. LXDE is getting more press and more attention, but the number of distros that rely on LXDE is still pretty small."

    Read the article

  • Frustrated where I am, but not sure where to go with my career [closed]

    - by Tom Pickles
    I work (3 years now) as a lead developer for a team developing internal tools and websites for a customer account within a large outsourcing company. I'm a self-taught programmer and my previous incarnation was a 3rd line support guy, so I have solid infrastructure knowledge. We use VB.Net/MSSQL/SSIS/SSRS/ASP.NET (n-tier) in house and I have about 8 years of coding experience.
    Without going into too much detail, my boss is very ambitious and uses our team as his footing to get up the ladder. I've been in the team from the start, and the only new devs we have brought in have been people with a bit of VBA/VBScript experience, much to my chagrin, to bolster his empire. It's been a lot of hard work to bring them up to a standard, but there's still a lot for them to learn. This makes my life stressful, as I always get the high-profile/complex project work to do because others simply cannot do it, or it would take them two or three times longer. My boss is always seeking stuff for us to build for people who haven't asked for it, which usually gets thrown to me as I have the most experience and can pick up new APIs (etc.) quicker. He doesn't give us proper requirements, we don't get time to design properly before we code, and he wants us to throw something together ("quick and dirty", as he calls it) so we can get it out ASAP.
    I take pride in my work, so I like to do it properly and make my code clean, maintainable, etc., and I train the other guys in the team to do the same. But we always fall on our faces. The customers we drop the apps on say they don't do what they need (due to the sparse requirements), or my boss doesn't like them or changes the spec, so we have to rework them; it gets drawn out, and it makes us, and me, look and feel like fools. We then get accused by the boss of not being reactive enough to change. I've had enough.
    In order to fill my skills and knowledge gaps, I've been reading Code Complete 2nd Ed (McConnell) and the Head First Design Patterns books. I'm forcing myself to move into C# from VB at home to broaden my horizons. I'm not sure where to go from here. I don't want to code all my life, as I'd like to move into a higher-level design/architect role at some point (I'm 35). Where do I/can I go from here?

    Read the article

  • Google I/O 2012 - Building High Performance Mobile Web Applications

    Presented by Ryan Fioravanti. Learn what it takes to build an HTML5 mobile app that will wow your users. This session will focus on speed, offline support, UI layouts, and the tools necessary to set up a productive development environment. Come to this session if you're looking to make a killer mobile web app that stands out amongst the competition. For all I/O 2012 sessions, go to developers.google.com. (From: GoogleDevelopers, 49:43)

    Read the article

  • How to suppress PHPSESSID in URL for Googlebot?

    - by Roque Santa Cruz
    I use cookie-based sessions, and they work for normal interaction with our site. However, when Googlebot comes crawling, our PHP framework, Yii, needs to append ?PHPSESSID to each URL, which doesn't look that good in the SERPs. Are there any ways to suppress this behavior? PS. I tried to utilize ini_set('session.use_only_cookies', '1');, but it does not work. PPS. To get an impression of the SERPs, they look like this: http://www.google.com/search?q=site:wwwdup.uni-leipzig.de+inurl:jobportal

    Read the article

  • Establishing WebLogic Server HTTPS Trust of IIS Using a Microsoft Local Certificate Authority

    - by user647124
    Everyone agrees that self-signed and demo certificates for SSL and HTTPS should never be used in production and are preferably not used elsewhere. Most self-signed and demo certificates are provided by vendors with the intention that they are used only to integrate within the same environment. In a vendor's perfect world all application servers in a given enterprise are from the same vendor, which makes this lack of interoperability in a non-production environment an advantage. For those of us working in the real world, where not only do we not use a single vendor everywhere but have to make do with self-signed certificates for all but production, testing HTTPS between an IIS ASP.NET service provider and a WebLogic J2EE consumer application can be very frustrating to set up. It was for me, especially having found many blogs and discussion threads where various solutions were described but did not quite work and were all mostly similar but just a little bit different. To save both you and my future self (who always seems to forget the hardest-won lessons) all of the pain and suffering, I am recording the steps that finally worked here for reference and sanity.
    How You Know You Need This
    The first cold clutch of dread that tells you it is going to be a long day is when you attempt to read a WSDL published by IIS into WebLogic over HTTPS and you see the following:
        <Jul 30, 2012 2:51:31 PM EDT> <Warning> <Security> <BEA-090477> <Certificate chain received from myserver.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.>
        weblogic.wsee.wsdl.WsdlException: Failed to read wsdl file from url due to -- javax.net.ssl.SSLKeyException: [Security:090477]Certificate chain received from myserver02.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.
    The above is what started a three-day sojourn into searching for a solution. Even people who had solved it before would tell me how they did it, and then shrug when I demonstrated that the steps did not end in the success they claimed I would experience. Rather than torture you with the details of everything I did that did not work, here is what finally did work.
    Export the Certificates from IE
    First, take the offending WSDL URL and paste it into IE (if you have an internal Microsoft CA, you have IE, even if you don't use it in favor of some other browser). To state the semi-obvious, if you received the error above there is a certificate configured for the IIS host of the service and the SSL port has been configured properly. Otherwise there would be a different error, usually about the site not being found or the connection failing. Once the WSDL loads, to the right of the address bar there will be a lock icon. Click the lock and then click View Certificates in the resulting dialog (if you do not have a lock icon but do have a Certificate Error message, see http://support.microsoft.com/kb/931850 for steps to install the certificate, then continue from the point of finding the lock icon).
    Figure 1: View Certificates in IE
    Next, select the Details tab in the resulting dialog.
    Figure 2: Use Certificate Details to Export Certificate
    Click Copy to File, then Next, then select the Base-64 encoded option for the format.
    Figure 3: Select the Base-64 encoded option for the format
    For the sake of simplicity, I chose to save this to the root of the WebLogic domain. It will work from anywhere, but later you will need to type in the full path rather than just the certificate name if you save it elsewhere.
    Figure 4: Browse to Save Location
    Figure 5: Save the Certificate to the Domain Root for Convenience
    This is the point where I ran into some confusion. Some articles mentioned exporting the entire chain of certificates. This supposedly works for some types of certificates, or if you have a few other tools and the time to learn them. The SSL experts out there already have these tools, know how to use them well, and should not be wasting their time reading this article meant for folks who just want to get things wired up and back to unit testing and development. For the rest of us, the easiest way to make sure things will work is to just export all the links in the chain individually and let WebLogic Server worry about re-assembling them into a chain (which it does quite nicely). While perhaps not the most elegant solution, the multi-step process is easy to repeat and uses only tools that are immediately available and require no learning curve. So…
    Next, go to Tools, then Internet Options, then the Content tab, and click Certificates. Go to the Trusted Root Certification Authorities tab and find the certificate root for your Microsoft CA cert (look for the Issuer of the certificate you exported earlier).
    Figure 6: Trusted Root Certification Authorities Tab
    Export this one the same way as before, with a different name.
    Figure 7: Use a Unique Name for Each Certificate
    Repeat this once more for the Intermediate Certificate tab.
    Import the Certificates to the WebLogic Domain
    Now, open a command prompt, navigate to [WEBLOGIC_DOMAIN_ROOT]\bin and execute setDomainEnv. You should then be in the root of the domain. If not, CD to the domain root. Assuming you saved the certificate in the domain root, execute the following:
        keytool -importcert -alias [ALIAS-1] -trustcacerts -file [FULL PATH TO .CER 1] -keystore truststore.jks -storepass [PASSWORD]
    An example with the variables filled in is:
        keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password
    After several lines of output you will be prompted with:
        Trust this certificate? [no]:
    The correct answer is 'yes' (minus the quotes, of course). You'll know you were successful if the response is:
        Certificate was added to keystore
    If not, check your typing, as that is generally the source of an error at this point. Repeat this for all three of the certificates you exported, changing the [ALIAS-1] and [FULL PATH TO .CER 1] values each time. For example:
        keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password
        keytool -importcert -alias IIS-2 -trustcacerts -file microsftcertRoot.cer -keystore truststore.jks -storepass password
        keytool -importcert -alias IIS-3 -trustcacerts -file microsftcertIntermediate.cer -keystore truststore.jks -storepass password
    In the above we created a new JKS keystore. You can re-use an existing one by changing the name of the JKS file to one you already have and changing the password to the one that matches that JKS file. For the DemoTrust.jks that is included with WebLogic, the password is DemoTrustKeyStorePassPhrase.
    An example here would be:
        keytool -importcert -alias IIS-1 -trustcacerts -file microsoft.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
        keytool -importcert -alias IIS-2 -trustcacerts -file microsoftRoot.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
        keytool -importcert -alias IIS-3 -trustcacerts -file microsoftInter.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
    Whichever keystore you use, you can check your work with:
        keytool -list -keystore truststore.jks -storepass password
    where "truststore.jks" and "password" can be replaced appropriately if necessary. The output will look something like this:
    Figure 8: Output from keytool -list -keystore
    Update the WebLogic Keystore Configuration
    If you used an existing keystore rather than creating a new one, you can restart your WebLogic Server and skip the rest of this section. For those of us who created a new one because those were the instructions we found online…
    Next, we need to tell WebLogic to use the JKS file (truststore.jks) we just created. Log in to the WebLogic Server Administration Console and navigate to Servers > AdminServer > Configuration > Keystores. Scroll down to "Custom Trust Keystore:" and change the value to "truststore.jks", and change the values of "Custom Trust Keystore Passphrase:" and "Confirm Custom Trust Keystore Passphrase:" to the password you used earlier, then save your changes. You will get a nice message similar to the following:
    Figure 9: To Be Safe, Restart Anyways
    The "No restarts are necessary" is somewhat of an exaggeration. If you want to be able to use the keystore, you may need to restart the server(s). To save myself aggravation, I always do. Your mileage may vary.
    Conclusion
    That should get you there. If some of the steps turn out to be unnecessary for your particular situation, I will offer up a semi-apology: the process described above does not take long at all, and even if one step could be dropped from it, it is still much faster than trying to figure this out from other sources.

    Read the article

  • How to fix the "Setting locale failed" error while installing vim?

    - by user211775
    When installing vim through the Software Center, I get this error:
        installArchives() failed:
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = (unset),
            LC_ALL = (unset),
            LANG = "en_IN.ISO8859-1"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        [the same locale warning block repeats three more times]
        Setting up install-info (4.13a.dfsg.1-10ubuntu4) ...
        /etc/environment: line 1: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games: No such file or directory
        dpkg: error processing install-info (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         install-info
        [the "Setting up install-info" block then repeats once more, ending with the same dpkg error]

    Read the article

  • Adding proper THEAD sections to a GridView

    - by Rick Strahl
    I'm working on some legacy code for a customer today and dealing with a page that has my favorite 'friend' on it: a GridView control. The ASP.NET GridView control (and also the older DataGrid control) creates some pretty messed up HTML. One of the more annoying things it does is to generate all rows, including the header, into the <tbody> section of the document rather than into a properly separated <thead> section. Here is typical GridView-generated HTML output:
        <table class="tablesorter blackborder" cellspacing="0" rules="all" border="1" id="Table1" style="border-collapse:collapse;">
            <tr>
                <th scope="col">Name</th>
                <th scope="col">Company</th>
                <th scope="col">Entered</th><th scope="col">Balance</th>
            </tr>
            <tr>
                <td>Frank Hobson</td><td>Hobson Inc.</td>
                <td>10/20/2010 12:00:00 AM</td><td>240.00</td>
            </tr>
            ...
        </table>
    Notice that all content - both the headers and the body of the table - is generated directly under the <table> tag and there's no explicit use of <tbody> or <thead> (or <tfoot> for that matter). When the browser renders this document, some default settings kick in and the DOM tree turns into something like this:
        <table>
            <tbody>
                <tr> <-- header
                <tr> <-- detail row
                <tr> <-- detail row
            </tbody>
        </table>
    Now if you're just rendering the grid server side and you're applying all your styles through CssClass assignments, this isn't much of a problem. However, if you want to style your grid more generically using hierarchical CSS selectors, it gets a lot more tricky to format tables that don't properly delineate headers and body content. Also, many plug-ins and other JavaScript utilities that work on tables require a properly formed table layout, and many of these simply won't work out of the box with a GridView. For example, one of the things I wanted to do for this app is use the jQuery TableSorter plug-in which - not surprisingly - requires table headers to be present in the DOM document. Out of the box, the TableSorter plug-in doesn't work with GridView controls because of the lack of a <thead> section to work on. Luckily, with a little help from some jQuery scripting there's a real easy fix to this problem. Basically, if we know the GridView-generated table has a header in it, code like the following will move the headers from <tbody> to <thead>:
        <script type="text/javascript">
            $(document).ready(function () {
                // Fix up GridView to support THEAD tags
                $("#gvCustomers tbody").before("<thead><tr></tr></thead>");
                $("#gvCustomers thead tr").append($("#gvCustomers th"));
                $("#gvCustomers tbody tr:first").remove();

                $("#gvCustomers").tablesorter({ sortList: [[1, 0]] });
            });
        </script>
    And voila, you have a table that now works with the TableSorter plug-in.
    If you use GridViews a lot you might want something a little more generic, so the following does the same thing but should work more generically on any GridView/DataGrid missing its <thead> tag:
        function fixGridView(tableEl) {
            var jTbl = $(tableEl);
            if (jTbl.find("tbody>tr>th").length > 0) {
                jTbl.find("tbody").before("<thead><tr></tr></thead>");
                jTbl.find("thead tr").append(jTbl.find("th"));
                jTbl.find("tbody tr:first").remove();
            }
        }
    which you can call like this:
        $(document).ready(function () {
            fixGridView($("#gvCustomers"));
            $("#gvCustomers").tablesorter({ sortList: [[1, 0]] });
        });
    Server Side THEAD Rendering [updated from comments 11/21/2010]
    Several commenters pointed out that you can also do this on the server side by using the GridView.HeaderRow.TableSection property to force rendering with a proper table header. I was unaware of this option actually - not exactly an easy one to discover. One issue here is that the timing of this needs to happen during the databinding process, so you need to use an event handler:
        this.gvCustomers.DataBound += (object o, EventArgs ev) =>
        {
            gvCustomers.HeaderRow.TableSection = TableRowSection.TableHeader;
        };
        this.gvCustomers.DataSource = custList;
        this.gvCustomers.DataBind();
    You can apply the same logic for the FooterRow. It's beyond me why this rendering mode isn't the default for a GridView - why would you ever want a table that doesn't use a THEAD section??? But I digress :-)
    I don't use GridViews much anymore, opting for more flexible approaches using ListViews or even plain code-based views or other custom displays that allow more control over layout, but I still see a lot of old code that does use these old clunkers, including my own :) (gulp), and this does make life a little bit easier, especially if you're working with any of the jQuery table-related plug-ins that expect a proper table structure.
    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, jQuery

    Read the article

  • Where to draw the line between development-led security and administration-led security?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections in a piece of software, though they could very well be managed at an environmental level (i.e., the operating system would take care of it). Where would you say you draw the line, and what elements do you factor into your decision?
    Concrete Examples
    User Management is the OS's Responsibility
    Not exactly meant as a security feature, but in a similar vein, Google Chrome used to not allow separate profiles. The invoked reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.
    Disabling Web-Form Fields
    A recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers, and was a welcome feature at the time it was introduced for people who needed to fill in forms often. But it also brought in some security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preference/customization panel, an autocomplete attribute for form or input fields. And this has now been introduced into the upcoming HTML5 standard. For browsers that do not honor this attribute, strange hacks* are offered, like generating unique IDs and names for fields to avoid them being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache and not preventing a password from being stored in it, but instead probably duplicating its occurrences). In this particular case, and others, I'd argue that this is a user setting and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with this feature disabled. But then I'd think that's what security zones are for (in some browsers), or a sign that you need a more secure (and dedicated) environment/account to use these applications.
    * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.
    Questions
    That was a tad long-winded, so I guess my questions are: Would you in general consider it to be the application's (hence, the developer's) responsibility? Where do you draw the line, if not in the "general" case?

    Read the article

  • Can I implement the readers and writers algorithm in OpenMP by replacing counting semaphores with another feature?

    - by DeveloperDon
    After reading about OpenMP and not finding functions to support semaphores, I did an internet search for OpenMP and the readers and writers problem, but found no suitable matches. Is there a general method for replacing counting semaphores in OpenMP with something that it supports? Or is there just a gap in the environment where it does not permit things that are asymmetrical like the third readers and writers problem shown on the following page? http://en.wikipedia.org/wiki/Readers-writers_problem#The_third_readers-writers_problem
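    OpenMP only offers locks (omp_set_lock/omp_unset_lock), critical sections and atomics, so the usual answer is to build the reader/writer counting yourself on top of those primitives. Purely as an illustration of the coordination pattern being asked about - and not as OpenMP code, since OpenMP targets C/C++/Fortran rather than C# - here is a minimal sketch using .NET's ReaderWriterLockSlim, which packages the bookkeeping that counting semaphores are otherwise used to hand-roll:
        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class SharedCounter
        {
            // Allows many concurrent readers but only one writer at a time.
            private static readonly ReaderWriterLockSlim Gate = new ReaderWriterLockSlim();
            private static int _value;

            static int Read()
            {
                Gate.EnterReadLock();              // several readers may hold this simultaneously
                try { return _value; }
                finally { Gate.ExitReadLock(); }
            }

            static void Write(int newValue)
            {
                Gate.EnterWriteLock();             // a writer gets exclusive access
                try { _value = newValue; }
                finally { Gate.ExitWriteLock(); }
            }

            static void Main()
            {
                Task.WaitAll(
                    Task.Run(() => Write(42)),
                    Task.Run(() => Console.WriteLine(Read())),
                    Task.Run(() => Console.WriteLine(Read())));
            }
        }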

    Read the article

  • Java innovation: 8th edition of the Duke's Choice Awards - nominations are open, what are

    Hello, For the 8th consecutive year, the Duke's Choice Awards are being organized. The principle: to reward innovations in the Java world (the ratio of innovation to resources used) across different categories. A few categories from past years: Java Technology in Education, Java Technology for the Environment, Java Technology for the Open Source Community, Java Everywhere!, Java Technology Tools ... The results will probably be announced at JavaOne, September 19-23, 2010. Any predictions? See also:

    Read the article

  • Oracle Sales Cloud Demo environments for partners

    - by Richard Lefebvre
    We are happy to inform our EMEA-based CRM & CX partners that a new process for partners to get access to the Oracle Sales Cloud (Fusion CRM SaaS) demo environment is in place. If you are interested in taking advantage of it, please send a short email to [email protected]. This offer - subject to final approval - is limited to EMEA-based partners who have certified at least one sales and one presales person on Oracle Sales Cloud.

    Read the article

  • Donald Farmer left Microsoft for QlikView

    - by Marco Russo (SQLBI)
    As many of you probably already know, Donald Farmer left Microsoft to join QlikView. There is no doubt that Donald has been "the face of Microsoft BI" in recent years. This news was initially perceived (by me included, I admit) as a possible sign of a lack of confidence in the Microsoft BI vision, but after reading his blog and many other comments from Microsoft people, I can say it is not. This is more a personal choice, looking for a new challenge in a brilliant career in a relatively smaller environment...(read more)

    Read the article

  • [News] Automated tests with Selenium and xUnit

    In this post, Joel Abrahamsson illustrates a concrete use case for the UI testing tool Selenium with C#: "I have really enjoyed playing around with Selenium and I'm quite sure that I will start using it at work as well. I'm also quite happy with how my tests work and how I handle starting Selenium Server, and I hope that this post will prove useful to others as well."
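    For readers who want to try the same combination today, here is a minimal sketch of an xUnit test driving a browser from C#. Note that it uses the newer Selenium WebDriver API rather than the Selenium RC/Server setup described in the post, and the URL and assertion are invented placeholders:
        using OpenQA.Selenium;
        using OpenQA.Selenium.Firefox;
        using Xunit;

        public class HomePageTests
        {
            [Fact]
            public void HomePage_ShowsWelcomeHeading()
            {
                // WebDriver drives a real browser, so this is a UI test, not a unit test.
                using (IWebDriver driver = new FirefoxDriver())
                {
                    driver.Navigate().GoToUrl("http://localhost:8080/"); // hypothetical app under test

                    IWebElement heading = driver.FindElement(By.TagName("h1"));
                    Assert.Contains("Welcome", heading.Text);
                }
            }
        }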

    Read the article

  • Good resource for business development Techniques

    - by Morons
    I work for an IT consulting firm… As I progress in my career, I (like most who work for IT firms) am spending more and more time participating in business development, usually as a technical expert. Can anyone recommend a good resource (or book) on business development, preferably targeting technology businesses? (I am NOT looking for "how to get leads"… I'm looking for "how to conduct a solid sales pitch / demo software" type stuff.)

    Read the article

  • Rapid Planning: Next Generation MRP

    - by john.bermudez
    MRP has been a mainstay of manufacturing systems for 40 years. MRP evolved from simple inventory planning systems to become the heart of the MRPII systems which eventually became ERP. While the applications surrounding it have become broader, more sophisticated and web-based, MRP continues to operate in the loneliness of the Saturday night batch window quietly exploding bills of materials and logging exceptions for hours. During this same 40 years, manufacturing business processes have seen countless changes and improvements including JIT, TQM, Six Sigma, Flow Manufacturing, Lean Manufacturing and Supply Chain Management. Although much logic has been added to MRP to deal with new manufacturing processes, it has not been able to keep up with the real-time pace of today's supply chain. As a result, planners have devised ingenious ways to trick MRP to handle new processes but often need to dump the output into spreadsheets of their own design in the hope of wrestling thousands of exceptions to ground. Oracle's new Rapid Planning application is just what companies still running MRP have been waiting for! The newest member of the Value Chain Planning product line, Rapid Planning is designed to empower planners with comprehensive supply planning that runs online in minutes, not hours. It enables a planner simulate the incremental impact of a new order or re-run an entire plan in a separate sandbox. Rapid Planning does a complete multi-level bill of material explosion like MRP but plans orders considering material and capacity constraints. Considering material and capacity constraints in planning can help you quickly reduce inventory and improve on-time shipments. Rapid Planning is an APS application that leverages years of Oracle development experience and customer feedback. Rather than rely exclusively on black-box heuristics, Rapid Planning is designed to give planners the computing power to use their industry experience and business knowledge to improve MRP. For example, Rapid Planning has a powerful worksheet user interface with built-in query capability that allows the planner to locate the orders she is interested in and use a mass update function to make quick work of large changes. The planner can save these queries and unique user interface to personalize their planning environment. Most importantly, Rapid Planning is designed to do supply planning in today's dynamic supply chain environment. It can be used to supplement MRP or replace MRP entirely. It generates plans that provide order-by-order details with aggregate key performance indicators that enable planners to quickly assess the overall business impact of a plan. To find out more about how Rapid Planning can help improve your MRP, please contact me at [email protected] or your Oracle Account Manager.

    Read the article

  • Improving Manageability of Virtual Environments

    - by Jeff Victor
    Boot Environments for Solaris 10 Branded Zones Until recently, Solaris 10 Branded Zones on Solaris 11 suffered one notable regression: Live Upgrade did not work. The individual packaging and patching tools work correctly, but the ability to upgrade Solaris while the production workload continued running did not exist. A recent Solaris 11 SRU (Solaris 11.1 SRU 6.4) restored most of that functionality, although with a slightly different concept, different commands, and without all of the feature details. This new method gives you the ability to create and manage multiple boot environments (BEs) for a Solaris 10 Branded Zone, and modify the active or any inactive BE, and to do so while the production workload continues to run. Background In case you are new to Solaris: Solaris includes a set of features that enables you to create a bootable Solaris image, called a Boot Environment (BE). This newly created image can be modified while the original BE is still running your workload(s). There are many benefits, including improved uptime and the ability to reboot into (or downgrade to) an older BE if a newer one has a problem. In Solaris 10 this set of features was named Live Upgrade. Solaris 11 applies the same basic concepts to the new packaging system (IPS) but there isn't a specific name for the feature set. The features are simply part of IPS. Solaris 11 Boot Environments are not discussed in this blog entry. Although a Solaris 10 system can have multiple BEs, until recently a Solaris 10 Branded Zone (BZ) in a Solaris 11 system did not have this ability. This limitation was addressed recently, and that enhancement is the subject of this blog entry. This new implementation uses two concepts. The first is the use of a ZFS clone for each BE. This makes it very easy to create a BE, or many BEs. This is a distinct advantage over the Live Upgrade feature set in Solaris 10, which had a practical limitation of two BEs on a system, when using UFS. The second new concept is a very simple mechanism to indicate the BE that should be booted: a ZFS property. The new ZFS property is named com.oracle.zones.solaris10:activebe (isn't that creative? ). It's important to note that the property is inherited from the original BE's file system to any BEs you create. In other words, all BEs in one zone have the same value for that property. When the (Solaris 11) global zone boots the Solaris 10 BZ, it boots the BE that has the name that is stored in the activebe property. Here is a quick summary of the actions you can use to manage these BEs: To create a BE: Create a ZFS clone of the zone's root dataset To activate a BE: Set the ZFS property of the root dataset to indicate the BE To add a package or patch to an inactive BE: Mount the inactive BE Add packages or patches to it Unmount the inactive BE To list the available BEs: Use the "zfs list" command. To destroy a BE: Use the "zfs destroy" command. Preparation Before you can use the new features, you will need a Solaris 10 BZ on a Solaris 11 system. You can use these three steps - on a real Solaris 11.1 server or in a VirtualBox guest running Solaris 11.1 - to create a Solaris 10 BZ. The Solaris 11.1 environment must be at SRU 6.4 or newer. Create a flash archive on the Solaris 10 system s10# flarcreate -n s10-system /net/zones/archives/s10-system.flar Configure the Solaris 10 BZ on the Solaris 11 system s11# zonecfg -z s10z Use 'create' to begin configuring a new zone. 
zonecfg:s10z create -t SYSsolaris10 zonecfg:s10z set zonepath=/zones/s10z zonecfg:s10z exit s11# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - s10z configured /zones/s10z solaris10 excl Install the zone from the flash archive s11# zoneadm -z s10z install -a /net/zones/archives/s10-system.flar -p You can find more information about the migration of Solaris 10 environments to Solaris 10 Branded Zones in the documentation. The rest of this blog entry demonstrates the commands you can use to accomplish the aforementioned actions related to BEs. New features in action Note that the demonstration of the commands occurs in the Solaris 10 BZ, as indicated by the shell prompt "s10z# ". Many of these commands can be performed in the global zone instead, if you prefer. If you perform them in the global zone, you must change the ZFS file system names. Create The only complicated action is the creation of a BE. In the Solaris 10 BZ, create a new "boot environment" - a ZFS clone. You can assign any name to the final portion of the clone's name, as long as it meets the requirements for a ZFS file system name. s10z# zfs snapshot rpool/ROOT/zbe-0@snap s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE cannot mount 'rpool/ROOT/newBE' on '/': directory is not empty filesystem successfully created, but not mounted You can safely ignore that message: we already know that / is not empty! We have merely told ZFS that the default mountpoint for the clone is the root directory. List the available BEs and active BE Because each BE is represented by a clone of the rpool/ROOT dataset, listing the BEs is as simple as listing the clones. s10z# zfs list -r rpool/ROOT NAME USED AVAIL REFER MOUNTPOINT rpool/ROOT 3.55G 42.9G 31K legacy rpool/ROOT/zbe-0 1K 42.9G 3.55G / rpool/ROOT/newBE 3.55G 42.9G 3.55G / The output shows that two BEs exist. Their names are "zbe-0" and "newBE". You can tell Solaris that one particular BE should be used when the zone next boots by using a ZFS property. Its name is com.oracle.zones.solaris10:activebe. The value of that property is the name of the clone that contains the BE that should be booted. s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe zbe-0 local Change the active BE When you want to change the BE that will be booted next time, you can just change the activebe property on the rpool/ROOT dataset. s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe zbe-0 local s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe newBE local s10z# shutdown -y -g0 -i6 After the zone has rebooted: s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT rpool/ROOT com.oracle.zones.solaris10:activebe newBE local s10z# zfs mount rpool/ROOT/newBE / rpool/export /export rpool/export/home /export/home rpool /rpool Mount the original BE to see that it's still there. s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0 s10z# ls /mnt Desktop export platform Documents export.backup.20130607T214951Z proc S10Flar home rpool TT_DB kernel sbin bin lib system boot lost+found tmp cdrom mnt usr dev net var etc opt Patch an inactive BE At this point, you can modify the original BE. 
If you would prefer to modify the new BE, you can restore the original value to the activebe property and reboot, and then mount the new BE to /mnt (or another empty directory) and modify it. Let's mount the original BE so we can modify it. (The first command is only needed if you haven't already mounted that BE.) s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0 s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02 Note that the typical usage will be: Create a BE Mount the new (inactive) BE Use the package and patch tools to update the new BE Unmount the new BE Reboot Delete an inactive BE ZFS clones are children of their parent file systems. In order to destroy the parent, you must first "promote" the child. This reverses the parent-child relationship. (For more information on this, see the documentation.) The original rpool/ROOT file system is the parent of the clones that you create as BEs. In order to destroy an earlier BE that is that parent of other BEs, you must first promote one of the child BEs to be the ZFS parent. Only then can you destroy the original BE. Fortunately, this is easier to do than to explain: s10z# zfs promote rpool/ROOT/newBE s10z# zfs destroy rpool/ROOT/zbe-0 s10z# zfs list -r rpool/ROOT NAME USED AVAIL REFER MOUNTPOINT rpool/ROOT 3.56G 269G 31K legacy rpool/ROOT/newBE 3.56G 269G 3.55G / Documentation This feature is so new, it is not yet described in the Solaris 11 documentation. However, MOS note 1558773.1 offers some details. Conclusion With this new feature, you can add and patch packages to boot environments of a Solaris 10 Branded Zone. This ability improves the manageability of these zones, and makes their use more practical. It also means that you can use the existing P2V tools with earlier Solaris 10 updates, and modify the environments after they become Solaris 10 Branded Zones.

    Read the article

  • As a programmer, should I know low and high-level programming languages?

    - by job
    I've been contacted to do some work remote-controlling LED displays over TCP/IP, but my experience and preparation are mostly in high-level programming languages. I said that to the person who contacted me about the work and he told me: "if you call yourself a programmer you should know all these things". Should a programmer really know the details of low-level programming? Or can I treat it as a black-box concept, as theoretical knowledge, without necessarily doing it or implementing low-level solutions, bearing in mind that low-level programming is not my expertise?
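    For perspective, "remote-controlling LED displays over TCP/IP" often reduces to opening a socket from a high-level language and writing whatever byte frame the vendor's protocol defines. The sketch below is only an illustration of that idea - the address, port and frame layout are invented, since the real ones would come from the display's protocol documentation:
        using System.Net.Sockets;

        class LedPanelClient
        {
            static void Main()
            {
                // Hypothetical address and port of the LED display controller.
                using (var client = new TcpClient("192.168.1.50", 9100))
                using (NetworkStream stream = client.GetStream())
                {
                    // Hypothetical frame: start byte, brightness value, XOR checksum.
                    byte brightness = 0x7F;
                    byte[] frame = { 0x02, brightness, (byte)(0x02 ^ brightness) };

                    stream.Write(frame, 0, frame.Length); // send the raw bytes to the device
                }
            }
        }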

    Read the article

  • Using IComparable<T> Interface

    - by Pawan_Mishra
    Level: Beginner to Intermediate. The C# language has evolved steadily over time. Each new version introduced features that changed the way we program and solve problems. Whether it was the introduction of generics in C# 2.0, LINQ in C# 3.0, or dynamic programming in C# 4.0, each of them has had, or will have, a great impact on our programming style. As developers we don't have much option but to evolve and redefine ourselves in this constantly changing environment...(read more)
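    The excerpt stops before any code, so here is a minimal sketch of the interface in use (the Employee type and its sort key are illustrative assumptions, not taken from the article). Implementing IComparable<T> gives a type a natural ordering that List<T>.Sort and Array.Sort pick up without a separate comparer:
        using System;
        using System.Collections.Generic;

        public class Employee : IComparable<Employee>
        {
            public string Name { get; set; }
            public decimal Salary { get; set; }

            // Natural ordering for Employee: ascending by Salary.
            public int CompareTo(Employee other)
            {
                if (other == null) return 1;            // by convention, any instance sorts after null
                return Salary.CompareTo(other.Salary);  // delegate to decimal's own comparison
            }
        }

        class Program
        {
            static void Main()
            {
                var staff = new List<Employee>
                {
                    new Employee { Name = "Asha",  Salary = 52000m },
                    new Employee { Name = "Brian", Salary = 47000m }
                };

                staff.Sort();                     // uses Employee.CompareTo, no IComparer needed
                Console.WriteLine(staff[0].Name); // prints "Brian"
            }
        }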

    Read the article

  • Workaround: XNA 4 importing only part of 3d model from FBX

    - by Vitus
    Recently I found a problem with importing 3D models from FBX files: sometimes a model imports only partially. That is, when you draw a 3D model loaded from an FBX file and processed by the content pipeline, you get only part of its meshes. The results of my research are below. For example, I have a 10 MB binary FBX file with a model; when I load it, the resulting Model instance contains only part of the meshes. Because models from other files import normally, I think it's a "bad format" file. When you add an FBX file to your XNA Content project and build it, the imported file is processed by the XNA Fbx Importer & Processor. On MSDN I found that FbxImporter is designed to work with the 2006.11 version of the FBX format. My file is in FBX 2012 format. OK, I need to convert it to the 2006 format. This can be done using Autodesk FBX Converter 2012.1. I tried to convert it to other versions of the FBX format, but without success. I also tried to import my FBX file into 3D MAX, and it imported correctly. Then I exported the model using 3D MAX, which generated another FBX file that I added to my XNA project. After that I got the full model, and it rendered well! So the internal data structure of an FBX file matters more for correct XNA import than its version! Unfortunately, Autodesk FBX is not an open file format. If you want to work with FBX, you should use the Autodesk FBX SDK. This way you can manually read the content of an FBX file and use it however you like. Then I tried to convert my source FBX file to DAE Collada, and the resulting DAE file back to FBX, using FBX Converter (FBX -> DAE -> FBX). The resulting FBX file can be imported normally.
    Conclusion: correct operation of the XNA FbxImporter doesn't depend on the version (2006, 2011, etc.) or form (binary, ASCII) of the FBX file; the internal FBX data structure is much more important. To make an FBX "readable" for the XNA importer you can use a double conversion like FBX -> Collada -> FBX. You can also use the FBX SDK to manually load data from FBX.
    P.S. Autodesk FBX Converter 2012 is more than a simple converter. It provides tools such as FBX Explorer, which shows you the structure of an FBX file; FBX Viewer, which renders the content of an FBX and provides basic interaction like model movement and zoom; and FBX Take Manager, which allows you to work with embedded animations.
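    When chasing a partial import like this, it helps to confirm exactly what the content pipeline produced before suspecting the draw code. A quick check inside a game's LoadContent (the asset name "myModel" is a placeholder) might look like this:
        using System.Diagnostics;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class ModelInspectionGame : Game
        {
            private GraphicsDeviceManager _graphics;
            private Model _model;

            public ModelInspectionGame()
            {
                _graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                _model = Content.Load<Model>("myModel"); // placeholder asset name

                // List every mesh the importer actually produced; a short list here
                // confirms the FBX was only partially imported rather than badly drawn.
                Debug.WriteLine("Mesh count: " + _model.Meshes.Count);
                foreach (ModelMesh mesh in _model.Meshes)
                    Debug.WriteLine("  " + mesh.Name);
            }
        }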

    Read the article

  • Using Sitecore RenderingContext Parameters as MVC controller action arguments

    - by Kyle Burns
    I have been working with the Technical Preview of Sitecore 6.6 on a project and have been for the most part happy with the way that Sitecore (which truly is an MVC implementation unto itself) has been expanded to support ASP.NET MVC. That said, getting up to speed with the combined platform has not been entirely without stumbles and today I want to share one area where Sitecore could have really made things shine from the "it just works" perspective. A couple days ago I was asked by a colleague about the usage of the "Parameters" field that is defined on Sitecore's Controller Rendering data template. Based on the standard way that Sitecore handles a field named Parameters, I was able to deduce that the field expected key/value pairs separated by the "&" character, but beyond that I wasn't sure and didn't see anything from a documentation perspective to guide me, so it was time to dig and find out where the data in the field was made available. My first thought was that it would be really nice if Sitecore handled the parameters in this field consistently with the way that ASP.NET MVC handles the various parameter collections on the HttpRequest object and automatically maps them to parameters of the action method executing. Being the hopeful sort, I configured a name/value pair on one of my renderings, added a parameter with matching name to the controller action and fired up the bugger to see... that the parameter was not populated. Having established that the field's value was not going to be presented to me the way that I had hoped it would, the next assumption that I would work on was that Sitecore would handle this field similar to how they handle other similar data and would plug it into some ambient object that I could reference from within the controller method. After a considerable amount of guessing, testing, and cracking code open with Redgate's Reflector (a must-have companion to Sitecore documentation), I found that the most direct way to access the parameter was through the ambient RenderingContext object using code similar to: string myArgument = string.Empty; var rc = Sitecore.Mvc.Presentation.RenderingContext.CurrentOrNull; if (rc != null) {     var parms = rc.Rendering.Parameters;     myArgument = parms["myArgument"]; } At this point, we know how this field is used out of the box from Sitecore and can provide information from Sitecore's Content Editor that will be available when the controller action is executing, but it feels a little dirty. In order to properly test the action method I would have to do a lot of setup work and possible use an isolation framework such as Pex and Moles to get at a value that my action method is dependent upon. Notice I said that my method is dependent upon the value but in order to meet that dependency I've accepted another dependency upon Sitecore's RenderingContext.  I'm a big believer in, when possible, ensuring that any piece of code explicitly advertises dependencies using the method signature, so I found myself still wanting this to work the same as if the parameters were in the request route, querystring, or form by being able to add a myArgument parameter to the action method and have this parameter populated by the framework. Lucky for us, the ASP.NET MVC framework is extremely flexible and provides some easy to grok and use extensibility points. 
ASP.NET MVC is able to provide information from the request as input parameters to controller actions because it uses objects which implement an interface called IValueProvider and have been registered to service the application. The most basic statement of responsibility for an IValueProvider implementation is "I know about some data which is indexed by key. If you hand me the key for a piece of data that I know about I give you that data". When preparing to invoke a controller action, the framework queries registered IValueProvider implementations with the name of each method argument to see if the ValueProvider can supply a value for the parameter. (the rest of this post will assume you're working along and make a lot more sense if you do) Let's pull Sitecore out of the equation for a second to simplify things and create an extremely simple IValueProvider implementation. For this example, I first create a new ASP.NET MVC3 project in Visual Studio, selecting "Internet Application" and otherwise taking defaults (I'm assuming that anyone reading this far in the post either already knows how to do this or will need to take a quick run through one of the many available basic MVC tutorials such as the MVC Music Store). Once the new project is created, go to the Index action of HomeController.  This action sets a Message property on the ViewBag to "Welcome to ASP.NET MVC!" and invokes the View, which has been coded to display the Message. For our example, we will remove the hard coded message from this controller (although we'll leave it just as hard coded somewhere else - this is sample code). For the first step in our exercise, add a string parameter to the Index action method called welcomeMessage and use the value of this argument to set the ViewBag.Message property. The updated Index action should look like: public ActionResult Index(string welcomeMessage) {     ViewBag.Message = welcomeMessage;     return View(); } This represents the entirety of the change that you will make to either the controller or view.  If you run the application now, the home page will display and no message will be presented to the user because no value was supplied to the Action method. Let's now write a ValueProvider to ensure this parameter gets populated. We'll start by creating a new class called StaticValueProvider. When the class is created, we'll update the using statements to ensure that they include the following: using System.Collections.Specialized; using System.Globalization; using System.Web.Mvc; With the appropriate using statements in place, we'll update the StaticValueProvider class to implement the IValueProvider interface. The System.Web.Mvc library already contains a pretty flexible dictionary-like implementation called NameValueCollectionValueProvider, so we'll just wrap that and let it do most of the real work for us. 
The completed class looks like: public class StaticValueProvider : IValueProvider {     private NameValueCollectionValueProvider _wrappedProvider;     public StaticValueProvider(ControllerContext controllerContext)     {         var parameters = new NameValueCollection();         parameters.Add("welcomeMessage", "Hello from the value provider!");         _wrappedProvider = new NameValueCollectionValueProvider(parameters, CultureInfo.InvariantCulture);     }     public bool ContainsPrefix(string prefix)     {         return _wrappedProvider.ContainsPrefix(prefix);     }     public ValueProviderResult GetValue(string key)     {         return _wrappedProvider.GetValue(key);     } } Notice that the only entry in the collection matches the name of the argument to our HomeController's Index action.  This is the important "secret sauce" that will make things work. We've got our new value provider now, but that's not quite enough to be finished. Mvc obtains IValueProvider instances using factories that are registered when the application starts up. These factories extend the abstract ValueProviderFactory class by initializing and returning the appropriate implementation of IValueProvider from the GetValueProvider method. While I wouldn't do so in production code, for the sake of this example, I'm going to add the following class definition within the StaticValueProvider.cs source file: public class StaticValueProviderFactory : ValueProviderFactory {     public override IValueProvider GetValueProvider(ControllerContext controllerContext)     {         return new StaticValueProvider(controllerContext);     } } Now that we have a factory, we can register it by adding the following line to the end of the Application_Start method in Global.asax.cs: ValueProviderFactories.Factories.Add(new StaticValueProviderFactory()); If you've done everything right to this point, you should be able to run the application and be presented with the home page reading "Hello from the value provider!". Now that you have the basics of the IValueProvider down, you have everything you need to enhance your Sitecore MVC implementation by adding an IValueProvider that exposes values from the ambient RenderingContext's Parameters property. I'll provide the code for the IValueProvider implementation (which should look VERY familiar) and you can use the work we've already done as a reference to create and register the factory: public class RenderingContextValueProvider : IValueProvider {     private NameValueCollectionValueProvider _wrappedProvider = null;     public RenderingContextValueProvider(ControllerContext controllerContext)     {         var collection = new NameValueCollection();         var rc = RenderingContext.CurrentOrNull;         if (rc != null && rc.Rendering != null)         {             foreach(var parameter in rc.Rendering.Parameters)             {                 collection.Add(parameter.Key, parameter.Value);             }         }         _wrappedProvider = new NameValueCollectionValueProvider(collection, CultureInfo.InvariantCulture);         }     public bool ContainsPrefix(string prefix)     {         return _wrappedProvider.ContainsPrefix(prefix);     }     public ValueProviderResult GetValue(string key)     {         return _wrappedProvider.GetValue(key);     } } In this post I've discussed the MVC IValueProvider used to map data to controller action method arguments and how this can be integrated into your Sitecore 6.6 MVC solution.
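    The post leaves creating and registering the factory as an exercise; mirroring the StaticValueProviderFactory shown earlier, the remaining piece would look roughly like this (same using statements as before), with the registration line again added at the end of Application_Start in Global.asax.cs:
        public class RenderingContextValueProviderFactory : ValueProviderFactory
        {
            public override IValueProvider GetValueProvider(ControllerContext controllerContext)
            {
                // Hand MVC the provider that exposes the Sitecore rendering parameters.
                return new RenderingContextValueProvider(controllerContext);
            }
        }

        // In Global.asax.cs, at the end of Application_Start:
        // ValueProviderFactories.Factories.Add(new RenderingContextValueProviderFactory());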

    Read the article

  • What Is Nuclear Meltdown?

    - by Gopinath
    Japan was first hit by a massive earthquake, then a ruthless tsunami washed away thousands of homes, and now they fear the worst: meltdown of nuclear power stations in the quake-hit area. Nuclear meltdowns are horrifying - remember the Chernobyl incident in the Soviet Union? The Chernobyl reactor meltdown released 400 times more radioactive material than the atomic bombing of Hiroshima. The effects of nuclear meltdowns are beyond the imagination of a common man: thousands of people lose their lives and many hundreds of thousands more suffer from radiation-related diseases for many years. Nuclear meltdowns are dangerous, but how do they happen? What causes a nuclear meltdown? In simple terms, a nuclear meltdown is an accident that happens due to severe overheating of a nuclear reactor and results in the release of nuclear radiation into the environment.
    How A Nuclear Meltdown Happens
    According to Wikipedia: A meltdown occurs when a severe failure of a nuclear power plant system prevents proper cooling of the reactor core, to the extent that the nuclear fuel assemblies overheat and melt. A meltdown is considered very serious because of the potential that radioactive materials could be released into the environment. The fuel assemblies in a reactor core can melt if heat is not removed. A nuclear reactor does not have to remain critical for a core damage incident to occur, because decay heat continues to heat the reactor fuel assemblies after the reactor has shut down, though this heat decreases with time. A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss of pressure control accident, a loss of coolant accident (LOCA), an uncontrolled power excursion or, in some types, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely.
    Video - What Causes Nuclear Meltdown
    Al Jazeera has a good analysis of the feared meltdown at Japan's nuclear plants and also an animation of what causes a nuclear meltdown. cc image credit: flickr/jtjdt
    This article titled, What Is Nuclear Meltdown?, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Visual Studio Service Pack 1 - Test first!

    - by CraigG
    It appears that our run of fairly benign VS SPs is over… I've now installed VS 2010 SP1 in a few simple test environments (x64) and all of them are having issues: add-in failures, failed package loading, missing SQL IntelliSense, XAML designer failures, etc. Make sure you test this Service Pack thoroughly before you release it to your production environment. Microsoft Connect is the official repository for issues with Service Pack 1.

    Read the article

  • Caliber Point Launches Republic, A Platform-Based Multi-Tenant HR Service Delivery Solution

    - by jay.richey
    Caliber Point Business Solutions Ltd., a leading Business Process Outsourcing (BPO) service provider and a Hexaware Technologies subsidiary, today announced the launch of Republic - their multi-tenant HR services delivery solution. Built on the Oracle E-Business Suite Release 12, this ready to use platform will cater to multiple clients under a secure and shared environment. This launch will identify Caliber Point as one of the first BPO service providers in India and one of the few in the world to provide a complete platform-based BPO service. More at http://bit.ly/9yyJDW

    Read the article
