Search Results

Search found 2471 results on 99 pages for 'license agreement'.


  • New JavaScript Editor

    - by Petr
    I haven't written a blog post here for a few weeks. I think my last post was about the release of NetBeans 7.1 at the beginning of January. The reason is not that I changed jobs :), but that I have been concentrating on the new JavaScript support/editor. The new JavaScript editor is written basically from scratch. The answer to the question "Why start from the beginning again, why not just improve the old one?" is not easy and the decision has several aspects. One of the main reasons is that the old support was written 4 years ago and its architecture is limited. Also, the APIs changed over time and it was very hard to keep the editor up to date. There is also a license issue, etc. In short, it is time to rewrite the old JS editor. We have built up a strong community around the PHP support in NetBeans, and because many PHP developers also write JavaScript code, I would like to ask you for help. There is a continual PHP build with the new JavaScript support. You can download the result of the builds here. It's a zip file. You can unzip the file anywhere you want. I recommend running the build with a new userdir, to avoid damaging your current userdir. It shouldn't happen, but just to be sure :). You can achieve this through the --userdir switch. So starting the unzipped build from the command line, from the folder where you unzipped it, can be done with this command on Unix: bin/netbeans.sh --userdir /path/to/new/userdir and on Windows: bin\netbeans.exe --userdir D:\path\to\new\userdir For the developers who already use the continual PHP build, this is well known. There is also a full IDE build with the new JavaScript support for people who need more than only PHP support. Because the builds with the new JavaScript editor are created from a branch, there are no nightly builds available. There will be, once we merge the branch to the trunk, but so far we have to work only with the mentioned continual build. We will merge our branch after NetBeans 7.2 is branched from trunk. This also answers the question of which release of NetBeans will contain the new JS support: it should be the release after NetBeans 7.2. I'm asking you to play with the builds or, better, to work in the builds with the new JavaScript support and tell us about every issue that you run into. It can be anything that doesn't suit you: something doesn't work as you expected, something is slow, you want to change the behaviour of a feature, etc. Your input / comments are very important for us and will help us deliver the new JavaScript support that you need. The best way to communicate issues is through our Bugzilla, because it is simple to track them. Sure, you can write a comment here :), but I still prefer Bugzilla for any issue. You can click here (you should already be logged in to Bugzilla) and a form for a new JavaScript issue is opened, with the Editor component and the NO72 keyword pre-filled. I will write about the individual features later, but for now I will mention a few features that should work better than in the old support: syntactic and semantic colouring, Navigator, Mark Occurrences and GoTo Declaration, and Code Completion. Code Completion is invoked through the keyboard shortcut CTRL+SPACE. The first invocation offers items that are found through a source model. Almost all editor features are based on the model, which is built from the source code. There is still a lot of work to do on the model, but it should offer better results. When the pop-up window with code completion items is open and you press CTRL+SPACE again, the code completion offers all elements that are in the project; in the pictures, all elements that start with the letter 't'. There is also a formatter with many options, and more :) A few features that are supported in the old JavaScript support are not implemented yet (for example jQuery support), but we are adding these features ASAP.

    Read the article

  • maintaining a growing, diverse codebase with continuous integration

    - by Nate
    I am in need of some help with the philosophy and design of a continuous integration setup. Our current CI setup uses buildbot. When I started out designing it, I inherited (well, not strictly, as I was involved in its design a year earlier) a bespoke CI builder that was tailored to run the entire build at once, overnight. After a while, we decided that this was insufficient, and started exploring different CI frameworks, eventually choosing buildbot. One of my goals in transitioning to buildbot (besides getting to enjoy all the whiz-bang extras) was to overcome some of the inadequacies of our bespoke nightly builder. Humor me for a moment, and let me explain what I have inherited. The codebase for my company is almost 150 unique C++ Windows applications, each of which has dependencies on one or more of a dozen internal libraries (and many on 3rd party libraries as well). Some of these libraries are interdependent, and have depending applications that (while they have nothing to do with each other) have to be built with the same build of that library. Half of these applications and libraries are considered "legacy" and unportable, and must be built with several distinct configurations of the IBM compiler (for which I have written unique subclasses of Compile), and the other half are built with Visual Studio. The code for each compiler is stored in two separate Visual SourceSafe repositories (which I am simply handling using a bunch of ShellCommands, as there is no support for VSS). Our original nightly builder simply took down the source for everything, and built stuff in a certain order. There was no way to build only a single application, or pick a revision, or to group things. It would launch virtual machines to build a number of the applications. It wasn't very robust, it wasn't distributable. It wasn't terribly extensible. I wanted to be able to overcome all of these limitations in buildbot. The way I did this originally was to create entries for each of the applications we wanted to build (all 150ish of them), then create triggered schedulers that could build various applications as groups, and then subsume those groups under an overall nightly build scheduler. These could run on dedicated slaves (no more virtual machine chicanery), and if I wanted I could simply add new slaves. Now, if we want to do a full build out of schedule, it's one click, but we can also build just one application should we so desire. There are four weaknesses of this approach, however. One is our source tree's complex web of dependencies. In order to simplify config maintenance, all builders are generated from a large dictionary. The dependencies are retrieved and built in a not-terribly robust fashion (namely, keying off of certain things in my build-target dictionary). The second is that each build has between 15 and 21 build steps, which is hard to browse and look at in the web interface, and since there are around 150 columns, takes forever to load (think from 30 seconds to multiple minutes). Thirdly, we no longer have autodiscovery of build targets (although, as much as one of my coworkers harps on me about this, I don't see what it got us in the first place). Finally, the aforementioned coworker likes to constantly bring up the fact that we can no longer perform a full build on our local machine (though I never saw what that got us, either, considering that it took three times as long as the distributed build; I think he is just paranoid about ever breaking the build).
Now, moving to new development, we are starting to use g++ and subversion (not porting the old repository, mind you - just for the new stuff). Also, we are starting to do more unit testing ("more" might give the wrong picture... it's more like any), and integration testing (using python). I'm having a hard time figuring out how to fit these into my existing configuration. So, where have I gone wrong philosophically here? How can I best proceed forward (with buildbot - it's the only piece of the puzzle I have license to work on) so that my configuration is actually maintainable? How do I address some of my design's weaknesses? What really works in terms of CI strategies for large, (possibly over-)complex codebases?

    Read the article

  • Windows 8 Camp – Ways to Prepare

    - by Lori Lalonde
    When Windows 8 was announced at the BUILD conference back in September, it created quite a buzz among the developer community. By the spring of 2012,  Windows 8 Developer Camps started popping up everywhere imaginable. I received a lot of questions from CTTDNUG members about whether or not we would be hosting one locally. If you recall my post about the Windows Phone/Azure Developer Workshop that CTTDNUG hosted back in March, you’ll remember that the biggest hurdle to overcome when planning this type of event was finding the right venue. It took some time, but I finally found a venue that was available and provided the prerequisites needed to ensure this camp is a success. I am very excited that CTTDNUG will be hosting a Windows 8 Camp this summer in the Kitchener/Waterloo area. In fact, it’s coming up in less than 2 weeks. Clearly other developers are excited as well, because our registration numbers show that the event is already 70% full! On top of that, I was fortunate enough to also book two well-known evangelists to present and teach at this full day developer camp: Andrei Marukovich and Atley Hunter. This was the icing on the cake. With the content provided by Microsoft, and two local experts that live and breathe Windows 8 development, I know that I, along with other developers that attend this event, will have the opportunity to maximize our learning potential and hit the ground running. If you plan on attending a Windows 8 Developer Camp soon, and want to ensure you get the most “bang for your buck” (figuratively speaking, since these camps are free), there are some things you can do to prepare before the big day: 1) Install the prerequisites on your own device before the big day I can’t stress this enough. Otherwise, you will be spending valuable time during the hands-on period downloading and installing what is needed, rather than digging into the development and using that time to ask the experts on-hand about programming challenges, issues, questions you may have with respect to your development. Prerequisites: Windows 8 Release Preview Visual Studio 2012 RC Download the Windows 8 SDK Samples 2) Purchase, download, and read Charles Petzold’s newest book:  Programming Windows 6th Edition This is a great introduction to the type of content you will be learning about during the camp. Doing some light reading beforehand might raise some questions about the concepts discussed in the book, which will give you the opportunity to write them down and bring them with you to the camp. The experts on hand will be able to answer them for you. 3) Make use of the freebies that are available Telerik has recently released a preview of their RadControls for Metro. You can sign up to receive a license code to give you access to install the preview for free and start playing around with it. Syncfusion also offers a free download of their Metro Studio package, which is a collection of metro style icons that you can customize and use in your own applications. Last but not least, once you’ve installed the Windows 8 Release Preview on your own device, go to the Windows 8 Store and download a handful of the free apps that are available. Testing out other Metro apps may give you ideas of what you can do in your own apps and analyze what features you like: application flow, type of animations used, concepts that were leveraged, how live tiles were used, etc. I hope you found these tips to be useful as you embark on a new development journey! 
Although this post focused on how to prepare for a Windows 8 camp, the same ideas apply to whichever developer camp, workshop, or event you attend. Learning does not begin and end on the day of the event. Attending a developer camp is just one step of many toward mastering whatever technology you are interested in. It is a continuous process, and you get the most out of it when you do your homework beforehand, actively participate during the event, and follow up by putting what you learned into practice afterwards. Happy coding!

    Read the article

  • Live from Oracle Partner Day 2012 in Frankfurt

    - by A&C Redaktion
    Frankfurt am Main, around 11:30 a.m. A charming idea, starting Oracle Partner Day 2012 with a welcome lunch. Over a snack you can take in the impressive atmosphere of the Commerzbank Arena and, before you know it, find yourself drawn into a conversation about recent stadium visits with the managing director next to you, a manager, and two sales reps. Happy reunions everywhere; many last saw each other exactly one year ago at the Radisson Blu, at the OPN Day Satellite. Now the crowd starts moving: off to the opening, Silvia Kaske is up first! 1:45 p.m. The keynotes were once again a thematic tour d'horizon and, at the same time, a small who's who of the Oracle universe: Silvia Kaske, Senior Director Channel A&C, opened the Partner Day; David Callaghan (Senior Vice President UK, Ireland, Israel) then presented the EMEA strategies for FY13, and Jürgen Kunz (SVP Technology Northern Europe & Country Leader Germany) spoke about business opportunities with partners. Christian Werner, in his new role as Senior Director Alliances & Channels Germany, gave an overview of the new structure of the Oracle channel and introduced the German team. The closing keynote came from a guest speaker, Prof. Hermann Maurer of the Academia Europaea, a prominently staffed academic society dedicated to improving the public understanding of science. He ventured a look into the future of IT: "The best is yet to come." As always with such a compressed program, the odd question remains open, but now there is time to follow up over Coffee & Networking. Shortly after 2 p.m. Many attendees have meanwhile explored the first floor as well. The Partner Service Zone offers a wide range of topics: from Oracle Financing through License Management to OPN Specialized, everything here revolves around concrete offerings for partners. After a short detour into the ISV Lounge, it's on to the Expert Zone: Oracle Database, Oracle Options, Fusion Middleware, Applications, and Oracle Hardware are the topics here, and lively shop talk is already under way at the info stands. Back on the ground floor, various partners, Oracle executives, and other attendees can be seen bustling about in search of their breakout sessions, while others leaf through the freshly printed A&C course book. For the next two hours the focus is on business opportunities, split across hardware, technology, and sales partners, plus the offerings of the VADs, the A&C partner sessions, and the 1:1 speed dating. Some partners are using the implementation tests offered in parallel to earn their certification right on site. Running each breakout twice lets attendees take in as many sessions as possible, one after the other; no topic should be shortchanged! A look ahead: what else awaits us over the course of the afternoon? The Leader Panel, in which the attending partners can put questions to Oracle executives, will certainly be very informative. And if some attendees then start to get restless, it has nothing to do with the topics. No, an exciting highlight is still to come: the Partner Award Ceremony (which we will report on in detail later). After what will hopefully be a successful event, the only remaining question is what exactly is hiding behind the "Red Stack Arena Sports Challenge". Do we need to bring sneakers?

    Read the article

  • Building a personal website using Silverlight.

    - by mbcrump
    I’ve always believed that as a developer you should always have a hobby project going on. I think a hobby project needs to contain at least one of the following things: Something that you have never done before. Something that you are interested in. Something that you can work on in your spare time without affecting your *paying* job. I decided my hobby project would be an entire web application written in Silverlight that could be used as a self-promotion/marketing tool. The goal of the site is to provide information about the work that I’ve done to conferences, future employers, and anyone else who wants to learn more about me. Before I go any further, if you just want to check out the site then it is located at http://michaelcrump.info. So, what did I use to create it? MVVM Light – I’m a big fan of this software. The item and project templates plus code snippets make this a huge win for any SL/WPF/WP7 application. Jetpack Theme by Microsoft – I suck at designing so I used this template to help speed up this project. ComponentOne 3rd Party Controls – I have a license and really like several of their products. A User Control that Jeremy Likness created called DynamicXaml (used with his permission). I had created my own version of this a while back, but Jeremy’s implementation was simply better. Main Page – Designed to create my “brand”. This was built for a quick glimpse of who I am and what I do. Blog – The best marketing tool for a developer is their blog. I decided to go with an HTML page displaying my site, which the user can pop into full-screen if desired. I also included my feed and Silverlight-Zone (another site I work on). Online – This page links to sites that I have been featured on as well as community involvement and awards. I also have a web service so that I can update this information without re-compiling the Silverlight app. Projects – I’ve been wanting to use a CoverFlow for a really long time now. =) This page lists several hobby projects as well as a few professional projects. Resume Page – This page only exists because I got tired of sending companies my resume by e-mail. I can now provide a deep link to this page and the recruiter can print, search or save my resume. The PDF of my resume exists in a folder that I can easily update without recompiling the app. Contact Page – Just a contact page with a web service that sends the email. The Send button becomes disabled after a successful send. I thought of adding captcha to this page but in the end didn’t think it was worth it. Looking back at this app, I’m happy with how it turned out. I love Silverlight and I am already thinking of my next hobby project. (I’m thinking another Windows Phone 7 app, or MVC3.) Subscribe to my feed

    Read the article

  • How can I best manage making open source code releases from my company's confidential research code?

    - by DeveloperDon
    My company (let's call them Acme Technology) has a library of approximately one thousand source files that originally came from its Acme Labs research group, incubated in a development group for a couple years, and has more recently been provided to a handful of customers under non-disclosure. Acme is getting ready to release perhaps 75% of the code to the open source community. The other 25% would be released later, but for now, is either not ready for customer use or contains code related to future innovations they need to keep out of the hands of competitors. The code is presently formatted with #ifdefs that permit the same code base to work with the pre-production platforms that will be available to university researchers and a much wider range of commercial customers once it goes to open source, while at the same time being available for experimentation and prototyping and forward compatibility testing with the future platform. Keeping a single code base is considered essential for the economics (and sanity) of my group who would have a tough time maintaining two copies in parallel. Files in our current base look something like this: > // Copyright 2012 (C) Acme Technology, All Rights Reserved. > // Very large, often varied and restrictive copyright license in English and French, > // sometimes also embedded in make files and shell scripts with varied > // comment styles. > > > ... Usual header stuff... > > void initTechnologyLibrary() { > nuiInterface(on); > #ifdef UNDER_RESEARCH > holographicVisualization(on); > #endif > } And we would like to convert them to something like: > // GPL Copyright (C) Acme Technology Labs 2012, Some rights reserved. > // Acme appreciates your interest in its technology, please contact [email protected] > // for technical support, and www.acme.com/emergingTech for updates and RSS feed. > > ... Usual header stuff... > > void initTechnologyLibrary() { > nuiInterface(on); > } Is there a tool, parse library, or popular script that can replace the copyright and strip out not just #ifdefs, but variations like #if defined(UNDER_RESEARCH), etc.? The code is presently in Git and would likely be hosted somewhere that uses Git. Would there be a way to safely link repositories together so we can efficiently reintegrate our improvements with the open source versions? Advice about other pitfalls is welcome.
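    To make the transformation concrete, below is a naive C# sketch of roughly what I have in mind (new_header.txt is a hypothetical file holding the replacement notice, and the class name is made up). It only handles leading // comment headers and simple #ifdef / #if defined(UNDER_RESEARCH) ... #endif regions, and it deliberately ignores #else/#elif branches and the varied comment styles in make files and shell scripts, which is exactly why a proper preprocessor-aware tool would be preferable:

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Text.RegularExpressions;

    // Naive sketch only: swaps the leading copyright comment for a new header and
    // strips regions guarded by UNDER_RESEARCH. It ignores #else/#elif branches and
    // anything clever with the preprocessor, which a real tool would have to handle.
    class HeaderAndIfdefRewriter
    {
        static readonly Regex GuardStart = new Regex(@"^\s*#\s*if(def)?\s+.*\bUNDER_RESEARCH\b");
        static readonly Regex AnyIfStart = new Regex(@"^\s*#\s*if");
        static readonly Regex EndIf = new Regex(@"^\s*#\s*endif");

        static void Main(string[] args)
        {
            string newHeader = File.ReadAllText("new_header.txt"); // hypothetical replacement header
            foreach (string path in args)
            {
                var lines = File.ReadAllLines(path).ToList();

                // Drop the existing leading // comment block.
                int headerEnd = 0;
                while (headerEnd < lines.Count && lines[headerEnd].TrimStart().StartsWith("//"))
                    headerEnd++;
                lines = lines.Skip(headerEnd).ToList();

                var output = new List<string> { newHeader.TrimEnd() };
                int guardDepth = 0; // > 0 while inside an UNDER_RESEARCH region
                foreach (string line in lines)
                {
                    if (guardDepth == 0 && GuardStart.IsMatch(line)) { guardDepth = 1; continue; }
                    if (guardDepth > 0)
                    {
                        if (AnyIfStart.IsMatch(line)) guardDepth++;   // nested #if inside the region
                        else if (EndIf.IsMatch(line)) guardDepth--;   // closing #endif is dropped too
                        continue;
                    }
                    output.Add(line);
                }
                File.WriteAllLines(path, output);
            }
        }
    }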

    Read the article

  • SQL 2012 Licensing Thoughts

    - by Geoff N. Hiten
    The only thing more controversial than new Federal Tax plans is new Licensing plans from Microsoft. In both cases, everyone calculates several numbers. First, will I pay more or less under this plan? Second, will my competition pay more or less than now? Third, will <insert interesting person/company here> pay more or less? Not that items 2 and 3 are meaningful, that is just how people think. Much like tax plans, the devil is in the details, so let's see how this looks. Microsoft shows it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx First up is a switch from per-socket to per-core licensing. Anyone who didn't see something like this coming should rapidly search for a new line of work because you are not paying attention. The explosion of multi-core processors has made SQL Server a bargain. Microsoft is in business to make money and the old per-socket model was not going to do that going forward. Per-core licensing also simplifies virtualization licensing. Physical Core = Virtual Core, at least for licensing. Oversubscribe your processors, that's your lookout. You still pay for what is exposed to the VM. The cool part is you can seamlessly move physical and virtual workloads around and the licenses follow. The catch is you have to have Software Assurance to make the licenses mobile. Nice touch there. Let's have a moment of silence for the late, unlamented, largely ignored Workgroup Edition. To quote the Microsoft FAQ: "Standard becomes our sole edition for basic database needs". Considering I haven't encountered a single instance of SQL Server Workgroup Edition in the wild, I don't think this will be all that controversial. As for pricing, it looks like a wash with current per-socket pricing based on four-core sockets. Interestingly, that is the minimum core count Microsoft proposes when swapping per-socket licenses to per-core if you are on Software Assurance. Reading the fine print shows that if you are using more, you will get more core licenses: From the licensing FAQ. 15. How do I migrate from processor licenses to core licenses? What is the migration path? Licenses purchased with Software Assurance (SA) will upgrade to SQL Server 2012 at no additional cost. EA/EAP customers can continue buying processor licenses until your next renewal after June 30, 2012. At that time, processor licenses will be exchanged for core-based licenses sufficient to cover the cores in use by processor-licensed databases (minimum of 4 cores per processor for Standard and Enterprise, and minimum of 8 EE cores per processor for Datacenter). Looks like the folks who invested in the AMD 12-core chips will make out like bandits. Now, on to something new: SQL Server Business Intelligence Edition. Yep, finally a BI-specific SKU licensed for server+CAL configurations only. Note that Enterprise Edition still supports the complete feature set; the BI Edition is intended for smaller shops who want to use the full BI feature set but without needing Enterprise Edition scale (or costs). No, you don't get ColumnStore, Compression, or Partitioning in the BI Edition. Those are Enterprise scale features, ThankYouVeryMuch. Then again, your starting licensing costs are about one sixth of an Enterprise Edition system (based on an 8 core server). The only part of the message I am missing is whether the current Failover Licensing Policy will change. Do we need to fully or partially license failover servers? That is a detail I definitely want to know.

    Read the article

  • AccelerometerInput XNA GameComponent

    - by Michael B. McLaughlin
    Bad accelerometer controls kill otherwise good games. I decided to try to do something about it. So I created an XNA GameComponent called AccelerometerInput. It's still a beta project but you are welcome to try it, use it, modify it, etc. I'm releasing it under the terms of the Microsoft Public License. Important info: First, it only supports tilt-style controls currently. I have not implemented motion-style controls yet (and make no promises as to when I might find time to do so). Second, I commented it heavily so that you can (hopefully) understand what it is doing. Please read the comments and examine the sample game for a usage overview. There are configurable parameters which I encourage you to make use of (both by modifying the default values where your testing shows it to be appropriate and also by implementing a calibration mechanism in your game that lets the user adjust those configurable values based on his or her own circumstances). Third, even with this code, accelerometer controls are still a fairly advanced topic area; you will likely find nothing but disappointment if you simply plunk this into some project without testing it on a device (or preferably on several devices). Fourth, if you do try this code and find that something doesn't work as expected on your phone, please let me know as I want to improve it and can only do so with your help. Let me know what phone model it is, what you tried doing, what you expected, and what result you had instead. I may or may not be able to incorporate it into the code, but I can let others know at the very least so that they can make appropriate modifications to their games (I'm hopeful that all phones are reasonably similar in their workings and require, at most, a slight calibration change, but I simply don't know). Fifth, although I'll do my best to answer any questions you may have about it, I'm very busy with a number of things currently so it might take a little while. Please look through the code and examine the comments and sample game first before asking any questions. It's likely that the answer is in there. If not, or if you just aren't really sure, ask away. Sixth, there are differences between a portrait-mode game and a landscape-mode game (specifically in the appropriate default tilt adjustment for toward the user/away from the user calculations). This is documented and the default is set for landscape. If you use this for a portrait game, make the appropriate change (look for the TODO: comment in AccelerometerInput.cs). Seventh, no provision whatsoever is made for disabling screen locking. It is up to you to implement that and to take appropriate measures to detect when the user has been idle for too long and time out the game. That code is very game-specific. If you have questions about such matters, consult the relevant MSDN documentation and, if you still have questions, visit the App Hub forums and ask there. I answer questions there a lot and so I may even stumble across your question and answer it. But that's a much better forum than the comments section here for questions of that sort so I would appreciate it if you asked idle detection-related questions there (or on some other suitable site that you may be more familiar and comfortable with). Eighth, this is an XNA GameComponent intended for XNA-based games on WP7. A sufficiently knowledgeable Silverlight developer should have no problem adapting it for use in a Silverlight game or app. I may create a Silverlight version at some point myself.
Right now I do not have the time, unfortunately. Ok. Without further ado: http://www.bobtacoindustries.com/developers/utils/AccelerometerInput.zip Have a great St. Patrick’s Day!
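    For those curious about the raw data underneath tilt-style controls, here is a hedged sketch that uses the stock Microsoft.Devices.Sensors API rather than anything from AccelerometerInput itself; the RawTiltReader name is just a placeholder for illustration, and the library's own comments and sample game remain the authoritative reference for how it smooths, calibrates, and maps these values:

    using Microsoft.Devices.Sensors;

    // Illustrative only: the raw tilt readings that a tilt-input component has to
    // smooth, calibrate, and map to game input. Each axis is roughly -1.0 to +1.0
    // in units of gravity.
    public class RawTiltReader
    {
        private readonly Accelerometer accelerometer = new Accelerometer();

        public double X { get; private set; }
        public double Y { get; private set; }
        public double Z { get; private set; }

        public void Start()
        {
            // ReadingChanged fires on a background thread, so a real game should
            // buffer these values and consume them from Update() rather than
            // acting on them directly in the handler.
            accelerometer.ReadingChanged += (s, e) =>
            {
                X = e.X;
                Y = e.Y;
                Z = e.Z;
            };
            accelerometer.Start();
        }

        public void Stop()
        {
            accelerometer.Stop();
        }
    }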

    Read the article

  • XNA RenderTarget2D Sample

    - by Michael B. McLaughlin
    I remember being scared of render targets when I first started with XNA. They seemed like weird magic and I didn’t understand them at all. There’s nothing to be frightened of, though, and they are pretty easy to learn how to use. The first thing you need to know is that when you’re drawing in XNA, you aren’t actually drawing to the screen. Instead you’re drawing to this thing called the “back buffer”. Internally, XNA maintains two sections of graphics memory. Each one is exactly the same size as the other and has all the same properties (such as surface format, whether there’s a depth buffer and/or a stencil buffer, and so on). XNA flips between these two sections of memory every update-draw cycle. So while you are drawing to one, it’s busy drawing the other one on the screen. Then, when the current update-draw cycle ends, it flips, and the section you were just drawing to gets drawn to the screen while the one that was being drawn to the screen before is now the one you’ll be drawing on. This is what’s meant by “double buffering”. If you drew directly to the screen, the player would see all of those draws taking place as they happened and that would look odd and not very good at all. Those two sections of graphics memory are render targets. All a render target is, is a section of graphics memory to which things can be drawn. In addition to the two that XNA maintains automatically, you can also create and set your own using RenderTarget2D and GraphicsDevice.SetRenderTarget. Using render targets lets you do all sorts of neat post-processing effects (like bloom) to make your game look cooler. It also lets you do things like motion blur and create mirrors in 3D games. There are quite a lot of things that render targets let you do. To go along with this post, I wrote up a simple sample for how to create and use a RenderTarget2D. It’s released under the terms of the Microsoft Public License and is available for download on my website here: http://www.bobtacoindustries.com/developers/utils/RenderTarget2DSample.zip . Other than the ‘using’ statements, every line is commented in detail so that it should (hopefully) be easy to follow along with and understand. If you have any questions, leave a comment here or drop me a line on Twitter. One last note. While creating the sample I came across an interesting quirk. If you start by creating a Windows Game, and then make a copy for Windows Phone 7, the drop-down that lets you choose between deploying to a WP7 device and the WP7 emulator stays grayed-out. To resolve this, you need to right-click on the Windows Phone 7 version in the Solution Explorer, and choose “Set as StartUp Project”. The bar will then become active, letting you change the target you wish to deploy to. If you want another version to be the one that starts up when you press F5 to start debugging, just go and right-click on that version and choose “Set as StartUp Project” for it once you’ve set the WP7 target (device or emulator) that you want.
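    To give a feel for the overall pattern before you download the sample (which is more complete and commented line by line), here is a minimal sketch; the class name RenderTargetSketch is made up for illustration and nothing here is copied from the sample itself:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Minimal illustration of XNA 4.0 render targets: draw the scene into an
    // off-screen RenderTarget2D, then draw that target to the back buffer.
    public class RenderTargetSketch : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        RenderTarget2D sceneTarget;

        public RenderTargetSketch()
        {
            graphics = new GraphicsDeviceManager(this);
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            // Create a render target the same size as the back buffer.
            sceneTarget = new RenderTarget2D(
                GraphicsDevice,
                GraphicsDevice.PresentationParameters.BackBufferWidth,
                GraphicsDevice.PresentationParameters.BackBufferHeight);
        }

        protected override void Draw(GameTime gameTime)
        {
            // 1. Redirect drawing into the off-screen render target.
            GraphicsDevice.SetRenderTarget(sceneTarget);
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin();
            // ... draw the scene here ...
            spriteBatch.End();

            // 2. Switch back to the back buffer and draw the render target as a
            //    texture. A post-processing effect would be applied at this step.
            GraphicsDevice.SetRenderTarget(null);
            GraphicsDevice.Clear(Color.Black);
            spriteBatch.Begin();
            spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }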

    Read the article

  • How to give my user permission to add/edit files on local apache server? [duplicate]

    - by Logan
    Possible Duplicate: How to make Apache run as current user I'm setting up my local test server again, and I seem to have forgotten how to successfully set up the LAMP server. I have installed the LAMP server via the tasksel command and I have configured the /var/www directory according to a guide I've found: After the LAMP server installation you will need write permissions to the /var/www directory. Follow these steps to configure permissions. Add your user to the www-data group sudo usermod -a -G www-data <your user name> now add the /var/www folder to the www-data group sudo chgrp -R www-data /var/www now give write permissions to the www-data group sudo chmod -R g+w /var/www So the logan user is now part of the www-data group and the file/folder permissions look like the output below: logan@computer:/var/www$ ls -lart total 172 -rw-r--r-- 1 www-data www-data 1997 Oct 23 2010 wp-links-opml.php -rw-r--r-- 1 www-data www-data 3177 Nov 1 2010 wp-config-sample.php -rw-r--r-- 1 www-data www-data 3700 Jan 8 2012 wp-trackback.php -rw-r--r-- 1 www-data www-data 271 Jan 8 2012 wp-blog-header.php -rw-r--r-- 1 www-data www-data 395 Jan 8 2012 index.php -rw-r--r-- 1 www-data www-data 3522 Apr 10 2012 wp-comments-post.php -rw-r--r-- 1 www-data www-data 19929 May 6 2012 license.txt -rw-r--r-- 1 www-data www-data 18219 Sep 11 08:27 wp-signup.php -rw-r--r-- 1 www-data www-data 2719 Sep 11 16:11 xmlrpc.php -rw-r--r-- 1 www-data www-data 2718 Sep 23 12:57 wp-cron.php -rw-r--r-- 1 www-data www-data 7723 Sep 25 01:26 wp-mail.php -rw-r--r-- 1 www-data www-data 2408 Oct 26 15:40 wp-load.php -rw-r--r-- 1 www-data www-data 4663 Nov 17 10:11 wp-activate.php -rw-r--r-- 1 www-data www-data 9899 Nov 22 04:52 wp-settings.php -rw-r--r-- 1 www-data www-data 9175 Nov 29 19:57 readme.html -rw-r--r-- 1 www-data www-data 29310 Nov 30 08:40 wp-login.php drwxr-xr-x 14 root root 4096 Dec 24 17:41 .. drwx------ 9 www-data www-data 4096 Dec 26 16:11 wp-admin drwx------ 9 www-data www-data 4096 Dec 26 16:11 wp-includes -rw-rw-rw- 1 www-data www-data 3448 Dec 26 16:14 wp-config.php drwxrwxr-x 5 www-data www-data 4096 Dec 26 16:14 . drwx------ 6 www-data www-data 4096 Dec 26 16:19 wp-content Things work perfectly at http://localhost, I can view the website fine. The thing with this is that I will be working on a plugin for WordPress and I don't want to deal with separate owners under the www directory to create or modify files/folders. When I give my user the ownership of /var/www recursively as logan:www-data I can create/modify files but cannot view http://localhost. I get a Forbidden error. I'm assuming that this is because of Apache's configuration? Which one is healthier or easier, considering this is just a local test website: configuring Apache to let the logan user view the website and chown /var/www to logan:logan so that I can create files etc. without any sudo commands, or configuring user groups so that the www-data user acts like my logan user? (I don't know how that's possible; maybe putting the www-data user under the logan group?) Please shed some light on this subject. All I want is to be able to create/modify files under my user and still be able to successfully view http://localhost. I appreciate the help!

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. After all, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost.  Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain. Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart. Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart. Gold raises the game substantially for business critical applications that can’t accept vulnerability to single points-of-failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times. Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement.
But they deliver substantial value for your most critical applications where downtime and data loss are not an option. The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection – the Not One Size Fits All title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial-in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another – whether it’s your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at: Oracle Maximum Availability Architecture - Webcast Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • Planning in the Cloud - For Real

    - by jmorourke
    One of the hottest topics at Oracle OpenWorld 2012 this week is “the cloud”. Over the past few years, Oracle has made major investments in cloud-based applications, including some acquisitions, and now has over 100 applications available through Oracle Cloud services. At OpenWorld this week, Oracle announced seven new offerings delivered via the Oracle Cloud services platform, one of which is the Oracle Planning and Budgeting Cloud Service. Based on Oracle Hyperion Planning, this service is the first of Oracle’s EPM applications to be offered in the Cloud. This solution is targeted to organizations that are struggling with spreadsheets or legacy planning and budgeting applications, want to deploy a world class solution for financial planning and budgeting, but are constrained by IT resources and capital budgets. With the Oracle Planning and Budgeting Cloud Service, organizations can fast track their way to world-class financial planning, budgeting and forecasting – at cloud speed, with no IT infrastructure investments and with minimal IT resources. Oracle Hyperion Planning is a market-leading budgeting, planning and forecasting application that is used by over 3,300 organizations worldwide. Prior to this announcement, Oracle Hyperion Planning was only offered on a license and maintenance basis. It could be deployed on-premise, or hosted through Oracle On-Demand or third party hosting partners. With this announcement, Oracle’s market-leading Hyperion Planning application will be available as a Cloud Service and through subscription-based pricing. This lowers the cost of entry and deployment for new customers and provides a scalable environment to support future growth. With this announcement, Oracle is the first major vendor to offer one of its core EPM applications as a cloud-based service. Other major vendors have recently announced cloud-based EPM solutions, but these are only BI dashboards delivered via a cloud platform. With this announcement Oracle is providing market-leading, world-class financial budgeting, planning and forecasting as a cloud service, with the following advantages: · Subscription-based pricing · Available standalone or as an extension to Oracle Fusion Financials Cloud Service · Implementation services available from Oracle and the Oracle Partner Network · High scalability and performance · Integrated financial reporting and MS Office interface · Seamless integration with Oracle and non-Oracle transactional applications · Provides customers with more options for their planning and budgeting deployment vs. strictly on-premise or cloud-only solution providers. The OpenWorld announcement of Oracle Planning and Budgeting Cloud Service is a preview announcement, with controlled availability expected in calendar year 2012. For more information, check out the links below: Press Release Web site If you have any questions or need additional information, please feel free to contact me at [email protected].

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature and I've talked about the combination in the past.  Solaris 11.1 adds a fully supported method of storing zones in their own ZFS pool using shared storage, so let's update things a little and put all three parts together. When using an iSCSI (or other supported shared storage target) for a Zone we can either let the Zones framework set up the ZFS pool or we can do it manually beforehand and tell the Zones framework to use the one we made earlier.  To enable encryption we have to take the second path so that we can set up the pool with encryption before we start to install the zones on it. We start by configuring the zone and specifying a rootzpool resource: # zonecfg -z eizoss Use 'create' to begin configuring a new zone. zonecfg:eizoss> create create: Using system default template 'SYSdefault' zonecfg:eizoss> set zonepath=/zones/eizoss zonecfg:eizoss> set file-mac-profile=fixed-configuration zonecfg:eizoss> add rootzpool zonecfg:eizoss:rootzpool> add storage \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 zonecfg:eizoss:rootzpool> end zonecfg:eizoss> verify zonecfg:eizoss> commit zonecfg:eizoss> Now let's create the pool and specify encryption: # suriadm map \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 PROPERTY VALUE mapped-dev /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # echo "zfscrypto" > /zones/p # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \ /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # zpool export eizoss Note that the keysource example above is just for this example; realistically you should probably use an Oracle Key Manager or some other better key storage, but that isn't the purpose of this example.  Note however that it does need to be one of file://, https://, or pkcs11:, and not prompt for the key location.  Also note that we exported the newly created pool.  The name we used here doesn't actually matter because it will get set properly on import anyway. So let's go ahead and do our install: zoneadm -z eizoss install -x force-zpool-import Configured zone storage resource(s) from: iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 Imported zone zpool: eizoss_rpool Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install Image: Preparing at /zones/eizoss/root. AI Manifest: /tmp/manifest.xml.ujaq54 SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: eizoss Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/solaris/release/ Please review the licenses for the following packages post-install: consolidation/osnet/osnet-incorporation (automatically accepted, not displayed) Package licenses may be viewed using the command: pkg info --license <pkg_fmri> DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 187/187 33575/33575 227.0/227.0 384k/s PHASE ITEMS Installing new actions 47449/47449 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 929.606 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install That was really all we had to do; when the install is done, boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on.  Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone) or he can create encrypted datasets inside the zone that use keys of his own choosing; the output below shows the two cases: rpool is inheriting the key material from the global zone (note we can see the value of the keysource property but we don't use it inside the zone, nor does that path need to be (or is) accessible inside the zone), whereas rpool/export/home/bob has set keysource locally. # zfs get encryption,keysource rpool rpool/export/home/bob NAME PROPERTY VALUE SOURCE rpool encryption on inherited from $globalzone rpool keysource passphrase,file:///zones/p inherited from $globalzone rpool/export/home/bob encryption on local rpool/export/home/bob keysource passphrase,prompt local  

    Read the article

  • Seeking advice on tools and technology for my new game [closed]

    - by k.k. slider
    I'm a C# developer who has been programming a game in my spare time using XNA and Visual Studio. The game's logic is mostly done and I've completed a prototype that has most of the functionality of (what I envision to be) the final game. However, having heard about the uncertain future and (possibly) limited audience for XNA games, I'm looking to switch platforms... but I don't know what technology would best suit my needs. Below are some specifics about my game and what exactly I'm looking for, if you're interested: The game is a 2D turn-based tactical RPG (strategy game) for two players. It is a basic sprite and tile based game with animations and sound. 3D capabilities are not necessary. I'd like to allow players to compete with others online, and have a basic ranking/matchmaking system. I will probably need something that can interact with a server and a database (the game is turn-based and has no RNG, so cheating would be easy to detect even if most computation is done client-side and minimal data is sent to the server). Ideally, I would be able to release an early version of the game and have people give feedback as I develop additional features (similar to Minecraft). I'd prefer to have a way to release periodic updates to the game instead of releasing an absolute final product. To reach the widest possible audience, I'd prefer technology that allows me to release on PC, Android, iOS, and (maybe) Mac. This is a game with simple mouse inputs which can fit on a mobile touch screen. The game should be monetizable. If I find success with this game, then I may consider becoming a full-time indie game developer. I have several other game ideas and have learned quite a bit from my first attempt at game development. My first thought was an F2P/microtransaction model, but I'm open to other suggestions. Language isn't a primary concern of mine, since I have a decent amount of experience using several languages to program large projects. I'm willing to spend money (e.g. on a developer's license), but the more expensive it gets, the more hesitant I am to use it. I've looked into the following solutions... there are a LOT of tools out there... if anyone has experience with any of these and would like to recommend/reject any of them, it would be helpful. C#/.NET (XNA/MonoGame/SDL/SlimDX/Xamarin/ExEn/ANX?) HTML5/JS (AppMobi/PhoneGap/Marmalade/FlashCanvas/Cordova/libRocket?) Python (Pyglet/Pygame/Kivy?) Java (JavaFX/libGDX?) Unity/Construct 2/Cocos2D/NME/Corona/other game creation software? I'd like something that can do 2D and isn't limited by being too high-level. Other languages (Lua/LOVE? Moai?) Thanks for answering this rather long and tedious question...

    Read the article

  • With a little effort you can “SEMI”-protect your C# assemblies with obfuscation.

    - by mbcrump
    This method will not protect your assemblies from an experienced hacker. Every day we see new keygens, cracks, and serials being released that contain ways around the copy protection of small companies. This is a simple process that will make a lot of hackers quit, because so many others use nothing. If you were a thief, would you pick the house that has security signs and an alarm, or the one that has nothing? So, to begin: obfuscation is the concealment of meaning in communication, making it confusing and harder to interpret. Let’s begin by looking at the cartoon below:     You are probably familiar with the term, and probably ignored it the way most programmers ignore user security. Today, I’m going to show you reflection and a way to obfuscate your code against it. Please understand that I am aware of ways around this, but I believe some security is better than no security.  In the sample program below, the code appears exactly as it does in Visual Studio. When the program runs, you get either a true or false in a console window. Sample Program. using System; using System.Diagnostics; using System.Linq;   namespace ObfuscateMe {     class Program     {                static void Main(string[] args)         {               Console.WriteLine(IsProcessOpen("notepad")); //Returns a True or False depending if you have notepad running.             Console.ReadLine();         }             public static bool IsProcessOpen(string name)         {             return Process.GetProcesses().Any(clsProcess => clsProcess.ProcessName.Contains(name));         }     } }   Pretend that this is a commercial application. The hacker will only have the executable and maybe a few config files, etc. After reviewing the executable, he can determine if it was produced in .NET by examining the file in ILDASM or Redgate’s Reflector. We are going to examine the file using RedGate’s Reflector. Upon launch, we simply drag/drop the exe over to the application. We have the following for the Main method:   and for the IsProcessOpen method:     Without any other knowledge as to how this works, the hacker could export the exe and get a VS project build, or copy this code in, and our application would run. Using Reflector output. using System; using System.Diagnostics; using System.Linq;   namespace ObfuscateMe {     class Program     {                static void Main(string[] args)         {               Console.WriteLine(IsProcessOpen("notepad"));             Console.ReadLine();         }             public static bool IsProcessOpen(string name)         {             return Process.GetProcesses().Any<Process>(delegate(Process clsProcess)             {                 return clsProcess.ProcessName.Contains(name);             });         }       } } The code is not identical, but returns the same value. At this point, with a little bit of effort you could prevent the hacker from reverse engineering your code so quickly by using Eazfuscator.NET. Eazfuscator.NET is just one of many programs built for this. Visual Studio ships with a community version of Dotfuscator. So download and load Eazfuscator.NET and drag/drop your executable/project into the window. It will work for a few minutes, depending on whether you have a quad-core or not. After it finishes, open the executable in RedGate Reflector and you will get the following: Main after obfuscation, and the IsProcessOpen method after obfuscation: As you can see from the jumbled characters, it is not as easy as the first example. 
I am aware of methods around this, but it takes more effort and unless the hacker is up for the challenge, they will just pick another program. This is also helpful if you are a consultant and make clients pay a yearly license fee. This would prevent the average software developer from jumping into your security routine after you have left. I hope this article helped someone. If you have any feedback, please leave it in the comments below.

    Read the article

  • Inside Red Gate - Exercising Externally

    - by simonc
    Over the next few weeks, we'll be performing experiments on SmartAssembly to confirm or refute various hypotheses we have about how people use the product, what is stopping them from using it to its full extent, and what we can change to make it more useful and easier to use. Some of these experiments can be done within the team, some within Red Gate, and some need to be done on external users. External testing Some external testing can be done by standard usability tests and surveys, however, there are some hypotheses that can only be tested by building a version of SmartAssembly with some things in the UI or implementation changed. We'll then be able to look at how the experimental build is used compared to the 'mainline' build, which forms our baseline or control group, and use this data to confirm or refute the relevant hypotheses. However, there are several issues we need to consider before running experiments using separate builds: Ideally, the user wouldn't know they're running an experimental SmartAssembly. We don't want users to use the experimental build like it's an experimental build, we want them to use it like it's the real mainline build. Only then will we get valid, useful, and informative data concerning our hypotheses. There's no point running the experiments if we can't find out what happens after the download. To confirm or refute some of our hypotheses, we need to find out how the tool is used once it is installed. Fortunately, we've applied feature usage reporting to the SmartAssembly codebase itself to provide us with that information. Of course, this then makes the experimental data conditional on the user agreeing to send that data back to us in the first place. Unfortunately, even though this does limit the amount of useful data we'll be getting back, and possibly skew the data, there's not much we can do about this; we don't collect feature usage data without the user's consent. Looks like we'll simply have to live with this. What if the user tries to buy the experiment? This is something that isn't really covered by the Lean Startup book; how do you support users who give you money for an experiment? If the experiment is a new feature, and the user buys a license for SmartAssembly based on that feature, then what do we do if we later decide to pivot & scrap that feature? We've either got to spend time and money bringing that feature up to production quality and into the mainline anyway, or we've got disgruntled customers. Either way is bad. Again, there's not really any good solution to this. Similarly, what if we've removed some features for an experiment and a potential new user downloads the experimental build? (As I said above, there's no indication the build is an experimental build, as we want to see what users really do with it). The crucial feature they need is missing, causing a bad trial experience, a lost potential customer, and a lost chance to help the customer with their problem. Again, this is something not really covered by the Lean Startup book, and something that doesn't have a good solution. So, some tricky issues there, not all of them with nice easy answers. Turns out the practicalities of running Lean Startup experiments are more complicated than they first seem! Cross posted from Simple Talk.

    Read the article

  • SSMS Tools Pack 2.7 is released. New website, improved licensing and features.

    - by Mladen Prajdic
    New website: Nice, isn't it? Cleaner, simpler, better looking, and more modern. If you have any suggestions for further improvements I'd be glad to hear them.

    Simpler licensing: With SSMS Tools Pack 2.7 the licensing is finally where it should be. It is now based on the activate/deactivate model. This way you can move a license from machine to machine with a simple deactivation on one and reactivation on another machine. Much better, no? Because of very good feedback I have added an option for 6 machines and lowered the 4 machines option to 3 machines. This should make it much simpler for you to choose the right option for yourself.

    Improved features: Version 2.5.3 was already extremely stable and 2.7 continues that tradition. Because of that I could fully focus on features, and that is why 3.0 will rock even more than 2.7! ;) In version 2.7 I have addressed quite a few improvements you had been requesting for a while now.

    SQL History: This is probably the biggest time saver out there, so it's only fair that it gets a few important updates. If you have an existing .sql file opened, the Window Content History now saves your code to that existing file and also makes a backup in the SQL History log default location. Search is still done through the SQL History log, but the Tab Sessions Restore opens your existing .sql file. This way you don't have to remember to save your existing files yourself anymore. A bug where you couldn't search properly after copying the log files to a new location has been fixed. Unfortunately this removed the option to filter a search by the time component; the smallest search interval is now one day. The SSMS Tools Pack now remembers the visibility of the Current Window History window when you exit SSMS.

    SQL Snippets: You can now set the position of the cursor in your snippets by placing {C} somewhere in your snippet. It's a small improvement but can be a huge time saver, since you don't have to move through the snippet to the desired location anymore.

    Run script on multiple databases: Database choices can now be saved with a name and then loaded again next time. You can also choose to run the script in a new window for each chosen database.

    Search through grid results: You can now jump to the previous/next search result with the Prev/Next control inside the search window. This is extremely useful if you have a large result set; it saves you the scrolling.

    CRUD generator: Four new variables have been added: |CurrentDate| writes the current date in format yyyy-MM-dd to your script, |CurrentTime| writes the current time in 24h format HH:mm:ss to your script, |CurrentWinUser| writes the current Windows logged-on user to your script, and |CurrentSqlUser| writes the current SQL logged-on login to your script. This was quite a requested feature, so if you have any other ideas for extra variables, do let me know.

    That's about it. I hope you're going to enjoy this version as much as the previous ones. Have fun!
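
    As a rough illustration of what those CRUD generator variables expand to, here is a small Python sketch. The token names come from the list above, but the substitution logic is only a guess at the behaviour, not the actual SSMS Tools Pack implementation:

        # Hypothetical sketch of how the CRUD generator tokens could be expanded.
        # Token names are taken from the post; everything else is illustrative only.
        from datetime import datetime
        import getpass

        def expand_tokens(template, sql_login="<unknown>"):
            now = datetime.now()
            replacements = {
                "|CurrentDate|":    now.strftime("%Y-%m-%d"),   # yyyy-MM-dd
                "|CurrentTime|":    now.strftime("%H:%M:%S"),   # 24h HH:mm:ss
                "|CurrentWinUser|": getpass.getuser(),          # logged-on Windows user
                "|CurrentSqlUser|": sql_login,                  # logged-on SQL login
            }
            for token, value in replacements.items():
                template = template.replace(token, value)
            return template

        print expand_tokens("-- generated on |CurrentDate| at |CurrentTime| by |CurrentWinUser|")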

    Read the article

  • Upgrading to 9.2 - Info You Can Use (part 1)

    - by John Webb
    Rebekah Jackson joins our blog with a series of helpful hints on planning your upgrade to PeopleSoft 9.2. Find Features & Capabilities There are many ways that you might learn about new features and capabilities within our releases, but if you aren’t sure where to start or how best to go about it, we recommend: Go to www.peoplesoftinfo.com Select the product line you are interested in, and go to the ‘Release Content’ tab Use the Video Feature Overviews (VFOs) on YouTube and the Cumulative Feature Overview (CFO) tool to find features and functions. The VFOs are brief recordings that summarize some of our most popular capabilities. These recordings are great tools for learning about new features, or helping others to visualize the value they can bring to your organization. The VFOs focus on some of our highest value and most compelling new capabilities. We also provide summarized ‘Why Upgrade to 9.2’ VFOs for HCM, Financials, and Supply Chain. The CFO is a spreadsheet based tool that allows you to select the release you are currently on, and compare it to the new release. It will return the list of all new features and capabilities, by product. You can browse the full list and / or highlight areas that look particularly interesting. Once you have a list of features by product, use the Release Value Proposition, Pre-Release Notes, and the Release Notes documents to get more details on and supporting value statements about why those features will be helpful. Gather additional data and supporting information, including: Go to the Product Data Sheets tab, and review the respective data sheets. These summarize the capabilities in the product, and provide succinct value statements for the product and capabilities. The PeopleSoft 9.2 Upgrade page, which has many helpful resources. Important Notes: - We recommend that you go through the above steps for the application areas of interest, as well as for PeopleTools. There are many areas in PeopleTools 8.53 and the 9.2 application releases that combine technical and functional capabilities to deliver transformative value. - We also recommend that you review the Portal Solutions content. With your license to PeopleSoft applications, you have access to many of the most powerful capabilities within the Interaction Hub. - If you have recently upgraded to PeopleSoft 9.1, and an immediate upgrade to 9.2 is simply not realistic, you can apply the same approaches described here to find untapped capabilities in your current products. Many of the features in 9.2 were delivered first in our 9.1 Feature Packs. To find the Release Value Proposition, Pre-Release Notes, and Release Notes for these releases, search on ‘PeopleSoft 9.1 Documentation Home Page’ on My Oracle Support, and select your desired product area.

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, with all the growth in virtual machines, hosted machines, and hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone put it to me once, if you were starting a business right now, would you bother setting up an Exchange server to manage your email or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy. With all this momentum toward having external services manage more and more of the infrastructure that’s not business unique, why would you burn up a server and license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That’s what we’re working on here at Red Gate. Right now it’s a private test, but we’re growing it and developing it and it’ll be going to a public beta, probably (hopefully) this year. I’m running it on my machines right now. The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It’s actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that’s a standard part of alerting. Your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting, keeping your data online and available, providing you with access to the information about how your servers are behaving, everything. Maybe it’s just me, but I’m really excited by this. I think we’re getting to a place where we can really help the small and medium-sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins can finally set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you’re ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

  • Create Custom Speech Bubbles in Silverlight.

    - by mbcrump
    I had a reader email me the following question: “How do you create Speech Bubbles in Silverlight/WPF without adding any extra .dlls?” Right off the bat, I know at least two ways to create the speech bubbles that look just like the ones in comic books: using the Callout Shapes included with Blend 4, or using the free 3rd party control named FreeBubbles (I used this before Blend 4). Unfortunately, we cannot use either of these as they will both add extra .dll’s to the project. So why wouldn’t you want to use one of those? I can think of a few reasons: You do not want to increase the size of your .XAP by including extra .dll’s. You do not have Expression Blend or the license to use the .dll’s. You want a custom Speech Bubble that is not included in the four “Callout” Controls with Blend. Instead of using one of these methods, we will create a Speech Bubble in Blend 4 using a Path element and a TextBlock. Before we get started, let’s look at the Callout Shapes included with Blend 4. Using Blend 4 you can simply drag/drop these controls onto your Silverlight application and you are ready to go. We can create all of these Speech Bubbles and even some of the modern bubbles used in recent comic books. Let’s get started. Start up Expression Blend 4 and select the Pen Tool. On the Art Board, start connecting the dots like I did below. You can add a color if you wish. Let’s go ahead and add some text to the Speech Bubble. Drag a TextBlock from the Panel and put it directly inside the Speech Bubble. Go ahead and set the TextAlignment to Center for the TextBlock, and give it some text. At this point, you could go ahead and create a user control if you want to reuse the Speech Bubble you created. Select both the Path and the TextBlock by clicking them while holding down CTRL, and then right-click them. Select Make Into User Control. Give it a name and then Build your project. Let’s create another one using the Ellipse for the older comic book style of Speech Bubbles. Drag an Ellipse to the Artboard and give it a color. Now, grab the Pen and drag a triangle like I did below. Simply drag it over a corner of the Ellipse. Select Combine then Unite and you will have a Path. At this point, you can go ahead and add a TextBlock like we did earlier. Let’s go ahead and create a rounded rectangle one by adding a Rectangle to the Artboard. Go ahead and set the RadiusX and RadiusY to 25 to give it rounded edges. Let’s create another path and drag it right on top of our rounded rectangle like we did earlier. Select Combine then Unite and you will have a Path. At this point, you can go ahead and add a TextBlock like we did earlier. So let’s look at what we’ve created today using the path element and TextBlock. As you can tell, it required more work but meets the requirements. This was actually fun to do and I encourage anyone that visits my blog to send in requests like this.

    Read the article

  • The new direction of the gaming industry

    - by raccoon_tim
    Just recently I read a great blog post by David Darling, the founder of Codemasters: http://www.develop-online.net/blog/347/Jurassic-consoles-could-become-extinct. In the blog post he talks about how traditional retail games are experiencing a downfall thanks to the increasing popularity of digital distribution. I personally think of retail games as being relics of the past. It does not really make much sense to still keep distributing boxed games when the same game can be elegantly downloaded and updated over the air through a digital distribution channel. The world is not all rainbows, however. One big issue with mixing digital distribution with boxed retail games is that resellers will not condone you selling your game for 10€ digitally while they’re selling the same game for 70€. The only way to get around this issue is to move to full digital distribution. This has the added benefit of minimizing piracy as the game can be tightly bound to the service you downloaded the game from. Many players are, however, complaining about not being able to play the games offline. Having games tightly bound to the internet is a problem when games are bought from a retailer as we tend to expect that once we have the product we can use it anywhere because we physically own it. The truth is that we don’t actually own the product. Instead, the typical EULA actually states that we only have a license to use the product. We’re not, for instance, allowed to disassemble the product, which the owner is indeed permitted to do. Digital distribution allows us to provide games as services, instead of selling them as standalone products. This means that for a service to work you have to be connected to the internet but you still have the same rights to use the product. It’s really straightforward; if you downloaded a client from the internet you are expected to have an internet connection so you’re able to connect to the server. A game distributed digitally that is built using a client-server architecture has the added benefit of allowing you to play anywhere as long as you have the client installed and you are able to log in with your user information. Your save games can be backed up and your game can continue anywhere. Another development we’re seeing in the gaming industry is the increasing popularity of free-to-play games. These are games that let you play for free but allow you to boost your gaming experience with real-world money. The nature of these games is that players are constantly rewarded with new content, the game can evolve according to their way of playing, and their wishes can be incorporated into the product. Free-to-play games can quickly gain a large player base, and monetization is done by offering players valuable things to buy, making their gaming experience more fun. I am personally very excited about free-to-play games as it’s possible to start building the game together with your players and there is no need to work on the game for 5 years from start to finish and only then see if it’s actually something the players like. This is a typical problem with big movie-like retail games, and recent news about Radical Entertainment practically closing its doors paints a clear picture of what can happen when the risk does not pay off: http://news.teamxbox.com/xbox/25874/Prototype-Developer-Radical-Entertainment-Closes/.

    Read the article

  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm, and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step #4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them. Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct?

    Step #1 - Create certificate to be used. I've configured a certificate to use with RD Web Access. The certificate is stored within the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. I found, by letting RD Web Access generate its own certificate, that the following properties are required: Enhanced Key Usage: Server Authentication and Client Authentication (this may not be required, but the self-signed certificate includes it); Key Usage: Digital Signature and Key Agreement; Subject Alternative Name: DNS Name=domain.com.

    Detour about self-signed certificate generation: As a quick detour, I was able to work around a problem with creating self-signed certificates using PowerShell. The documentation for the New-RDCertificate cmdlet gives the following example:

        PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force
        PS C:\> New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx"

    Typing this into the shell will result in an error message claiming that a function, Get-Server, cannot be found. Prior to using New-RDCertificate, you must import the RemoteDesktop module with Import-Module RemoteDesktop.

    Step #2 - Observe out-of-box behavior. The first time you visit the Deployment Properties dialog box, by navigating to Server Manager - Remote Desktop Services - Collections and selecting "Edit Deployment Properties" from the "TASKS" dropdown list in the "COLLECTIONS" grouping, the window is misleading because the level field is listed as "Not Configured". If I understand correctly, all three of the role services are using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website; the certificate being used also appears in the Certificates MMC.

    Step #3 - Assign new certificate. The Deployment Properties dialog box will allow me to select my existing certificate. The certificate must be placed within the local computer's Certificates MMC in the "Personal" certificate store. The private key will need to be exportable, and you will need to provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done the GUI will indicate that it is ready to accept the new configuration. Once I click the "Apply" button, the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time; there is no certificate error.

    Step #4 - The GUI fails to maintain its state. If the GUI is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used; I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate, this window will update to an "Untrusted" level. This setting will then be maintained through the opening and closing of the Deployment Properties dialog box. Is there anything I can do to make my settings stick? I feel like something is wrong when the GUI claims I haven't fully configured certificates.

    Read the article

  • WiX/Windows Installer: Re-install to a new folder

    - by vitalyval
    1. I am using WiX to create an installer and would like to implement the following behaviour: if a user launches the MSI installer for a product that is already installed, the wizard works similarly to a pure (first-time) installation, with the exception of some things (e.g. the license agreement screen is omitted). The wizard should allow, for example, changing the installation folder, selecting whether to place a desktop shortcut, and so on. I tried to do:

        <Publish Event="ReinstallMode" Value="amus"><![CDATA[INSTALL_MODE = "Change"]]></Publish>
        <Publish Event="Reinstall" Value="ALL"><![CDATA[INSTALL_MODE = "Change"]]></Publish>

    But after installation completes, the product is in the same folder where it was installed the first time, and the desktop icon is in the same state as it was after the first-time install. MSDN says: "Do not attempt to change the target directory path if some components that use the path are already installed for the current user or for a different user". Is there a way to re-install into another folder and add/remove the desktop icon during re-install? 2. Is it normal to use the same KeyPath for several components, for example the same registry values for the Desktop and Programs menu shortcuts? MSDN says: "Two components cannot share the same key path value", but compiling and verifying goes OK, and I did not discover problems using the same key paths.

    Read the article

  • Python ImportError when executing 'import.py', but not when executing 'python import.py'

    - by Martin Del Vecchio
    I am running Cygwin Python version 2.5.2. I have a three-line source file, called import.py:

        #!/usr/bin/python
        import xml.etree.ElementTree as ET
        print "Success!"

    When I execute "python import.py", it works:

        C:\Temp>python import.py
        Success!

    When I run the python interpreter and type the commands, it works:

        C:\Temp>python
        Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
        [GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> #!/usr/bin/python
        ... import xml.etree.ElementTree as ET
        >>> print "Success!"
        Success!
        >>>

    But when I execute "import.py", it does not work:

        C:\Temp>which python
        /usr/bin/python
        C:\Temp>import.py
        Traceback (most recent call last):
          File "C:\Temp\import.py", line 2, in ?
            import xml.etree.ElementTree as ET
        ImportError: No module named etree.ElementTree

    When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux. And it works fine on Linux. Any ideas? Thanks.
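
    One way to narrow this down (not part of the original question) is to check which interpreter the Windows file association actually launches when the script is run as "import.py" rather than "python import.py". It may not be the Cygwin /usr/bin/python at all, but an older or different Python without xml.etree, which was only added to the standard library in Python 2.5. A minimal diagnostic sketch, assuming a throwaway script name of diag.py:

        #!/usr/bin/python
        # diag.py - report which Python interpreter is running and where it searches for modules.
        import sys

        print "Executable:", sys.executable   # interpreter that actually ran this script
        print "Version:", sys.version         # full version string
        print "Module search path:"
        for entry in sys.path:                # directories searched for xml.etree and friends
            print "  ", entry

    Running it both ways ("python diag.py" and plain "diag.py") and comparing the output should show whether the two invocations use the same interpreter.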

    Read the article

  • Does Subversion have an analogue to VSS's links?

    - by bta
    I am migrating a Visual SourceSafe code repository to Subversion and I am running into a problem. Here is a simplified layout of our current source code tree (in VSS):

        project_root\
        |-libs\
        |-tools\
        |-arch_1\
        | |-include
        | |-source
        |-arch_2\
          |-include
          |-source

    My problem is in our two arch_ folders. Each arch_ folder will be built for a different hardware architecture, but the contents of the two folders are practically identical. The files in arch_2 are merely VSS links to the files in arch_1, with only a small handful of exceptions. Work is generally checked into and out of the arch_1 folder, and the VSS links make sure that any code checked in here is updated in the arch_2 folder as well. Moving to Subversion, is there anything that will behave like VSS's links? That is, is there a way to have two files in separate folders magically associated with one another such that they will always be in sync with each other (changes to one will affect the other as well)? Note: I know the correct answer here is to fix the build system. The build system on this project was pieced together roughly a decade ago, back when our compiler/build system wasn't intelligent enough to compile the same folder full of source code for two different architectures. Thanks to make and updated compilers, we can re-write the build system to eliminate this dependency on two parallel source folders. However, this will take time that we don't have at the moment (we are losing our license to our VSS server and are being forced to migrate on rather short notice). I am hoping to find a Subversion solution to this problem because at the moment, our time would be much better spent making the migration run smoothly than re-writing the build system (which is next on my to-do list!). Thank you for your help!

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >