Search Results



  • Adding an Admin user to an ASP.NET MVC 4 application using a single drop-in file

    - by Jon Galloway
    I'm working on an ASP.NET MVC 4 tutorial and wanted to set it up so just dropping a file in App_Start would create a user named "Owner" and assign them to the "Administrator" role (more explanation at the end if you're interested). There are reasons why this wouldn't fit into most application scenarios: It's not efficient, as it checks for (and creates, if necessary) the user every time the app starts up The username, password, and role name are hardcoded in the app (although they could be pulled from config) Automatically creating an administrative account in code (without user interaction) could lead to obvious security issues if the user isn't informed However, with some modifications it might be more broadly useful - e.g. creating a test user with limited privileges, ensuring a required account isn't accidentally deleted, or - as in my case - setting up an account for demonstration or tutorial purposes. Challenge #1: Running on startup without requiring the user to install or configure anything I wanted to see if this could be done just by having the user drop a file into the App_Start folder and go. No copying code into Global.asax.cs, no installing addition NuGet packages, etc. That may not be the best approach - perhaps a NuGet package with a dependency on WebActivator would be better - but I wanted to see if this was possible and see if it offered the best experience. Fortunately ASP.NET 4 and later provide a PreApplicationStartMethod attribute which allows you to register a method which will run when the application starts up. You drop this attribute in your application and give it two parameters: a method name and the type that contains it. I created a static class named PreApplicationTasks with a static method named, then dropped this attribute in it: [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")] That's it. One small gotcha: the namespace can be a problem with assembly attributes. I decided my class didn't need a namespace. Challenge #2: Only one PreApplicationStartMethod per assembly In .NET 4, the PreApplicationStartMethod is marked as AllMultiple=false, so you can only have one PreApplicationStartMethod per assembly. This was fixed in .NET 4.5, as noted by Jon Skeet, so you can have as many PreApplicationStartMethods as you want (allowing you to keep your users waiting for the application to start indefinitely!). The WebActivator NuGet package solves the multiple instance problem if you're in .NET 4 - it registers as a PreApplicationStartMethod, then calls any methods you've indicated using [assembly: WebActivator.PreApplicationStartMethod(type, method)]. David Ebbo blogged about that here:  Light up your NuGets with startup code and WebActivator. In my scenario (bootstrapping a beginner level tutorial) I decided not to worry about this and stick with PreApplicationStartMethod. Challenge #3: PreApplicationStartMethod kicks in before configuration has been read This is by design, as Phil explains. It allows you to make changes that need to happen very early in the pipeline, well before Application_Start. That's fine in some cases, but it caused me problems when trying to add users, since the Membership Provider configuration hadn't yet been read - I got an exception stating that "Default Membership Provider could not be found." The solution here is to run code that requires configuration in a PostApplicationStart method. But how to do that? 
Challenge #4: Getting PostApplicationStartMethod without requiring WebActivator The WebActivator NuGet package, among other things, provides a PostApplicationStartMethod attribute. That's generally how I'd recommend running code that needs to happen after Application_Start: [assembly: WebActivator.PostApplicationStartMethod(typeof(TestLibrary.MyStartupCode), "CallMeAfterAppStart")] This works well, but I wanted to see if this would be possible without WebActivator. Hmm. Well, wait a minute - WebActivator works in .NET 4, so clearly it's registering and calling PostApplicationStartup tasks somehow. Off to the source code! Sure enough, there's even a handy comment in ActivationManager.cs which shows where PostApplicationStartup tasks are being registered: public static void Run() { if (!_hasInited) { RunPreStartMethods(); // Register our module to handle any Post Start methods. But outside of ASP.NET, just run them now if (HostingEnvironment.IsHosted) { Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility.RegisterModule(typeof(StartMethodCallingModule)); } else { RunPostStartMethods(); } _hasInited = true; } } Excellent. Hey, that DynamicModuleUtility seems familiar... Sure enough, K. Scott Allen mentioned it on his blog last year. This is really slick - a PreApplicationStartMethod can register a new HttpModule in code. Modules are run right after application startup, so that's a perfect time to do any startup stuff that requires configuration to be read. As K. Scott says, it's this easy: using System; using System.Web; using Microsoft.Web.Infrastructure.DynamicModuleHelper; [assembly:PreApplicationStartMethod(typeof(MyAppStart), "Start")] public class CoolModule : IHttpModule { // implementation not important // imagine something cool here } public static class MyAppStart { public static void Start() { DynamicModuleUtility.RegisterModule(typeof(CoolModule)); } } Challenge #5: Cooperating with SimpleMembership The ASP.NET MVC Internet template includes SimpleMembership. SimpleMembership is a big improvement over traditional ASP.NET Membership. For one thing, rather than forcing a database schema, it can work with your database schema. In the MVC 4 Internet template case, it uses Entity Framework Code First to define the user model. SimpleMembership bootstrap includes a call to InitializeDatabaseConnection, and I want to play nice with that. There's a new [InitializeSimpleMembership] attribute on the AccountController, which calls \Filters\InitializeSimpleMembershipAttribute.cs::OnActionExecuting(). That comment in that method that says "Ensure ASP.NET Simple Membership is initialized only once per app start" which sounds like good advice. I figured the best thing would be to call that directly: new Mvc4SampleApplication.Filters.InitializeSimpleMembershipAttribute().OnActionExecuting(null); I'm not 100% happy with this - in fact, it's my least favorite part of this solution. There are two problems - first, directly calling a method on a filter, while legal, seems odd. Worse, though, the Filter lives in the application's namespace, which means that this code no longer works well as a generic drop-in. The simplest workaround would be to duplicate the relevant SimpleMembership initialization code into my startup code, but I'd rather not. I'm interested in your suggestions here. 
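The post's "simplest workaround" for Challenge #5 - duplicating the relevant SimpleMembership initialization instead of calling the application's filter - could look something like the sketch below. This is only an illustration, not the tutorial's code: the connection string name and the UserProfile table/column names are the MVC 4 Internet template defaults and would need to match your own project.

```csharp
using System.Threading;
using WebMatrix.WebData;

// A minimal sketch of the "duplicate the initialization" workaround, so the
// drop-in file does not depend on the application's Filters namespace.
// "DefaultConnection", "UserProfile", "UserId" and "UserName" are the MVC 4
// Internet template defaults - adjust them to your project.
public static class SimpleMembershipBootstrapper
{
    private static int _initialized;

    public static void EnsureInitialized()
    {
        // Only attempt the one-time initialization once per app domain.
        if (Interlocked.Exchange(ref _initialized, 1) == 0 && !WebSecurity.Initialized)
        {
            WebSecurity.InitializeDatabaseConnection(
                "DefaultConnection",   // connection string name (template default)
                "UserProfile",         // user table (template default)
                "UserId",              // user id column
                "UserName",            // user name column
                autoCreateTables: true);
        }
    }
}
```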
Challenge #6: Module Init methods are called more than once When debugging, I noticed (and remembered) that the Init method may be called more than once per page request - it's run once per instance in the app pool, and an individual page request can cause multiple resource requests to the server. While SimpleMembership does have internal checks to prevent duplicate user or role entries, I'd rather not cause or handle those exceptions. So here's the standard single-use lock in the Module's init method: void IHttpModule.Init(HttpApplication context) { lock (lockObject) { if (!initialized) { //Do stuff } initialized = true; } } Putting it all together With all of that out of the way, here's the code I came up with: using Mvc4SampleApplication.Filters; using System.Web; using System.Web.Security; using WebMatrix.WebData; [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")] public static class PreApplicationTasks { public static void Initializer() { Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility .RegisterModule(typeof(UserInitializationModule)); } } public class UserInitializationModule : IHttpModule { private static bool initialized; private static object lockObject = new object(); private const string _username = "Owner"; private const string _password = "p@ssword123"; private const string _role = "Administrator"; void IHttpModule.Init(HttpApplication context) { lock (lockObject) { if (!initialized) { new InitializeSimpleMembershipAttribute().OnActionExecuting(null); if (!WebSecurity.UserExists(_username)) WebSecurity.CreateUserAndAccount(_username, _password); if (!Roles.RoleExists(_role)) Roles.CreateRole(_role); if (!Roles.IsUserInRole(_username, _role)) Roles.AddUserToRole(_username, _role); } initialized = true; } } void IHttpModule.Dispose() { } } The Verdict: Is this a good thing? Maybe. I think you'll agree that the journey was undoubtedly worthwhile, as it took us through some of the finer points of hooking into application startup, integrating with membership, and understanding why the WebActivator NuGet package is so useful Will I use this in the tutorial? I'm leaning towards no - I think a NuGet package with a dependency on WebActivator might work better: It's a little more clear what's going on Installing a NuGet package might be a little less error prone than copying a file A novice user could uninstall the package when complete It's a good introduction to NuGet, which is a good thing for beginners to see This code either requires either duplicating a little code from that filter or modifying the file to use the namespace Honestly I'm undecided at this point, but I'm glad that I can weigh the options. If you're interested: Why are you doing this? I'm updating the MVC Music Store tutorial to ASP.NET MVC 4, taking advantage of a lot of new ASP.NET MVC 4 features and trying to simplify areas that are giving people trouble. One change that addresses both needs us using the new OAuth support for membership as much as possible - it's a great new feature from an application perspective, and we get a fair amount of beginners struggling with setting up membership on a variety of database and development setups, which is a distraction from the focus of the tutorial - learning ASP.NET MVC. Side note: Thanks to some great help from Rick Anderson, we had a draft of the tutorial that was looking pretty good earlier this summer, but there were enough changes in ASP.NET MVC 4 all the way up to RTM that there's still some work to be done. 
It's high priority and should be out very soon. The one issue I ran into with OAuth is that we still need an administrative user who can edit the store's inventory. I thought about a number of solutions for that - making the first user to register the admin, or automatically assigning the Administrator role to the first user who registers with the username "Administrator" - but they both ended up requiring extra code; also, I worried that people would use that code without understanding it or thinking about whether it was a good fit.
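Since the verdict above leans toward a NuGet package with a WebActivator dependency, here is a rough sketch of what that variant could look like. It is only an illustration: the class and method names are placeholders, the credentials are the same hard-coded values used earlier, and the names passed to InitializeDatabaseConnection are the MVC 4 Internet template defaults.

```csharp
using System.Web.Security;
using WebMatrix.WebData;

[assembly: WebActivator.PostApplicationStartMethod(typeof(AdminAccountSetup), "SeedAdminUser")]

// Runs after Application_Start, so configuration (and the membership system) is
// already available - no HttpModule registration is needed in this variant.
public static class AdminAccountSetup
{
    private const string Username = "Owner";
    private const string Password = "p@ssword123";
    private const string Role = "Administrator";

    public static void SeedAdminUser()
    {
        // One-time SimpleMembership initialization (template default names - adjust as needed).
        if (!WebSecurity.Initialized)
            WebSecurity.InitializeDatabaseConnection("DefaultConnection",
                "UserProfile", "UserId", "UserName", autoCreateTables: true);

        if (!WebSecurity.UserExists(Username))
            WebSecurity.CreateUserAndAccount(Username, Password);

        if (!Roles.RoleExists(Role))
            Roles.CreateRole(Role);

        if (!Roles.IsUserInRole(Username, Role))
            Roles.AddUserToRole(Username, Role);
    }
}
```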


  • 2012 Oracle Fusion Innovation Awards - Part 2

    - by Michelle Kimihira
    Author: Moazzam Chaudry Continuing from Friday's blog on 2012 Oracle Fusion Innovation Awards, this blog (Part 2) will provide more details around the customers. It was a tremendous honor to be in single room of winners. We only wish we could have had more time to share stories from all the winners.  We received great insight from all the innovative solutions that our customers deploy and would like to share them broadly, so that others can benefit from best practices. There was a customer panel session joined by Ingersoll Rand, Nike and Motability and here is what was discussed: Barry Bonar, Enterprise Architect from Ingersoll Rand shared details around their solution, comprised of Oracle Exalogic, Oracle WebLogic Server and Oracle SOA Suite. This combined solutoin enabled their business transformation to increase decision-making, speed and efficiency, resulting in 40% reduced IT spend, 41X Faster response time and huge cost savings. Ashok Balakrishnan, Architect from Nike shared how they leveraged Oracle Coherence to analyze their digital "footprint" of activities. This helps them compete, collaborate and compare athletic data over time. Lastly, Ashley Doodly, Head of IT from Motability shared details around their solution compromised of Oracle SOA Suite, Service Bus, ADF, Coherence, BO and E-Business Suite. This solution helped Motability achieve 100% ROI within the first few months, performance in seconds vs. 10's of minutes and tremendous improvement in throughput that increased up to 50%.  This year's winners by category are: Oracle Exalogic Customer Results using Fusion Middleware Netshoes ATG on Exalogic: 6X Reduced H/W foot print, 6.2X increased throughput and 3 weeks time to market Claro Part of America Movil, running mission critical Java Application on Exalogic with 35X Faster Java response time, 5X Throughput Underwriters Laboratories Exalogic as an Apps Consolidation platform to power tremendous growth Ingersoll Rand EBS on Exalogic: Up to 40% Reduction in overall IT budget, 3x reduced foot print Oracle Cloud Application Foundation Customer Results using Fusion Middleware  Mazda Motor Corporation Tuxedo ART Batch runtime environment to migrate their batch apps on new open environment and reduce main frame cost. HOTELBEDS Technology Open Source to WebLogic transformation Globalia Corporation Introduced Oracle Coherence to fully reengineer DTH system and provide multiple business and technical benefits Nike Nike+, digital sports platform, has 8M users and is expecting an 5X increase in users, many of who will carry multiple devices that frequently sync data with the Digital Sport platform Comcast Corporation The solution is expected to increase availability, continuity, performance, and simplify and make the code at the application layer more flexible. Oracle SOA and Oracle BPM Customer Results using Fusion Middleware NTT Docomo Network traffic solution based on Oracle event processing and coherence - massive in scale: 12M users (50M in future) - 800,000 events/sec. Schneider National, Inc. SOA/B2B/ADF/Data Integration to orchestrate key order processes across Siebel, OTM & EBS.  Platform runs 60M trans/day and  50 million composite SOA instances per day across 10G and 11G Amadeus Oracle BPM solution: Business Rules and processes vary across local (80), regional (~10) and corporate approval process. Up to 10 levels of approval. 
Plans to deploy across 20+ markets Navitar SOA solution integrates a fully non-Oracle legacy application/ERP environment using Oracle’s SOA Suite and Oracle AIA Foundation Pack. Motability Uses SOA Suite to synchronize data across the systems and to manage the vehicle remarketing process Oracle WebCenter Customer Results using Fusion Middleware  News Limited Single platform running websites for 50% of Australia's newspapers University of Louisville “Facebook for Medicine”: Oracle Webcenter platform and Oracle BIEE to analyze patient test data and uncover potential health issues. Expecting annualized ROI of 277% China Mobile Jiangsu Company portal (25k users) to drive collaboration & productivity Life Technologies Portal for remotely monitoring & repairing biotech instruments LA Dept. of Water & Power Oracle WebCenter Portal to power ladwp.com on desktop and mobile for 1.6million users Oracle Identity Management Customer Results using Fusion Middleware Education Testing Service Identity Management platform for provisioning & SSO of 6 million GRE, GMAT, TOEFL customers Avea Oracle Identity Manager allowing call center personnel to quickly change Identity Profile to handle varying call loads based on a user self service interface. Decreased Admin Cost by 30% Oracle Data Integration Customer Results using Fusion Middleware Raymond James Near real-time integration for improved systems (throughput & performance) and enhanced operational flexibility in a 24 X 7 environment Wm Morrison Supermarkets Electronic Point of Sale integration handling over 80 million transactions a day in near real time (15 min intervals) Oracle Application Development Framework and Oracle Fusion Development Customer Results using Fusion Middleware Qualcomm Incorporated Solution providing  immediate business value enabling a self-service model necessary for growing the new customer base, an increase in customer satisfaction, reduced “time-to-deliver” Micros Systems, Inc. ADF, SOA Suite, WebCenter  enables services that include managing distribution of hotel rooms availability and rates to channels such as Hotel Web-site, Expedia, etc. Marfin Egnatia Bank A new web 2.0 UI provides a much richer experience through the ADF solution with the end result being one of boosting end-user productivity    Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics) Customer Results using Fusion Middleware INC Research Self-service customer portal delivering 5–10% of the overall revenue - expected to grow fast with the BI solution Experian Reduction in Time to Complete the Financial Close Process Hologic Inc Solution, saving months of decision-making uncertainty! We look forward to seeing many more innovative nominations. The nominatation process for 2013 begins in April 2013.    Additional Information: Blog: Oracle WebCenter Award Winners Blog: Oracle Identity Management Winners Blog: Oracle Exalogic Winners Blog: SOA, BPM and Data Integration will be will feature award winners in its respective areas this week Subscribe to our regular Fusion Middleware Newsletter Follow us on Twitter and Facebook


  • Getting Started with Prism (aka Composite Application Guidance for WPF and Silverlight)

    - by dotneteer
    Overview Prism is a framework from the Microsoft Patterns and Practices team that allows you to create WPF and Silverlight applications in a modular way. It is especially valuable for larger projects in which a large number of developers need to work in parallel. Prism achieves its goal by supplying several services: · Dependency Injection (DI) and Inversion of Control (IoC): By using DI, Prism moves the responsibility of instantiating and managing the lifetime of dependency objects from individual components to a container. Prism relies on containers to discover, manage and compose large numbers of objects. By varying the configuration, the container can also inject mock objects for unit testing. Out of the box, Prism supports Unity and MEF as containers, although it is possible to use other containers by subclassing the Bootstrapper class. · Modularity and Regions: Prism supplies the framework to split an application into modules loaded by the application shell. Each module is a library project that contains both UI and code and is responsible for initializing itself when loaded by the shell. Each window can be further divided into regions. A region is a user control with an associated model. · Model, View and View-Model (MVVM) pattern: Prism promotes the use of MVVM. The use of a DI container makes it much easier to inject the model into the view. WPF already has excellent data binding and commanding mechanisms. To be productive with Prism, it is important to understand WPF data binding and commanding well. · Event aggregation: Prism promotes loosely coupled components. Prism discourages components from different modules from communicating with each other directly, which would create dependencies. Instead, Prism supplies an event-aggregation mechanism that allows components to publish and subscribe to events without knowing about each other. Architecture In the following, I will go into a little more detail on the services provided by Prism. Bootstrapper In a typical WPF application, application start-up is controlled by App.xaml and its code-behind. The main window of the application is typically specified in the App.xaml file. In a Prism application, we start a bootstrapper in the App class and delegate the job of creating the main window to the bootstrapper. The bootstrapper starts a dependency-injection container so that all future object instantiations are managed by the container. Out of the box, Prism provides the UnityBootstrapper and MefBootstrapper abstract classes. An application needs either to provide a concrete implementation of one of these bootstrappers or, alternatively, to subclass the Bootstrapper class with another DI container. A concrete bootstrapper class must implement the CreateShell method. Its responsibility is to resolve and create the Shell object through the DI container to serve as the main window for the application. The other important method to override is ConfigureModuleCatalog, in which the bootstrapper registers modules for the application. In a more advanced scenario, an application does not have to know all its modules at compile time; modules can be discovered at run time. Readers can refer to one of the Open Modularity QuickStarts for more information. Modules Once modules are registered with or discovered by Prism, they are instantiated by the DI container and their Initialize method is called. The DI container can inject into a module a region registry that implements the IRegionViewRegistry interface. The module, in its Initialize method, can then call the RegisterViewWithRegion method of the registry to register its regions.
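To make the module piece concrete, here is a minimal sketch of such a module, assuming Prism 4-era namespaces; the module name, region name and view are placeholders rather than anything taken from the QuickStarts.

```csharp
using Microsoft.Practices.Prism.Modularity;
using Microsoft.Practices.Prism.Regions;

// Placeholder view; in a real module this would be a XAML UserControl with a view model.
public class OrdersView : System.Windows.Controls.UserControl { }

// Illustrative module: the container injects IRegionViewRegistry, and Initialize
// registers the view with a named region in the shell.
public class OrdersModule : IModule
{
    private readonly IRegionViewRegistry _regionViewRegistry;

    public OrdersModule(IRegionViewRegistry regionViewRegistry)
    {
        _regionViewRegistry = regionViewRegistry;
    }

    public void Initialize()
    {
        // When the shell creates the "MainRegion" region, the container resolves
        // OrdersView (injecting its dependencies) and adds it to that region.
        _regionViewRegistry.RegisterViewWithRegion("MainRegion", typeof(OrdersView));
    }
}
```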
Regions Regions, once registered, are managed by the RegionManager. The shell can then load regions either through the RegionManager.RegionName attached property or dynamically through code. When a view is created by the region manager, the DI container can inject the view model and other services into the view. The view then has a reference to the view model through which it can interact with backend services. Service locator Although it is possible to inject services into dependent classes through a DI container, an alternative is to use the ServiceLocator to retrieve a service on demand. Prism supplies a service locator implementation, and it is possible to get an instance of a service by calling ServiceLocator.Current.GetInstance<IServiceType>(). Event aggregator Prism supplies an IEventAggregator interface and implementation that can be injected into any classes that need to communicate with each other in a loosely coupled fashion. The event aggregator uses a publisher/subscriber model. A class can publish an event by calling eventAggregator.GetEvent<EventType>().Publish(parameter), and other classes can subscribe to the event by calling eventAggregator.GetEvent<EventType>().Subscribe(eventHandler, other options). Getting started The easiest way to get started with Prism is to go through the Prism Hands-On Labs and look at the Hello World QuickStart. The Hello World QuickStart shows how the bootstrapper, modules and regions work. Next, I would recommend looking at the Stock Trader Reference Implementation. It is a more in-depth example that resembles how we would want to set up a real application. Several other QuickStarts cover individual Prism services. Some scenarios, such as dynamic module discovery, are more advanced. Apart from the official Prism documentation, you can get an overview by reading Glenn Block's MSDN Magazine article. I have found that the best free training material is from the Boise Code Camp. To be effective with Prism, it is important to understand the key concepts of WPF well first, such as the DependencyProperty system, data binding, resources, themes and ICommand. It is also important to know your DI container of choice well. I will try to explore these subjects in depth in the future. Testimony Recently, I worked on a desktop WPF application using Prism and had a wonderful experience with it. Prism is flexible enough even in the presence of third-party controls such as the Telerik WPF controls. We never encountered any significant obstacle.
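As a quick illustration of the publish/subscribe calls above, here is a sketch using the Prism 4-era CompositePresentationEvent base class; the event type, payload and view-model classes are made up for the example.

```csharp
using Microsoft.Practices.Prism.Events;

// A payload-carrying application event; in Prism 4 these typically derive
// from CompositePresentationEvent<TPayload>.
public class StockPriceChangedEvent : CompositePresentationEvent<decimal> { }

public class PublisherViewModel
{
    private readonly IEventAggregator _eventAggregator;

    public PublisherViewModel(IEventAggregator eventAggregator)
    {
        _eventAggregator = eventAggregator;
    }

    public void OnPriceChanged(decimal newPrice)
    {
        // Publish without knowing who (if anyone) is listening.
        _eventAggregator.GetEvent<StockPriceChangedEvent>().Publish(newPrice);
    }
}

public class SubscriberViewModel
{
    public SubscriberViewModel(IEventAggregator eventAggregator)
    {
        // Subscribe without holding a reference to the publisher.
        eventAggregator.GetEvent<StockPriceChangedEvent>().Subscribe(price =>
            System.Diagnostics.Debug.WriteLine("New price: " + price));
    }
}
```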


  • How to Sync Any Folder With SkyDrive on Windows 8.1

    - by Chris Hoffman
    Before Windows 8.1, it was possible to sync any folder on your computer with SkyDrive using symbolic links. This method no longer works now that SkyDrive is baked into Windows 8.1, but there are other tricks you can use. Creating a symbolic link or directory junction inside your SkyDrive folder will give you an empty folder in your SkyDrive cloud storage. Confusingly, the files will appear inside the SkyDrive Modern app as if they were being synced, but they aren’t. The Solution With SkyDrive refusing to understand and accept symbolic links in its own folder, the best option is probably to use symbolic links anyway — but in reverse. For example, let’s say you have a program that automatically saves important data to a folder anywhere on your hard drive — whether it’s C:\Users\USER\Documents\, C:\Program\Data, or anywhere else. Rather than trying to trick SkyDrive into understanding a symbolic link, we could instead move the actual folder itself to SkyDrive and then use a symbolic link at the folder’s original location to trick the original program. This may not work for every single program out there. But it will likely work for most programs, which use standard Windows API calls to access folders and save files. We’re just flipping the old solution here — we can’t trick SkyDrive anymore, so let’s try to trick other programs instead. Moving a Folder and Creating a Symbolic Link First, ensure no program is using the external folder. For example, if it’s a program data or settings folder, close the program that’s using the folder. Next, simply move the folder to your SkyDrive folder. Right-click the external folder, select Cut, go to the SkyDrive folder, right-click and select Paste. The folder will now be located in the SkyDrive folder itself, so it will sync normally. Next, open a Command Prompt window as Administrator. Right-click the Start button on the taskbar or press Windows Key + X and select Command Prompt (Administrator) to open it. Run the following command to create a symbolic link at the original location of the folder: mklink /d “C:\Original\Folder\Location” “C:\Users\NAME\SkyDrive\FOLDERNAME\” Enter the correct paths for the exact location of the original folder and the current location of the folder in your SkyDrive. Windows will then create a symbolic link at the folder’s original location. Most programs should hopefully be tricked by this symbolic location, saving their files directly to SkyDrive. You can test this yourself. Put a file into the folder at its original location. It will be saved to SkyDrive and sync normally, appearing in your SkyDrive storage online. One downside here is that you won’t be able to save a file onto SkyDrive without it taking up space on the same hard drive SkyDrive is on. You won’t be able to scatter folders across multiple hard drives and sync them all. However, you could always change the location of the SkyDrive folder on Windows 8.1 and put it on a drive with a larger amount of free space. To do this, right-click the SkyDrive folder in File Explorer, select Properties, and use the options on the Location tab. You could even use Storage Spaces to combine the drives into one larger drive. Automatically Copy the Original Files to SkyDrive Another option would be to run a program that automatically copies files from another folder on your computer to your SkyDrive folder. For example, let’s say you want to sync copies of important log files that a program creates in a specific folder. 
You could use a program that allows you to schedule automatic folder-mirroring, configuring the program to regularly copy the contents of your log folder to your SkyDrive folder. This may be a useful alternative for some use cases, although it isn’t the same as standard syncing. You’ll end up with two copies of the files taking up space on your system, which won’t be ideal for large files. The files also won’t be instantly uploaded to your SkyDrive storage after they’re created, but only after the scheduled task runs. There are many options for this, including Microsoft’s own SyncToy, which continues to work on Windows 8. If you were using the symbolic link trick to automatically sync copies of PC game save files with SkyDrive, you could just install GameSave Manager. It can be configured to automatically create backup copies of your computer’s PC game save files on a schedule, saving them to SkyDrive where they’ll be synced and backed up online. SkyDrive support was completely rewritten for Windows 8.1, so it’s not surprising that this trick no longer works. The ability to use symbolic links in previous versions of SkyDrive was never officially supported, so it’s not surprising to see it break after a rewrite. None of the methods above are as convenient and quick as the old symbolic link method, but they’re the best we can do with the SkyDrive integration Microsoft has given us in Windows 8.1. It’s still possible to use symbolic links to easily sync other folders with competing cloud storage services like Dropbox and Google Drive, so you may want to consider switching away from SkyDrive if this feature is critical to you.     
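If you would rather script the scheduled copy yourself instead of using SyncToy or GameSave Manager, the heart of such a job is just a recursive folder copy. A minimal sketch in C# follows; the paths are placeholders, and unlike a real sync tool it ignores deletions, conflicts and locked files.

```csharp
using System.IO;

// Illustrative only: a one-way "mirror into SkyDrive" copy of the kind a
// scheduled task would run. Source and destination paths are placeholders.
public static class FolderMirror
{
    public static void CopyFolder(string sourceDir, string destDir)
    {
        Directory.CreateDirectory(destDir);

        // Copy (and overwrite) every file in the source folder.
        foreach (string file in Directory.GetFiles(sourceDir))
        {
            File.Copy(file, Path.Combine(destDir, Path.GetFileName(file)), overwrite: true);
        }

        // Recurse into subfolders so the whole tree is mirrored.
        foreach (string dir in Directory.GetDirectories(sourceDir))
        {
            CopyFolder(dir, Path.Combine(destDir, Path.GetFileName(dir)));
        }
    }
}

// Example: FolderMirror.CopyFolder(@"C:\MyApp\Logs", @"C:\Users\NAME\SkyDrive\Logs");
```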


  • User is trying to leave! Set at-least confirm alert on browser(tab) close event!!

    - by kaushalparik27
    This is something that might be annoying or irritating for end user. Obviously, It's impossible to prevent end user from closing the/any browser. Just think of this if it becomes possible!!!. That will be a horrible web world where everytime you will be attacked by sites and they will not allow to close your browser until you confirm your shopping cart and do the payment. LOL:) You need to open the task manager and might have to kill the running browser exe processes.Anyways; Jokes apart, but I have one situation where I need to alert/confirm from the user in any anyway when they try to close the browser or change the url. Think of this: You are creating a single page intranet asp.net application where your employee can enter/select their TDS/Investment Declarations and you wish to at-least ALERT/CONFIRM them if they are attempting to:[1] Close the Browser[2] Close the Browser Tab[3] Attempt to go some other site by Changing the urlwithout completing/freezing their declaration.So, Finally requirement is clear. I need to alert/confirm the user what he is going to do on above bulleted events. I am going to use window.onbeforeunload event to set the javascript confirm alert box to appear.    <script language="JavaScript" type="text/javascript">        window.onbeforeunload = confirmExit;        function confirmExit() {            return "You are about to exit the system before freezing your declaration! If you leave now and never return to freeze your declaration; then they will not go into effect and you may lose tax deduction, Are you sure you want to leave now?";        }    </script>See! you are halfway done!. So, every time browser unloads the page, above confirm alert causes to appear on front of user like below:By saying here "every time browser unloads the page"; I mean to say that whenever page loads or postback happens the browser onbeforeunload event will be executed. So, event a button submit or a link submit which causes page to postback would tend to execute the browser onbeforeunload event to fire!So, now the hurdle is how can we prevent the alert "Not to show when page is being postback" via any button/link submit? Answer is JQuery :)Idea is, you just need to set the script reference src to jQuery library and Set the window.onbeforeunload event to null when any input/link causes a page to postback.Below will be the complete code:<head runat="server">    <title></title>    <script src="jquery.min.js" type="text/javascript"></script>    <script language="JavaScript" type="text/javascript">        window.onbeforeunload = confirmExit;        function confirmExit() {            return "You are about to exit the system before freezing your declaration! If you leave now and never return to freeze your declaration; then they will not go into effect and you may lose tax deduction, Are you sure you want to leave now?";        }        $(function() {            $("a").click(function() {                window.onbeforeunload = null;            });            $("input").click(function() {                window.onbeforeunload = null;            });        });    </script></head><body>    <form id="form1" runat="server">    <div></div>    </form></body></html>So, By this post I have tried to set the confirm alert if user try to close the browser/tab or try leave the site by changing the url. I have attached a working example with this post here. I hope someone might find it helpful.


  • SQL SERVER – Replace a Column Name in Multiple Stored Procedure all together

    - by pinaldave
    I receive a lot of emails every day. I try to answer each and every email and comments on Facebook and Twitter. I prefer communication on social media as this gives opportunities to others to read the questions and participate along with me. There is always some question which everyone likes to read and remember. Here is one of the questions which I received in email. I believe the same question will be there any many developers who are beginning with SQL Server. I decided to blog about it so everyone can read it and participate. “I am beginner in SQL Server. I have a very interesting situation and need your help. I am beginner to SQL Server and that is why I do not have access to the production server and I work entirely on the development server. The project I am working on is also in the infant stage as well. In product I had to create a multiple tables and every table had few columns. Later on I have written Stored Procedures using those tables. During a code review my manager has requested to change one of the column which I have used in the table. As per him the naming convention was not accurate. Now changing the columname in the table is not a big issue. I figured out that I can do it very quickly either using T-SQL script or SQL Server Management Studio. The real problem is that I have used this column in nearly 50+ stored procedure. This looks like a very mechanical task. I believe I can go and change it in nearly 50+ stored procedure but is there a better solution I can use. Someone suggested that I should just go ahead and find the text in system table and update it there. Is that safe solution? If not, what is your solution. In simple words, How to replace a column name in multiple stored procedure efficiently and quickly? Please help me here with keeping my experience and non-production server in mind.” Well, I found this question very interesting. Honestly I would have preferred if this question was asked on my social media handles (Facebook and Twitter) as I am very active there and quite often before I reach there other experts have already answered this question. Anyway I am now answering the same question on the blog so all of us can participate here and come up with an appropriate answer. Here is my answer - “My Friend, I do not advice to touch system table. Please do not go that route. It can be dangerous and not appropriate. The issue which you faced today is what I used to face in early career as well I still face it often. There are two sets of argument I have observed – there are people who see no value in the name of the object and name objects like obj1, obj2 etc. There are sets of people who carefully chose the name of the object where object name is self-explanatory and almost tells a story. I am not here to take any side in this blog post – so let me go to a quick solution for your problem. Note: Following should not be directly practiced on Production Server. It should be properly tested on development server and once it is validated they should be pushed to your production server with your existing deployment practice. The answer is here assuming you have regular stored procedures and you are working on the Development NON Production Server. Go to Server Note >> Databases >> DatabaseName >> Programmability >> Stored Procedure Now make sure that Object Explorer Details are open (if not open it by clicking F7). You will see the list of all the stored procedures there. Now you will see a list of all the stored procedures on the right side list. 
Select either all of them or just the ones you believe are relevant to your query. Now right-click on the stored procedures >> select DROP and CREATE To >> then choose New Query Editor Window or Clipboard. Paste the complete script into a new window if you selected the Clipboard option. Now press Ctrl+H, which will bring up the Find and Replace screen. In this screen, enter the column name to be replaced in the "Find what" box and the new column name in the "Replace with" box. Now execute the whole script. As we selected DROP and CREATE To, it will drop the old procedures and create the new ones. Another method follows the same procedure, but instead of DROP and CREATE you manually replace the CREATE keyword with ALTER. The small advantage of doing this is that if, for any reason, an error comes up that prevents the new stored procedure from being created, you will still have your old stored procedure in the system as it is." Well, this was my answer to the question which I received. Do you see any other workaround or solution? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology


  • Slow boot on Ubuntu 12.04

    - by Hailwood
    My Ubuntu is booting really slow (Windows is booting faster...). I am using Ubuntu a Dell Inspiron 1545 Pentium(R) Dual-Core CPU T4300 @ 2.10GHz, 4GB Ram, 500GB HDD running Ubuntu 12.04 with gnome-shell 3.4.1. After running dmesg the culprit seems to be this section, in particular the last three lines: [26.557659] ADDRCONF(NETDEV_UP): eth0: link is not ready [26.565414] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.355355] Console: switching to colour frame buffer device 170x48 [27.362346] fb0: radeondrmfb frame buffer device [27.362347] drm: registered panic notifier [27.362357] [drm] Initialized radeon 2.12.0 20080528 for 0000:01:00.0 on minor 0 [27.617435] init: udev-fallback-graphics main process (1049) terminated with status 1 [30.064481] init: plymouth-stop pre-start process (1500) terminated with status 1 [51.708241] CE: hpet increased min_delta_ns to 20113 nsec [59.448029] eth2: no IPv6 routers present But I have no idea how to start debugging this. sudo lshw -C video $ sudo lshw -C video *-display description: VGA compatible controller product: RV710 [Mobility Radeon HD 4300 Series] vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 32 bits clock: 33MHz capabilities: pm pciexpress msi vga_controller bus_master cap_list rom configuration: driver=fglrx_pci latency=0 resources: irq:48 memory:e0000000-efffffff ioport:de00(size=256) memory:f6df0000-f6dfffff memory:f6d00000-f6d1ffff After loading the propriety driver my new dmesg log is below (starting from the first major time gap): [2.983741] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null) [25.094327] ADDRCONF(NETDEV_UP): eth0: link is not ready [25.119737] udevd[520]: starting version 175 [25.167086] lp: driver loaded but no devices found [25.215341] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel. [25.215345] Disabling lock debugging due to kernel taint [25.231924] wmi: Mapper loaded [25.318414] lib80211: common routines for IEEE802.11 drivers [25.318418] lib80211_crypt: registered algorithm 'NULL' [25.331631] [fglrx] Maximum main memory to use for locked dma buffers: 3789 MBytes. [25.332095] [fglrx] vendor: 1002 device: 9552 count: 1 [25.334206] [fglrx] ioport: bar 1, base 0xde00, size: 0x100 [25.334229] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [25.334235] pci 0000:01:00.0: setting latency timer to 64 [25.337109] [fglrx] Kernel PAT support is enabled [25.337140] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors [25.342803] Adding 4189180k swap on /dev/sda7. 
Priority:-1 extents:1 across:4189180k [25.364031] type=1400 audit(1338241723.027:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=606 comm="apparmor_parser" [25.364491] type=1400 audit(1338241723.031:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=606 comm="apparmor_parser" [25.364760] type=1400 audit(1338241723.031:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=606 comm="apparmor_parser" [25.394328] wl 0000:0c:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [25.394343] wl 0000:0c:00.0: setting latency timer to 64 [25.415531] acpi device:36: registered as cooling_device2 [25.416688] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A03:00/device:34/LNXVIDEO:00/input/input6 [25.416795] ACPI: Video Device [VID] (multi-head: yes rom: no post: no) [25.416865] [Firmware Bug]: Duplicate ACPI video bus devices for the same VGA controller, please try module parameter "video.allow_duplicates=1"if the current driver doesn't work. [25.425133] lib80211_crypt: registered algorithm 'TKIP' [25.448058] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [25.448321] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [25.448353] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [25.738867] eth1: Broadcom BCM4315 802.11 Hybrid Wireless Controller 5.100.82.38 [25.761213] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [25.761406] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [25.783432] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [25.908318] EXT4-fs (sda6): re-mounted. Opts: errors=remount-ro [25.928155] input: Dell WMI hotkeys as /devices/virtual/input/input9 [25.960561] udevd[543]: renamed network interface eth1 to eth2 [26.285688] init: failsafe main process (835) killed by TERM signal [26.396426] input: PS/2 Mouse as /devices/platform/i8042/serio2/input/input10 [26.423108] input: AlpsPS/2 ALPS GlidePoint as /devices/platform/i8042/serio2/input/input11 [26.511297] Bluetooth: Core ver 2.16 [26.511383] NET: Registered protocol family 31 [26.511385] Bluetooth: HCI device and connection manager initialized [26.511388] Bluetooth: HCI socket layer initialized [26.511391] Bluetooth: L2CAP socket layer initialized [26.512079] Bluetooth: SCO socket layer initialized [26.530164] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [26.530168] Bluetooth: BNEP filters: protocol multicast [26.553893] type=1400 audit(1338241724.219:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=928 comm="apparmor_parser" [26.554860] Bluetooth: RFCOMM TTY layer initialized [26.554866] Bluetooth: RFCOMM socket layer initialized [26.554868] Bluetooth: RFCOMM ver 1.11 [26.557910] type=1400 audit(1338241724.223:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=927 comm="apparmor_parser" [26.559166] type=1400 audit(1338241724.223:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=928 comm="apparmor_parser" [26.559574] type=1400 audit(1338241724.223:8): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=928 comm="apparmor_parser" [26.575519] type=1400 audit(1338241724.239:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=931 comm="apparmor_parser" [26.581100] type=1400 
audit(1338241724.247:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=931 comm="apparmor_parser" [26.582794] type=1400 audit(1338241724.247:11): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince" pid=929 comm="apparmor_parser" [26.605672] ppdev: user-space parallel port driver [27.592475] sky2 0000:09:00.0: eth0: enabling interface [27.604329] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.606962] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.852509] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [27.852513] vesafb: scrolling: redraw [27.852515] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [27.852523] mtrr: type mismatch for e0000000,400000 old: write-back new: write-combining [27.852527] mtrr: type mismatch for e0000000,200000 old: write-back new: write-combining [27.852531] mtrr: type mismatch for e0000000,100000 old: write-back new: write-combining [27.852534] mtrr: type mismatch for e0000000,80000 old: write-back new: write-combining [27.852538] mtrr: type mismatch for e0000000,40000 old: write-back new: write-combining [27.852541] mtrr: type mismatch for e0000000,20000 old: write-back new: write-combining [27.852544] mtrr: type mismatch for e0000000,10000 old: write-back new: write-combining [27.852548] mtrr: type mismatch for e0000000,8000 old: write-back new: write-combining [27.852551] mtrr: type mismatch for e0000000,4000 old: write-back new: write-combining [27.852554] mtrr: type mismatch for e0000000,2000 old: write-back new: write-combining [27.852558] mtrr: type mismatch for e0000000,1000 old: write-back new: write-combining [27.853154] vesafb: framebuffer at 0xe0000000, mapped to 0xffffc90005580000, using 3072k, total 3072k [27.853405] Console: switching to colour frame buffer device 128x48 [27.853426] fb0: VESA VGA frame buffer device [28.539800] fglrx_pci 0000:01:00.0: irq 48 for MSI/MSI-X [28.540552] [fglrx] Firegl kernel thread PID: 1168 [28.540679] [fglrx] Firegl kernel thread PID: 1169 [28.540789] [fglrx] Firegl kernel thread PID: 1170 [28.540932] [fglrx] IRQ 48 Enabled [29.845620] [fglrx] Gart USWC size:1236 M. [29.845624] [fglrx] Gart cacheable size:489 M. [29.845629] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [29.845632] [fglrx] Reserved FB block: Unshared offset:fc21000, size:3df000 [29.845635] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [59.700023] eth2: no IPv6 routers present


  • WebLogic 12.1.2 launch webcast on-demand & WebLogic Community feedback

    - by JuergenKress
    You missed the WebLogic & Coherence & JDeveloper 12.1.2 launch Webcast? Watch it on-demand: View On-Demand Version Read the Q&A from this Webcast Special thanks for Frank Munz and Simon Haslams our WebLogic Community experts on the phone!Thanks for the community for the great twitter feedback send us your tweets @wlscommunity #WebLogicCommunity WebLogic Community Join the #WebLogic Partner Community for the latest WebLogic 12.1.2 details and upcoming trainings http://www.WeblogicCommunity.com #OracleCAF Oracle WebLogic ?Unified update, patch, install process is a key component in reducing Ops cost in #WebLogic 12c #OracleCAF WebLogic Community Demo time #WebLogic cluster creation in seconds #OracleCAF by @mike_lehmann & Will Lyons #WebLogicCommunity pic.twitter.com/gyb8YqnKco Oracle WebLogic ?Dynamic server clusters to scale apps - coming up in #WebLogic 12c launch. #OracleCAF http://pub.vitrue.com/lBmE Oracle WebLogic ?Key feature of #WebLogic 12.1.2 release: @Oracle Database 12c integration. #OracleCAF #OracleDB OTNArchBeat ?Many tech posts on #weblogic available on #oracleace Rene van Wijk's blog. #OracleCAF http://pub.vitrue.com/O9Cn Frank Munz ?Correct me if I am wrong, but this could be the first WebLogic 12.1.2 training ever: http://www.ausoug.org.au/insync13/insync13-frank-munz.html … Cloud Foundation ?.#WebLogic 12.1.2 deep dive starts NOW during #OracleCAF launch. #Coherence up next in a few minutes. http://pub.vitrue.com/HPHM Maciej Gruszka ?Watch http://www.youtube.com/watch?v=KiCoO_QGBsU&feature=c4-overview&list=UUrEIV9YO17leE9aJWamKEPw … at #WebLogic channel with @dave_cabelus about Elastic JMS Oracle WebLogic ?Pick up the new book by @frankmunz on WLS 12c http://amzn.to/1ceppgZ #WebLogic #OracleCAF OTNArchBeat ?@OTNArchBeat 31 Jul @frankmunz 's #WebLogic YouTube channel >> watch and learn #OracleCAF http://pub.vitrue.com/B4IM WebLogic Community ?@frankmunz WebLogic expert build elastic clouds with #WebLogic http://www.munzandmore.com/blog #OracleCAF #WebLogicCommunity pic.twitter.com/UK5UKjXUVl OTNArchBeat @frankmunz 's blog, covering #weblog #cloud and more #OracleCAF http://pub.vitrue.com/N8ST OTNArchBeat ?oracladmin: @simon_haslam 's Oracle Fusion Middleware blog #OracleCAF #oracleace http://pub.vitrue.com/cwGx Yuri Grinshteyn ?Coherence uses WLS tooling, including deployment, and can be part of the WLS cluster. Well done there. #OracleCAF Maciej Gruszka ?#Coherence 12.1.2 auto updates data grid on changes inside DB thru #GoldenGate HotCache - another cool feature of #OracleCAF Oracle WebLogic ?From #OracleCAF launch: Tight integration tween WLS, #Coherence and #OracleDB. Dynamic clusters, OSS support & more http://pub.vitrue.com/3NL9 OTNArchBeat ?25 recent no-fluff technical articles on Oracle WebLogic #OracleCAF http://pub.vitrue.com/FEG5 Maciej Gruszka ?@dave_cabelus Elastic JMS is my favourite capability of #WebLogic 12.1.2 WebLogic Community ?Dynamic WebLogic Clustering COOL - what is Wour favorite 12.1.2 feature? #OracleCAF #WebLogicCommunity pic.twitter.com/T8lvDMJ1U0 WebLogic Community ?What is the coolest #WebLogic 12.1.2 feature? Let us know @wlscommunity http://weblogiccommunity.com/2013/07/30/launch-webcast-weblogic-coherence-jdeveloper-adf-12-1-2-00-july-31st-2013/ … #WebLogicCommunity Simon Haslam ?I'm speaking(!) 
on the panel session with @frankmunz & Matt Rosen on the CAF/WebLogic 12.1.2 launch: 6pm UK today https://event.on24.com/eventRegistration/EventLobbyServlet?target=registration.jsp&eventid=651242&partnerref=CAF_Launch_OCOM_07312013&sourcepage=register … Markus Eisele ?#WebLogic 12.1.2 - an Important New Release for Middleware Admins http://bit.ly/1cmtqhX by @simon_haslam OracleEnterpriseMgr ?The JVM diagnostics features of #EM12c are now shown in a demo by @hawkinsg1 at the #OracleCAF launch http://bit.ly/caflaunch Shaun Smith ?Curious about the new #Coherence 12.1.2 GoldenGate HotCache feature? I explain all on youtube: http://www.youtube.com/watch?v=O0TIG3hgbg0&feature=share&list=PLxqhEJ4CA3JtQwuPS8Qmd88lGX-gsIbHV … #OracleCAF Maciej Gruszka ?Try for Yourself -- Download the products Oracle WebLogic 12.1.2: http://www.oracle.com/technetwork/middleware/fusion-middleware/downloads/index.html … Oracle Coherence 12c: http://www.oracle.com/technetwork/middleware/coherence/downloads/index.htm … WebLogic Community ?What is Your favorite feature in #WebLogic 12.1.2 ? cool stuff! #OracleCAF #WebLogicCommunity http://WeblogicCommunity.com pic.twitter.com/xjR05tiaQj We encourage you to learn more about all the products by reviewing the following resources: Try for Yourself -- Download the products Oracle WebLogic 12.1.2 Oracle Coherence 12c Enterprise Manager Developer Tools WebLogic Community blog Learn more Read the Oracle WebLogic Business Whitepaper Read the Oracle Coherence Business Whitepaper Read the Oracle WebLogic and Oracle Database Integration Whitepaper Get Training from Oracle University Check out the Oracle WebLogic YouTube Channel Check out the Oracle Coherence YouTube Channel WebLogic Partner Community Registration The Webcast is available on-demand Watch Webcast Now WebLogic Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Weblogic 12.1.2,WebLogic Community,Oracle,OPN,Jürgen Kress


  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self contained HTTP output. An ASP.NET Handler captures the full output, and then shoves the result down the ASP.NET Response object pipeline writing out the content into the Response.OutputStream and seperately sending the HttpHeaders in the Response.Headers collection. The headers turned out to be the problem and specifically Http Cookies, which for some reason ended up getting stripped out in some scenarios. My handler works like this: Basically the HTTP response from the backend app would return a full set of HTTP headers plus the content. The ASP.NET handler would read the headers one at a time and then dump them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the Http Handler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario. It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically I have a couple of headers including a Set-Cookie header and some output that gets written into the Response object.using System.Web; namespace wwThreads { public class Handler : IHttpHandler { /* NOTE: * * Run as a web.config set handler (see entry below) * * Best way is to look at the HTTP Headers in Fiddler * or Chrome/FireBug/IE tools and look for the * WWHTREADSID cookie in the outgoing Response headers * ( If the cookie is not there you see the problem! ) */ public void ProcessRequest(HttpContext context) { HttpRequest request = context.Request; HttpResponse response = context.Response; // If ClearHeaders is used Set-Cookie header gets removed! // if commented header is sent... response.ClearHeaders(); response.ClearContent(); // Demonstrate that other headers make it response.AppendHeader("RequestId", "asdasdasd"); // This cookie gets removed when ClearHeaders above is called // When ClearHEaders is omitted above the cookie renders response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); // *** This always works, even when explicit // Set-Cookie above fails and ClearHeaders is called //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue")); response.Write(@"Output was created.<hr/> Check output with Fiddler or HTTP Proxy to see whether cookie was sent."); } public bool IsReusable { get { return false; } } } } In order to see the problem behavior this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with: <add name=".ck_handler" path="handler.ck" verb="*" type="wwThreads.Handler" preCondition="integratedMode" /> Note: Oddly enough this problem manifests only when configured through web.config, not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller. What's the problem exactly? The code above simulates the more complex code in my live handler that picks up the HTTP response from the backend application and then peels out the headers and sends them one at a time via Response.AppendHeader. One of the headers in my app can be one or more Set-Cookie. I found that the Set-Cookie headers were not making it into the Response headers output. 
Here's the Chrome Http Inspector trace: Notice, no Set-Cookie header in the Response headers! Now, running the very same request after removing the call to Response.ClearHeaders() command, the cookie header shows up just fine: As you might expect it took a while to track this down. At first I thought my backend was not sending the headers but after closer checks I found that indeed the headers were set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line:response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); which in my live code is more dynamic ( ie. AppendHeader(token[0],token[1[]) )as it parses through the headers. Bizzaro Land: Response.ClearHeaders() causes Cookie to get stripped Now, here is where it really gets bizarre: The problem occurs only if: Response.ClearHeaders() was called before headers are added It only occurs in Http Handlers declared in web.config Clearly this is an edge of an edge case but of course - knowing my relationship with Mr. Murphy - I ended up running smack into this problem. So in the code above if you remove the call to ClearHeaders(), the cookie gets set!  Add it back in and the cookie is not there. If I run the above code in an ASHX handler it works. If I paste the same code (with a Response.End()) into an ASPX page, or MVC controller it all works. Only in the HttpHandler configured through Web.config does it fail! Cue the Twilight Zone Music. Workarounds As is often the case the fix for this once you know the problem is not too difficult. The difficulty lies in tracking inconsistencies like this down. Luckily there are a few simple workarounds for the Cookie issue. Don't use AppendHeader for Cookies The easiest and obvious solution to this problem is simply not use Response.AppendHeader() to set Cookies. Duh! Under normal circumstances in application level code there's rarely a reason to write out a cookie like this:response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); but rather create the cookie using the Response.Cookies collection:response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue")); Unfortunately, in my case where I dynamically read headers from the original output and then dynamically  write header key value pairs back  programmatically into the Response.Headers collection, I actually don't look at each header specifically so in my case the cookie is just another header. My first thought was to simply trap for the Set-Cookie header and then parse out the cookie and create a Cookie object instead. But given that cookies can have a lot of different options this is not exactly trivial, plus I don't really want to fuck around with cookie values which can be notoriously brittle. Don't use Response.ClearHeaders() The real mystery in all this is why calling Response.ClearHeaders() prevents a cookie value later written with Response.AppendHeader() to fail. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders. There's all sorts of resetting going on but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low level interface directly into IIS, and so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft and the IIS team would have to comment on that. 
In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders/ClearContent was mainly for safety, but after reviewing my code there really should never be a reason that headers would be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to manually clear the headers out:

private void RemoveHeaders(HttpResponse response)
{
    List<string> headers = new List<string>();
    foreach (string header in response.Headers)
    {
        headers.Add(header);
    }

    foreach (string header in headers)
    {
        response.Headers.Remove(header);
    }

    response.Cookies.Clear();
}

Now I can replace the call to Response.ClearHeaders() with this method and I don't get the funky side-effects from Response.ClearHeaders().

Summary

I realize this is a total edge case, as this occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher level ASP.NET frameworks or even in ASHX handlers - only web.config defined handlers - which is really, really odd. After all, those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. IAC, there are workarounds to this should you run into it, although I bet when you do run into it, it'll likely take a bit of time to find the problem or even this post in a search, because it's not easy to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, extreme edge case, but I'm writing this up here as documentation for myself and possibly some others that might have run into this.

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, IIS7

    Read the article

  • RTS Voxel Engine using LWJGL - Textures glitching

    - by Dieter Hubau
    I'm currently working on an RTS game engine using voxels. I have implemented a basic chunk manager using an Octree of Octrees which contains my voxels (simple square blocks, as in Minecraft). I'm using a Voronoi-based terrain generation to get a simplistic yet relatively realistic heightmap. I have no problem showing a 256*256*256 grid of voxels with a decent framerate (250), because of frustum culling, face culling and only rendering visible blocks. For example, in a random voxel grid of 256*256*256 I generally only render 100k-120k faces, not counting frustum culling. Frustum culling is only called every 100ms, since calling it every frame seemed a bit overkill. Now I have reached the stage of texturing and I'm experiencing some problems: Some experienced people might already see the problem, but if we zoom in, you can see the glitches more clearly: All the seams between my blocks are glitching and kind of 'overlapping' or something. It's much more visible when you're moving around. I'm using a single, simple texture map to draw on my cubes, where each texture is 16*16 pixels big: I have added black edges around the textures to get a kind of cellshaded look, I think it's cool. The texture map has 256 textures of each 16*16 pixels, meaning the total size of my texture map is 256*256 pixels. The code to update the ChunkManager: public void update(ChunkManager chunkManager) { for (Octree<Cube> chunk : chunks) { if (chunk.getId() < 0) { // generate an id for the chunk to be able to call it later chunk.setId(glGenLists(1)); } glNewList(chunk.getId(), GL_COMPILE); glBegin(GL_QUADS); faces += renderChunk(chunk); glEnd(); glEndList(); } } Where my renderChunk method is: private int renderChunk(Octree<Cube> node) { // keep track of the number of visible faces in this chunk int faces = 0; if (!node.isEmpty()) { if (node.isLeaf()) { faces += renderItem(node); } List<Octree<Cube>> children = node.getChildren(); if (children != null && !children.isEmpty()) { for (Octree<Cube> child : children) { faces += renderChunk(child); } } return faces; } Where my renderItem method is the following: private int renderItem(Octree<Cube> node) { Cube cube = node.getItem(-1, -1, -1); int faces = 0; float x = node.getPosition().x; float y = node.getPosition().y; float z = node.getPosition().z; float size = cube.getSize(); Vector3f point1 = new Vector3f(-size + x, -size + y, size + z); Vector3f point2 = new Vector3f(-size + x, size + y, size + z); Vector3f point3 = new Vector3f(size + x, size + y, size + z); Vector3f point4 = new Vector3f(size + x, -size + y, size + z); Vector3f point5 = new Vector3f(-size + x, -size + y, -size + z); Vector3f point6 = new Vector3f(-size + x, size + y, -size + z); Vector3f point7 = new Vector3f(size + x, size + y, -size + z); Vector3f point8 = new Vector3f(size + x, -size + y, -size + z); TextureCoordinates tc = textureManager.getTextureCoordinates(cube.getCubeType()); // front face if (cube.isVisible(CubeSide.FRONT)) { faces++; glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point1.x, point1.y, point1.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point4.x, point4.y, point4.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point3.x, point3.y, point3.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point2.x, point2.y, point2.z); } // back face if (cube.isVisible(CubeSide.BACK)) { faces++; 
glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point5.x, point5.y, point5.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point6.x, point6.y, point6.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point7.x, point7.y, point7.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point8.x, point8.y, point8.z); } // left face if (cube.isVisible(CubeSide.SIDE_LEFT)) { faces++; glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point5.x, point5.y, point5.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point1.x, point1.y, point1.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point2.x, point2.y, point2.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point6.x, point6.y, point6.z); } // ETC ETC return faces; } When all this is done, I simply render my lists every frame, like this: public void render(ChunkManager chunkManager) { glBindTexture(GL_TEXTURE_2D, textureManager.getCubeTextureId()); // load all chunks from the tree List<Octree<Cube>> chunks = chunkManager.getTree().getAllItems(); for (Octree<Cube> chunk : chunks) { if (frustum.cubeInFrustum(chunk.getPosition(), chunk.getSize() / 2)) { glCallList(chunk.getId()); } } } I don't know if anyone is willing to go through all of this code or maybe you can spot the problem right away, but that is basically the problem, and I can't find a solution :-) Thanks for reading and any help is appreciated!

    Read the article

  • Backpacks and Booth Paint: TechEd 2012

    - by The Un-T Guy
    Arriving in the parking lot of the Orange County Convention Center, I immediately knew I was in the right place. As far as the eye could see, the acres of asphalt were awash in backpacks, quirky (to be kind) outfits, and bad haircuts. This was the place. This was Microsoft Mecca v2012 for geeks and nerds, the Central Florida event of the year, a gathering of high tech professionals whose skills I both greatly respect and, frankly, fear a little. I was wholly and completely out of my element, a dork in a vast sea of geek jumbo. It was like wearing dockers and a golf shirt walking into a RenFaire, but one with really crappy costumes and no turkey legs...save those attached to some of the attendees. Of course the corporate whores...errrr, vendors were in place, ready to parlay the convention's fre-nerd-ic energy into millions of dollars by convincing the big-brained and under-sexed in the crowd (i.e., virtually all of them...present company excluded, of course) that their product or service was the only thing standing between them and professional success, industry fame, and clear skin. "With KramTech 2012," they seemed to scream, "you will be THE ROCK STAR of your company's IT department!" As car shows and tattoo parlors learned long ago, Tech companies seem to believe that the best way to attract the attention of this crowd is through the hint of the promise of sex. They recruit and deploy an army of "sales reps" whose primary qualifications appear to be long hair, short skirts, high heels, and a vagina. Unlike their distant cousins in the car and body art industries, however, this sub-species of booth paint (semi-gloss decoration that adds nothing to the substance of the product) seems torn between committing to being all-out sex objects and recognition that they are in the presence of intelligent, discerning people. People who are smart enough to know exactly what these vendors are doing. Also unlike their distant car show and tattoo shop cousins, these young women (what…are there no gay tech professionals who could use some eye candy?) seem to realize that while IT remains a male-dominated field, there are ever-increasing numbers of intelligent, capable, strong professional women – women who’ve battled to make it in this field through hard work and work performance rather than a hard body and performing after work. This is not to say that all of the young female sales reps are there only because of their physical attributes. Many are competent, intelligent, and driven -- not to mention attractive. They're working hard on the front lines of delivering the next generation of technology. The distinction is pretty clear, however, between these young professionals and the booth paint. The former enthusiastically deliver credible information about the products they’re hawking. The latter are positioned in the aisles, uncomfortably avoiding eye contact as they struggle to operate the badge readers. Surprisingly, not all of the women in attendance seemed to object to the objectification of their younger sisters. One IT professional woman who came of age in the industry (mostly in IT marketing) said, “I have no problem with it. I was a ‘booth babe’ for years and it doesn’t bother me at all.” Others, however, weren’t quite so gracious. One woman I spoke with, an IT manager from Cheyenne, Wyoming, said it was demeaning and frankly, as more and more women grow into IT management positions, not a great marketing idea. 
“Using these young women is, to me, no different than vendors giving out t-shirts to attract attention. It’s sad because it’s still hard for a woman to be respected in the IT field and this just perpetuates the outdated notion that IT is a male-dominated field.” She went on to say that decisions by vendors to employ these young women in this “inappropriate way” could impact her purchasing decisions. “I might be swayed toward a vendor who has women on staff who are intelligent and dynamic rather than the vendors who use the ‘decoration’ girls.” So in many ways, the IT industry is no different than most other industries as it struggles to maximize performance by finding and developing talent – all of the talent, not just the 50% with a penis. Women in IT, like their brethren, struggle to find their niche in the field, to grow professionally, and reach for the brass ring, struggling to overcome obstacles as they climb the mountain of professional success in a never-ending cycle of economic uncertainty. But as (generally) well-educated and highly-trained professionals, they are probably better positioned than those in many other industries. Besides, they’ve got one other advantage over their non-IT counterparts as they attempt their ascent to the summit: They’ve already got the backpacks.

    Read the article

  • Unable to sign in. How to debug?

    - by Dmitriy Budnik
    I had to reboot system with reset button. After reboot I can't sign in. When I enter my password It seems like X-server just restarts. I can sing in as guest and also I can sign in in text TTY. Here is first 150 lines of my lightdm.log: [+0.04s] DEBUG: Logging to /var/log/lightdm/lightdm.log [+0.04s] DEBUG: Starting Light Display Manager 1.2.1, UID=0 PID=1070 [+0.04s] DEBUG: Loaded configuration from /etc/lightdm/lightdm.conf [+0.04s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager [+0.04s] DEBUG: Registered seat module xlocal [+0.04s] DEBUG: Registered seat module xremote [+0.04s] DEBUG: Adding default seat [+0.04s] DEBUG: Starting seat [+0.04s] DEBUG: Starting new display for automatic login as user dmytro [+0.04s] DEBUG: Starting local X display [+3.64s] DEBUG: X server :0 will replace Plymouth [+3.66s] DEBUG: Using VT 7 [+3.66s] DEBUG: Activating VT 7 [+3.66s] DEBUG: Logging to /var/log/lightdm/x-0.log [+3.66s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0 [+3.66s] DEBUG: Launching X Server [+3.66s] DEBUG: Launching process 1154: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none [+3.66s] DEBUG: Waiting for ready signal from X server :0 [+3.66s] DEBUG: Acquired bus name org.freedesktop.DisplayManager [+3.66s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0 [+10.78s] DEBUG: Got signal 10 from process 1154 [+10.78s] DEBUG: Got signal from X server :0 [+10.78s] DEBUG: Stopping Plymouth, X server is ready [+10.80s] DEBUG: Connecting to XServer :0 [+10.80s] DEBUG: Automatically logging in user dmytro [+10.80s] DEBUG: Started session 1303 with service 'lightdm-autologin', username 'dmytro' [+13.22s] DEBUG: Session 1303 authentication complete with return value 0: Success [+13.26s] DEBUG: Autologin user dmytro authorized [+13.27s] DEBUG: Autologin using session ubuntu [+14.44s] DEBUG: Dropping privileges to uid 1000 [+14.48s] DEBUG: Restoring privileges [+14.49s] DEBUG: Dropping privileges to uid 1000 [+14.49s] DEBUG: Writing /home/dmytro/.dmrc [+14.61s] DEBUG: Restoring privileges [+14.81s] DEBUG: Starting session ubuntu as user dmytro [+14.81s] DEBUG: Session 1303 running command /usr/sbin/lightdm-session gnome-session --session=ubuntu [+15.76s] DEBUG: New display ready, switching to it [+15.76s] DEBUG: Activating VT 7 [+15.76s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session0 [+16.63s] DEBUG: Session 1303 exited with return value 0 [+16.63s] DEBUG: User session quit [+16.63s] DEBUG: Stopping display [+16.63s] DEBUG: Sending signal 15 to process 1154 [+17.19s] DEBUG: Process 1154 exited with return value 0 [+17.19s] DEBUG: X server stopped [+17.19s] DEBUG: Removing X server authority /var/run/lightdm/root/:0 [+17.19s] DEBUG: Releasing VT 7 [+17.19s] DEBUG: Display server stopped [+17.19s] DEBUG: Display stopped [+17.19s] DEBUG: Active display stopped, switching to greeter [+17.19s] DEBUG: Switching to greeter [+17.19s] DEBUG: Starting new display for greeter [+17.19s] DEBUG: Starting local X display [+17.19s] DEBUG: Using VT 7 [+17.19s] DEBUG: Logging to /var/log/lightdm/x-0.log [+17.19s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0 [+17.19s] DEBUG: Launching X Server [+17.19s] DEBUG: Launching process 1563: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch [+17.19s] DEBUG: Waiting for ready signal from X server :0 [+17.48s] DEBUG: Got signal 10 from process 1563 [+17.48s] DEBUG: Got signal from X server :0 [+17.48s] 
DEBUG: Connecting to XServer :0 [+17.48s] DEBUG: Starting greeter [+17.48s] DEBUG: Started session 1575 with service 'lightdm', username 'lightdm' [+17.61s] DEBUG: Session 1575 authentication complete with return value 0: Success [+17.61s] DEBUG: Greeter authorized [+17.61s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log [+17.68s] DEBUG: Session 1575 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter [+20.86s] DEBUG: Greeter connected version=1.2.1 [+20.86s] DEBUG: Greeter connected, display is ready [+20.86s] DEBUG: New display ready, switching to it [+20.86s] DEBUG: Activating VT 7 [+20.86s] DEBUG: Stopping greeter display being switched from [+24.90s] DEBUG: Greeter start authentication for dmytro [+24.90s] DEBUG: Started session 1746 with service 'lightdm', username 'dmytro' [+25.10s] DEBUG: Session 1746 got 1 message(s) from PAM [+25.10s] DEBUG: Prompt greeter with 1 message(s) [+31.87s] DEBUG: Continue authentication [+33.75s] DEBUG: Session 1746 authentication complete with return value 7: Authentication failure [+33.75s] DEBUG: Authenticate result for user dmytro: Authentication failure [+33.75s] DEBUG: Greeter start authentication for dmytro [+33.75s] DEBUG: Session 1746: Sending SIGTERM [+33.75s] DEBUG: Started session 2264 with service 'lightdm', username 'dmytro' [+33.75s] DEBUG: Session 2264 got 1 message(s) from PAM [+33.75s] DEBUG: Prompt greeter with 1 message(s) [+36.41s] DEBUG: Continue authentication [+36.53s] DEBUG: Session 2264 authentication complete with return value 0: Success [+36.53s] DEBUG: Authenticate result for user dmytro: Success [+36.54s] DEBUG: User dmytro authorized [+36.54s] DEBUG: Greeter requests session ubuntu [+36.54s] DEBUG: Using session ubuntu [+36.54s] DEBUG: Stopping greeter [+36.54s] DEBUG: Session 1575: Sending SIGTERM [+37.41s] DEBUG: Greeter closed communication channel [+37.41s] DEBUG: Session 1575 exited with return value 0 [+37.41s] DEBUG: Greeter quit [+37.42s] DEBUG: Dropping privileges to uid 1000 [+37.42s] DEBUG: Restoring privileges [+37.43s] DEBUG: Dropping privileges to uid 1000 [+37.43s] DEBUG: Writing /home/dmytro/.dmrc [+38.35s] DEBUG: Restoring privileges [+40.37s] DEBUG: Starting session ubuntu as user dmytro [+40.37s] DEBUG: Session 2264 running command /usr/sbin/lightdm-session gnome-session --session=ubuntu [+40.39s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session1 [+50.78s] DEBUG: Session 2264 exited with return value 0 [+50.78s] DEBUG: User session quit [+50.78s] DEBUG: Stopping display [+50.78s] DEBUG: Sending signal 15 to process 1563 [+51.53s] DEBUG: Process 1563 exited with return value 0 [+51.53s] DEBUG: X server stopped [+51.53s] DEBUG: Removing X server authority /var/run/lightdm/root/:0 [+51.53s] DEBUG: Releasing VT 7 [+51.53s] DEBUG: Display server stopped [+51.53s] DEBUG: Display stopped [+51.53s] DEBUG: Active display stopped, switching to greeter [+51.53s] DEBUG: Switching to greeter [+51.53s] DEBUG: Starting new display for greeter [+51.53s] DEBUG: Starting local X display [+51.53s] DEBUG: Using VT 7 [+51.53s] DEBUG: Logging to /var/log/lightdm/x-0.log [+51.53s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0 [+51.53s] DEBUG: Launching X Server [+51.53s] DEBUG: Launching process 2894: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch [+51.53s] DEBUG: Waiting for ready signal from X server :0 [+51.75s] DEBUG: Got signal 10 from process 2894 [+51.75s] DEBUG: Got signal from X server :0 [+51.75s] DEBUG: 
Connecting to XServer :0 [+51.75s] DEBUG: Starting greeter [+51.75s] DEBUG: Started session 2898 with service 'lightdm', username 'lightdm' [+51.76s] DEBUG: Session 2898 authentication complete with return value 0: Success [+51.76s] DEBUG: Greeter authorized [+51.76s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log [+51.76s] DEBUG: Session 2898 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter [+53.26s] DEBUG: Greeter connected version=1.2.1 [+53.26s] DEBUG: Greeter connected, display is ready [+53.26s] DEBUG: New display ready, switching to it [+53.26s] DEBUG: Activating VT 7 [+53.26s] DEBUG: Stopping greeter display being switched from [+54.17s] DEBUG: Greeter start authentication for dmytro [+54.17s] DEBUG: Started session 3152 with service 'lightdm', username 'dmytro' [+54.18s] DEBUG: Session 3152 got 1 message(s) from PAM [+54.18s] DEBUG: Prompt greeter with 1 message(s) [+58.61s] DEBUG: Continue authentication [+58.65s] DEBUG: Session 3152 authentication complete with return value 0: Success [+58.65s] DEBUG: Authenticate result for user dmytro: Success [+58.66s] DEBUG: User dmytro authorized [+58.66s] DEBUG: Greeter requests session ubuntu [+58.66s] DEBUG: Using session ubuntu [+58.66s] DEBUG: Stopping greeter [+58.66s] DEBUG: Session 2898: Sending SIGTERM How can I fix it? What other .log files could possibly give me a clue? Update: Possibly it's duplicate of Desktop login fails, terminal works

    Read the article

  • concurrency::accelerator

    - by Daniel Moth
    Overview An accelerator represents a "target" on which C++ AMP code can execute and where data can reside. Typically (but not necessarily) an accelerator is a GPU device. Accelerators are represented in C++ AMP as objects of the accelerator class. For many scenarios, you do not need to obtain an accelerator object, since the runtime has a notion of a default accelerator, which is what it thinks is the best one in the system. Examples where you need to deal with accelerator objects are if you need to pick your own accelerator (based on your specific criteria), or if you need to use more than one accelerators from your app. Construction and operator usage You can query and obtain a std::vector of all the accelerators on your system, which the runtime discovers on startup. Beyond enumerating accelerators, you can also create one directly by passing to the constructor a system-wide unique path to a device if you know it (i.e. the “Device Instance Path” property for the device in Device Manager), e.g. accelerator acc(L"PCI\\VEN_1002&DEV_6898&SUBSYS_0B001002etc"); There are some predefined strings (for predefined accelerators) that you can pass to the accelerator constructor (and there are corresponding constants for those on the accelerator class itself, so you don’t have to hardcode them every time). Examples are the following: accelerator::default_accelerator represents the default accelerator that the C++ AMP runtime picks for you if you don’t pick one (the heuristics of how it picks one will be covered in a future post). Example: accelerator acc; accelerator::direct3d_ref represents the reference rasterizer emulator that simulates a direct3d device on the CPU (in a very slow manner). This emulator is available on systems with Visual Studio installed and is useful for debugging. More on debugging in general in future posts. Example: accelerator acc(accelerator::direct3d_ref); accelerator::direct3d_warp represents a target that I will cover in future blog posts. Example: accelerator acc(accelerator::direct3d_warp); accelerator::cpu_accelerator represents the CPU. In this first release the only use of this accelerator is for using the staging arrays technique that I'll cover separately. Example: accelerator acc(accelerator::cpu_accelerator); You can also create an accelerator by shallow copying another accelerator instance (via the corresponding constructor) or simply assigning it to another accelerator instance (via the operator overloading of =). Speaking of operator overloading, you can also compare (for equality and inequality) two accelerator objects between them to determine if they refer to the same underlying device. Querying accelerator characteristics Given an accelerator object, you can access its description, version, device path, size of dedicated memory in KB, whether it is some kind of emulator, whether it has a display attached, whether it supports double precision, and whether it was created with the debugging layer enabled for extensive error reporting. 
Below is example code that accesses some of the properties; in your real code you'd probably be checking one or more of them in order to pick an accelerator (or check that the default one is good enough for your specific workload): void inspect_accelerator(concurrency::accelerator acc) { std::wcout << "New accelerator: " << acc.description << std::endl; std::wcout << "is_debug = " << acc.is_debug << std::endl; std::wcout << "is_emulated = " << acc.is_emulated << std::endl; std::wcout << "dedicated_memory = " << acc.dedicated_memory << std::endl; std::wcout << "device_path = " << acc.device_path << std::endl; std::wcout << "has_display = " << acc.has_display << std::endl; std::wcout << "version = " << (acc.version >> 16) << '.' << (acc.version & 0xFFFF) << std::endl; } accelerator_view In my next blog post I'll cover a related class: accelerator_view. Suffice to say here that each accelerator may have from 1..n related accelerator_view objects. You can get the accelerator_view from an accelerator via the default_view property, or create new ones by invoking the create_view method that creates an accelerator_view object for you (by also accepting a queuing_mode enum value of deferred or immediate that we'll also explore in the next blog post). Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Going Paperless

    - by Jesse
    One year ago I came to work for a company where the entire development team is 100% “remote”; we’re spread over 3 time zones and each of us works from home. This seems to be an increasingly popular way for people to work and there are many articles and blog posts out there enumerating the advantages and disadvantages of working this way. I had read a lot about telecommuting before accepting this job and felt as if I had a pretty decent idea of what I was getting into, but I’ve encountered a few things over the past year that I did not expect. Among the most surprising by-products of working from home for me has been a dramatic reduction in the amount of paper that I use on a weekly basis. Hoarding In The Workplace Prior to my current telecommute job I worked in what most would consider pretty traditional office environments. I sat in cubicles furnished with an enormous plastic(ish) modular desks, had a mediocre (at best) PC workstation, and had ready access to a seemingly endless supply of legal pads, pens, staplers and paper clips. The ready access to paper, countless conference room meetings, and abundance of available surface area on my desk and in drawers created a perfect storm for wasting paper. I brought a pad of paper with me to every meeting I ever attended, scrawled some brief notes, and then tore that sheet off to keep next to my keyboard to follow up on any needed action items. Once my immediate need for the notes was fulfilled, that sheet would get shuffled off into a corner of my desk or filed away in a drawer “just in case”. I would guess that for all of the notes that I ever filed away, I might have actually had to dig up and refer to 2% of them (and that’s probably being very generous). That said, on those rare occasions that I did have to dig something up from old notes, it was usually pretty important and I ended up being very glad that I saved them. It was only when I would leave a job or move desks that I would finally gather all those notes together and take them to shredding bin to be disposed of. When I left my last job the amount of paper I had accumulated over my three years there was absurd, and I knew coworkers who had substance-abuse caliber paper wasting addictions that made my bad habit look like nail-biting in comparison. A Product Of My Environment I always hated using all of this paper, but simply couldn’t bring myself to stop. It would look bad if I showed up to an important conference room meeting without a pad of paper. What if someone said something profound! Plus, everyone else always brought paper with them. If you saw someone walking down the hallway with a pad of paper in hand you knew they must be on their way to a conference room meeting. Some people even had fancy looking portfolio notebook sheaths that gave their legal pads all the prestige of a briefcase. No one ever worried about running out of fresh paper because there was an endless supply, and there certainly was no shortage of places to store and file used paper. In short, the traditional office was setup for using tons and tons of paper; it’s baked into the culture there. For that reason, it didn’t take long for me to kick the paper habit once I started working from home. In my home office, desk and drawer space are at a premium. I don’t have the budget (or the tolerance) for huge modular office furniture in my spare bedroom. I also no longer have access to a bottomless pit of office supplies stock piled in cabinets and closets. 
If I want to use some paper, I have to go out and buy it. Finally (and most importantly), all of the meetings that I have to attend these days are “virtual”. We use instant messaging, VOIP, video conferencing, and e-mail to communicate with each other. All I need to take notes during a meeting is my computer, which I happen to be sitting right in front of all day. I don’t have any hard numbers for this, but my gut feeling is that I actually take a lot more notes now than I ever did when I worked in an office. The big difference is I don’t have to use any paper to do so. This makes it far easier to keep important information safe and organized. The Right Tool For The Job When I first started working from home I tried to find a single application that would fill the gap left by the pen and paper that I always had at my desk when I worked in an office. Well, there are no silver bullets and I’ve evolved my approach over time to try and find the best tool for the job at hand. Here’s a quick summary of how I take notes and keep everything organized. Notepad++ – This is the first application I turn to when I feel like there’s some bit of information that I need to write down and save. I use Launchy, so opening Notepad++ and creating a new file only takes a few keystrokes. If I find that the information I’m trying to get down requires a more sophisticated application I escalate as needed. The Desktop – By default, I save every file or other bit of information to the desktop. Anyone who has ever had to fix their parents computer before knows that this is a dangerous game (any file my mother has ever worked on is saved directly to the desktop and rarely moves anywhere else). I agree that storing things on the desktop isn’t a great long term approach to keeping organized, which is why I treat my desktop a bit like my e-mail inbox. I strive to keep both empty (or as close to empty as I possibly can). If something is on my desktop, it means that it’s something relevant to a task or project that I’m currently working on. About once a week I take things that I’m not longer working on and put them into my ‘Notes’ folder. The ‘Notes’ Folder – As I work on a task, I tend to accumulate multiple files associated with that task. For example, I might have a bit of SQL that I’m working on to gather data for a new report, a quick C# method that I came up with but am not yet ready to commit to source control, a bulleted list of to-do items in a .txt file, etc. If the desktop starts to get too cluttered, I create a new sub-folder in my ‘Notes’ folder. Each sub-folder’s name is the current date followed by a brief description of the task or project. Then all files related to that task or project go into that sub folder. By using the date as the first part of the folder name, these folders are automatically sorted in reverse chronological order. This means that things I worked on recently will generally be near the top of the list. Using the built-in Windows search functionality I now have a pretty quick and easy way to try and find something that I worked on a week ago or six months ago. Dropbox – Dropbox is a free service that lets you store up to 2GB of files “in the cloud” and have those files synced to all of the different computers that you use. My ‘Notes’ folder lives in Dropbox, meaning that it’s contents are constantly backed up and are always available to me regardless of which computer I’m using. 
They also have a pretty decent iPhone application that lets you browse and view all of the files that you have stored there. The free 2GB edition is probably enough for just storing notes, but I also pay $99/year for the 50GB storage upgrade and keep all of my music, e-books, pictures, and documents in Dropbox. It’s a fantastic service and I highly recommend it. Evernote – I use Evernote mostly to organize information that I access on a fairly regular basis. For example, my Evernote account has a running grocery shopping list, recipes that my wife and I use a lot, and contact information for people I contact infrequently enough that I don’t want to keep them in my phone. I know some people that keep nearly everything in Evernote, but there’s something about it that I find a bit clunky, so I tend to use it sparingly. Google Tasks – One of my biggest paper wasting habits was keeping a running task-list next to my computer at work. Every morning I would sit down, look at my task list, cross off what was done and add new tasks that I thought of during my morning commute. This usually resulted in having to re-copy the task list onto a fresh sheet of paper when I was done. I still keep a running task list at my desk, but I’ve started using Google Tasks instead. This is a dead-simple web-based application for quickly adding, deleting, and organizing tasks in a simple checklist style. You can quickly move tasks up and down on the list (which I use for prioritizing), and even create sub-tasks for breaking down larger tasks into smaller pieces. Balsamiq Mockups – This is a simple and lightweight tool for creating drawings of user interfaces. It’s great for sketching out a new feature, brainstorming the layout of an interface, or even drawing up a quick sequence diagram. I’m terrible at drawing, so Balsamiq Mockups not only lets me create sketches that other people can actually understand, but it’s also handy because you can upload a sketch to a common location for other team members to access. I can honestly say that using these tools (and having limited resources at home) has led me to cut my paper usage down to virtually none. If I ever were to return to a traditional office workplace (hopefully never!) I’d try to employ as many of these applications and techniques as I could to keep paper usage low. I feel far less cluttered and far better organized now.

    Read the article

  • How does one find out which application is associated with an indicator icon?

    - by Amos Annoy
    It is trivial to do this in Ubuntu 10.04. The question is specific to Ubuntu 12.04. some pertinent references (src: answer to What is the difference between indicators and a system tray?: Here is the documentation for indicators: Application indicators | Ubuntu App Developer libindicate Reference Manual libappindicator Reference Manual also DesktopExperienceTeam/ApplicationIndicators - Ubuntu Wiki ref: How can the application that makes an indicator icon be identified? bookmark: How does one find out which application is associated with an indicator icon in Ubuntu 12.04? is a serious question for reasons & problems outlined below and for which a significant investment has been made and is necessary for remedial purposes. reviewing refs. to find an orchestrated resolution ... (an indicator ap. indicator maybe needed) This has nothing to do (does it?) with right click. How can an indicator's icon in Ubuntu 12.04 be matched with the program responsible for it's manifestation on the top panel? A list of running applications can include all processes using System Monitor. How is the correct matching process found for an indicator? How are the sub-indicator applications identified? These are the aps associated with the components of an indicators drop-down menu. (This was to be a separate question and quite naturally follows up the progression. It is included here as it is obvious there is no provisioning to track down offending either sub or indicator aps. easily.) (The examination of SM points out a rather poignant factor in the faster battery depletion and shortened run time - the ambient quiescent CPU rate in 12.04 is now well over 20% when previously, in 10.04, it was well under 10%, between 5% and 7%! - the huge inordinate cpu overhead originates from Xorg and compiz - after booting the system, only SM is run and All Processes are selected, sorting on %CPU - switching between Resources and Processes profiles the execution overhead problem - running another ap like gedit "Text Editor" briefly gives it CPU priority - going back to S&M several aps. are at the top of the list in order: gnome-system-monitor as expected, then: Xorg, compiz, unity-panel-service, hud-service, with dbus-daemon and kworker/x:y's mixed in with some expected daemons and background tasks like nm-applet - not only do Xorg and compiz require excessive CPU time but their entourage has to come along too! further exacerbating the problem - our compute bound tasks no longer work effectively in the field - reduced battery life, reduced CPU time for custom ap.s etc. - and all this precipitated from an examination of what is going on with the battery ap. indicator - this was and is not a flippant, rhetorical or idle musing but has consequences for the credible deployment of 12.04 to reduce the negative impact of its overhead in a production environment) (I have a problem with the battery indicator - it sometimes has % and other times hh:mm - it is necessary to know the ap. & v. to get more info on controlling same. ditto: There are issues with other indicator aps.: NM vs. iwlist/iwconfig conflict, BT ap. vs RF switch, Battery ap. w/ no suspend/sleep for poor battery runtime, ... the list goes on) Details from: How can I find Application Indicator ID's? suggests looking at: file:///usr/share/indicator-application/ordering-override.keyfile [Ordering Index Overrides] nm-applet=1 gnome-power-manager=2 ibus=3 gst-keyboard-xkb=4 gsd-keyboard-xkb=5 which solves the battery ap. 
identification, and presumably nm is NetworkManager for the rf icon, but the envelope, Bluetooth and speaker indicator aps. are still a mystery. (Also, the ordering is not correlated.) Mind you, it was simple in the past to simply right click to get the About option to find the ap. & v. info. browsing around and about: file:///usr/share/indicator-application/ordering-override.keyfile examined: file:///usr/share/indicators file:///usr/share/indicators/messages/applications/ ... perhaps?/presumably? the information sought may be buried in file:///usr/share/indicators A reference in the comments was given to: What is the difference between indicators and a system tray? quoting from that source ... Unfortunately desktop indicators are not well documented yet: I couldn't find any specification doc ... Well ... the actual document https://wiki.ubuntu.com/DesktopExperienceTeam/ApplicationIndicators#Summary does not help much but its existential information provides considerable insight ...

    Read the article

  • The Oracle Retail Week Awards - in review

    - by user801960
    The Oracle Retail Week Awards 2012 were another great success, building on the legacy of previous award ceremonies. Over 1,600 of the UK's top retailers gathered at the Grosvenor House Hotel and many of Europe's top retail leaders attended the prestigious Oracle Retail VIP Reception in the Grosvenor House Hotel's Red Bar. Over the years the Oracle Retail Week Awards have become a rallying point for the morale of the retail industry, and each nominated retailer served as a demonstration that the industry is fighting fit. It was an honour to speak to so many figureheads of UK - and global - retail. All of us at Oracle Retail would like to congratulate both the winners and the nominees for the awards. Retail is a cornerstone of the economy and it was inspiring to see so many outstanding demonstrations of innovation and dedication in the entries. Winners 2012   The Market Force Customer Service Initiative of the Year Winner: Dixons Retail: Knowhow Highly Commended: Hughes Electrical: Digital Switchover     The Deloitte Employer of the Year Winner: Morrisons     Growing Retailer of the Year Winner: Hallett Retail - The Concessions People Highly Commended: Blue Inc     The TCC Marketing/Advertising Campaign of the Year Winner: Sainsbury's: Feed your Family for £50     The Brandbank Multichannel Retailer of the Year Winner: Debenhams Highly Commended: Halfords     The Ashton Partnership Product Innovation of the Year Winner: Argos: Chad Valley Highly Commended: Halfords: Private label bikes     The RR Donnelley Pure-play Online Retailer of the Year Winner: Wiggle     The Hitachi Consulting Responsible Retailer of the Year Winner: B&Q: One Planet Home     The CA Technologies Retail Technology Initiative of the Year Winner: Oasis: Argyll Street flagship launch with iPad PoS     The Premier Tax Free Speciality Retailer of the Year Winner: Holland & Barrett     Store Design of the Year Winner: Next Home and Garden, Shoreham, Sussex Highly Commended: Dixons Retail, Black concept store, Birmingham Bullring     Store Manager of the Year Winner: Ian Allcock, Homebase, Aylesford Highly Commended: Darren Parfitt, Boots UK, Melton Mowbray Health Centre     The Wates Retail Destination of the Year Winner: Westfield, Stratford     The AlixPartners Emerging Retail Leader of the Year Winner: Catriona Marshall, HobbyCraft, Chief Executive     The Wipro Retail International Retailer of the Year Winner: Apple     The Clarity Search Retail Leader of the Year Winner: Ian Cheshire, Chief Executive, Kingfisher     The Oracle Retailer of the Year Winner: Burberry     Outstanding Contribution to Retail Winner: Lord Harris of Peckham     Oracle Retail and "Your Experience Platform" Technology is the key to providing that differentiated retail experience. More specifically, it is what we at Oracle call ‘the experience platform’ - a set of integrated, cross-channel business technology solutions, selected and operated by a retail business and IT team, and deployed in accordance with that organisation’s individual strategy and processes. This business systems architecture simultaneously: Connects customer interactions across all channels and touchpoints, and every customer lifecycle phase to provide a differentiated customer experience that meets consumers’ needs and expectations. 
Delivers actionable insight that enables smarter decisions in planning, forecasting, merchandising, supply chain management, marketing, etc; Optimises operations to align every aspect of the retail business to gain efficiencies and economies, to align KPIs to eliminate strategic conflicts, and at the same time be working in support of customer priorities.   Working in unison, these three goals not only help retailers to successfully navigate the challenges of today but also to focus on delivering that personalised customer experience based on differentiated products, pricing, services and interactions that will help you to gain market share and grow sales.  

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but one of them stood out the most: remote debug support for Windows Azure Cloud Service (a.k.a. WACS).

Live Debug is a Nightmare for Cloud Applications

When we are developing against the public cloud, debugging might be the most difficult task, especially after the application has been deployed. In order to minimize the debug effort, Microsoft provided a local emulator for cloud service and storage once the Windows Azure platform was announced. By using the local emulator developers can run their application on the local machine with almost the same behavior as running on Windows Azure, and it can be debugged easily and quickly. But once we deploy our application to Azure, we have to rely on logs and the diagnostic monitor to debug, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug" which allows any workstation, under any user, to attach to the remote process. This is less secure compared to authenticated remote debug, but much easier and simpler to use. Now in Windows Azure SDK 2.2, we can attach to our application on Windows Azure from our local machine, and it's very easy.

How to Use the Remote Debugger

First, let's create a new Windows Azure Cloud Project in Visual Studio and select ASP.NET Web Role. Then create an ASP.NET WebForm application. Then right click on the cloud project and select "publish". In the publish dialog we need to make sure the application will be built in debug mode, since a .NET assembly cannot be debugged properly in release mode. I enabled Remote Desktop as I will log into the virtual machine later in this post; it's NOT necessary for remote debug. Then select the "advanced settings" tab and make sure we checked "Enable Remote Debugger for all roles". In WACS, a cloud service can have one or more roles and each role can have one or more instances. The remote debugger will be enabled for all roles and all instances if we check this. Currently there's no way for us to specify which role(s) and which instance(s) to enable. Finally click the "publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger. Attaching to the remote process is easy. Open the "server explorer" window in Visual Studio and expand the "cloud services" node, find the cloud service, role and instance we just published and want to debug, right click on the instance and select "attach debugger". Then after a while (it depends on how fast our Internet connection to the Windows Azure data center is) Visual Studio will switch to debug mode. Let's add a breakpoint in the default web page's form load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint. The call stack and watch features are all available to use. Now let's hit F5 to continue, then back in the browser we will find the page rendered successfully.

What's Under the Hood

The remote debugger is a WACS plugin. When we check "Enable Remote Debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file. But if we open the publish package we can find them, as below. 
At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. If we go to the Azure management portal we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger. As below, you can see there are two new ports opened on my application. Finally, in our WACS virtual machine, the Windows Azure system copies the remote debug components, based on which version of Visual Studio we are using, and starts them. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.

Summary

In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2, the Remote Debugger. It allows us to attach to our application on a Windows Azure virtual machine from our local machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk. And since it's only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixing (publishing our beta version to the Azure staging slot), rather than for production.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Web Site Performance and Assembly Versioning – Part 2 Versioning Combined Files Using Subversion

    - by capgpilk
    Ok so it took a while to post this second part. Many apologies, we had a big roll out of a new platform at work and many things had to get sidelined. So this is the second part in a short series on website performance and using versioning to help improve it:

Minification and Concatination of JavaScript and CSS Files
Versioning Combined Files Using Subversion – this post
Versioning Combined Files Using Mercurial – published shortly

In the previous post we used AjaxMin to shrink js and css files then concatenated them into one file each, which had the file names site-script.combined.min.js and site-style.combined.min.css. These file names are fine, but you can configure IIS 7 to cache these static files and so lower the amount of data transferred between server and client. This is done by editing the response headers in IIS.

1. In IIS7 Manager, choose the directory where these files are located and select HTTP Response Headers.
2. Check Expire Web Content and set a time period well into the future.
3. When refreshing the web page, the server will respond with HTTP 304, forcing the browser to retrieve the file from its cache.
4. As can be seen in FireBug, the Cache-Control header has a max age of 31536000 seconds, which equates to 365 days.

The server will always send this HTTP 304 message unless the file changes, forcing it to send new content. To help force this we can change the file name based on the latest build, using the SVN revision number in the filename. So we have lowered data transfer on content that hasn't changed, but forced it to be sent when you have made a change to the css or js files. Now to get the SVN revision number into the file name.

1. Import the MSBuildCommunityTasks targets, which can be downloaded from here.

<Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

2. Edit the BeforeBuild target to call out to svn and get the latest revision.

<SvnVersion LocalPath="$(MSBuildProjectDirectory)"
            ToolPath="$(ProgramFiles)\VisualSVN Server\bin">
  <Output TaskParameter="Revision" PropertyName="Revision" />
</SvnVersion>

3. Set it to update the project AssemblyInfo.cs file with the svn revision.

<FileUpdate Files="Properties\AssemblyInfo.cs"
            Regex="(\d+)\.(\d+)\.(\d+)\.(\d+)"
            ReplacementText="$1.$2.$3.$(Revision)" />

4. Now edit the AfterBuild target to get the full dll version. You could combine these two steps and just get the version from svn; I am working on one project that updates the AssemblyInfo file and another project that allows manual editing of the file, but needs that version within the file name, so I just combined the two for this post.

<MSBuild.ExtensionPack.Framework.Assembly TaskAction="GetInfo"
                                          NetAssembly="$(OutputPath)\mydll.dll">
  <Output TaskParameter="OutputItems" ItemName="Info" />
</MSBuild.ExtensionPack.Framework.Assembly>
<Message Text="Version: %(Info.AssemblyVersion)" Importance="High" />

5. Use this Info.AssemblyVersion to write out the combined css and js files as described in the last post.

<WriteLinestoFile File="Scripts\site-%(Info.AssemblyVersion).combined.min.js"
                  Lines="@(JSLinesSite)" Overwrite="true" />

(A short sketch of how a page can reference these versioned file names at runtime follows below.) In the next post I will cover doing the same, but for a Mercurial repository.
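As a footnote to step 5 (this is my own addition, not from the original post): once the file name carries the assembly version, the page markup needs to reference that same name, otherwise the script tag goes stale with every build. A minimal sketch, assuming the versioned files live in the web project whose AssemblyInfo.cs is being stamped and that the Scripts path from step 5 is used, might look like this:

// Sketch: derive the versioned, combined script URL from the running assembly
// so the markup always matches the file written out at build time.
public static class CombinedFiles
{
    public static string ScriptUrl()
    {
        // Any type defined in the stamped assembly works here.
        string version = typeof(CombinedFiles).Assembly.GetName().Version.ToString();
        return string.Format("~/Scripts/site-{0}.combined.min.js", version);
    }
}

In a WebForms master page this can be resolved with ResolveUrl(CombinedFiles.ScriptUrl()), or with Url.Content(...) in an MVC layout, so each new SVN revision automatically busts the browser cache set up in the first half of this post.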

    Read the article

  • Oracle Social Network Developer Challenge Winners

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. Now that OpenWorld 2012 has wrapped, I have time to tell you all about what happened. Maybe you recall that Noel (@noelportugal) and I were running a modified hackathon during the show, the Oracle Social Network Developer Challenge. Without further ado, congratulations to Dimitri Gielis (@dgielis) and Martin Giffy D’Souza (@martindsouza) on their winning entry, an integration between Oracle APEX and Oracle Social Network that integrates feedback and bug submission with Oracle Social Network Conversations, allowing developers, end-users and project leaders to view and discuss the feedback on their APEX applications from within Oracle Social Network. Update: Bob Rhubart of OTN (@brhubart) interviewed Dimitri and Martin right after their big win. Money quote from Dimitri when asked what he’d buy with the $500 in Amazon gift cards, “Oracle Social Network.” Nice one. In their own words: In the developers perspective it’s important to get feedback soon, so after a first iteration and end-users start to test, they can give feedback of the application. Previously it stopped there, and it was up to the developer to communicate further with email, phone etc. With OSN every feedback and communication gets logged and other people can see the discussion immediately as well. For the end users perspective he can now communicate in a more efficient way to not only the developers, but also between themselves. Maybe many end-users (in different locations) would like to change some behaviour, by using OSN they can see the entry somebody put in with a screenshot and they can just start to chat about it. Some key technical end users can have lighten the tasks of the development team by looking at the feedback first and start to communicate with their peers. For the project manager he has now the ability to really see what communication has taken place in certain areas and can make decisions on that. Later, if things come up again, he can always go back in OSN and see what was said at that moment in time. Integrating OSN in the APEX applications enhances the user experience, makes the lives of the developers easier and gives a better overview to project managers. Incidentally, you may already know Dimitri and Martin, since both are Oracle Ace Directors. I ran into Martin at the Ace Director briefings Friday before the conference started, and at that point, he wasn’t sure he’d have time to enter the Challenge. After some coaxing, he and Dimitri agreed to give it a go and banged out their entry on Tuesday night, or more accurately, very early Wednesday morning, the day of the Challenge judging. I think they said it took them about four hours of hardcore coding to get it done, very much like a traditional hackathon, which is essentially a code sprint from idea to finished product. Here are some screenshots of the workflow they built. I love this idea, i.e. closing the loop between web developers and users, a very common pain point, and so did our judges. 
Speaking of, special thanks to our panel of three judges: Reggie Bradford (@reggiebradford), serial entrepreneur, founder of Vitrue and SVP of Cloud Product Development at Oracle Robert Hipps (@roberthipps), VP of Development for Oracle Social Network and my former boss Roland Smart (@rsmartx), VP of Social Marketing and the brains behind the Oracle Social Developer Community Finally, thanks to everyone who made this possible, including: The three other teams from HarQen (@harqen), TEAM Informatics (@teaminformatics) and Fishbowl Solutions (@fishbowle20) featuring Friend of the ‘Lab John Sim (@jrsim_uix), who finished and presented entries. I’ll be posting the details of their work this week. The one guy who finished an entry, but couldn’t make the judging, Bex Huff (@bex). Bex rallied from a hospitalization due to an allergic reaction during the show; he’s fine, don’t worry. I’ll post details of his work next week, too. The 40-plus people who registered to compete in the Challenge. Noel for all his hard work, sample code, and flying monkey target, more on that to come. The Oracle Social Network development team for supporting this event. Everyone in legal and the beta program office for their help. And finally, the Oracle Technology Network (@oracletechnet) for hosting the event and providing countless hours of operational and moral support. Sorry if I’ve missed some people, since this was a huge team effort. This event was a big success, and we plan to do similar events in the future. Stay tuned to this channel for more. 

    Read the article

  • Sea Monkey Sales & Marketing, and what does that have to do with ERP?

    - by user709270
    Tier One Defined By Lyle Ekdahl, Oracle JD Edwards Group Vice President and General Manager. I recently became aware of the latest Sea Monkey Sales & Marketing tactic. Wait now, what is Sea Monkey Sales & Marketing, and what does that have to do with ERP? Well, if you grew up in the USA during the 50s, 60s, and maybe a bit of the early 70s, there was a unifying cultural medium known as the comic book. I was a big Iron Man fan. I always liked the troubled-hero aspect of Tony Stark, and hey, he was a technologist. This is going somewhere, just hold on. Of course comic books, like most media, contained advertisements. Ninety-pound weakling transformed by Charles Atlas in just 15 minutes per day. Baby Ruth, Juicy Fruit Gum and all assortments of Hostess goodies were on display. The best ad was for the “Amazing Live Sea-Monkeys – The real live fun-pets you grow yourself!” These ads set the standard for exaggeration and half-truth; “…they love attention…so eager to please, they can even be trained…” The cartoon picture on the ad is of a family of royal-looking sea creatures – daddy, mommy, son and little sis – sea monkeys? There was a disclaimer at the bottom in fine print, “Caricatures shown not intended to depict Artemia.” OK, what ten-year-old knows what the heck artemia is? Well, you grow up fast once you’ve been separated from your buck twenty-five plus postage just to discover that it is brine shrimp. Really dumb brine shrimp that don’t take commands or do tricks. Unfortunately the technology industry is full of sea monkey sales and marketing. Yes, believe it or not, in some cases there is subterfuge and obfuscation used to secure contracts. Hey, I get it; the picture on the box might not be the actual size. Make up what you want about your product, but here is what I don’t like: could you leave out the obvious falsity when it comes to my product, especially the negative stuff. So here is the latest one – “Oracle’s JD Edwards is NOT tier one.” Really? Definition, please! Well, a whole host of googleable and reputable sources confirm that a tier one vendor is large, well known, and enjoys national and international recognition. Let me see: large, so thousands of customers? Oh, and part of the world’s largest business software and hardware corporation? Check and check; JD Edwards has both. Well known, enjoying national and international recognition? Oracle’s JD Edwards EnterpriseOne is available in 21 languages and is directly localized in 33 countries, supporting some of the world’s largest multinationals and many midsized domestic market companies. Something on the order of half the JD Edwards customer base is outside North America. My passport is on its third insert after 2 years, and not from vacations. So if you don’t mind, I am going to mark national and international recognition in the got-it column. So what else is there? Well, let me offer a few criteria.
Longevity – The JD Edwards products benefit from 35+ years of intellectual property development; through booms, busts, mergers and acquisitions, we are still here.
Vision and innovation – JD Edwards is the first full-suite ERP to run on the iPad, as just one example.
Proven track record of execution – Since becoming part of Oracle, JD Edwards has released to the market over 20 deliverables including major releases, point releases, new apps modules, tool releases, integrations and more.
Solid, focused functionality with a flexible, interoperable, extensible underlying architecture – JD Edwards offers solid core ERP with specialty modules for verticals, all delivered on a well-defined, independent tools layer that helps enable you to scale your business without an ERP reimplementation.
A continuation plan – Oracle’s JD Edwards offers our customers a 6-year roadmap as well as interoperability with Oracle’s next generation of applications.
Oh, I almost forgot that the expert sources agree on one additional thing: tier one may be a preferred vendor that offers products and services to you with appealing value. You should check out the TCO studies of JD Edwards. I think you will see what the thousands of customers that rely on these products to run their businesses enjoy – the tier one solution with the lowest TCO. Oh, and if you get an offer to buy an ERP for no license charge, remember: the picture on the box might not be the actual size.

    Read the article

  • Screen (command-line program) bug?

    - by VioletCrime
    I fired up my Minecraft server again after about a year off. My server used to run 11.04, which has since been upgraded to 12.04; I lost my management scripts in the upgrade (I thought I'd backed up the user's home directory), but whatever, I enjoy developing stuff like that anyways. However, this time around, I'm running into issues. I start the Minecraft server in a detached screen, but the script is unable to 'stuff' commands into the screen instance until I attach to the screen and then detach again. Once I do that, I can stuff anything I want into the server's terminal using the -X option, until I stop/start the server again, at which point I have to reattach and detach once more to restore functionality. Here's the manager script:

#!/bin/bash

# Name of the screen housing the server (set with the -S flag at startup)
SCREENNAME=minecraft1
# Name of the folder housing the Minecraft world
FOLDERNAME=world1

startServer () {
    if screen -list | grep "$SCREENNAME" > /dev/null; then
        echo "Cannot start Minecraft server; it is already running!"
    else
        screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar
        sleep 2
        if screen -list | grep "$SCREENNAME" > /dev/null; then
            echo "Minecraft server started; happy mining!"
        else
            echo "ERROR: Minecraft server failed to start!"
        fi
    fi
}

stopServer () {
    if screen -list | grep "$SCREENNAME" > /dev/null; then
        echo "Server is running. Giving a 1-minute warning."
        screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in one minute."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 45
        screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in 15 seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 15
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
    else
        echo "Server is not running; nothing to stop."
    fi
}

stopServerNow () {
    if screen -list | grep "$SCREENNAME" > /dev/null; then
        echo "Server is running. Giving a 5-second warning."
        screen -S $SCREENNAME -X stuff "/say EMERGENCY SHUTDOWN! 5 seconds to halt."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 5
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
    else
        echo "Server is not running; nothing to stop."
    fi
}

restartServer () {
    if screen -list | grep "$SCREENNAME" > /dev/null; then
        echo "Server is running. Giving a 1-minute warning."
        screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in one minute."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 45
        screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in 15 seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 15
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 2
        startServer
    else
        echo "Cannot restart server: it isn't running."
    fi
}

# In order for this function to work, a directory 'backup/$FOLDERNAME' must exist
# in the same directory that '$FOLDERNAME' resides in
backupWorld () {
    if screen -list | grep "$SCREENNAME" > /dev/null; then
        echo "Server is running. Giving a 1-minute warning."
        screen -S $SCREENNAME -X stuff "/say Shutting down (backup) in one minute."
        screen -S $SCREENNAME -X stuff $'\015'
        screen -S $SCREENNAME -X stuff "/say Server should be down for no more than a few seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 45
        screen -S $SCREENNAME -X stuff "/say Shutting down for backup in 15 seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 15
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
    fi
    sleep 2
    if screen -list | grep $SCREENNAME > /dev/null; then
        echo "Server is still running? Error."
    else
        cd ..
        tar -czvf backup0.tar.gz $FOLDERNAME
        mv backup0.tar.gz backup/$FOLDERNAME
        cd backup/$FOLDERNAME
        rm backup10.tar.gz
        mv backup9.tar.gz backup10.tar.gz
        mv backup8.tar.gz backup9.tar.gz
        mv backup7.tar.gz backup8.tar.gz
        mv backup6.tar.gz backup7.tar.gz
        mv backup5.tar.gz backup6.tar.gz
        mv backup4.tar.gz backup5.tar.gz
        mv backup3.tar.gz backup4.tar.gz
        mv backup2.tar.gz backup3.tar.gz
        mv backup1.tar.gz backup2.tar.gz
        mv backup0.tar.gz backup1.tar.gz
        cd ../../$FOLDERNAME
        screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar
        sleep 2
        if screen -list | grep "$SCREENNAME" > /dev/null; then
            echo "Minecraft server restarted; happy mining!"
        else
            echo "ERROR: Minecraft server failed to start!"
        fi
    fi
}

printCommands () {
    echo
    echo "$0 usage:"
    echo
    echo "Start   : Starts the server on a detached screen."
    echo "Stop    : Stop the server; includes a 1-minute warning."
    echo "StopNOW : Stops the server with only a 5-second warning."
    echo "Restart : Stops the server and starts the server again."
    echo "Backup  : Stops the server (1 min), backs up the world, and restarts."
    echo "Help    : Display this message."
}

# Forces case-insensitive string comparisons
shopt -s nocasematch

# Primary 'switch'
if [[ $1 = "start" ]]; then
    startServer
elif [[ $1 = "stop" ]]; then
    stopServer
elif [[ $1 = "stopnow" ]]; then
    stopServerNow
elif [[ $1 = "backup" ]]; then
    backupWorld
elif [[ $1 = "restart" ]]; then
    restartServer
else
    printCommands
fi
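One workaround that is often suggested for this exact symptom (stuffing only works after an attach/detach cycle) is to address a window explicitly with -p 0 when sending commands. The helper below is a sketch against the script above, not something from the original post; the function name is made up for illustration.

# Sketch: route all server input through one helper that targets window 0 explicitly.
# The -p 0 flag preselects window 0, which is the commonly cited fix when a session
# created with -dmS (and never attached) ignores -X stuff.
sendToServer () {
    screen -S "$SCREENNAME" -p 0 -X stuff "$1$(printf '\r')"
}

# Example usage inside stopServer:
#   sendToServer "/say Shutting down (halt) in one minute."
#   sendToServer "/stop"

If that works on your setup, it also removes the need for the separate $'\015' stuff calls, since the helper appends the carriage return itself.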

    Read the article

  • Searching for the Perfect Developer&rsquo;s Laptop.

    - by mbcrump
    I have been in the market for a new computer for several months. I set out with a budget of around $1200. I knew up front that the machine would be used for developing applications and maybe some light gaming. I kept switching between buying a laptop or a desktop, but the laptop won because with a laptop I can carry it everywhere, and with a desktop I can't. I searched for about 2 weeks and narrowed it down to a list of must-haves:
i7 processor (I wasn't going to settle for an i5 or AMD. I wanted a true quad-core machine, not 2 dual-cores fused together).
15.6" monitor.
SSD, 128GB or larger – It's almost 2011 and I don't want an old standard HDD in this machine.
8GB of DDR3 RAM – The more the better, right?
1GB video card (prefer NVIDIA) – I might want to play games with this.
HDMI port – Almost standard on new machines. This would be used when I am on the road and want to stream Netflix to the HDTV in the hotel room.
Built-in webcam – To video chat with the wife and kids if I am on the road.
6-cell battery – I've read that an i7 in a laptop really kills the battery. A 6-cell or 9-cell is even better.
That is a pretty long list for a budget of around $1200. I searched around the internet and could not buy this machine prebuilt for under $1200. That was even with coupons and my company's 10% Dell discount. The only way that I would get a machine like this was to buy a prebuilt and replace parts. I chose the Lenovo Y560 on Newegg to start as my base. Below is a top-down picture of it.
Part 1: The Hardware
The specs for this machine:
Color: Gray
Operating System: Windows 7 Home Premium 64-bit
CPU Type: Intel Core i7-740QM (1.73GHz)
Screen: 15.6" WXGA
Memory Size: 4GB DDR3
Hard Disk: 500GB
Optical Drive: DVD±R/RW
Graphics Card: ATI Mobility Radeon HD 5730
Video Memory: 1GB
Communication: Gigabit LAN and WLAN
Card slot: 1 x ExpressCard/34
Battery Life: Up to 3.5 hours
Dimensions: 15.20" x 10.00" x 0.80" - 1.30"
Weight: 5.95 lbs.
This computer met most of the requirements above, except that it didn't come with an SSD or 8GB of DDR3 memory. So I needed to start shopping again, this time for an SSD. I asked around on Twitter and other hardware forums and everyone pointed me to the Crucial C300 SSD. After checking prices, the drive was going to cost an extra $275, and I was going from a spacious 500GB drive to 128GB. After watching some of the SSD videos on YouTube I started feeling better. Below is a pic of the Crucial C300 SSD. The second thing that I needed to upgrade was the RAM. It came with 4GB of DDR3 RAM, but it was slow. I decided to buy the Crucial 8GB (4GB x 2) kit from Newegg. This RAM cost an extra $120 and had a CAS latency of 7. In the end this machine delivered everything that I wanted and it cost around $1300. You are probably saying, well, your budget was $1200. I have spare parts that I'm planning on selling on eBay or Anandtech. =) If you are interested then shoot me an email and I will give you a great deal: mbcrump[at]gmail[dot]com.
500GB 7200RPM laptop HDD
4GB of DDR3 RAM (2GB x 2)
faceVision HD 720p camera – unopened
In the end my Windows Experience Index rating for the SSD was 7.7 and for the CPU 7.1. The max that you can get is 7.9.
Part 2: The Software
I'm very lucky that I get a lot of software for free. When choosing a laptop, the OS really doesn't matter because I would never keep the pre-installed bloatware or Windows 7 Home Premium on my main development machine.
As a matter of fact, as soon as I got the laptop, I immediately took out the old HDD without booting into it. After I got the SSD into the machine, I installed Windows 7 Ultimate 64-bit. The BIOS was out of date, so I updated it to the latest version and started downloading drivers from Lenovo's site. I had to download the wireless networking drivers to a USB key before I could get my machine on my wireless network. I also discovered that if the date on your computer is off, you cannot join a Windows 7 HomeGroup until you fix it. I'm aware that most people like peeking into what programs other software developers use, so I went ahead and listed my "essentials" for a fresh build. I am a big Silverlight guy, so naturally some of the software listed below is specific to Silverlight. You should also check out my master list of Tools and Utilities for the .NET Developer. See a killer app that I'm missing? Feel free to leave it in the comments below.
My Software Essentials List:
CPU-Z
Dropbox
Everything Search Tool
Expression Encoder Update
Expression Studio 4 Ultimate
Foxit Reader
Google Chrome
Infragistics NetAdvantage Ultimate Edition
KeePass
Microsoft Office Professional Plus 2010
Microsoft Security Essentials 2
Mindscape Silverlight Elements
Notepad 2 (with shell extension)
Precode Code Snippet Manager
RealVNC
Reflector
ReSharper v5.1.1753.4
Silverlight 4 Toolkit
Silverlight Spy
Snagit 10
SyncFusion Reporting Controls for Silverlight
Telerik Silverlight RadControls
TweetDeck
Virtual Clone Drive
Visual Studio 2010 Feature Pack 2
Visual Studio 2010 Ultimate
VS KB2403277 Update to get Feature Pack 2 to work
Windows 7 Ultimate 64-Bit
Windows Live Essentials 2011
Windows Live Writer Backup
Windows Phone Development Tools
That is pretty much it. I have a new laptop and am happy with the purchase. If you have any questions then feel free to leave a comment below. Subscribe to my feed

    Read the article

  • Calculated Columns in Entity Framework Code First Migrations

    - by David Paquette
    I had a couple of people ask me about calculated properties / columns in Entity Framework this week. The question was: is there a way to specify a property in my C# class that is the result of some calculation involving 2 properties of the same class? For example, in my database I store a FirstName and a LastName column, and I would like a FullName property that is computed from the FirstName and LastName columns. My initial answer was:

public string FullName
{
    get { return string.Format("{0} {1}", FirstName, LastName); }
}

Of course, this works fine, but it does not give us the ability to write queries using the FullName property. For example, this query:

var users = context.Users.Where(u => u.FullName.Contains("anan"));

would result in the following NotSupportedException: The specified type member 'FullName' is not supported in LINQ to Entities. Only initializers, entity members, and entity navigation properties are supported.

It turns out there is a way to support this type of behavior with Entity Framework Code First Migrations by making use of computed columns in SQL Server. While there is no native support for computed columns in Code First Migrations, we can manually configure our migration to use them. Let's start by defining our C# classes and DbContext:

public class UserProfile
{
    public int Id { get; set; }

    public string FirstName { get; set; }
    public string LastName { get; set; }

    [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public string FullName { get; private set; }
}

public class UserContext : DbContext
{
    public DbSet<UserProfile> Users { get; set; }
}

The DatabaseGenerated attribute is needed on our FullName property. This is a hint to let Entity Framework Code First know that the database will be computing this property for us. Next, we need to run 2 commands in the Package Manager Console. First, run Enable-Migrations to enable Code First Migrations for the UserContext. Next, run Add-Migration Initial to create an initial migration. This will create a migration that creates the UserProfiles table with 3 columns: FirstName, LastName, and FullName. This is where we need to make a small change. Instead of allowing Code First Migrations to create the FullName column, we will manually add that column as a computed column.

public partial class Initial : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "dbo.UserProfiles",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    FirstName = c.String(),
                    LastName = c.String(),
                    //FullName = c.String(),
                })
            .PrimaryKey(t => t.Id);
        Sql("ALTER TABLE dbo.UserProfiles ADD FullName AS FirstName + ' ' + LastName");
    }

    public override void Down()
    {
        DropTable("dbo.UserProfiles");
    }
}

Finally, run the Update-Database command. Now we can query for Users using the FullName property, and that query will be executed on the database server. However, we encounter another potential problem. Since the FullName property is calculated by the database, it will get out of sync on the object side as soon as we make a change to the FirstName or LastName property.
Luckily, we can have the best of both worlds here by also adding the calculation back to the getter on the FullName property:

[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public string FullName
{
    get { return FirstName + " " + LastName; }
    private set
    {
        //Just need this here to trick EF
    }
}

Now we can query for Users using the FullName property, and we also won't need to worry about the FullName property being out of sync with the FirstName and LastName properties. When we run this code:

using (UserContext context = new UserContext())
{
    UserProfile userProfile = new UserProfile { FirstName = "Chanandler", LastName = "Bong" };

    Console.WriteLine("Before saving: " + userProfile.FullName);

    context.Users.Add(userProfile);
    context.SaveChanges();

    Console.WriteLine("After saving: " + userProfile.FullName);

    UserProfile chanandler = context.Users.First(u => u.FullName == "Chanandler Bong");
    Console.WriteLine("After reading: " + chanandler.FullName);

    chanandler.FirstName = "Chandler";
    chanandler.LastName = "Bing";

    Console.WriteLine("After changing: " + chanandler.FullName);
}

We get this output: It took a bit of work, but finally Chandler's TV Guide can be delivered to the right person. The obvious downside to this implementation is that the FullName calculation is duplicated in the database and in the UserProfile class. This sample was written using Visual Studio 2012 and Entity Framework 5. Download the source code here.
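As a side note not taken from the original post: if you would rather not use the data annotation, the same "computed" hint can be expressed through the Code First Fluent API. The sketch below assumes the UserProfile class shown above with the attribute removed; HasDatabaseGeneratedOption is the standard EF 5 configuration call for this.

using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

public class UserContext : DbContext
{
    public DbSet<UserProfile> Users { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Same effect as [DatabaseGenerated(DatabaseGeneratedOption.Computed)]:
        // EF treats FullName as database-computed, so it is never sent in INSERT/UPDATE
        // statements and is read back from the server after SaveChanges.
        modelBuilder.Entity<UserProfile>()
                    .Property(u => u.FullName)
                    .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);

        base.OnModelCreating(modelBuilder);
    }
}

Either way, the migration still needs the manual Sql(...) call shown above, since neither the attribute nor the Fluent API generates the computed-column DDL for you.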

    Read the article

  • dpkg unsatisfied dependencies, now apt-get wants to remove whole system

    - by Bruno Finger
    Firstly, I'm sorry that my terminal output is in Portuguese, but I guess it is still understandable. I am using Ubuntu GNOME 14.04 and I tried to update the GNOME Online Accounts packages by downloading the following .deb files from packages.ubuntu.com for Ubuntu 14.10:
libgoa-backend-1.0-dev_3.12.4-1_amd64.deb
libgoa-backend-1.0-1_3.12.4-1_amd64.deb
libgoa-1.0-dev_3.12.4-1_amd64.deb
libgoa-1.0-0b_3.12.4-1_amd64.deb
gnome-online-accounts_3.12.4-1_amd64.deb
gir1.2-goa-1.0_3.12.4-1_amd64.deb
After downloading them into the same folder, I ran the command sudo dpkg -i *.deb, but it didn't install the packages; instead it showed errors because the packages they depend on don't meet the required versions (and Ubuntu has no way to install those, since they are not in this version's repositories). So now every time I want to install anything through apt-get, Ubuntu tells me to run apt-get -f install to fix the errors. This is the list of packages it wants to install/remove/upgrade:
$ sudo apt-get -f install
Lendo listas de pacotes... Pronto
Construindo árvore de dependências
Lendo informação de estado... Pronto
Corrigindo dependências... Pronto
Os seguintes pacotes foram instalados automaticamente e já não são necessários: # THESE PACKAGES HAVE BEEN PREVIOUSLY INSTALLED AND ARE NO LONGER NECESSARY
account-plugin-windows-live gir1.2-gweather-3.0 libatk-bridge2.0-dev libatk1.0-dev libcairo-script-interpreter2 libcairo2-dev libexpat1-dev libfontconfig1-dev libfreetype6-dev libgdk-pixbuf2.0-dev libglib2.0-dev libgtk-3-dev libharfbuzz-dev libharfbuzz-gobject0 libice-dev libpango1.0-dev libpcre3-dev libpcrecpp0 libpixman-1-dev libpng12-dev libpthread-stubs0-dev librest-dev libsm-dev libsoup2.4-dev libwayland-dev libx11-dev libx11-doc libxau-dev libxcb-render0-dev libxcb-shm0-dev libxcb1-dev libxcomposite-dev libxcursor-dev libxdamage-dev libxdmcp-dev libxext-dev libxfixes-dev libxft-dev libxi-dev libxinerama-dev libxkbcommon-dev libxml2-dev libxrandr-dev libxrender-dev pkg-config signon-plugin-password x11proto-composite-dev x11proto-core-dev x11proto-damage-dev x11proto-fixes-dev x11proto-input-dev x11proto-kb-dev x11proto-randr-dev x11proto-render-dev x11proto-xext-dev x11proto-xinerama-dev xorg-sgml-doctools xtrans-dev zlib1g-dev
Utilize 'apt-get autoremove' para os remover.
Os pacotes extra a seguir serão instalados: # THE FOLLOWING PACKAGES WILL BE INSTALLED
debhelper dh-apparmor libatk-bridge2.0-dev libatk1.0-dev libcairo-script-interpreter2 libcairo2-dev libept1.4.12 libexpat1-dev libfontconfig1-dev libfreetype6-dev libgdk-pixbuf2.0-dev libglib2.0-dev libgtk-3-dev libharfbuzz-dev libharfbuzz-gobject0 libice-dev libmail-sendmail-perl libpango1.0-dev libpcre3-dev libpcrecpp0 libpixman-1-dev libpng12-dev libpthread-stubs0-dev librest-dev libsm-dev libsoup2.4-dev libwayland-dev libx11-dev libx11-doc libxau-dev libxcb-render0-dev libxcb-shm0-dev libxcb1-dev libxcomposite-dev libxcursor-dev libxdamage-dev libxdmcp-dev libxext-dev libxfixes-dev libxft-dev libxi-dev libxinerama-dev libxkbcommon-dev libxml2-dev libxrandr-dev libxrender-dev pkg-config po-debconf x11proto-composite-dev x11proto-core-dev x11proto-damage-dev x11proto-fixes-dev x11proto-input-dev x11proto-kb-dev x11proto-randr-dev x11proto-render-dev x11proto-xext-dev x11proto-xinerama-dev xorg-sgml-doctools xtrans-dev zlib1g-dev
Pacotes sugeridos:
dh-make apparmor-easyprof libcairo2-doc libglib2.0-doc libgtk-3-doc libice-doc libpango1.0-doc imagemagick libsm-doc libsoup2.4-doc libxcb-doc libxext-doc libmail-box-perl
Os pacotes a seguir serão REMOVIDOS: # THE FOLLOWING PACKAGES WILL BE REMOVED
account-plugin-aim account-plugin-jabber account-plugin-salut account-plugin-yahoo empathy evolution evolution-data-server evolution-data-server-online-accounts evolution-indicator evolution-plugins gdm gir1.2-gdata-0.0 gir1.2-goa-1.0 gir1.2-zpj-0.0 gnome-contacts gnome-control-center gnome-documents gnome-online-accounts gnome-online-miners gnome-shell gnome-shell-extension-weather gnome-shell-extensions grilo-plugins-0.2 gvfs-backends-goa libevolution libfolks-eds25 libgdata13 libgoa-1.0-0b libgoa-1.0-dev libgoa-backend-1.0-1 libgoa-backend-1.0-dev libzapojit-0.0-0 mcp-account-manager-uoa nautilus-sendto-empathy ubuntu-gnome-desktop
Os NOVOS pacotes a seguir serão instalados: # THE FOLLOWING NEW PACKAGES WILL BE INSTALLED
debhelper dh-apparmor libatk-bridge2.0-dev libatk1.0-dev libcairo-script-interpreter2 libcairo2-dev libept1.4.12 libexpat1-dev libfontconfig1-dev libfreetype6-dev libgdk-pixbuf2.0-dev libglib2.0-dev libgtk-3-dev libharfbuzz-dev libharfbuzz-gobject0 libice-dev libmail-sendmail-perl libpango1.0-dev libpcre3-dev libpcrecpp0 libpixman-1-dev libpng12-dev libpthread-stubs0-dev librest-dev libsm-dev libsoup2.4-dev libwayland-dev libx11-dev libx11-doc libxau-dev libxcb-render0-dev libxcb-shm0-dev libxcb1-dev libxcomposite-dev libxcursor-dev libxdamage-dev libxdmcp-dev libxext-dev libxfixes-dev libxft-dev libxi-dev libxinerama-dev libxkbcommon-dev libxml2-dev libxrandr-dev libxrender-dev pkg-config po-debconf x11proto-composite-dev x11proto-core-dev x11proto-damage-dev x11proto-fixes-dev x11proto-input-dev x11proto-kb-dev x11proto-randr-dev x11proto-render-dev x11proto-xext-dev x11proto-xinerama-dev xorg-sgml-doctools xtrans-dev zlib1g-dev
0 pacotes atualizados, 61 pacotes novos instalados, 35 a serem removidos e 22 não atualizados.
7 pacotes não totalmente instalados ou removidos.
É preciso baixar 12,0 MB de arquivos.
Depois desta operação, 25,0 MB adicionais de espaço em disco serão usados.
Você quer continuar? [S/n]
Among the packages slated for removal is even gdm. Removing these is sure to leave the system unusable. What can I do to fix this issue? I don't care if I can't install the new version of GOA anymore.
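One commonly suggested way out of this kind of dependency tangle, offered here only as a sketch and not a verified fix for this exact system, is to downgrade the manually installed packages back to the versions in the 14.04 archive before letting apt-get -f install touch anything. The package set and the version placeholder below are illustrative; check the real values with apt-cache policy first.

# 1. See which versions the 14.04 (trusty) archive provides for the packages
#    that were manually upgraded from 14.10 .debs.
apt-cache policy gnome-online-accounts libgoa-1.0-0b libgoa-backend-1.0-1 gir1.2-goa-1.0

# 2. Downgrade each one to the archive version reported above
#    (<version-from-trusty> is a placeholder, not a real version string).
sudo apt-get install gnome-online-accounts=<version-from-trusty> \
                     libgoa-1.0-0b=<version-from-trusty> \
                     libgoa-backend-1.0-1=<version-from-trusty> \
                     gir1.2-goa-1.0=<version-from-trusty>

# 3. Only then let apt resolve whatever is still inconsistent.
sudo apt-get -f install

The idea is that once the locally installed 14.10 versions are gone, apt no longer needs to rip out gdm, gnome-shell and the rest to satisfy the broken dependencies.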

    Read the article

  • How to Secure a Data Role by Multiple Business Units

    - by Elie Wazen
    In this post we will see how a Role can be data secured by multiple Business Units (BUs). Separate Data Roles are generally created for each BU if a corresponding data template generates roles on the basis of the BU dimension. The advantage of creating a policy with a rule that includes multiple BUs is that fewer entries need to be made when mapping these roles in HCM Role Provisioning Rules. This could facilitate maintenance for enterprises with a large number of Business Units. Note: the example below applies as well if the securing entity is Inventory Organization. Let us take for example the case of a user provisioned with the "Accounts Payable Manager - Vision Operations" Data Role in Fusion Applications. This user will be able to access Invoices in Vision Operations but will not be able to see Invoices in Vision Germany.
Figure 1. A User with a Data Role restricting them to Data from BU: Vision Operations
With the role granted above, this is what the user will see when they attempt to select Business Units while searching for AP Invoices.
Figure 2. The List of Values of Business Units is limited to a single one. This is the effect of the Data Role granted to that user, as can be seen in Figure 1.
In order to create a data role that secures by multiple BUs, we need to start by creating a condition that groups the Business Units we want to include in that data role. This is accomplished by creating a new condition against the BU View. That condition will later be used to create a data policy for our newly created Role. The BU View is a database resource and is accessed from APM, as seen in the search below.
Figure 3. Viewing a Database Resource in APM
The next step is to create a new condition, in which we define a SQL predicate that includes 2 BUs (the IDs below refer to Vision Operations and Vision Germany). At this point we have simply created a standalone condition. We have not used this condition yet, and security is therefore not affected.
Figure 4. Custom Role that inherits the Purchase Order Overview Duty
We are now ready to create our Data Policy. In APM, we search for our newly created Role and navigate to "Find Global Policies". We query the Role we want to secure and navigate to view its global policies.
Figure 5. The Job Role we plan on securing
We can see that the role was not defined with a Data Policy, so we will create one that uses the condition we created earlier.
Figure 6. Creating a New Data Policy
In the General Information tab, we have to specify the DB Resource that the Security Policy applies to; in our case this is the BU View.
Figure 7. Data Policy Definition - Selection of the DB Resource we will secure by
In the Rules tab, we make the rule applicable to multiple values of the DB Resource we selected in the previous tab. This is where we associate the condition we created against the BU View with this data policy, by entering the condition name in the Condition field.
Figure 8. Data Policy Rule
The last step of defining the Data Policy consists of explicitly selecting the Actions that are governed by this Data Policy. In this case, for example, we select the Actions displayed below in the right pane. Once the record is saved, we are ready to use our newly secured Data Role.
Figure 9. Data Policy Actions
We can now see a new Data Policy associated with our Role.
Figure 10. Role is now secured by a Data Policy
We now assign that new Role to the User.
Of course this does not have to be done in OIM; it can be done using a Provisioning Rule in HCM.
Figure 11. Role assigned to the User who previously was granted the Vision Ops secured role
Once that user accesses the Invoices Workarea, this is what they see: in the image below, the LOV of Business Unit returns the two values defined in our data policy, namely Vision Operations and Vision Germany.
Figure 12. The List of Values of Business Units now includes the two we included in our data policy. This is the effect of the data role granted to that user, as can be seen in Figure 11.
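For reference, the condition built against the BU view is just a stored SQL predicate. It would look something along the lines of the snippet below, where the column name and the two ID values are purely illustrative stand-ins for Vision Operations and Vision Germany; the post does not show the real values.

-- Hypothetical condition predicate against the BU database resource:
-- the two IDs stand in for Vision Operations and Vision Germany.
BU_ID IN (204, 911)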

    Read the article

< Previous Page | 434 435 436 437 438 439 440 441 442 443 444 445  | Next Page >