Search Results

Search found 6992 results on 280 pages for 'exist'.


  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
This year I embarked on a journey to migrate a group of ASP.NET web applications, developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0, so that they instead use the Oracle WebCenter WSRP Producer for .NET, integrated with WebLogic Portal 10.3.4. It has been a very winding path, and this blog entry is intended to share both the lessons learned and the approaches that led to them. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

For the Curious

From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those who are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

A Brief (and Edited) History of the WSRP for .NET Technologies (as Relevant to this Post)

Note: This section is for those who are curious about why the migration path is not as simple as it is for many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation).

At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual-technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities.

Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time, with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:

*****************************************
Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
Support for two-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
Allow developers to code portlets (Web Parts) using the .NET Framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.
*****************************************

The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which acts as the root.

Differences Between .NET Accelerator and WSRP Producer

Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

The basic terminology used to describe the participating applications in a WSRP environment is the same whether applied to the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves in the role that a WSRP environment refers to as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application.

The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the Consumer to the .NET Accelerator; the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

Highly Recommended

Enabling Logging

Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now?

Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable the WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding log4net.Config.XmlConfigurator.Configure();

IIS logs will usually (in a standard configuration) be in a sub folder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

Things You Must Do

Merge Web.Config

Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention.

Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge required settings from the existing Web.Config into the one in the WSRP Producer.

Use the WSRP Producer Master Page

The Master Page installed for the WSRP Producer provides common hidden form fields and JavaScript to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master". You then replace:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
<HTML>
<HEAD>

with:

<asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

and:

</HEAD>
<body>
<form id="theForm" method="post" runat="server">

with:

</asp:Content>
<asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

and finally:

</form>
</body>
</HTML>

with:

</asp:Content>

In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.

It Happened to Me, It Might Happen to You…Or Not

Watch for Use of Session or Request in OnInit

In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to Request or Session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, then check for potential references in the OnInit method (including references by methods called within OnInit) to Session or Request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary objects are fully available. I find the start of the Page_Load method to be the simplest place to do this.

Case Sensitivity

HTML markup is not case sensitive, but the Java-based rewriter rules are. This means it is possible to have many variations of SRC= and src=, or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is.
If this is not practical, then duplicate any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use in the duplicate rule. For example:

<RewriterRule>
  <LookFor>(href=\"([^\"]+)</LookFor>
  <ChangeToAbsolute>true</ChangeToAbsolute>
  <ApplyTo>.axd,.css</ApplyTo>
  <MakeResource>true</MakeResource>
</RewriterRule>

may need to be duplicated as:

<RewriterRule>
  <LookFor>(HREF=\"([^\"]+)</LookFor>
  <ChangeToAbsolute>true</ChangeToAbsolute>
  <ApplyTo>.axd,.css</ApplyTo>
  <MakeResource>true</MakeResource>
</RewriterRule>

While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

Is it Still Relative?

Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

I Can See You But I Can't Find You

This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP, but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:

Error 155: The name 'form1' does not exist in the current context. C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs, line 51, column 52, project legacywebsite.module

Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced, as controls become embedded in other controls, requiring a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:

ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

becomes this:

ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a Master Page has been added requires an additional method to recurse through the controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

public static Control FindControlRecursive(Control Root, string Id)
{
    if (Root.ID == Id)
        return Root;
    foreach (Control Ctl in Root.Controls)
    {
        Control FoundCtl = FindControlRecursive(Ctl, Id);
        if (FoundCtl != null)
            return FoundCtl;
    }
    return null;
}

Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this:

Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg");

to this:

Label lblErrMsg = (Label)FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg");

The Master That Won't Step Aside

In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient route I took was to update it to take the place of the WSRP Master Page. The changes are highlighted below:

…
<asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder>
</head>
<body leftMargin="0" topMargin="0">
<form id="TheForm" runat="server">
<input type="hidden" name="key" id="key" value="" />
<input type="hidden" name="formactionurl" id="formactionurl" value="" />
<input type="hidden" name="handle" id="handle" value="" />
<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" >
</asp:ScriptManager>

This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I ran across worked fine as sub-masters to the WSRP Master Page.

Moving On

In enterprise portals, even after you get everything working, the work is not finished. Next you need to get it to where everyone will work with it.

Migration Planning

Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined whether this cutover should involve a new producer, updating all of the portlets in the consumer, or whether the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise.

Location, Location, Location

Not everyone wants the web application to live at the descriptively obvious wsrpdefault location, and some need to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will also work for creating a site with a different name and then removing the old site. Alternatively, you can just change the name in IIS.

Manually Creating a WSRP Producer Site

Instructions (NOTE: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance):

1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
2. Bring up a command window as an administrator.
3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
6. Open C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.
7. Open C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.

Tests:

1. Bring up a browser on the host itself, go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl, and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
3. Register the producer in WLP and test the portlet.

Changing the Port Used by WSRP Producer

The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

Do You Need to Migrate?

Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies, then migration is certainly one activity you should be planning for.
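A quick illustration of the logging setup described under "Enabling Logging" above: below is a minimal sketch of a Global.asax.cs with the log4net call added. The class and method shells are standard ASP.NET boilerplate shown only for context, and the sketch assumes the log4net assembly is already referenced by the application.

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Wires up the WSRP Producer's log4net output using the XML
        // configuration already deployed with the producer.
        log4net.Config.XmlConfigurator.Configure();
    }
}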

    Read the article

  • Vidalia detected that the Tor software exited unexpectedly?

    - by Rana Muhammad Waqas
I have installed Vidalia by following these instructions and everything went as they mentioned. When I started Vidalia it gave me the error: "Vidalia was unable to start Tor. Check your settings to ensure the correct name and location of your Tor executable is specified." I found that bug here and followed their instructions to fix it, and now after that it says: "Vidalia detected that the Tor software exited unexpectedly. Please check the message log for recent warning or error messages." Logs of Vidalia:

Oct 18 02:15:06.937 [Notice] Tor v0.2.3.25 (git-3fed5eb096d2d187) running on Linux.
Oct 18 02:15:06.937 [Notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Oct 18 02:15:06.937 [Notice] Read configuration file "/home/waqas/.vidalia/torrc".
Oct 18 02:15:06.937 [Notice] We were compiled with headers from version 2.0.19-stable of Libevent, but we're using a Libevent library that says it's version 2.0.21-stable.
Oct 18 02:15:06.938 [Notice] Initialized libevent version 2.0.21-stable using method epoll (with changelist). Good.
Oct 18 02:15:06.938 [Notice] Opening Socks listener on 127.0.0.1:9050
Oct 18 02:15:06.938 [Warning] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running?
Oct 18 02:15:06.938 [Warning] /var/run/tor is not owned by this user (waqas, 1000) but by debian-tor (118). Perhaps you are running Tor as the wrong user?
Oct 18 02:15:06.938 [Warning] Before Tor can create a control socket in "/var/run/tor/control", the directory "/var/run/tor" needs to exist, and to be accessible only by the user account that is running Tor. (On some Unix systems, anybody who can list a socket can connect to it, so Tor is being careful.)
Oct 18 02:15:06.938 [Warning] Failed to parse/validate config: Failed to bind one of the listener ports.
Oct 18 02:15:06.938 [Error] Reading config failed--see warnings above.

Please help!

    Read the article

  • Silverlight Cream for April 04, 2010 -- #830

    - by Dave Campbell
In this Issue: Michael Washington, Hassan, David Anson, Jeff Wilcox, UK Application Development Consulting, Davide Zordan, Victor Gaudioso, Anoop Madhusudanan, Phil Middlemiss, and Laurent Bugnion.

Shoutouts: Josh Smith has a good-read post up: Design-time data is still data. Shawn Hargreaves reported his MIX demo released.

From SilverlightCream.com:

Silverlight MVVM: Enabling Design-Time Data in Expression Blend When Using Web Services - Michael Washington has a tutorial up on MVVM and using a web service to get design-time data that works in Blend also... lots of information and screenshots.

WP7 Transition Animation - Hassan has a new WP7 tutorial up that demonstrates playing media and adding transition animation between pages.

Tip: For a truly read-only custom DependencyProperty in Silverlight, use a read-only CLR property instead - David Anson's latest tip is in response to comments on his previous post, and details one by Dr. WPF, who points out that a read-only DependencyProperty doesn't actually need to be a DependencyProperty as long as the class implements INotifyPropertyChanged.

Template parts and custom controls (quick tip) - Jeff Wilcox has posted a set of tips and recommendations to use when developing controls in Silverlight... this is a post to bookmark.

Flexible Data Template Support in Silverlight - The UK Application Development Consulting details a 'problem' in Silverlight that doesn't exist in WPF, namely data templates that vary by type, and discusses a way around it.

Multi-Touch enabling Silverlight Simon using Blend behaviors and the Surface sample for Silverlight - Davide Zordan brought Multi-Touch to the Silverlight Simon game on CodePlex using Blend Behaviors.

New Video Tutorial: How to Use a Behavior to Fire Methods from Objects in Styles - Victor Gaudioso has a video tutorial up responding to a question from a developer. He demonstrates development of a Behavior that can be attached to objects in or out of Styles and that allows you to specify which Method they need to fire.

Creating a Silverlight Client for @shanselman's Nerd Dinner, using oData and Bing Maps - Anoop Madhusudanan took Scott Hanselman's post on an OData API for StackOverflow and created a Silverlight client for Nerd Dinner, including Bing Maps.

A Chrome and Glass Theme - Part 2 - Phil Middlemiss has the next part of his Chrome and Glass Theme up. In this one he creates a very nice chrome-look button with visual state changes.

MVVM Light Toolkit V3 SP1 for Windows Phone 7 - Laurent Bugnion has released a new version of MVVM Light for WP7. Included is an installation manual and information about what was changed.

Stay in the 'Light!
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
Join me @ SilverlightCream | Phoenix Silverlight User Group
Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • links for 2010-04-27

    - by Bob Rhubart
@oracletechnet: Oracle Technology Network Newsletters Revisited
"You may find this hard to believe, but some analysts contend that email newsletters are still among the most preferred methods of "information awareness" by developers today. And in our experience, the numbers back it up: subscriptions to Oracle Technology Network newsletters grow organically by 15% every year, even after you take continual list cleanup into account." -- Justin Kestelyn
(tags: oracle otn newsletters developers architects)

Sylvain Duloutre: Directory Services as a Web Service
Sylvain Duloutre shares a WSDL file he created to deal with issues involved in XML binding generation.
(tags: oracle sun wsdl webservices DSEE netbeans jdeveloper)

Nick Wooler: Iron-Clad Cloud: Secure Cloud Computing
"One solution to the security problem with cloud services can be overcome using Service Oriented Security. The Oracle approach to using Service Oriented Security allows developers to pull from a centralized, authoritative source of identity services. This allows developers to build security into every application from the inside-out. This is critical to ensuring this is done in a standardized manner and most importantly it allows developers to develop without being security experts." -- Nick Wooler
(tags: oracle sun security cloud saas)

Andy Mulholland: A week of visits; Cisco, HP, Oracle, SAP and VMware (in alphabetical order!)
"I now am considering that we should be thinking about 'clouds' in a virtual way, by which I mean that a succession of virtual 'clouds' will need to exist, each possessing specific characteristics that suit certain types of services. Really it's no different to what we see with servers today. Adding a hypervisor to a server adds new flexibility, but creating a virtualised environment means much more. What I suspect will happen is that we will start to use vendor-specific approaches to building what I will term a physical cloud solution using their technology and approach to supporting a specific objective, but with time we will find these physical clouds will interoperate as a fully virtualised cloud environment." -- Andy Mulholland
(tags: entarch enterprisearchitecture cloudcomputing virtualization)

@fteter: Highlights From The Bright Lights - Tuesday #c10
Oracle Ace Director Floyd Teter of JPL with one last wrap-up of Collaborate 10.
(tags: oracle otn collaborate2010 las vegas)

Rittman Mead India – Call for very good Oracle BI Developers/Architects
"Now that we have an office in India and if you are interested in joining us, do drop us a line at [email protected], and we will be glad to have technical discussions with you. If you are also an Oracle BI, DW or EPM customer looking for help on projects in the Asia-Pacific region, again we'll be pleased to hear from you and to let you know how we can help." -- Venkatakrishnan J
(tags: otn oracle jobs india developers architects software)

    Read the article

  • Configuring JPA Primary key sequence generators

    - by pachunoori.vinay.kumar(at)oracle.com
This article describes the JPA feature of generating and assigning unique sequence numbers to a JPA entity. It provides information on the JPA sequence generator annotations and their usage.

Use Case Description

Adding a new Employee to the organization using an Employee form should assign a unique employee Id. The following description provides the detailed steps to implement the generation of unique employee numbers using the JPA generators feature.

Steps to configure JPA Generators

1. Generate the Employee entity using the "Entities from Table Wizard".
2. Create a Database Connection, select the table "Employee" for which the entity will be generated, and finish the wizard with the default selections.
3. In the offline database sources, under the schema, create a Sequence object, or copy one to the offline database from the online database connection.
4. Open persistence.xml in the application navigator, select the entity "Employee" in the structure view, and select the "Generators" tab in the flat editor.
5. In the Sequence Generator section, enter the name of the sequence generator, "InvSeq", and select the sequence created in step 3 from the drop-down list.
6. Expand Employees in the structure view and select EmployeeId, then select the "Primary Key Generation" tab.
7. In the Generated Value section, select the "Use Generated Value" check box, select the strategy "Sequence", and select the generator "InvSeq" defined in step 5.

The following annotations get added for the JPA generator configured in JDeveloper for an entity. To use a specific named sequence object (whether it is generated by schema generation or already exists in the database) you must define a sequence generator using a @SequenceGenerator annotation. Provide a unique label as the name for the sequence generator and refer to that name in the @GeneratedValue annotation, along with the generation strategy.

For example, see the Employee entity sample code below, configured for sequence generation. EMPLOYEE_ID is the primary key and is configured for auto-generation of sequence numbers. EMPLOYEE_SEQ is the sequence object that exists in the database. This sequence is configured to generate the sequence numbers and assign the value as the primary key to the Employee_Id column in the Employee table.

@SequenceGenerator(name="InvSeq", sequenceName = "EMPLOYEE_SEQ")
@Entity
public class Employee implements Serializable {
    @Id
    @Column(name="EMPLOYEE_ID", nullable = false)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator="InvSeq")
    private Long employeeId;
}

@SequenceGenerator - defines the sequence generator based on a database sequence object.
Usage: @SequenceGenerator(name="SequenceGenerator", sequenceName = "EMPLOYEE_SEQ")

@GeneratedValue - defines the generation strategy and refers to the sequence generator.
Usage: @GeneratedValue(strategy = GenerationType.SEQUENCE, generator="name of the Sequence generator defined in @SequenceGenerator")
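To see the configured generator in action, something along the lines of the following sketch can be used. This is illustrative only: the persistence unit name "EmployeePU", the getEmployeeId() accessor, and the bootstrap code are assumptions, not part of the article's project.

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class EmployeeDemo {
    public static void main(String[] args) {
        // "EmployeePU" is a hypothetical persistence unit name.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("EmployeePU");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        Employee emp = new Employee();
        // employeeId is left unset; the InvSeq generator assigns the next
        // EMPLOYEE_SEQ value when the entity is persisted.
        em.persist(emp);
        em.getTransaction().commit();

        System.out.println("Assigned id: " + emp.getEmployeeId());
        em.close();
        emf.close();
    }
}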

    Read the article

  • Chroot into a 32 bit version of ubuntu from a 64 bit host

    - by Leif Andersen
I have a piece of software that only runs on 32-bit Linux (Xilinx WebPack 10.1; apparently it 'has' to be the old version because that's the latest one compatible with their boards). Anyway, this version is only compatible with 32-bit Linux. So, I head off to this page to see what I can do: https://help.ubuntu.com/community/32bit_and_64bit

Of the 4 options (listed at the bottom):

1. I already installed ia32-libs, and it's still not working.
2. I could do that one if needed (which I ended up doing).
3. No, I don't want to be working from a VM all of next semester; that would be painful and I'd rather just reinstall my whole computer with a 32-bit OS (which I don't want to do).
4. It didn't sound like it was the best option based on what I've seen.

So I went off to do #2, and set up a chroot for 32-bit Ubuntu. It linked to this tutorial: https://help.ubuntu.com/community/DebootstrapChroot

As I'm running Ubuntu 10.10 I made the lucid-and-newer version changes. Which is to say I wrote:

[hardy-i386]
description=Ubuntu 8.04 Hardy for i386
directory=/srv/chroot/hardy-i386
personality=linux32
root-users=leif
type=directory
users=leif

to /etc/schroot/chroot.d/hardy-i386 (note though that I did save it once before I had the file properly formatted; I saved the correct version moments later). I then ran:

$ sudo mkdir -p /srv/chroot/hardy_i386
$ sudo debootstrap --variant=buildd --arch i386 hardy /srv/chroot/hardy_i386 http://archive.ubuntu.com/ubuntu/

Then I ran:

$ schroot -l

And it showed the proper chroot, but then when I ran:

$ schroot -c hardy-i386 -u root

I got the following error:

E: 10mount: error: Directory '/srv/chroot/hardy-i386' does not exist
E: 10mount: warning: Mount location /var/lib/schroot/mount/hardy-i386-80359697-2164-4b10-a05a-89b0f497c4f1 no longer exists; skipping unmount
E: hardy-i386-80359697-2164-4b10-a05a-89b0f497c4f1: Chroot setup failed: stage=setup-start

Can anyone help me figure out what the problem is? Oh, by the way: /srv/chroot/hardy-i386 most certainly exists. I've also tried it replacing all references to hardy with lucid, to no avail. Oh, one more thing, I did set up the Chrome OS environment a month back or so: http://www.chromium.org/chromium-os/developer-guide and it had me use something with chmod. So, can anyone figure out what the problem is? Thank you.

    Read the article

  • Using MAC Authentication for simple Web API’s consumption

    - by cibrax
For simple scenarios of Web API consumption where identity delegation is not required, traditional HTTP authentication schemes such as basic, certificates or digest are the most used nowadays. All these schemes rely on sending the caller's credentials, or some representation of them, in every request message as part of the Authorization header, so they are prone to phishing attacks if they are not correctly secured at the transport level with HTTPS.

In addition, most client applications typically authenticate two different things: the caller application and the user consuming the API on behalf of that application. For most cases, the scheme is simplified by using a single set of username and password for authenticating both, making it necessary to store those credentials temporarily somewhere in memory. The truth is that you can use two different identities: one for the user running the application, which you might authenticate just once during the first call when the application is initialized, and another identity for the application itself that you use on every call.

Some cloud vendors like Windows Azure or Amazon Web Services have adopted a scheme to authenticate the caller application based on a Message Authentication Code (MAC) generated with a symmetric algorithm using a key known by the two parties, the caller and the Web API. The caller must include a MAC as part of the Authorization header, created from different pieces of information in the request message such as the address, the host, and some other headers. The Web API can authenticate the caller by using the key associated with it and validating the attached MAC in the request message. In that way, no credentials are sent as part of the request message, so there is no way for an attacker to intercept the message and get access to those credentials.

Anyway, this scheme also suffers from some deficiencies that can enable attacks. For example, brute force can still be used to infer the key used for generating the MAC and impersonate the original caller. This can be mitigated by renewing keys in a relatively short period of time. This scheme, like any other, can be complemented with transport security.

Eran Hammer, one of the brains behind OAuth, has recently published a specification of a MAC-based protocol for HTTP authentication called Hawk. The initial version of the spec is available here. A curious fact is that the specification per se does not exist; the specification itself is the code that Eran initially wrote using node.js. In that implementation, you can associate a key with a user, so once the MAC has been verified on the Web API, the user can be inferred from that key. Also, a timestamp is used to avoid replay attacks.

As a pet project, I decided to port that code to .NET using ASP.NET Web API, which is also available in github under https://github.com/pcibraro/hawknet

Enjoy!
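To make the general idea concrete, here is a rough C# sketch of computing a request MAC from a shared key, in the spirit of the schemes described above. The normalized string and its parts (method, path, host, timestamp) are deliberately simplified and do not follow the Hawk specification or any vendor's exact format.

using System;
using System.Security.Cryptography;
using System.Text;

public static class RequestMac
{
    // Computes a MAC over a simplified normalized request string.
    // Which request parts to sign, and in what order, is scheme-specific;
    // the selection here is illustrative only.
    public static string Compute(string key, string method, string path,
                                 string host, long timestamp)
    {
        string normalized = method + "\n" + path + "\n" + host + "\n" + timestamp;
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
        {
            byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(normalized));
            return Convert.ToBase64String(mac);
        }
    }
}

The consumer would send the timestamp and the resulting MAC in the Authorization header; the Web API recomputes the MAC with the key it has on file for the caller and compares the two, rejecting stale timestamps to limit replay.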

    Read the article

• F# – Immutable List vs a Mutable Collection in Arrays

    - by MarkPearl
Another day gone by looking into F#. Today I thought I would ramble on about lists and arrays in F#. Coming from a C# background, I barely ever use arrays nowadays in my C# code. Why, you may ask? Because I find lists generally handle most of the business scenarios that I come across. So it has been an interesting experience to keep bumping into arrays and lists in F#, and I wondered why the frequency of coming across arrays was so much higher in this language than in C#. Take for instance the code I stumbled across today:

let rng = new Random()
let shuffle (array : 'a array) =
    let n = array.Length
    for x in 1..n do
        let i = n-x
        let j = rng.Next(i+1)
        let tmp = array.[i]
        array.[i] <- array.[j]
        array.[j] <- tmp
    array

Quite simply, its purpose is to "shuffle" an array of items. So I thought, why does it have the 'a array explicitly declared? What if I changed it to a list? Well... as I was about to find out, there are some subtle differences between arrays and lists in F# that do not exist in C#. Namely, mutability. A list in F# is an ordered, immutable series of elements of the same type, while an array is a fixed-size, zero-based, mutable collection of consecutive data elements that are all of the same type. For me the key word is immutable vs. mutable collection. That's why I could not simply swap the 'a array with 'a list in my function header: later on in the code the syntax would not be valid where I "swap" item positions, i.e. array.[i] <- array.[j] would be invalid, because a list, being immutable, couldn't change by its very definition.

So where does that leave me? It's too early to say. I don't know what the balance will be in future code – will I typically always use lists or arrays, or even have a balance – but time will tell.
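As a minimal illustration of that difference (written from the description above, not taken from the original post): an array element can be replaced in place, while "updating" a list means building a new list.

let arr = [| 1; 2; 3 |]
arr.[0] <- 99                    // fine: arrays are mutable in place

let lst = [ 1; 2; 3 ]
// lst.[0] <- 99                 // will not compile: lists are immutable
let lst' = 99 :: List.tail lst   // instead, build a new list from the old one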

    Read the article

  • Is SugarCRM really adequate for custom development?

    - by dukeofgaming
I used SugarCRM for a project about two years ago. I ran into errors from the very installation, having to hack the actual installation file to deploy the software on the server... and other errors that I can't recall now. Two years later, I'm picking it up for a project once again... oh dear, I'm feeling like I should have developed the whole thing from scratch myself. Some examples:

- I couldn't install it on the server (again)... I had to install it locally, then copy the files and database over to the server and manually edit the config file.
- I constantly get deployment errors from the module builder... one reason is SugarCRM keeps creating a record in the upgrade_history table for a file that does not exist; I keep deleting that record and it keeps coming back corrupt.
- I get other deployment errors, but have not figured them out... then I have to roll back all files and the database to try again.
- I deleted a custom module with relationships; the relationships stayed in the other modules and cannot be deleted anymore, PHP warnings all over the place.
- Quick create for custom modules does not appear; a hack is needed.
- Its whole cache directory is a joke: permanent data/files are stored there.
- The module builder interface makes required fields disappear.
- Edit the wrong thing and the module builder won't deploy again... then pray Quick Repair and/or Rebuild Relationships do the trick.

My impression of SugarCRM now is that, regardless of its pretty exterior and apparent functionality, it is a very low quality piece of software. This scared me even more: http://amplicate.com/hate/sugarcrm; a quote:

"I wish this info had been available when I tried to implement it 2 years ago... I searched high and low and the only info I found was positive. Yes, it's a piece of crap. The community edition was full of bugs... nothing worked. Essentially I got fired for implementing it. I'm glad though, because now I work for myself, am much happier and make more money... so, I should really thank SugarCRM for sucking so much I guess!"

I figured that perhaps some of you have had similar experiences, and have either stuck with SugarCRM or moved on to another solution. I'm very interested in knowing what your resolutions were (or what your current situations are) to make up my own mind, since the project I'm working on is long term and I'm feeling SugarCRM will be more an obstacle than an aid. Thanks in advance.

    Read the article

  • How Microsoft listens

    - by Stacy Vicknair
This being my freshman year as an MVP, I had a realization that I perhaps should be embarrassed hasn't happened sooner. The realization comes much like the iconic M&Ms commercial where the M&Ms run into Santa and exclaim, "He does exist!" My personal realization arguably has a greater implication: Microsoft does listen. This is the most important lesson that I received this year attending the MVP Summit. My hope is that I can convince you that we are empowered to make a difference. Instead of using "Man, I hate how this works / doesn't work!" as cooler conversation, we can use it as true interaction with Microsoft. We as customers of Microsoft need to stop asking the question "Will this work for me?" and instead ask "How can this work for me?" There are three quick resources that the average developer has access to today that they can use to be heard by the product teams, and by no means should you think twice if you have a concern that you'd like a real response on.

MVPs

MVPs are members of your community who have a deep relationship with Microsoft and will have connections to their associated product group. Don't think of them as just a resource for answers, but also as your ambassador for getting your experiences heard. You can find your local MVPs by browsing the directory at: https://mvp.support.microsoft.com/communities/mvp.aspx

Evangelists

Evangelists are employees of Microsoft who work to foster and grow communities in their assigned region. They are first-class citizens of Microsoft and are often deeply involved with the product groups. As a result, they will be more than glad to direct your questions or concerns to those who can answer them most expertly. With that said, evangelists are also very busy people (who do amazing things for the community) and might not be able to get you that conversation as quickly as a local MVP. You can find your local evangelist at the following website: http://msdn.microsoft.com/en-us/bb905078.aspx

Microsoft Connect

This is one of the resources that I haven't used enough, but its value cannot be understated. Connect is the starting point of the social conversation that happens between Microsoft and the community daily. Connect acts as a portal where you can provide new feedback as well as comment on and rate the feedback provided by others. Power is in numbers when it comes to Connect, so the exposure that your feedback can get not only lets you know that you aren't the only one who wants change, but also lets Microsoft know the same. https://connect.microsoft.com

Technorati Tags: Microsoft, MVP, Feedback, Connect

    Read the article

  • The curious case of SOA Human tasks' automatic completion

    - by Kavitha Srinivasan
A large South Asian insurance industry customer using Oracle BPM and SOA ran into this. I have survived this ordeal previously myself but didn't think to blog it then. However, it seems like a good idea to share this knowledge with this reader community, and so here goes...

Symptom: A human task (in a SOA/BPEL/BPM process) completes automatically when it should have been assigned to a proper user. There are no stack traces and no related exceptions in the logs.

Why: The product is designed to treat human tasks that don't have assignees as eligible for completion. Hence no warning/error messages are recorded in the logs.

Use case variant: A variant of this use case, where an assignee doesn't exist in the repository, is treated as a recoverable error. One can find this among the 'pending recovery' instances in EM and reactivate the task by changing the assignees in the BPM Workspace as a process owner/administrator. But back to the use case where tasks get completed automatically...

When: This happens when the users/groups assigned to a task are 'empty' or null. This has been seen only on tasks whose assignees are derived from an assignment expression, i.e., at runtime an XPath is used to determine who to assign the task to. (This should not happen if task assignees are populated via swim-lane roles.)

How to detect this in EM: For instances that are auto-completed this way, one will notice in the Audit Trail that the 'outcome' of the task is empty. The 'acquired by' element will also show as empty/null. Enabling the oracle.soa.services.workflow.* logger in EM should print more verbose messages about this.

How to fix this: The application code needs two fixes. First, the input to the human task: the XSLT/XPath used to set the task assignee, and the process itself, should be enhanced to handle nulls better; for example, if no data is found, set the assignees to an alternate value or force default assignees. Second, the output from the human task: in the application code, check that the 'outcome' of the human task is not null. If it is null, route the task to be performed again after setting the assignee correctly. Beginning with PS4FP, one should be able to use 'grab' to route back to the task to fire again. Hope this helps.
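As a rough illustration of the first fix, the assignment XSLT can guard against empty values before the task is created. This is a sketch only: the payload element names and the fallback group are hypothetical and depend entirely on your task payload and identity store.

<xsl:choose>
  <xsl:when test="string-length(/task:payload/task:manager) &gt; 0">
    <xsl:value-of select="/task:payload/task:manager"/>
  </xsl:when>
  <xsl:otherwise>
    <!-- Fall back to a default group so the task is never created
         with an empty assignee list -->
    <xsl:text>FallbackAdministrators</xsl:text>
  </xsl:otherwise>
</xsl:choose>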

    Read the article

  • Finally, I have my HP 6910p laptop running with 8Gb RAM

    - by Liam Westley
Today, I received two Corsair Value Select 4Gb DDR SO-DIMMs (from overclock.co.uk) for my aging HP 6910p to give it the extra lease of life to keep it going until the end of 2010. And here is the proof that Windows 7 64-bit happily sees all 8Gb.

No 4Gb modules are officially supported for the HP 6910p (they didn't exist when it was first built). I was taking a bit of a gamble, relying on the UK distance selling regulations, which meant that even if they didn't work I'd be able to send them back, getting a full refund and only paying for the return postage.

I'd read Keith Combs' blog back in 2008 (http://blogs.technet.com/b/keithcombs/archive/2008/07/05/loading-a-hp-6910p-with-8gb-of-ram.aspx), where he mentioned 'trying' out 4Gb samples of SO-DIMMs in a HP 6910p laptop, but there still appear to be no mentions of running this configuration in any other blog.

Seeing how the 8Gb of memory is used is made easier with the new Resource Monitor available in Windows 7. With two copies of Visual Studio 2008, Outlook, Firefox (with 30+ tabs), TweetDeck (an infamous memory hog) and VMware Workstation running a virtual machine allocated 2Gb of memory, you might have no 'free' memory remaining, but the standby memory is an awesome 2.4Gb, and once the VM is up and running the Hard Faults/sec hovers around zero.

It's the page fault figure which really counts, because reducing that value means that you are preventing the Windows 7 system drive from being used for virtual memory paging operations. Even after only a few hours of use it's noticeable that disc access has been reduced and applications feel more responsive and 'snappy'. I did consider the option of purchasing an SSD to replace the main drive, rather than go for 8Gb of RAM, but I think I've probably made the correct decision. Given my hobby topic of virtualisation, I take the view that you can never have too much memory. It was also a decision made easier by the price differential between 8Gb of RAM and a decent size SSD. In the 18 months since Keith Combs tested the first 4Gb SO-DIMMs they have plummeted in price; at just under £100 per 4Gb, they are around a fifth of the price at launch.

So if you ever wondered if a HP 6910p can handle 8Gb, now you know.

    Read the article

  • CCNet TFS Migration - Dealing with left over folders

    - by Michael Stephenson
I'm currently in the process of migrating our many BizTalk projects from MKS source control to TFS. While we will be using TFS for work item tracking and source control etc., we will be continuing to use CruiseControl for continuous integration, although I'm updating this to CCNet 1.5 at the same time. I'll post a few things, as much as a reminder to myself, about some of the problems we came across.

Problem

After the first build of our code, the next time a build is triggered an error is encountered by the TFS source control block refreshing the source code:

System.IO.IOException: The directory is not empty.
   at System.IO.Directory.DeleteHelper(String fullPath, String userPath, Boolean recursive)
   at System.IO.Directory.Delete(String fullPath, String userPath, Boolean recursive)
   at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.deleteDirectory(String path)
   at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.GetSource(IIntegrationResult result)
   at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result)
   at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

Project: Bupa.BPI.Documents
Date of build: 2011-01-28 14:54:21
Running time: 00:00:05
Integration Request: Build (ForceBuild) triggered from VMOPBZDEV11

Solution

The problem seems to be with a folder called TestLocations, which is created by the build process and used along with the file adapter as a way to get messages into BizTalk. For some reason the source control block, when it does a full refresh of the code, does not get rid of this folder and then complains that's a problem and fails the build. Interestingly, there are other folders created by the build which are deleted fine. My assumption is that this is something to do with the file adapter polling the directory. Note, however, that we have not had this problem with other source control blocks in the past.

To work around this I have added a prebuild task to the ccnet.config file to delete this folder before the source control block is executed. See below for an example:

<prebuild>
  <exec>
    <executable>cmd.exe</executable>
    <buildArgs>/c "if exist "C:\<MyCode>\TestLocations" rd /s /q "C:\<MyCode>\TestLocations""</buildArgs>
  </exec>
</prebuild>

    Read the article

  • Problems with updates

    - by legospace9876
I cannot update Weather Indicator with Update Manager. This is the terminal log:

installArchives() failed:
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LANG = "sr_RS.utf_8_latin"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
[the same locale warnings repeat three more times]
Setting up indicator-weather (11.11.28-0ubuntu1.1) ...
Installing indicator-specific icons...
Installing indicator dconf schema...
cp: cannot stat `/usr/share/indicator-weather/indicator-weather.gschema.xml': No such file or directory
dpkg: error processing indicator-weather (--configure):
 subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
 indicator-weather
Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
Setting up indicator-weather (11.11.28-0ubuntu1.1) ...
Installing indicator-specific icons...
Installing indicator dconf schema...
cp: cannot stat `**/usr/share/indicator-weather/indicator-weather.gschema.xml**': No such file or directory
dpkg: error processing indicator-weather (--configure):
 subprocess installed post-installation script returned error exit status 1

The file that I bolded really does not exist. How can I solve this problem?

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 29 (sys.dm_os_buffer_descriptors)

    - by Tamarick Hill
The sys.dm_os_buffer_descriptors Dynamic Management View gives you a look into the data pages that are currently in your SQL Server buffer pool. Just in case you are not familiar with some of the internals of SQL Server and how the engine works: SQL Server only works with objects that are in memory (the buffer pool). When an object such as a table needs to be read and it does not exist in the buffer pool, SQL Server will read (copy) the necessary data page(s) from disk into the buffer pool and cache them. Caching takes place so that pages can be reused, avoiding the need for expensive physical reads. To better illustrate this DMV, let's query it against our AdventureWorks2012 database and view the result set:

SELECT *
FROM sys.dm_os_buffer_descriptors
WHERE database_id = db_id('AdventureWorks2012')

The first column returned from this result set is the database_id column, which identifies the specific database for a given row. The file_id column represents the file that a particular buffer descriptor belongs to. The page_id column represents the ID for the data page within the buffer. The page_level column represents the index level of the data page. Next we have the allocation_unit_id column, which identifies a unique allocation unit. An allocation unit is basically a set of data pages. The page_type column tells us exactly what type of page is in the buffer pool. From my screenshot above you see I have 3 distinct types of pages in my buffer pool: Index, Data, and IAM pages. Index pages are used to build the root and intermediate levels of a B-tree. A Data page would represent the actual leaf pages of a clustered index, which contain the actual data for the table. Without getting into too much detail, an IAM page is an Index Allocation Map page, which tracks GAM (Global Allocation Map) pages, which in turn track extents on your system. The row_count column details how many data rows are present on a given page. The free_space_in_bytes column tells you how much of a given data page is still available; remember, pages are 8K in size. The is_modified column signifies whether or not a page has been changed since it was read into memory, i.e., a dirty page. The numa_node column represents the Non-uniform Memory Access node for the buffer. Lastly is the read_microsec column, which tells you how many microseconds it took for a data page to be read (copied) into the buffer pool.

This is a great DMV for use when you are tracking down a memory issue or if you just want to have a look at what types of pages are currently in your buffer pool. For more information about this DMV, please see the Books Online link below:

http://msdn.microsoft.com/en-us/library/ms173442.aspx

Follow me on Twitter @PrimeTimeDBA
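Building on the query above, a common way to put this DMV to work is to summarize the buffer pool by page type. This sketch uses only columns described in this post; the arithmetic relies on the 8K page size mentioned above (128 pages per MB).

SELECT page_type,
       COUNT(*) AS cached_pages,
       COUNT(*) / 128.0 AS cached_mb,
       SUM(free_space_in_bytes) / 1024.0 / 1024.0 AS free_space_mb
FROM sys.dm_os_buffer_descriptors
WHERE database_id = db_id('AdventureWorks2012')
GROUP BY page_type
ORDER BY cached_pages DESC;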

    Read the article

  • Warm Reception By Partners at EMEA Manageability Forum

    - by Get_Specialized!
    For the EMEA Partners who were able to attend the event in Istanbul, Turkey: thank you for your attendance and feedback. As you can see, the weather kept most of us inside during the event, and at times there was even some snow. And while it may have been chilly outside, there was a warm reception from Partners who traveled from all over EMEA to hear from other Oracle Specialized Partners and subject matter experts about the opportunities and benefits of Oracle Enterprise Manager and Exadata Specialization. Here you can see David Robo, Oracle Technology Director for Manageability, kicking off the event, followed later by Patrick Rood, Oracle Indirect Manageability Business. A special thank you to all the Partner speakers, including Ron Tolido, VP and CTO of Application Services Continental Europe at Capgemini, who delivered a very innovative keynote in which many in attendance learned that Black Swans do exist. Interactivity among partners continued during the breaks, and it was great to see such innovative partners, many of whom had listed their achieved specializations on their business cards. Here we can see Oracle Enterprise Manager customer, Turkish Oracle User Group board member and blogger Gokhan Atil sharing his product experiences with other attendees. Additionally, Christian Trieb of Paragon Data shared with other Partners what the German Oracle User Group (DOAG) is doing around manageability, along with an invitation to submit papers for their next event. Here we can see, at one of the breaks, one of the event organizers, Javier Puerta (left), Oracle Director of Partner Programs, joined by Sebastiaan Vingerhoed (middle), Oracle EE & CIS Manager Manageability and speaker on Managing the Application Lifecycle, and Julian Dontcheff (right), Global Head of Database Management at Accenture. Below, Julian Dontcheff delivers his partner presentation on Exadata and Lifecycle Management. Just after his plane landed, and after a one-hour Turkish taxi experience to the event location, Julian still took the time to sit down with me and provide some extra insights from his experience of managing enterprise infrastructure with Oracle Enterprise Manager. Below, Mark McGill, Oracle Principal Product Manager from the Oracle Enterprise Manager Product Management team, presents to Partners on how you can perform chargeback and metering with Oracle Enterprise Manager 12c Cloud Control. Overall, it was a great event, and an extra thank you to those OPN Specialized Partners who presented, to the Partners who attended, and to the Oracle team members who organized the event and presented.

    Read the article

  • Wireless Drivers for Broadcom BCM 4321 (14e4:4329) will not stay connected to a wireless network

    - by Eugene
    So, I'm not necessarily new to Linux, I just never took the time to learn it, so please bear with me. I just swapped one of my wireless cards from one computer into another. The wireless card in question is a "Broadcom BCM4321 (14e4:4329)", or actually a "Netgear WN311B Rangemax Next 270 Mbps Wireless PCI Adapter", but that's not important. I've tried (and probably screwed up in the process) installing the "wl", "b43" and "brcmsmac" drivers, or at least I think I did. Currently I have only the following drivers loaded:

        eugene@EugeneS-PCu:~$ lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl"
        b43                   387371  0
        bcma                   52096  1 b43
        mac80211              630653  1 b43
        cfg80211              484040  2 b43,mac80211
        ssb_hcd                12869  0
        ssb                    62379  2 b43,ssb_hcd

    The main issue is that with most of the drivers I've installed, they will find my wireless network, but they will only stay connected for about a minute with abnormally slow speed, and then all of a sudden disconnect. Currently the computer is hooked into another one to share its connection, so that I can install drivers from the internet instead of loading them onto a flash drive and doing it offline. If anyone has any insight into the problem, that would be awesome. If not, I'll probably just look up how to install the Windows closed-source driver.

    Edit 1: Even when I try the method suggested when this was marked as a duplicate, I still can't stay connected to a wireless network.

    Edit 2: After discussing my issue with @Luis, he reopened my question and told me to include the tests/procedures from the comments. Basically I did this:

    - Read the first answer of the duplicate link above, which involved removing bcmwl-kernel-source and instead installing firmware-b43-installer and b43-fwcutter. No change in the result, so I contacted Luis in the comments, who then told me to try the second answer, which involved removing my previous mistake and installing bcmwl-kernel-source.
    - Now Network Manager doesn't even recognize that WiFi exists (this has happened before, but usually I fixed it by using a different driver).
    - Luis then suggested sudo rfkill unblock all. rfkill unblock all didn't return anything, so I decided to try sudo rfkill list all. It returns nothing (no wonder rfkill unblock all did nothing).
    - I enter lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl" and that returns nothing.
    - I try loading the driver by entering sudo modprobe b43 and run lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl" again. It returns this:

        eugene@Eugenes-uPC:~$ sudo modprobe b43
        eugene@Eugenes-uPC:~$ lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl"
        b43                   387371  0
        bcma                   52096  1 b43
        mac80211              630653  1 b43
        cfg80211              484040  2 b43,mac80211
        ssb_hcd                12869  0
        ssb                    62379  2 b43,ssb_hcd

    So to recap: currently Network Manager doesn't recognize that wireless exists, the b43 drivers are loaded, and I've hard-wired a connection from my laptop to the computer that's causing this.

    Read the article

  • How to recreate spfile on Exadata?

    - by Bandari Huang
    Copy the spfile from the ASM disk group to local disk by using the ASMCMD command-line tool:

        ASMCMD> pwd
        +DATA_DM01/EDWBASE
        ASMCMD> ls -l
        Type           Redund  Striped  Time  Sys  Name
                                              Y    CONTROLFILE/
                                              Y    DATAFILE/
                                              Y    ONLINELOG/
                                              Y    PARAMETERFILE/
                                              Y    TEMPFILE/
                                              N    spfileedwbase.ora => +DATA_DM01/EDWBASE/PARAMETERFILE/spfile.355.800017117
        ASMCMD> cp +DATA_DM01/EDWBASE/spfileedwbase.ora /home/oracle/spfileedwbase.ora.bak

    Copy the contents of spfileedwbase.ora.bak into initedwbase.ora, leaving out any garbled characters. Using the resulting initedwbase.ora, start one of the RAC instances to the mount phase:

        SQL> startup mount pfile=/home/oracle/initedwbase.ora

    Ensure one of the database instances is mounted before attempting to recreate the spfile:

        SQL> select INSTANCE_NAME,HOST_NAME,STATUS from v$instance;

        INSTANCE_NAME HOST_NAME  STATUS
        ------------- ---------  -------
        edwbase1      dm01db01   MOUNTED

    Create the new spfile:

        SQL> create spfile='+DATA_DM01/EDWBASE/spfileedwbase.ora' from pfile='/home/oracle/initedwbase.ora';

    ASMCMD will show that a new spfile has been created, as the alias spfileedwbase.ora now points to a new spfile under the PARAMETERFILE directory in ASM:

        ASMCMD> pwd
        +DATA_DM01/EDWBASE
        ASMCMD> ls -l
        Type           Redund  Striped  Time  Sys  Name
                                              Y    CONTROLFILE/
                                              Y    DATAFILE/
                                              Y    ONLINELOG/
                                              Y    PARAMETERFILE/
                                              Y    TEMPFILE/
                                              N    spfileedwbase.ora => +DATA_DM01/EDWBASE/PARAMETERFILE/spfile.356.800013581

    Shut down the instance and restart the database with srvctl, using the newly created spfile:

        SQL> shutdown immediate
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
        SQL> exit
        [oracle@dm01db01 ~]$ srvctl start database -d edwbase
        [oracle@dm01db01 ~]$ srvctl status database -d edwbase
        Instance edwbase1 is running on node dm01db01
        Instance edwbase2 is running on node dm01db02

    ASMCMD will now show that a number of spfiles exist in the PARAMETERFILE directory for this database. The spfile containing the parameter preventing startups should be removed from ASM. In this case the file spfile.355.800017117 can be removed, because spfile.356.800013581 is the current spfile:

        ASMCMD> pwd
        +DATA_DM01/EDWBASE
        ASMCMD> cd PARAMETERFILE
        ASMCMD> ls -l
        Type           Redund  Striped  Time             Sys  Name
        PARAMETERFILE  UNPROT  COARSE   FEB 19 08:00:00  Y    spfile.355.800017117
        PARAMETERFILE  UNPROT  COARSE   FEB 19 08:00:00  Y    spfile.356.800013581
        ASMCMD> rm spfile.355.800017117
        ASMCMD> ls
        spfile.356.800013581

    Reference: Recreating the Spfile for RAC Instances Where the Spfile is Stored in ASM [ID 554120.1]
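    Once both instances are back up, one extra sanity check worth running (a small sketch, using the names above) is to confirm that the running database really did pick up the new spfile from ASM:

        -- Should return the +DATA_DM01/EDWBASE path of the newly created spfile
        SQL> select name, value from v$parameter where name = 'spfile';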

    Read the article

  • Monogame/SharpDX - Shader parameters missing

    - by Layoric
    I am currently working on a simple game that I am building in Windows 8 using MonoGame (the develop3d branch). I am using some shader code from a tutorial (made by Charles Humphrey) and having an issue populating a 'texture' parameter, as it appears to be missing. Edit: I have also tried 'Texture2D' and using it with a register(t0); still no luck. I'm not well versed in writing shaders, so this might be caused by a more obvious problem. I have debugged through MonoGame's content processor to see how this shader is being parsed; all the non-'texture' parameters are there and look to be loading correctly. Edit: this seems to go back to the D3D compiler. Shader code below:

        #include "PPVertexShader.fxh"

        float2 lightScreenPosition;
        float4x4 matVP;
        float2 halfPixel;
        float SunSize;
        texture flare;

        sampler2D Scene: register(s0)
        {
            AddressU = Clamp;
            AddressV = Clamp;
        };

        sampler Flare = sampler_state
        {
            Texture = (flare);
            AddressU = CLAMP;
            AddressV = CLAMP;
        };

        float4 LightSourceMaskPS(float2 texCoord : TEXCOORD0) : COLOR0
        {
            texCoord -= halfPixel;

            // Get the scene
            float4 col = 0;

            // Find the sun's position in the world and map it to the screen space.
            float2 coord;
            float size = SunSize / 1;
            float2 center = lightScreenPosition;

            coord = .5 - (texCoord - center) / size * .5;
            col += (pow(tex2D(Flare, coord), 2) * 1) * 2;

            return col * tex2D(Scene, texCoord);
        }

        technique LightSourceMask
        {
            pass p0
            {
                VertexShader = compile vs_4_0 VertexShaderFunction();
                PixelShader = compile ps_4_0 LightSourceMaskPS();
            }
        }

    I've removed default values, as they are currently not supported in MonoGame, and have also changed the ps and vs compile targets to 4.0 instead of 2.0. Could this be causing the issue? As I debug through the 'DXConstantBufferData' constructor (from within the MonoGameContentProcessing project) I find that the 'flare' parameter does not exist. All others seem to be getting created fine. Any help would be appreciated. Update 1: I have discovered that the SharpDX D3D compiler is what seems to be ignoring this parameter (perhaps by design?). ConstantBufferDescription.VariableCount seems to not be counting the texture variable. Update 2: The SharpDX function 'GetConstantBuffer(int index)' returns the parameters (minus textures), which makes it impossible to set values for these variables within the shader. Does anyone know if this is normal for DX11 / Shader Model 4.0? Or am I missing something else?
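    For context, the game-side code that this breaks is the ordinary MonoGame/XNA parameter lookup. The sketch below (asset and variable names are assumptions for illustration, written as if inside a Game's LoadContent) shows the binding that fails when the texture parameter is stripped:

        // Sketch: binding the shader inputs from game code. If the D3D
        // compiler drops 'flare' from the reflected parameters, the lookup
        // below returns null and the SetValue call fails.
        Vector2 sunScreenPos = new Vector2(640f, 120f);  // hypothetical position
        Texture2D flareTexture = Content.Load<Texture2D>("flare");
        Effect lightSourceMask = Content.Load<Effect>("LightSourceMask");
        lightSourceMask.Parameters["SunSize"].SetValue(1.5f);
        lightSourceMask.Parameters["lightScreenPosition"].SetValue(sunScreenPos);
        lightSourceMask.Parameters["flare"].SetValue(flareTexture);  // fails when 'flare' is missing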

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

    -z stub
    This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

    STUB_OBJECT Mapfile Directive
    In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

    ASSERT Mapfile Directive
    All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case.

    The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and of serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

    Stub Objects

    A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems:

    Speed
    Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.

    Complexity/Correctness
    In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

    Dependency Cycles
    It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

    Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements:

    - The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces.
    - All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile.
    - The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.
    - All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.

    Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

        % cat idx5.c
        int _idx5[5] = { 0, 1, 2, 3, 4 };
        #pragma weak idx5 = _idx5

        int
        idx5_func(int index)
        {
                if ((index < 0) || (index > 4))
                        return (-1);
                return (_idx5[index]);
        }

    A mapfile is required to describe the interface provided by this shared object.

        % cat mapfile
        $mapfile_version 2
        STUB_OBJECT;
        SYMBOL_SCOPE {
                _idx5 {
                        ASSERT { TYPE=data; SIZE=4[5] };
                };
                idx5 {
                        ASSERT { BINDING=weak; ALIAS=_idx5 };
                };
                idx5_func;
            local:
                *;
        };

    The following main program is used to print all the index values available from the idx5 shared object.
        % cat main.c
        #include <stdio.h>

        extern int _idx5[5], idx5[5], idx5_func(int);

        int
        main(int argc, char **argv)
        {
                int     i;

                for (i = 0; i < 5; i++)
                        (void) printf("[%d] %d %d %d\n",
                            i, _idx5[i], idx5[i], idx5_func(i));
                return (0);
        }

    The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

        % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
        % ln -s libidx5.so.1 stublib/libidx5.so
        % elfdump -d stublib/libidx5.so | grep STUB
              [11]  FLAGS_1           0x4000000          [ STUB ]

    The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

        % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
        % ./a.out
        ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
        Killed
        % LD_PRELOAD=stublib/libidx5.so.1 ./a.out
        ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
        Killed

    We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

        % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

    Once the real object has been built in the lib subdirectory, the program can be run.

        % ./a.out
        [0] 0 0 0
        [1] 1 1 1
        [2] 2 2 2
        [3] 3 3 3
        [4] 4 4 4

    Mapfile Changes

    The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

    Conditional Input
    The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

        Name        Meaning
        ...         ...
        _ET_DYN     shared object
        _ET_EXEC    executable object
        _ET_REL     relocatable object
        ...         ...

    STUB_OBJECT Directive
    The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

        STUB_OBJECT;

    A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.

    When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents:

    - All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile.
    - The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.
    - All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS.
    - All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced.

    These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface.

    SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute
    The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows:

        ASSERT {
                ALIAS = symbol_name;
                BINDING = symbol_binding;
                TYPE = symbol_type;
                SH_ATTR = section_attributes;

                SIZE = size_value;
                SIZE = size_value[count];
        };

    The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following:

    ALIAS
    Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol.

    BINDING
    Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK).

    TYPE
    Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SH_ATTR
    Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

        Section Attribute    Meaning
        BITS                 Section is not of type SHT_NOBITS
        NOBITS               Section is of type SHT_NOBITS

    SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SIZE
    Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

    SIZE
    The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine word size for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

        $if _ELF32
                foo     { ASSERT { TYPE=data; SIZE=4 } };
                bar     { ASSERT { TYPE=data; SIZE=20 } };
        $elif _ELF64
                foo     { ASSERT { TYPE=data; SIZE=8 } };
                bar     { ASSERT { TYPE=data; SIZE=40 } };
        $else
        $error UNKNOWN ELFCLASS
        $endif

    I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax:

    - The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input.
    - The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value.

    Using this feature, the example above can be written more naturally as:

        foo     { ASSERT { TYPE=data; SIZE=addrsize } };
        bar     { ASSERT { TYPE=data; SIZE=addrsize[5] } };

    Exported Global Data Is Still A Bad Idea

    As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.

    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

    Conclusions

    It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • Renault under threat from industrial espionage, intellectual property the target

    - by Simon Thorpe
    Last year we saw news of both General Motors and Ford losing a significant amount of valuable information to competitors overseas. Within weeks of the turn of 2011 we see the European car manufacturer Renault also suffering. In a recent news report, French Industry Minister Eric Besson warned the country was facing "economic war" and referenced a serious case of espionage concerning information about the development of electric cars. Renault senior vice president Christian Husson told the AFP news agency that the people concerned were in a "particularly strategic position" in the company. An investigation had uncovered a "body of evidence which shows that the actions of these three colleagues were contrary to the ethics of Renault and knowingly and deliberately placed at risk the company's assets", Mr Husson said. A source told Reuters on Wednesday that the company is worried its flagship electric vehicle program, in which Renault and its partner Nissan are investing 4 billion euros ($5.3 billion), might be threatened. This casts a shadow over the estimated losses of Ford ($50 million) and General Motors ($40 million). One executive in the corporate intelligence-gathering industry, who spoke on condition of anonymity, said: "It's really difficult to say it's a case of corporate espionage ... It can be carelessness." He cited a hypothetical example of an enthusiastic employee giving away too much information about his job on an online forum. While information has always been passed and leaked, inadvertently or on purpose, the rise of the Internet and social media means corporate spies or careless employees are now more likely to be found out, he added. We are seeing more and more examples of companies like these needing to invest in technologies such as Oracle IRM to ensure such important information can be kept under control. It isn't just the recent release of information into the public domain via the WikiLeaks website that is of concern, but also the increasing threat of industrial espionage in cases such as these. Information rights management doesn't totally remove the threat, but the ability to control documents no matter where they exist certainly increases the capabilities significantly. Every single time someone opens a sealed document, the IRM system audits the activity. This makes identifying a potential source for a leak much easier when you have an absolute record of every person who has had access to the documents. Oracle IRM can also help with accidental or careless loss. People often work with very sensitive information all the time and forget the importance of handling it correctly. With the ability to protect the information from screenshots and prevent people copying and pasting document content into social networks and other, unsecured documents, Oracle IRM brings a totally new level of information security that would have a significant impact on reducing the risk these organizations face of losing their most valuable information.

    Read the article

  • Algorithm for spreading labels in a visually appealing and intuitive way

    - by mac
    Short version
    Is there a design pattern for distributing vehicle labels in a non-overlapping fashion, placing them as close as possible to the vehicle they refer to? If not, are any of the methods I suggest viable? How would you implement this yourself?

    Extended version
    In the game I'm writing I have a bird's-eye view of my airborne vehicles. Next to each vehicle I also have a small label with key data about it. (An actual screenshot in the original post illustrates this.) Now, since the vehicles could be flying at different altitudes, their icons could overlap. However, I would like their labels to never overlap (or a label from vehicle 'A' to overlap the icon of vehicle 'B'). Currently, I can detect collisions between sprites, and I simply push the offending label away in a direction opposite to the otherwise-overlapped sprite. This works in most situations, but when the airspace gets crowded, the label can get pushed very far away from its vehicle, even when a "smarter" alternative placement exists. For example, I get:

        B - label
        A -----------label
        C - label

    where it would be better (= label closer to the vehicle) to get:

        B - label
        label - A
        C - label

    EDIT: It also has to be considered that, besides the overlapping-vehicles case, there are other configurations in which vehicle labels could overlap (the ASCII-art examples show, for example, three very close vehicles in which the label of A would overlap the icons of B and C). I have two ideas on how to improve the present situation, but before spending time implementing them, I thought I'd turn to the community for advice (after all, it seems like a "common enough problem" that a design pattern for it could exist). For what it's worth, here are the two ideas I was thinking of (a sketch of the second follows below):

    Slot-isation of label space
    In this scenario I would divide the screen into "slots" for the labels. Each vehicle would then always have its label placed in the closest empty slot (empty = no other sprites at that location).

    Spiralling search
    Starting from the location of the vehicle on the screen, I would try to place the label at increasing angles and then at increasing radiuses, until a non-overlapping location is found. Something along the lines of:

        try 0°, 10px
        try 10°, 10px
        try 20°, 10px
        ...
        try 350°, 10px
        try 0°, 20px
        try 10°, 20px
        ...
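    To make the spiralling-search idea concrete, here is a minimal sketch in Python, assuming a collides(x, y) callback that tests a candidate label position against the game's existing sprites (the function name and parameters are illustrative, not from the original post):

        import math

        def spiral_place(vx, vy, collides, angle_step=10, radius_step=10, max_radius=200):
            """Try label positions around (vx, vy): all angles at a fixed radius,
            then grow the radius, returning the first non-colliding spot."""
            radius = radius_step
            while radius <= max_radius:
                for angle in range(0, 360, angle_step):
                    x = vx + radius * math.cos(math.radians(angle))
                    y = vy + radius * math.sin(math.radians(angle))
                    if not collides(x, y):
                        return x, y
                radius += radius_step
            # Crowded airspace: fall back to the plain offset next to the vehicle.
            return vx + radius_step, vy

    Because candidates are tried at the smallest radius first, the label lands as close to its vehicle as the crowding allows, which is exactly the property the naive push-away approach loses.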

    Read the article

  • MVC data binding

    - by user441521
    I'm using MVC, but I've read that MVVM is partly about data binding and having pure markup in your views that binds back to the back end via the data-* attributes. I've looked at Knockout, but it looks pretty low-level, and I feel like I can make a library that does this and is much easier to use, where basically you only need to call one JavaScript function that will data-bind your entire page through the data-* attributes you assign to HTML elements. The benefit I see is that your view is 100% decoupled from your back end, so that a given view never has to be changed if your back end changes (i.e., for ASP.NET people: no more Razor in your view making it specific to MS). My question is: I know Knockout is out there, but are there any other libraries that provide this data-binding functionality for MVC-type applications? I don't want to recreate something that may already exist, but I want to make something "better" and easier to use than Knockout. To give an example of what I mean, here is all the code one would need to get data binding in my library. This isn't final, but it shows the idea that all you have to do is call one JavaScript function and set some data-* attribute values, and everything ties together. Is this worth seeing through?

        <script>
            $(function () {
                // this is all you have to call to make databinding for POST or GET to work
                DataBind();
            });
        </script>

        <form id="addCustomer" data-bind="Customer" data-controller="Home" data-action="CreateCustomer">
            Name: <input type="text" data-bind="Name" data-bind-type="text" />
            Birthday: <input type="text" data-bind="Birthday" data-bind-type="text" />
            Address: <input type="text" data-bind="Address" data-bind-type="text" />
            <input type="submit" value="Save" id="btnSave" />
        </form>

        // controller action
        [HttpPost]
        public string CreateCustomer(Customer customer)
        {
            if (customer.Name == "Rick")
                return "success";
            return "failure";
        }

        // model
        public class Customer
        {
            public string Name { get; set; }
            public DateTime Birthday { get; set; }
            public string Address { get; set; }
        }
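    For illustration only, since the post doesn't include the implementation, here is a minimal jQuery-based sketch of what that single DataBind() call might do: collect the bound inputs of each form and POST them to the controller/action named in the form's data-* attributes (the implementation details are assumptions, not the proposed library):

        // Sketch: wire up every form that declares data-controller/data-action.
        function DataBind() {
            $('form[data-controller][data-action]').each(function () {
                var $form = $(this);
                var url = '/' + $form.data('controller') + '/' + $form.data('action');

                $form.on('submit', function (e) {
                    e.preventDefault();

                    // Build the model from the bound inputs, e.g. { Name: ..., Birthday: ... }
                    var model = {};
                    $form.find('input[data-bind]').each(function () {
                        model[$(this).data('bind')] = $(this).val();
                    });

                    // Default MVC model binding maps the posted fields onto Customer.
                    $.post(url, model, function (result) {
                        console.log('server replied: ' + result);
                    });
                });
            });
        }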

    Read the article

  • Database design and performance impact

    - by Craige
    I have a database design issue that I'm not quite sure how to approach, nor whether the benefits outweigh the costs. I'm hoping some P.SE members can give some feedback on my suggested design, as well as any similar experiences they may have come across. As it goes, I am building an application that has large reporting demands. Speed is an important issue, as there will be peak usages throughout the year. This application/database has a multiple-level, many-to-many relationship, e.g.:

        object a
        object b
        object c
        object d

        object b has relationship to object a
        object c has relationship to objects b, a
        object d has relationship to objects c, b, a

    Theoretically, this could go on for unlimited levels, though logic dictates it could only go so far. My idea here, to speed up reporting, would be to create a syndicate table that acts as a global many-to-many join table. In this table (with the given example), one might see:

        +----------+-----------+---------+
        | child_id | parent_id | type_id |
        +----------+-----------+---------+
        |    b     |     a     |    1    |
        |    c     |     b     |    2    |
        |    c     |     a     |    3    |
        |    d     |     c     |    4    |
        |    d     |     b     |    5    |
        |    d     |     a     |    6    |
        +----------+-----------+---------+

    where a, b, c and d would translate to their respective IDs in their respective tables. So, to easily report all of the d records that exist under object a, one could query

        SELECT * FROM `syndicates`
          ... JOINS TO child and parent tables ...
        WHERE parent_id=a AND type_id=6;

    rather than having a query with a join to each level up the chain.

    The Problem
    This table grows exponentially, and in a given year could easily grow past 20,000 records for one client. Given multiple clients over multiple years, this table will VERY quickly explode to millions of records and beyond. Now, the database will in time be partitioned across multiple servers, but I would like (as most would) to keep the number of servers as low as possible while still offering flexibility. Also, writes and updates would take exponentially longer (though possibly not noticeably to the end user), as there would be multiple inserts/updates/scans on this table to keep it in sync. Am I going in the right direction here, or am I way off track? What would you do in a similar situation? This solution seems overly complex, but it allows the greatest flexibility and the fastest read operations.

    Sidenote 1 - This structure allows me to add new levels to the tree easily.
    Sidenote 2 - The database querying for this database is done through an ORM framework.
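    For what it's worth, the table being proposed is essentially a transitive-closure (ancestor) table, and the trade-off it describes (one cheap indexed read versus amplified writes) can be sketched concretely. The DDL and values below are illustrative assumptions only (MySQL-style syntax, to match the backticks above), not part of the original design:

        -- Sketch of the syndicate (closure) table; column types assumed.
        CREATE TABLE syndicates (
            child_id  INT NOT NULL,
            parent_id INT NOT NULL,
            type_id   INT NOT NULL,
            PRIMARY KEY (child_id, parent_id),
            KEY idx_parent (parent_id, type_id)  -- supports the reporting lookup
        );

        -- Every ancestor of one object in a single indexed lookup, no
        -- per-level joins (42 standing in for object d's real id):
        SELECT parent_id, type_id
        FROM syndicates
        WHERE child_id = 42;

        -- The write amplification the question worries about: linking a new
        -- object (43) under 42 means one direct edge plus one row per
        -- ancestor of 42 (type_id handling is assumed here for illustration).
        INSERT INTO syndicates (child_id, parent_id, type_id)
        VALUES (43, 42, 1);
        INSERT INTO syndicates (child_id, parent_id, type_id)
        SELECT 43, parent_id, type_id
        FROM syndicates
        WHERE child_id = 42;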

    Read the article

  • Installing Canon LBP6000 in Ubuntu 12.04

    - by MMA
    This is really frustrating. I am trying to install an LBP6000 in Ubuntu 12.04 without any success. (Well, I had success about a week back when I first bought the printer, and finally printed pages after a struggle of several hours. Then today it suddenly stopped working, I uninstalled everything, and started from scratch. Now I seem to have lost the way.)

    My steps:

    1. Downloaded the latest Canon driver from the Canon site: the file Linux_CAPT_PrinterDriver_V240_uk_EN.tar.gz
    2. Got the radu script (I am allowed only two hyperlinks, so I cannot put the link here; you can Google "radu Canon").
    3. Changed the /etc/modprobe.d/blacklist-cups-usblp.conf file as instructed in the official documentation (see the section "Ubuntu 12.04 Install"). Now this file looks like:

        # cups talks to the raw USB devices, so we need to blacklist usblp to avoid
        # grabbing them
        # blacklist usblp

    4. Rebooted my machine.
    5. Changed the port in the radu script to 59787, as instructed in the link at step 3 (again, see the section "Ubuntu 12.04 Install", or the comment at "How to Install Canon LBP Printers in Ubuntu"). Also put the latest deb files from step 1 in the appropriate directory of this script.
    6. Ran the radu script. One printer, LBP6000, got added; not two printers (one of which is to be disabled), as the message on the terminal after running the script suggested.
    7. sudo /etc/init.d/ccpd status shows: Canon Printer Daemon for CUPS: ccpd: 3142 3139

    Results
    The printer does not print. The printer state (from System Settings -> Printing, or at the CUPS web interface, localhost:631/printers/LBP6000) goes from Idle to Processing, a job appears in the print queue, then the job disappears and the printer state goes back to Idle. The actual printer does not even blink.

    Diagnostics (with help from the link in step 3, section "Troubleshooting"):

    - captstatusui -P LBP6000 shows a communication error.
    - lsmod | grep usblp did not show anything. After running sudo modprobe usblp, it shows: usblp 17885 0
    - However, ls -l /dev/usb/lp0 gives: ls: cannot access /dev/usb/lp0: No such file or directory
    - /var/ccpd did not exist, so I created it:

        sudo mkdir /var/ccpd
        sudo mkfifo /var/ccpd/fifo0
        sudo chown -R lp:lp /var/ccpd

    Any suggestion will be appreciated. I do not know what to do.

    Read the article

< Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >