Search Results

Search found 3181 results on 128 pages for 'listener scan'.


  • Cancelling Route Navigation in AngularJS Controllers

    - by dwahlin
    If you're new to AngularJS check out my AngularJS in 60-ish Minutes video tutorial or download the free eBook. Also check out The AngularJS Magazine for up-to-date information on using AngularJS to build Single Page Applications (SPAs).

    Routing provides a nice way to associate views with controllers in AngularJS using a minimal amount of code. While a user is normally able to navigate directly to a specific route, there may be times when a user triggers a route change before they've finalized an important action such as saving data. In these types of situations you may want to cancel the route navigation and ask the user if they'd like to finish what they were doing so that their data isn't lost. In this post I'll talk about a technique that can be used to accomplish this type of routing task.

    The $locationChangeStart Event

    When route navigation occurs in an AngularJS application a few events are raised. One is named $locationChangeStart and the other is named $routeChangeStart (there are other events as well). At the current time (version 1.2) $routeChangeStart doesn't provide a way to cancel route navigation; however, the $locationChangeStart event can be used to cancel navigation. If you dig into the AngularJS core script you'll find the following code, which shows how the $locationChangeStart event is raised as the $browser object's onUrlChange() function is invoked:

        $browser.onUrlChange(function (newUrl) {
            if ($location.absUrl() != newUrl) {
                if ($rootScope.$broadcast('$locationChangeStart', newUrl,
                        $location.absUrl()).defaultPrevented) {
                    $browser.url($location.absUrl());
                    return;
                }
                $rootScope.$evalAsync(function () {
                    var oldUrl = $location.absUrl();
                    $location.$$parse(newUrl);
                    afterLocationChange(oldUrl);
                });
                if (!$rootScope.$$phase) $rootScope.$digest();
            }
        });

    The key part of the code is the call to $broadcast. This call broadcasts the $locationChangeStart event to all child scopes so that they can be notified before a location change is made. To handle the $locationChangeStart event you can use the $rootScope.$on() function. For this example I've added a call to $on() into a function that is called immediately after the controller is invoked:

        function init() {
            //initialize data here..

            //Make sure they're warned if they made a change but didn't save it
            //Call to $on returns a "deregistration" function that can be called to
            //remove the listener (see routeChange() for an example of using it)
            onRouteChangeOff = $rootScope.$on('$locationChangeStart', routeChange);
        }

    This code listens for the $locationChangeStart event and calls routeChange() when it occurs. The value returned from calling $on() is a "deregistration" function that can be called to detach from the event. In this case the deregistration function is named onRouteChangeOff (it's accessible throughout the controller). You'll see how the onRouteChangeOff function is used in just a moment.
    Cancelling Route Navigation

    The routeChange() callback triggered by the $locationChangeStart event displays a modal dialog to prompt the user. Here's the code for routeChange():

        function routeChange(event, newUrl) {
            //Navigate to newUrl if the form isn't dirty
            if (!$scope.editForm.$dirty) return;

            var modalOptions = {
                closeButtonText: 'Cancel',
                actionButtonText: 'Ignore Changes',
                headerText: 'Unsaved Changes',
                bodyText: 'You have unsaved changes. Leave the page?'
            };

            modalService.showModal({}, modalOptions).then(function (result) {
                if (result === 'ok') {
                    onRouteChangeOff(); //Stop listening for location changes
                    $location.path(newUrl); //Go to page they're interested in
                }
            });

            //prevent navigation by default since we'll handle it
            //once the user selects a dialog option
            event.preventDefault();
            return;
        }

    Looking at the parameters of routeChange() you can see that it accepts an event object and the new route that the user is trying to navigate to. The event object is used to prevent navigation since we need to prompt the user before leaving the current view. Notice the call to event.preventDefault() at the end of the function. The modal dialog is shown by calling modalService.showModal() (see my previous post for more information about the custom modalService that acts as a wrapper around Angular UI Bootstrap's $modal service). If the user selects "Ignore Changes" then their changes will be discarded and the application will navigate to the route they originally intended to go to. This is done by first detaching from the $locationChangeStart event by calling onRouteChangeOff() (recall that this is the function returned from the call to $on()) so that we don't get stuck in a never-ending cycle where the dialog continues to display when they click the "Ignore Changes" button. A call is then made to $location.path(newUrl) to handle navigating to the target view. If the user cancels the operation they'll stay on the current view.

    Conclusion

    The key to cancelling routes is understanding how to work with the $locationChangeStart event and cancelling it so that route navigation doesn't occur. I'm hoping that in the future the same type of task can be done using the $routeChangeStart event, but for now this code gets the job done. You can see this code in action in the Customer Manager application available on GitHub (specifically the customerEdit view). Learn more about the application here.
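    One small addition worth considering (my note, not from the original post): because the listener is registered on $rootScope, it survives the controller unless it is explicitly deregistered. A minimal sketch, assuming the init() function shown above and that the controller also injects $scope:

        //Detach the $rootScope listener when this controller's scope is
        //destroyed, so the handler can't fire for a view that no longer exists.
        $scope.$on('$destroy', function () {
            onRouteChangeOff();
        });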

    Read the article

  • Migrating Virtual Iron guest to Oracle VM 3.x

    - by scoter
    As stated on the official site, in 2009 Oracle acquired a provider of server virtualization management software named Virtual Iron; you can find all the acquisition details at this link. The FAQ on the official site also notes that, going forward, Oracle plans to fully integrate Virtual Iron technology into Oracle VM products, with any enhancements delivered as part of the combined solution; this is what is happening with Oracle VM 3.x. So, customers started asking us to migrate Virtual Iron guests to Oracle VM.

    IMPORTANT: This procedure needs a dedicated OVM-Server with no guests running on top; be careful while executing this procedure on production environments.

    In these few steps you will find how to migrate, as fast as possible, your guests between VI ( Virtual Iron ) and Oracle VM; keep in mind that Oracle VM has a built-in P2V utility ( Official Documentation ) that you can also use to migrate guests between VI and Oracle VM.

    Concepts: VI repositories.

    On VI we have the same "repository" concept as in Oracle VM; the difference between the two products is that VI uses a raw-lun as repository ( instead of using ocfs2 and its capabilities, like ref-links ). From a pure operating-system perspective, the VI "raw-lun" repository is simply a lun on which VI creates an LVM2 volume-group; from the hypervisor perspective, each guest virtual-disk is a logical-volume inside that volume-group. So, the relationships are:

        LVM2-Volume-Group   <-> VI Repository
        LVM2-Logical-Volume <-> VI guest virtual-disk

    The first step is to present the VI repository ( raw-lun ) to your dedicated OVM-Server.

    Prepare the dedicated OVM-Server

    On the OVM-Server ( OVS ) you need to discover the new lun and, after that, discover the volume-group and logical-volumes contained in the VI repository; due to the default OVS configuration you need to edit the lvm2 configuration file /etc/lvm/lvm.conf:

        # By default for OVS we restrict every block device:
        # filter = [ "r/.*/" ]

    and comment out the line starting with "filter" as above.
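    As a quick illustration (my addition; the output depends entirely on your VI repository), the repository-to-volume-group mapping above can be inspected with the standard LVM2 tools once the scan in the next step has completed:

        # Each VI repository should appear as one LVM2 volume group,
        # and each guest virtual-disk as one logical volume inside it.
        vgs
        lvs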
    Now you have to discover the presented raw-lun and, next, activate the volume-group and logical-volumes:

        #!/bin/bash
        for HOST in `ls /sys/class/scsi_host`; do
            echo '- - -' > /sys/class/scsi_host/$HOST/scan
        done
        CPATH=`pwd`
        cd /dev
        for DEVICE in `ls sd[a-z] sd?[a-z]`; do
            echo '1' > /sys/block/$DEVICE/device/rescan
        done
        cd $CPATH
        cd /dev/mapper
        for PARTITION in `ls *[a-z] *?[a-z]`; do
            partprobe /dev/mapper/$PARTITION
        done
        cd $CPATH
        vgchange -a y

    After that you will see a new device:

        [root@ovs01 ~]# cd /dev/6000F4B00000000000210135bef64994
        [root@ovs01 6000F4B00000000000210135bef64994]# ls -l 6000F4B0000000000061013*
        lrwxrwxrwx 1 root root 77 Oct 29 10:50 6000F4B00000000000610135c3a0b8cb -> /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb

    With your OVM-Manager, create a guest server with the same definition as on VI:

    - same core number as the VI source guest
    - same memory as the VI source guest
    - same number of disks as the VI source guest ( you can create each OVS virtual disk with a small size of 1GB because the "clone" will, eventually, extend the size of your new virtual disks )

    Summarizing:

    source-virtual-disk path ( VI ):
        /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb
    dest-virtual-disk path ( OVS ):
        /OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img **

    ** = to identify your virtual disk you have to verify its name in the "vm.cfg" file of your new guest.

    Clone the VI virtual-disk to the OVS virtual-disk:

        dd if=/dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb of=/OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img

    Clean unsupported parameters and changes on OVS:

    1. Restore the original /etc/lvm/lvm.conf

           # By default for OVS we restrict every block device:
           filter = [ "r/.*/" ]

       and uncomment the line starting with "filter" as above.

    2. Force-stop the lvm2-monitor service:

           # service lvm2-monitor force-stop

    3. Restore the original /etc/lvm directories ( archive, backup and cache ):

           # cd /etc/lvm
           # rm -fr archive backup cache; mkdir archive backup cache

    4. Reboot the OVS.

    Refresh the OVS repository and start your guest: from OracleVM Manager, refresh your repository, then start your "migrated" guest.

    Comments and corrections are welcome.

    Simon COTER
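    As an appendix to the procedure above (my addition, not part of the original steps; the device paths are the example ones from the clone step): once the dd completes, you can compare checksums of the source logical-volume and the destination image. Since dd copies the whole LV byte for byte, the two should read identically.

        # Hedged verification sketch -- run it before the guest is started
        # and before any resize, so both reads cover the same byte count.
        md5sum /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb
        md5sum /OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img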

    Read the article

  • Making Those PanelBoxes Behave

    - by Duncan Mills
    I had a little problem to solve earlier this week - misbehaving <af:panelBox> components... What do I mean by that? Well here's the scenario: I have a page fragment containing a set of panelBoxes arranged vertically. As it happens, they are stamped out in a loop but that does not really matter. What I want to be able to do is to provide the user with a simple UI to close and open all of the panelBoxes in concert. This could also apply to showDetailHeader and similar items with a disclosed attribute, but in this case it's good old panelBoxes.

    Ok, so the basic solution to this should be self evident. I can set up a suitable scoped managed bean that the panelBoxes all refer to for their disclosed attribute state. Then the open all / close all commandButtons in the UI can simply set the state of that bean for all the panelBoxes to pick up via EL on their disclosed attribute. Sound OK? Well that works basically without a hitch, but it turns out that there is a slight problem, and this is where the framework is attempting to be a little too helpful. The issue is that if the user manually discloses or hides a panelBox then that will override the value that the EL is setting. So for example:

    1. I start the page with all panelBoxes collapsed, all set by the EL state I'm storing on the session.
    2. I manually disclose panelBox no 1.
    3. I press the Expand All button - all works as you would hope and all the panelBoxes are now disclosed, including of course panelBox 1 which I just expanded manually.
    4. Finally I press the Collapse All button and everything collapses except that first panelBox that I manually disclosed.

    The problem is that the component remembers this manual disclosure and that overrides the value provided by the expression. If I change the viewId (navigate away and back) then the panelBox will start to behave again, until of course I touch it again! Now, the more astute amongst you would think (as I did) Ah, sounds like the MDS personalization stuff is getting in the way and the solution should simply be to set the dontPersist attribute to disclosed | ALL. Alas, this does not fix the issue.

    After a little noodling on the best way to approach this I came up with a solution that works well, although if you think of an alternative way do let me know. The principle is simple. In the disclosureListener for the panelBox I take a note of the clientID of the panelBox component that has been touched by the user, along with the state. This all gets stored in a Map of Booleans in viewScope which is keyed by clientID and stores the current disclosed state in the Boolean value.

    The listener looks like this (it's held in a request scope backing bean for the page):

        public void handlePBDisclosureEvent(DisclosureEvent disclosureEvent) {
            String clientId = disclosureEvent.getComponent().getClientId(
                                  FacesContext.getCurrentInstance());
            boolean state = disclosureEvent.isExpanded();
            pbState.addTouchedPanelBox(clientId, state);
        }

    The pbState variable referenced here is a reference to the bean which will hold the state of the panelBoxes that lives in viewScope (recall that everything is re-set when the viewId is changed, so keeping this in viewScope is just fine and cleans things up automatically).
    The addTouchedPanelBox() method looks like this:

        public void addTouchedPanelBox(String clientId, boolean state) {
            //create the cache if needed; this is just a Map<String,Boolean>
            if (_touchedPanelBoxState == null) {
                _touchedPanelBoxState = new HashMap<String, Boolean>();
            }
            // Simply put / replace
            _touchedPanelBoxState.put(clientId, state);
        }

    So that's the first part; we now have a record of every panelBox that the user has touched. So what do we do when the Collapse All or Expand All buttons are pressed? Here we do some JavaScript magic. Basically, for each clientID that we have stored away, we issue a client side disclosure event from JavaScript - just as if the user had gone back and changed it manually. So here's the Collapse All button action:

        public String CloseAllAction() {
            submitDiscloseOverride(pbState.getTouchedClientIds(true), false);
            _uiManager.closeAllBoxes();
            return null;
        }

    The _uiManager.closeAllBoxes() method is just manipulating the master-state that all of the panelBoxes are bound to using EL. The interesting bit though is the line:

        submitDiscloseOverride(pbState.getTouchedClientIds(true), false);

    To break that down, the first part is a call to that viewScoped state holder to ask for a list of clientIDs that need to be "tweaked":

        public String getTouchedClientIds(boolean targetState) {
            StringBuilder sb = new StringBuilder();
            if (_touchedPanelBoxState != null && _touchedPanelBoxState.size() > 0) {
                for (Map.Entry<String, Boolean> entry : _touchedPanelBoxState.entrySet()) {
                    if (entry.getValue() == targetState) {
                        if (sb.length() > 0) {
                            sb.append(',');
                        }
                        sb.append(entry.getKey());
                    }
                }
            }
            return sb.toString();
        }

    You'll notice that this method only processes those panelBoxes that will be in the wrong state and returns those as a comma separated list. This is then processed by the submitDiscloseOverride() method:

        private void submitDiscloseOverride(String clientIdList, boolean targetDisclosureState) {
            if (clientIdList != null && clientIdList.length() > 0) {
                FacesContext fctx = FacesContext.getCurrentInstance();
                StringBuilder script = new StringBuilder();
                script.append("overrideDiscloseHandler('");
                script.append(clientIdList);
                script.append("',");
                script.append(targetDisclosureState);
                script.append(");");
                Service.getRenderKitService(fctx,
                    ExtendedRenderKitService.class).addScript(fctx, script.toString());
            }
        }

    This method constructs a JavaScript command to call a routine called overrideDiscloseHandler() in a script attached to the page (using the standard <af:resource> tag). That method parses out the list of clientIDs and sends the correct message to each one:

        function overrideDiscloseHandler(clientIdList, newState) {
            AdfLogger.LOGGER.logMessage(AdfLogger.INFO,
                "Disclosure Handler newState " + newState + " Called with: " + clientIdList);
            //Parse out the list of clientIds
            var clientIdArray = clientIdList.split(',');
            for (var i = 0; i < clientIdArray.length; i++) {
                var panelBox = AdfPage.PAGE.findComponentByAbsoluteId(clientIdArray[i]);
                if (panelBox.getComponentType() == "oracle.adf.RichPanelBox") {
                    panelBox.broadcast(new AdfDisclosureEvent(panelBox, newState));
                }
            }
        }

    So there you go. You can see how, with a few tweaks, the same code could be used for other components with disclosure that might suffer from the same problem, although I'd point out that the behavior I'm working around here is usually desirable. You can download the running example (11.1.2.2) from here.

    Read the article

  • Annotation Processor for Superclass Sensitive Actions

    - by Geertjan
    Someone creating superclass sensitive actions should need to specify only the following things:

    - The condition under which the popup menu item should be available, i.e., the condition under which the action is relevant. For superclass sensitive actions, the condition is the name of a superclass. I.e., if I'm creating an action that should only be invokable if the class implements "org.openide.windows.TopComponent", then that fully qualified name is the condition.
    - The position in the list of Java class popup menus where the new menu item should be found, relative to the existing menu items.
    - The display name.
    - The path to the action folder where the new action is registered in the Central Registry.
    - The code that should be executed when the action is invoked.

    In other words, the code for the enablement (which, in this case, means the visibility of the popup menu item when you right-click on the Java class) should be handled generically, under the hood, and not every time all over again in each action that needs this special kind of enablement. So, here's the usage of my newly created @SuperclassBasedActionAnnotation, where you should note that the DataObject must be in the Lookup, since the action will only be available to be invoked when you right-click on a Java source file (i.e., text/x-java) in an explorer view:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import org.netbeans.sbas.annotations.SuperclassBasedActionAnnotation;
        import org.openide.awt.StatusDisplayer;
        import org.openide.loaders.DataObject;
        import org.openide.util.NbBundle;
        import org.openide.util.Utilities;

        @SuperclassBasedActionAnnotation(
                position=30,
                displayName="#CTL_BrandTopComponentAction",
                path="File",
                type="org.openide.windows.TopComponent")
        @NbBundle.Messages("CTL_BrandTopComponentAction=Brand")
        public class BrandTopComponentAction implements ActionListener {

            private final DataObject context;

            public BrandTopComponentAction() {
                context = Utilities.actionsGlobalContext().lookup(DataObject.class);
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                String message = context.getPrimaryFile().getPath();
                StatusDisplayer.getDefault().setStatusText(message);
            }
        }

    That implies I've created (in a separate module to where it is used) a new annotation.
    Here's the definition:

        package org.netbeans.sbas.annotations;

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;

        @Retention(RetentionPolicy.SOURCE)
        @Target(ElementType.TYPE)
        public @interface SuperclassBasedActionAnnotation {
            String type();
            String path();
            int position();
            String displayName();
        }

    And here's the processor:

        package org.netbeans.sbas.annotations;

        import java.util.Set;
        import javax.annotation.processing.Processor;
        import javax.annotation.processing.RoundEnvironment;
        import javax.annotation.processing.SupportedAnnotationTypes;
        import javax.annotation.processing.SupportedSourceVersion;
        import javax.lang.model.SourceVersion;
        import javax.lang.model.element.Element;
        import javax.lang.model.element.TypeElement;
        import javax.lang.model.util.Elements;
        import org.openide.filesystems.annotations.LayerBuilder.File;
        import org.openide.filesystems.annotations.LayerGeneratingProcessor;
        import org.openide.filesystems.annotations.LayerGenerationException;
        import org.openide.util.lookup.ServiceProvider;

        @ServiceProvider(service = Processor.class)
        @SupportedAnnotationTypes("org.netbeans.sbas.annotations.SuperclassBasedActionAnnotation")
        @SupportedSourceVersion(SourceVersion.RELEASE_6)
        public class SuperclassBasedActionProcessor extends LayerGeneratingProcessor {

            @Override
            protected boolean handleProcess(Set annotations, RoundEnvironment roundEnv)
                    throws LayerGenerationException {
                Elements elements = processingEnv.getElementUtils();
                for (Element e : roundEnv.getElementsAnnotatedWith(SuperclassBasedActionAnnotation.class)) {
                    TypeElement clazz = (TypeElement) e;
                    SuperclassBasedActionAnnotation mpm =
                            clazz.getAnnotation(SuperclassBasedActionAnnotation.class);
                    String teName = elements.getBinaryName(clazz).toString();
                    String originalFile = "Actions/" + mpm.path() + "/" +
                            teName.replace('.', '-') + ".instance";
                    File actionFile = layer(e).file(originalFile).
                            bundlevalue("displayName", mpm.displayName()).
                            methodvalue("instanceCreate",
                                "org.netbeans.sbas.annotations.SuperclassSensitiveAction",
                                "create").
                            stringvalue("type", mpm.type()).
                            newvalue("delegate", teName);
                    actionFile.write();
                    File javaPopupFile = layer(e).file(
                            "Loaders/text/x-java/Actions/" + teName.replace('.', '-') + ".shadow").
                            stringvalue("originalFile", originalFile).
                            intvalue("position", mpm.position());
                    javaPopupFile.write();
                }
                return true;
            }
        }

    The "SuperclassSensitiveAction" referred to in the code above is unchanged from how I had it in yesterday's blog entry.
    When I build the module containing two action listeners that use my new annotation, the generated layer file looks as follows, which is identical to the layer file entries I hard coded yesterday:

        <folder name="Actions">
            <folder name="File">
                <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance">
                    <attr name="displayName" stringvalue="Process Action Listener"/>
                    <attr methodvalue="org.netbeans.sbas.annotations.SuperclassSensitiveAction.create" name="instanceCreate"/>
                    <attr name="type" stringvalue="java.awt.event.ActionListener"/>
                    <attr name="delegate" newvalue="org.netbeans.sbas.impl.ActionListenerSensitiveAction"/>
                </file>
                <file name="org-netbeans-sbas-impl-BrandTopComponentAction.instance">
                    <attr bundlevalue="org.netbeans.sbas.impl.Bundle#CTL_BrandTopComponentAction" name="displayName"/>
                    <attr methodvalue="org.netbeans.sbas.annotations.SuperclassSensitiveAction.create" name="instanceCreate"/>
                    <attr name="type" stringvalue="org.openide.windows.TopComponent"/>
                    <attr name="delegate" newvalue="org.netbeans.sbas.impl.BrandTopComponentAction"/>
                </file>
            </folder>
        </folder>
        <folder name="Loaders">
            <folder name="text">
                <folder name="x-java">
                    <folder name="Actions">
                        <file name="org-netbeans-sbas-impl-ActionListenerSensitiveAction.shadow">
                            <attr name="originalFile" stringvalue="Actions/File/org-netbeans-sbas-impl-ActionListenerSensitiveAction.instance"/>
                            <attr intvalue="10" name="position"/>
                        </file>
                        <file name="org-netbeans-sbas-impl-BrandTopComponentAction.shadow">
                            <attr name="originalFile" stringvalue="Actions/File/org-netbeans-sbas-impl-BrandTopComponentAction.instance"/>
                            <attr intvalue="30" name="position"/>
                        </file>
                    </folder>
                </folder>
            </folder>
        </folder>

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation; they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as "Database as a Service". In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches, and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don't deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder:

    1. Does it make life easier for your end users?

    Database-as-a-Service can have several types of end users: junior DBAs, QA Engineers, Developers, each having their own skillset. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schema, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

    2. Does it make life easier for your administrators?

    Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of version and patch levels and thousands of individual databases? Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare.
    That leads us to our next question.

    3. Does it satisfy your Security Officers and Compliance Auditors?

    Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with VMs as they keep getting reconfigured and moved around. This leads to the proverbial needle-in-the-haystack problem, and all it takes is one needle to cause a serious compliance issue in the enterprise. The bottom line is, flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a 'one size fits all' approach, i.e. OS level isolation, can we think smartly about database isolation or schema based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there with their heterogeneous common-denominator approach have compromised on these semantics. If you follow Enterprise Manager's DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications, and not by putting compliance considerations on the backburner.

    4. Does it satisfy your CIO?

    Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, "the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance. Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments for implementing DBaaS. At Oracle, we always emphasize freedom in choosing a platform; hence Enterprise Manager's DBaaS solution is platform neutral. It can work on any operating system (that the agent is certified on), on Oracle's hardware as well as 3rd-party hardware.

    As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • Building Enterprise Smartphone App &ndash; Part 2: Platforms and Features

    - by Tim Murphy
    This is part 2 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    In the previous post I discussed what reasons a company might have for creating a smartphone application. In this installment I will cover some of the history and state of the different platforms, as well as features that can be leveraged for building enterprise smartphone applications.

    Platforms

    Before you start choosing a platform to develop your solutions on, it is good to understand how we got here and what features you can leverage.

    History

    To my memory we owe all of this to a product called the Apple Newton, which Apple began developing in 1987 (the first MessagePad didn't ship until 1993). It was the first PDA, and back then I was much more of an Apple fan. I was very impressed with this device even though it never really went anywhere. The Palm Pilot by US Robotics was the next major advancement in PDAs. It had a simple shorthand entry window that allowed for quick stylus input. Later, Windows CE came out and started the broadening of the PDA market. After that it was the Palm and CE operating systems that started showing up on cell phones, and for some time these were the two dominant operating systems distributed with devices from multiple hardware vendors.

    Current

    The iPhone was the first smartphone to take away the stylus and give us a multi-touch interface. It was a revolution in usability and really changed the attractiveness of smartphones for the general public. This brought us to the beginning of the current state of the market, with the concept of an online store that makes it easy for customers to get new features and functionality on demand.

    With Android, Google made this more than a one horse race. Not only did they come to compete, their low cost actually made them the leading OS. Of course, what made Android so attractive is also its major fault. It is so open that it has been a target for malware, which leaves consumers exposed. Fortunately for Google though, most consumers aren't aware of the threat that they are under.

    Although Microsoft had put out one of the first smartphone operating systems with CE, it had to play catch up and finally came out with the Windows Phone. They have gone for a market approach between those of iOS and Android. They support multiple hardware vendors like Google does, but they kept a certification process for applications that is similar to Apple's. They also created a user interface that was different enough to give it a clear separation from the other two platforms.

    The result of all this is hundreds of millions of smartphones being sold monthly across all three platforms, giving us a wide range of choices and challenges when it comes to developing solutions.

    Features

    So what are the features that make these devices flexible enough to be considered for use in the enterprise?

    The biggest advantage of today's devices is network connectivity. The ability to access information from multiple sources at a moment's notice is critical for businesses. Add to that the ability to communicate over a variety of text, voice and video modes and we have a powerful starting point.

    Every smartphone has a camera, and they are not just useful for posting to Instagram. We are seeing more applications such as Bing Vision that allow us to scan just about any printed code or text to find information.
    These capabilities have been made available to developers in the form of standard libraries for reading barcodes of just about any flavor and for optical character recognition (OCR) interpretation.

    Bluetooth gives us the ability to communicate with multiple devices. Whether these are headsets, keyboards or printers, the wireless communication capabilities are just starting to evolve. The more these wireless communication protocols grow, the more opportunities we will see to transfer data between users and a variety of devices.

    Local storage of information that can be called up even when the device cannot reach the network is the other big capability. This gives users the ability to work offline as well and transmit information when connections are restored.

    These are the tools that we have to work with to build applications that can be leveraged to gain a competitive advantage for companies that implement them.

    Coming Up

    In the third installment I will cover key concerns that you face when building enterprise smartphone apps.

    Read the article

  • Is it feasible and useful to auto-generate some code of unit tests?

    - by skiwi
    Earlier today I came up with an idea, based upon a particular real use case, which I would like to have checked for feasibility and usefulness. This question will feature a fair chunk of Java code, but it can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code.

    The idea

    Make unit testing less cumbersome by adding in some ways to autogenerate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone ever proved that doing TDD is better than first creating code and then immediately thereafter the tests. This may even be adapted to fit into TDD, but that is not my current goal.

    To show how it is intended to be used, I'll copy one of my classes here, for which I need to make unit tests.

        public class PutMonsterOnFieldAction implements PlayerAction {
            private final int handCardIndex;
            private final int fieldMonsterIndex;

            public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) {
                this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex");
                this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldCardIndex");
            }

            @Override
            public boolean isActionAllowed(final Player player) {
                Objects.requireNonNull(player, "player");
                Hand hand = player.getHand();
                Field field = player.getField();
                if (handCardIndex >= hand.getCapacity()) {
                    return false;
                }
                if (fieldMonsterIndex >= field.getMonsterCapacity()) {
                    return false;
                }
                if (field.hasMonster(fieldMonsterIndex)) {
                    return false;
                }
                if (!(hand.get(handCardIndex) instanceof MonsterCard)) {
                    return false;
                }
                return true;
            }

            @Override
            public void performAction(final Player player) {
                Objects.requireNonNull(player);
                if (!isActionAllowed(player)) {
                    throw new PlayerActionNotAllowedException();
                }
                Hand hand = player.getHand();
                Field field = player.getField();
                field.setMonster(fieldMonsterIndex, (MonsterCard)hand.play(handCardIndex));
            }
        }

    We can observe the need for the following tests:

    - Constructor test with valid input
    - Constructor test with invalid inputs
    - isActionAllowed test with valid input
    - isActionAllowed test with invalid inputs
    - performAction test with valid input
    - performAction test with invalid inputs

    My idea mainly focuses on the isActionAllowed test with invalid inputs. Writing these tests is not fun: you need to ensure a number of conditions and check whether it really returns false. This can be extended to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests, by indicating (through the GUI of the IDE, hopefully) that you want to generate tests based on a specific branch.

    The implementation by example

    1. User clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())".
    2. Now the tool needs to find a case where that holds. (I haven't added the relevant code as that may clutter the post ultimately.)
    3. To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the condition >= holds.
    4. It needs to construct a Player with a Hand that has a capacity of at least 1.
    5. It notices that the capacity private int of Hand needs to be at least 1.
    6. It searches for ways to set it to 1. Fortunately it finds a constructor that takes the capacity as an argument. It uses 1 for this.
    Some more work needs to be done to successfully construct a Player instance, involving the creation of objects that have constraints that can be seen by inspecting the source code. It has found the hand with the least capacity possible and is able to construct it. Now, to invalidate the test it will need to set handCardIndex = 1. It constructs the test and asserts it to be false (the returned value of the branch); a sketch of what such a generated test could look like follows below.

    What does the tool need to work?

    In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the javadoc, but that is not always used to indicate all constraints. It could also do some trial and error, but it pretty much stops if you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the primitive types are, including arrays. And it needs to be able to construct some form of "modification trees". The tool knows that it needs to change a certain variable to a different value in order to get the correct testcase. Hence it will need to list all possible ways to change it, without using reflection obviously.

    What this tool will not replace is the need to create tailored unit tests that test all kinds of conditions when a certain method actually works. It is purely to be used to test methods when they invalidate constraints.

    My questions:

    - Is creating such a tool feasible? Would it ever work, or are there some obvious problems?
    - Would such a tool be useful? Is it even useful to automatically generate these testcases at all? Could it be extended to do even more useful things?
    - Does, by chance, such a project already exist and would I be reinventing the wheel?

    If not proven useful, but still possible to make such a thing, I will still consider it for fun. If it's considered useful, then I might make an open source project for it depending on the time.

    For people searching for more background information about the Player and Hand classes used in my example, please refer to this repository. At the time of writing, the PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.
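    To make the idea concrete, here is a sketch (my illustration, not part of the original question) of the kind of test the tool might emit for the branch if (handCardIndex >= hand.getCapacity()), assuming JUnit 4 and assuming Hand, Field and Player expose the constructors described above:

        import static org.junit.Assert.assertFalse;
        import org.junit.Test;

        public class PutMonsterOnFieldActionGeneratedTest {

            //Auto-generated style test: invalidates the branch
            //"handCardIndex >= hand.getCapacity()".
            //Hand(int), Field(int) and Player(Hand, Field) are assumed
            //constructors; the real signatures live in the linked repository.
            @Test
            public void isActionAllowedReturnsFalseWhenHandCardIndexOutOfRange() {
                Hand hand = new Hand(1);       //smallest capacity the tool could construct
                Field field = new Field(1);
                Player player = new Player(hand, field);

                //handCardIndex = 1 satisfies 1 >= hand.getCapacity()
                PutMonsterOnFieldAction action = new PutMonsterOnFieldAction(1, 0);

                assertFalse(action.isActionAllowed(player));
            }
        }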

    Read the article

  • Can't control connection bit rate using iwconfig with Atheros TL-WN821N (AR7010)

    - by Paul H
    I'm trying to reduce the connection bit rate on my Atheros TP-Link TL-WN821N v3 USB wifi adapter due to frequent instability issues (the reported connection speed goes down to 1Mb/s and I have to physically reconnect the adapter to regain a connection). I know this is a common problem with this device, and I have tried everything I can think of to fix it, including using drivers from linux-backports, compiling and installing a custom firmware (following instructions on https://wiki.debian.org/ath9k_htc#fw-free) and (as a last resort) using ndiswrapper. When using ndiswrapper, the wifi adapter is stable and operates in g mode at 54Mb/s (whilst when using the default ath9k_htc module, the adapter connects in n mode and the bit rate fluctuates constantly). Unfortunately, with this setup I have to run my processor using only one core, since using SMP with ndiswrapper causes a kernel oops on my system. So I want to lock my bit rate to 54Mb/s (or less, if need be) for connection stability, using the ath9k_htc module.

    I've tried 'sudo iwconfig wlan0 rate 54M'; the command runs with no error, but when I check the bit rate with 'sudo iwlist wlan0 bitrate' the command returns:

        wlan0     unknown bit-rate information.
                  Current Bit Rate:78 Mb/s

    Any ideas? Here's some info (hopefully relevant) on my setup: Xubuntu (12.04.3) 64bit (kernel 3.2.0-55.85-generic) using Network Manager. My router is from Virgin Media, the VMDG480.

    lshw -C network:

        *-network
             description: Wireless interface
             physical id: 1
             bus info: usb@1:4
             logical name: wlan0
             serial: 74:ea:3a:8f:16:b6
             capabilities: ethernet physical wireless
             configuration: broadcast=yes driver=ath9k_htc driverversion=3.2.0-55 firmware=1.3 ip=192.168.0.9 link=yes multicast=yes wireless=IEEE 802.11bgn

    lsusb -v:

        Bus 001 Device 003: ID 0cf3:7015 Atheros Communications, Inc. TP-Link TL-WN821N v3 802.11n [Atheros AR7010+AR9287]
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               2.00
          bDeviceClass          255 Vendor Specific Class
          bDeviceSubClass       255 Vendor Specific Subclass
          bDeviceProtocol       255 Vendor Specific Protocol
          bMaxPacketSize0        64
          idVendor           0x0cf3 Atheros Communications, Inc.
          idProduct          0x7015 TP-Link TL-WN821N v3 802.11n [Atheros AR7010+AR9287]
          bcdDevice            2.02
          iManufacturer          16 ATHEROS
          iProduct               32 UB95
          iSerial                48 12345
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           60
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0x80 (Bus Powered)
            MaxPower              500mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           6
              bInterfaceClass       255 Vendor Specific Class
              bInterfaceSubClass      0
              bInterfaceProtocol      0
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x01 EP 1 OUT
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x82 EP 2 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x83 EP 3 IN
                bmAttributes            3
                  Transfer Type            Interrupt
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0040 1x 64 bytes
                bInterval               1
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x04 EP 4 OUT
                bmAttributes            3
                  Transfer Type            Interrupt
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0040 1x 64 bytes
                bInterval               1
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x05 EP 5 OUT
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x06 EP 6 OUT
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0200 1x 512 bytes
                bInterval               0
        Device Qualifier (for other device speed):
          bLength                10
          bDescriptorType         6
          bcdUSB               2.00
          bDeviceClass          255 Vendor Specific Class
          bDeviceSubClass       255 Vendor Specific Subclass
          bDeviceProtocol       255 Vendor Specific Protocol
          bMaxPacketSize0        64
          bNumConfigurations      1
        Device Status:     0x0000 (Bus Powered)

    iwlist wlan0 scanning:

        wlan0     Scan completed :
                  Cell 01 - Address: C4:3D:C7:3A:1F:5D
                            Channel:1
                            Frequency:2.412 GHz (Channel 1)
                            Quality=37/70  Signal level=-73 dBm
                            Encryption key:on
                            ESSID:"my essid"
                            Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s
                                      24 Mb/s; 36 Mb/s; 54 Mb/s
                            Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 48 Mb/s
                            Mode:Master
                            Extra:tsf=00000070cca77186
                            Extra: Last beacon: 5588ms ago
                            IE: Unknown: 0007756E69636F726E
                            IE: Unknown: 010882848B962430486C
                            IE: Unknown: 030101
                            IE: Unknown: 2A0100
                            IE: Unknown: 2F0100
                            IE: IEEE 802.11i/WPA2 Version 1
                                Group Cipher : TKIP
                                Pairwise Ciphers (2) : CCMP TKIP
                                Authentication Suites (1) : PSK
                            IE: Unknown: 32040C121860
                            IE: Unknown: 2D1AFC181BFFFF000000000000000000000000000000000000000000
                            IE: Unknown: 3D1601080400000000000000000000000000000000000000
                            IE: Unknown: DD7E0050F204104A0001101044000102103B00010310470010F99C335D7BAC57FB00137DFA79600220102100074E657467656172102300074E6574676561721024000631323334353610420007303030303030311054000800060050F20400011011000743473331303144100800022008103C0001011049000600372A000120
                            IE: Unknown: DD090010180203F02C0000
                            IE: WPA Version 1
                                Group Cipher : TKIP
                                Pairwise Ciphers (2) : CCMP TKIP
                                Authentication Suites (1) : PSK
                            IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00

    iwconfig:

        lo        no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:"my essid"
                  Mode:Managed  Frequency:2.412 GHz  Access Point: C4:3D:C7:3A:1F:5D
                  Bit Rate=78 Mb/s  Tx-Power=20 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off
                  Link Quality=36/70  Signal level=-74 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0  Missed beacon:0
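    One avenue worth trying (my suggestion, not from the question above; the exact syntax depends on the iw version shipped with 12.04): since ath9k_htc is a mac80211 driver, the newer nl80211-based iw tool can sometimes set rates where the legacy wireless-extensions interface used by iwconfig fails.

        # Hedged sketch: restrict the card to legacy (non-HT) 2.4 GHz rates,
        # which caps the link at 54 Mb/s and keeps it out of n mode.
        sudo iw dev wlan0 set bitrates legacy-2.4 6 9 12 18 24 36 48 54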

    Read the article

  • Manually adding a database instance in 11.2 RAC

    - by JaneZhang
    Sometimes dbca cannot be used to add an instance to an existing 11.2 RAC database (for example, when no graphical environment is available, or when dbca itself fails), so this post shows how to add an instance manually. The example adds a new instance RACDB2 on node rac2 to the existing database RACDB, whose first instance RACDB1 runs on node rac1. In 11.2, Grid Infrastructure (GI) runs as the grid user and the database runs as the oracle user; run each command below as the user indicated.

    1. On the new node, create the directories that the new instance needs, as pointed to by the parameters audit_file_dest, background_dump_dest, user_dump_dest and core_dump_dest. For example, audit_file_dest=/u01/app/oracle/admin/RACDB/adump. If a directory is missing, the instance will fail to start with:

        ORA-09925: Unable to create audit trail file
        Linux-x86_64 Error: 2: No such file or directory
        Additional information: 9925

    2. Set the instance-specific initialization parameters for the new instance:

        SQL> alter system set instance_number=2 scope=spfile sid='RACDB2';
        SQL> alter system set thread=2 scope=spfile sid='RACDB2';
        SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='RACDB2';
        SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.122)(PORT=1521))))' sid='RACDB2'; <===== 192.0.2.122 is node 2's VIP

    3. Create the new instance's $ORACLE_HOME/dbs/init<sid>.ora, pointing at the shared spfile. Its only required content is:

        SPFILE='+DATA/racdb/spfileracdb.ora'

    For example, copy the file from node 1:

        [oracle@rac1 ~]$ scp $ORACLE_HOME/dbs/initRACDB1.ora rac2:$ORACLE_HOME/dbs/initRACDB2.ora <==== copy to node 2

    4. On the new node, add an entry for the instance to /etc/oratab:

        RACDB2:/u01/app/oracle/product/11.2.0/dbhome_1:N

    5. Copy the password file $ORACLE_HOME/dbs/orapw<sid> to the new node:

        [oracle@rac1 dbs]$ scp $ORACLE_HOME/dbs/orapwRACDB1 rac2:$ORACLE_HOME/dbs/orapwRACDB2 <== copy to node 2

    6. Create the UNDO tablespace for the new instance (when adding an instance with dbca, the undo tablespace is created automatically; here it must be created by hand). For example:

        SQL> CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '/dev/….' SIZE 4096M;

    or, when using ASM:

        SQL> CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '+DATA' SIZE 4096M;

    7. Add a redo thread and its log groups for the new instance, then enable the thread. For example:

        SQL> alter database add logfile thread 2
             group 3 ('/dev/...', '/dev/...') size 1024M,
             group 4 ('/dev/...','dev/...') size 1024M;

    or, when using ASM:

        SQL> alter database add logfile thread 2
             group 3 ('+DATA','+RECO') size 1024M,
             group 4 ('+DATA','+RECO') size 1024M;
        SQL> alter database enable thread 2; <== enable the new thread

    8. Start the new instance on the new node:

        [oracle@rac2 admin]$ su - oracle
        [oracle@rac2 admin]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
        [oracle@rac2 admin]$ export ORACLE_SID=RACDB2
        [oracle@rac2 admin]$ sqlplus / as sysdba
        SQL> startup <== if the instance starts, the setup for instance 2 above is correct

    9. Register the new instance in the OCR with GI:

        $ srvctl add instance -d <database name> -i <new instance name> -n <new node name>

    Example of the srvctl add instance command:

        [oracle@rac2 ~]$ srvctl add instance -d racdb -i RACDB2 -n rac2 <== afterwards, verify with ps -ef|grep smon
        [oracle@rac2 dbs]$ ps -ef|grep smon
        root      3453     1  1 Jun12 ?        04:03:05 /u01/app/11.2.0/grid/bin/osysmond.bin
        grid      3727     1  0 Jun12 ?        00:00:19 asm_smon_+ASM2
        oracle    5343  4543  0 14:06 pts/1    00:00:00 grep smon
        oracle   28736     1  0 Jun25 ?        00:00:03 ora_smon_RACDB2 <======== the new instance

    10. Check the resource status:

        [grid@rac2 ~]$ crsctl stat res -t
        ...
        ora.racdb.db
              1        ONLINE  ONLINE       rac1        Open
              2        OFFLINE OFFLINE      rac2

    As shown, instance 2 is reported OFFLINE because it was started with sqlplus rather than srvctl. Shut it down with sqlplus and restart it with srvctl:

        [grid@rac2 ~]$ su - oracle
        Password:
        [oracle@rac2 ~]$ sqlplus / as sysdba
        SQL> shutdown immediate;
        Database closed.
        Database dismounted.
        ORACLE instance shut down.
        SQL> exit
        [oracle@rac2 ~]$ srvctl start instance -d racdb -i RACDB2
        [oracle@rac2 ~]$ su - grid
        Password:
        [grid@rac2 ~]$ crsctl stat res -t
        ora.racdb.db
              1        ONLINE  ONLINE       rac1        Open
              2        ONLINE  ONLINE       rac2        Open

    11. Check the resource parameters of the database:

        [oracle@rac2 ~]$ crsctl stat res ora.racdb.db -p
        NAME=ora.racdb.db
        TYPE=ora.database.type
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_FAILURE_TEMPLATE=
        ACTION_SCRIPT=
        ACTIVE_PLACEMENT=1
        AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
        AUTO_START=restore
        CARDINALITY=2
        CHECK_INTERVAL=1
        CHECK_TIMEOUT=30
        CLUSTER_DATABASE=true
        DATABASE_TYPE=RAC
        DB_UNIQUE_NAME=RACDB
        DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) ELEMENT(DATABASE_TYPE= %DATABASE_TYPE%)
        DEGREE=1
        DESCRIPTION=Oracle Database resource
        ENABLED=1
        FAILOVER_DELAY=0
        FAILURE_INTERVAL=60
        FAILURE_THRESHOLD=1
        GEN_AUDIT_FILE_DEST=/u01/app/oracle/admin/RACDB/adump
        GEN_START_OPTIONS=
        GEN_START_OPTIONS@SERVERNAME(rac1)=open
        GEN_START_OPTIONS@SERVERNAME(rac2)=open
        GEN_USR_ORA_INST_NAME=
        GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=RACDB1
        HOSTING_MEMBERS=
        INSTANCE_FAILOVER=0
        LOAD=1
        LOGGING_LEVEL=1
        MANAGEMENT_POLICY=AUTOMATIC
        NLS_LANG=
        NOT_RESTARTING_TEMPLATE=
        OFFLINE_CHECK_INTERVAL=0
        ONLINE_RELOCATION_TIMEOUT=0
        ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
        ORACLE_HOME_OLD=
        PLACEMENT=restricted
        PROFILE_CHANGE_TEMPLATE=
        RESTART_ATTEMPTS=2
        ROLE=PRIMARY
        SCRIPT_TIMEOUT=60
        SERVER_POOLS=ora.RACDB
        SPFILE=+DATA/RACDB/spfileRACDB.ora
        START_DEPENDENCIES=hard(ora.DATA.dg,ora.RECO.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(ora.DATA.dg,ora.RECO.dg)
        START_TIMEOUT=600
        STATE_CHANGE_TEMPLATE=
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DATA.dg,shutdown:ora.RECO.dg)
        STOP_TIMEOUT=600
        TYPE_VERSION=3.2
        UPTIME_THRESHOLD=1h
        USR_ORA_DB_NAME=RACDB
        USR_ORA_DOMAIN=
        USR_ORA_ENV=
        USR_ORA_FLAGS=
        USR_ORA_INST_NAME=
        USR_ORA_INST_NAME@SERVERNAME(rac1)=RACDB1
        USR_ORA_INST_NAME@SERVERNAME(rac2)=RACDB2
        USR_ORA_OPEN_MODE=open
        USR_ORA_OPI=false
        USR_ORA_STOP_MODE=immediate
        VERSION=11.2.0.3.0

    Note that in 11.2 the database is registered in the OCR as a cluster resource; once the new instance is added, its attributes (including the start/stop dependencies on the ASM disk groups, visible above) are maintained there automatically.

    Appendix: for comparison, the steps to add an instance with dbca would be: log in as the oracle user on an existing node and run dbca; choose "RAC database", then "Instance Management", then "add an instance"; select the active RAC database and supply the requested credentials; dbca then creates the undo and redo for the new instance automatically.
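    One extra verification step (my addition, not part of the original post): confirm the registration that step 9 created by listing the database configuration.

        # Hedged sketch: show the database's registered configuration,
        # which should now include both instances.
        [oracle@rac2 ~]$ srvctl config database -d racdb
        # Expect output containing a line such as:
        #   Database instances: RACDB1,RACDB2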

    Read the article

  • Can ping device from one computer and not the other

    - by Sean Duggan
    I've recently been assigned to work on a diagnostic program, written in C++, which communicates with a piece of electronic equipment. Our normal scenario involves communicating over an RS232 interface, but I've been asked to make our program work over Ethernet, working from networking source code written in Visual Basic. After much thrashing about trying to get the code to work, and continuing to get 10049 Winsock errors when I tried to connect, I tried pinging the switch. From the computer the VB program is running on, I can see the switch via ping, nslookup, tracert, and pathping (I was going down the list of programs), and I can do this via URI or IP address. From my laptop, sending the same commands fails every time. They're both using the same network cable and the same USB-to-Ethernet device (I've been swapping them between tests), but one can see the switch and the other cannot. I'm working on the programming end, but the ping results make me think that there might be a network issue stymieing me. (wry grin) I'm not much of a network guy, so I'm appealing for expert assistance. Both computers are running Windows XP, if that helps. The connection is to an "IP-RS8" device which then connects to our VCU-C units. Each unit is accessible via URI or IP address on the desktop computer we usually have connected to the units (it's running the older VB program that I was asked to lift the networking code from). The connection is made via a USB-to-Ethernet adapter so as to leave the regular Ethernet port available for connecting to the company network. Hmm... come to think of it, I've probably been confusing the issue by talking about pinging "the switch" rather than indicating that it's the devices. My apologies. Communication is generally done with a DLL that uses Winsock functions to make queries for data from the VCU and then to receive the responses. I'm failing when connecting. I haven't found anything on the firewall which should block these commands, but I'll keep poking. I don't know if it's potentially relevant, but on the desktop the adapter maps to Local Area Connection 3, while on the laptop it consistently maps to Local Area Connection 2. Currently reading up on DHCP.

    IPConfig /all results:

    Desktop

        Host Name . . . . . . . . . . . . : AMERDAEXXXXXX
        Primary Dns Suffix  . . . . . . . : amer.example.com
        Node Type . . . . . . . . . . . . : Hybrid
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : COMPANY.com amer.example.com atle.example.com cone.example.com apac.example.com scan.example.com bYX.example.com

        Ethernet adapter Local Area Connection X:
        Connection-specific DNS Suffix  . : amer.example.com
        Description . . . . . . . . . . . : Broadcom NetXtreme XYxx Gigabit Controller
        Physical Address. . . . . . . . . : YY-XX-YB-XX-XX-XX
        Dhcp Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        IP Address. . . . . . . . . . . . : XYY.XXX.XY.XXX
        Subnet Mask . . . . . . . . . . . : XXX.XXX.XXY.Y
        Default Gateway . . . . . . . . . : XYY.XXX.XY.X
        DHCP Server . . . . . . . . . . . : XY.XXX.XXY.XX
        DNS Servers . . . . . . . . . . . : XY.XXX.XXY.XX XY.XXY.XXY.XX
        Primary WINS Server . . . . . . . : XY.XXX.XXY.X
        Secondary WINS Server . . . . . . : XY.XXY.XXY.X
        Lease Obtained. . . . . . . . . . : Thursday, July XX, XYXX XY:XX:XX AM
        Lease Expires . . . . . . . . . . : Sunday, July XX, XYXX XY:XX:XX AM

        Ethernet adapter Local Area Connection X:
        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : ASIX axYYYYX USBX.Y to Fast Ethernet Adapter
        Physical Address. . . . . . . . . : YY-XY-BY-YX-XY-AY
        Dhcp Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        IP Address. . . . . . . . . . . . : XY.Y.Y.X
        Subnet Mask . . . . . . . . . . . : XXX.XXX.XXY.Y
        Default Gateway . . . . . . . . . : XY.Y.Y.X
        DHCP Server . . . . . . . . . . . : XY.Y.Y.XY
        DNS Servers . . . . . . . . . . . : XY.Y.Y.X
        Lease Obtained. . . . . . . . . . : Thursday, July XX, XYXX XY:XX:XY AM
        Lease Expires . . . . . . . . . . : Tuesday, August YX, XYXX XX:XY:XY AM

    Laptop

        Windows IP Configuration
        Host Name . . . . . . . . . . . . : AMERLAFYYXXYX
        Primary Dns Suffix  . . . . . . . : amer.example.com
        Node Type . . . . . . . . . . . . : Hybrid
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : COMPANY.com amer.example.com atle.example.com cone.example.com apac.example.com scan.example.com bYX.example.com

        Ethernet adapter Local Area Connection:
        Connection-specific DNS Suffix  . : amer.example.com
        Description . . . . . . . . . . . : Intel(R) 82567LM Gigabit Network Connection
        Physical Address. . . . . . . . . : YY-XY-BY-DY-XB-YX
        Dhcp Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        IP Address. . . . . . . . . . . . : XYY.XXX.XY.XY
        Subnet Mask . . . . . . . . . . . : XXX.XXX.XXY.Y
        Default Gateway . . . . . . . . . : XYY.XXX.XY.X
        DHCP Server . . . . . . . . . . . : XY.XXX.XXY.XX
        DNS Servers . . . . . . . . . . . : XY.XXX.XXY.XX XY.XXY.XXY.XX
        Primary WINS Server . . . . . . . : XY.XXX.XXY.X
        Secondary WINS Server . . . . . . : XY.XXY.XXY.X
        Lease Obtained. . . . . . . . . . : Thursday, July XX, XYXX XX:XX:XX AM
        Lease Expires . . . . . . . . . . : Sunday, July XX, XYXX XX:XX:XX AM

        Ethernet adapter {XYXAAYXX-YEDY-XXYX-YYEX-BYXYXXYEEYEX}:
        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : Nortel IPSECSHM Adapter - Packet Scheduler Miniport
        Physical Address. . . . . . . . . : XX-XX-XX-XX-XX-YY
        Dhcp Enabled. . . . . . . . . . . : No
        IP Address. . . . . . . . . . . . : Y.Y.Y.Y
        Subnet Mask . . . . . . . . . . . : Y.Y.Y.Y
        Default Gateway . . . . . . . . . :

        Ethernet adapter Leaf Networks Adapter:
        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : Leaf Networks Adapter
        Physical Address. . . . . . . . . : YY-FF-FA-BC-YF-AY
        Dhcp Enabled. . . . . . . . . . . : No
        IP Address. . . . . . . . . . . . : X.XYY.XY.XX
        Subnet Mask . . . . . . . . . . . : XXX.Y.Y.Y
        Default Gateway . . . . . . . . . :

        Ethernet adapter Local Area Connection 3:
        Media State . . . . . . . . . . . : Media disconnected
        Description . . . . . . . . . . . : Bluetooth LAN Access Server Driver
        Physical Address. . . . . . . . . : YY-FX-AX-YA-BY-CA

        Ethernet adapter Wireless Network Connection 2:
        Media State . . . . . . . . . . . : Media disconnected
        Description . . . . . . . . . . . : Intel(R) WiFi Link 5300 AGN
        Physical Address. . . . . . . . . : YY-XX-YA-CX-FC-YE

        Ethernet adapter Local Area Connection 2:
        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : ASIX ax88772 USB2.0 to Fast Ethernet Adapter
        Physical Address. . . . . . . . . : YY-XY-BY-YX-XY-AY
        Dhcp Enabled. . . . . . . . . . . : No
        IP Address. . . . . . . . . . . . : XYX.XYY.X.X
        Subnet Mask . . . . . . . . . . . : XXX.XXX.XXX.Y
        Default Gateway . . . . . . . . . :
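
    For reference, Winsock error 10049 is WSAEADDRNOTAVAIL ("cannot assign requested address"), which usually points at a local address or interface problem on the failing machine rather than at the remote device; that would be consistent with the two adapters above being configured differently (note the laptop's USB adapter has DHCP disabled and a static address on a different subnet). A quick, language-neutral way to sanity-check reachability from code is a plain TCP connect with a timeout; below is a minimal sketch in Java, where the host and port are placeholders rather than the actual device's values:

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.net.Socket;

        public class ReachabilityProbe {
            public static void main(String[] args) {
                // Placeholders: substitute the IP-RS8's real address and TCP port.
                String host = "192.168.0.10";
                int port = 4001;
                try (Socket socket = new Socket()) {
                    // A raw TCP connect exercises the same routing/ARP path that
                    // the diagnostic program's Winsock connect() would use.
                    socket.connect(new InetSocketAddress(host, port), 3000);
                    System.out.println("TCP connect to " + host + ":" + port + " succeeded");
                } catch (IOException e) {
                    System.out.println("Connect failed: " + e);
                }
            }
        }

    If this probe fails from the laptop but succeeds from the desktop with the same cable and adapter, the problem is in the network configuration, not in the C++ port.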

    Read the article

  • C# and WUAPI: BeginDownload function

    - by Laurus Schluep
    first things first: I have no experience in object-oriented programming whatsoever. I created my share of VB scripts and a bit of Java in school, but that's it. So my problem most likely lies there. But nevertheless, for the past few days I've been trying to get a little application together that allows me to scan for, select and install Windows updates. So far I've been able to understand most of the reference, and with the help of a few posts around the internet I'm now at the point where I can select and download updates. So far I've been able to download a collection of updates using the following code:

        UpdateCollection CurrentInstallCollection = (UpdateCollection)e.Argument;
        UpdateDownloader CurrentDownloader = CurrentSession.CreateUpdateDownloader();
        CurrentDownloader.Updates = CurrentInstallCollection;

    This is run in a background worker and returns once the download is done. It works just fine; I can see the updates getting downloaded on the file system, but there isn't really a way to display the progress within the application. To do such a thing, there is the IDownloadJob interface that allows me to use the .BeginDownload method of the downloader (UpdateSession.CreateUpdateDownloader)... I think, at least. :D And here comes the problem: I have now tried for about 6 hours to get the code working, but no matter what I tried, nothing worked. Also, there isn't much information around on the internet about the .BeginDownload method (or at least it seems that way), and my call to the method doesn't work:

        IDownloadJob CurrentDownloadJob = CurrentDownloader.BeginDownload();

    I have no clue what arguments to supply... I've tried methods, objects... to no avail. The complete block of code looks like this:

        UpdateCollection CurrentInstallCollection = (UpdateCollection)e.Argument;
        UpdateDownloader CurrentDownloader = CurrentSession.CreateUpdateDownloader();
        CurrentDownloader.Updates = CurrentInstallCollection;
        IDownloadJob CurrentDownloadJob = CurrentDownloader.BeginDownload();
        IDownloadProgress CurrentJobProgess = CurrentDownloadJob.GetProgress();
        tbStatus.Text = Convert.ToString(CurrentJobProgess.PercentComplete);

    I've found one source on the internet that called the method with .BeginDownload(this, this, this), which does not report any error in the code editor but probably won't help with reporting, as it is my understanding that the arguments supplied are the callbacks invoked when the described event occurs (progress has changed or the download has finished). I also tried this, but it didn't work either: http://social.msdn.microsoft.com/Forums/en/csharpgeneral/thread/636a8399-2bc1-46ff-94df-a58cebfe688c A detailed description of the BeginDownload method: http://msdn.microsoft.com/en-us/library/aa386132(v=VS.85).aspx WUAPI Reference: Unfortunately I'm not allowed to post the link, but the link to the BeginDownload method goes to the same place. :) I know it's quite a bit to ask, but if someone could point me in the right direction (as in which arguments to pass and how), it'd be very much appreciated! :)

    Read the article

  • Best Practice for Images with CodeIgniter: Generating Thumbs or Resizing on the Fly

    - by Steve K
    Hi all, I know there’s been a good deal written on thumbnail generation and the like with CI, but I wanted to explain what I’ve made and see what kind of best-practice advice I could find. Here’s my story… Currently, I have a site which allows users to upload collections of photos to projects they’ve created after first creating an account. Upon account creation, the site generates folders for the users in the following fashion for each of five pre-defined projects: /students/username/project_num/images/thumbs/ (This is to say that within a pre-created students folder, the username, project_num, images and thumbs folders are created recursively five times.) When a user uploads images to a project, I have a gallery controller which uploads the full images into the images folder for the project_num, and then creates a smaller thumbnail which maintains its ratio. So far so good. On the index page of the site, where these thumbnails and full images are displayed, I had a bit of a brain lapse, thinking I could simply output the full image while resizing it via CSS for a ‘medium-size’ image which would lead to the full-size image when clicked. (To be clear, the path is: click on a thumbnail; load the scaled full-size (medium-size) image via ajax into a display area above the thumbs; click on the medium-sized image; load the full-size image via lightbox, or something of that nature.) I have everything working to this point, except, as one might imagine, resizing the full-sized images with CSS doesn’t maintain the aspect ratio (as the generated thumbs do), which means I need to find the best way to resize these. In thinking about it, I figured I had two options: I could resize the image on the fly when the user clicks a thumbnail to load the medium-sized image via ajax. (I have a method ‘get_image($url)’ in my gallery controller which simply loads a view with an image tag and the image source passed to it, etc.) I thought perhaps I could send it first to my gallery model, resize it there on the fly, and send it on to the view. The problem I’m having is that resizing it on the fly and echoing it out gives me the raw image data (I apologize, I don’t know if that’s the right term). I’ve tried using data_uris to format the raw data into something echoable, but with no success. Is this method possible? The second option I considered was to generate a second medium-sized thumbnail when the user uploads the image, with maintain_ratio set to true. This method is slightly less ideal, given that when providing a way for the user to delete their projects, I’ll need to scan for an additional set of images to delete. Not a huge deal, definitely, but something I figured could be avoided by generating the medium-sized image on the fly. I hope I’ve been clear in my explanations, if long-winded! I’m very curious to see what suggestions folks have about the best way to handle this. Much thanks for reading, and any suggestions are much-appreciated! Steve K.

    Read the article

  • Hooking DirectX EndScene from an injected DLL

    - by Etan
    I want to detour EndScene from an arbitrary DirectX 9 application to create a small overlay. As an example, you could take the frame counter overlay of FRAPS, which is shown in games when activated. I know the following methods to do this:

    1. Creating a new d3d9.dll, which is then copied to the game's path. Since the current folder is searched first, before going to system32 etc., my modified DLL gets loaded, executing my additional code. Downside: you have to put it there before you start the game.

    2. Same as the first method, but replacing the DLL in system32 directly. Downside: you cannot add game-specific code, and you cannot exclude applications where you don't want your DLL to be loaded.

    3. Getting the EndScene offset directly from the DLL using tools like IDA Pro 4.9 Free. Since the DLL gets loaded as is, you can just add this offset to the DLL's starting address, when it is mapped into the game, to get the actual address, and then hook it. Downside: the offset is not the same on every system.

    4. Hooking Direct3DCreate9 to get the D3D9, then hooking D3D9->CreateDevice to get the device pointer, and then hooking Device->EndScene through the virtual table. Downside: the DLL cannot be injected when the process is already running. You have to start the process with the CREATE_SUSPENDED flag to hook the initial Direct3DCreate9.

    5. Creating a new device in a new window as soon as the DLL gets injected, then getting the EndScene offset from this device and hooking it, resulting in a hook for the device which is used by the game. Downside: as of some information I have read, creating a second device may interfere with the existing device, and it may bug out with windowed vs. fullscreen mode etc.

    6. Same as the third method, but doing a pattern scan to get EndScene. Downside: doesn't look that reliable.

    How can I hook EndScene from an injected DLL, which may be loaded when the game is already running, without having to deal with different d3d9.dlls on other systems, and with a method which is reliable? How does FRAPS, for example, perform its DirectX hooks? The DLL should not apply to all games, just to specific processes where I inject it via CreateRemoteThread.

    Read the article

  • Bluetooth connect to an RS232 adapter in Android

    - by ThePosey
    Hello All, I am trying to use the Bluetooth Chat sample API app that Google provides to connect to a Bluetooth RS232 adapter hooked up to another device. Here is the app for reference: http://developer.android.com/resources/samples/BluetoothChat/index.html And here is the spec sheet for the RS232 connector, just for reference: http://serialio.com/download/Docs/BlueSnap-guide-4.77%5FCommands.pdf The problem is that when I go to connect to the device with mmSocket.connect(); (BluetoothSocket::connect()), I always get an IOException thrown by the connect method. When I do a toString on the exception, I get "Service discovery failed". My question is mostly: what are the cases that would cause an IOException to get thrown in the connect method? I know those are in the source somewhere, but I don't know exactly how the Java layer that you write apps in and the C/C++ layer that contains the actual stacks interface. I know that it uses the BlueZ Bluetooth stack, which is written in C/C++, but I'm not sure how that ties into the Java layer, which is what I would think is throwing the exception. Any help on pointing me to where I can try to dissect this issue would be incredible. Also, just to note, I am able to pair with the RS232 adapter just fine, but I am never able to actually connect. Here is the logcat output for more reference:

        I/ActivityManager( 1018): Displayed activity com.example.android.BluetoothChat/.DeviceListActivity: 326 ms (total 326 ms)
        E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session)
        D/BluetoothChat( 1729): onActivityResult -1
        D/BluetoothChatService( 1729): connect to: 00:06:66:03:0C:51
        D/BluetoothChatService( 1729): setState() STATE_LISTEN -> STATE_CONNECTING
        E/BluetoothChat( 1729): + ON RESUME +
        I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_CONNECTING
        I/BluetoothChatService( 1729): BEGIN mConnectThread
        E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session)
        E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51
        I/BluetoothChatService( 1729): CONNECTION FAIL TOSTRING: java.io.IOException: Service discovery failed
        D/BluetoothChatService( 1729): setState() STATE_CONNECTING -> STATE_LISTEN
        D/BluetoothChatService( 1729): start
        D/BluetoothChatService( 1729): setState() STATE_LISTEN -> STATE_LISTEN
        I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN
        V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID
        I/NotificationService( 1018): enqueueToast pkg=com.example.android.BluetoothChat callback=android.app.ITransientNotification$Stub$Proxy@446327c8 duration=0
        I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN
        E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51
        V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID

    The device I'm trying to connect to is the 00:06:66:03:0C:51, which I can scan for and apparently pair with just fine.
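
    For what it's worth, "Service discovery failed" is the message this connect path produces when the SDP query against the remote device doesn't yield a usable RFCOMM channel for the UUID being requested, and the chat sample asks for its own custom chat-service UUID, which a plain serial adapter won't advertise. A hedged sketch of the usual adjustment, assuming the adapter speaks the standard Serial Port Profile (the wrapper class and method are illustrative, not part of the sample):

        import java.io.IOException;
        import java.util.UUID;
        import android.bluetooth.BluetoothAdapter;
        import android.bluetooth.BluetoothDevice;
        import android.bluetooth.BluetoothSocket;

        public final class SppConnector {
            // 00001101-0000-1000-8000-00805F9B34FB is the well-known Serial Port
            // Profile UUID; RS232 bridges normally advertise this service rather
            // than the chat sample's custom chat UUID.
            private static final UUID SPP_UUID =
                    UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

            public static BluetoothSocket connect(BluetoothAdapter adapter,
                                                  BluetoothDevice device) throws IOException {
                // An in-flight discovery scan reliably breaks RFCOMM connects,
                // so cancel it before connecting.
                adapter.cancelDiscovery();
                BluetoothSocket socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
                socket.connect(); // blocks; this is the call that throws "Service discovery failed"
                return socket;
            }
        }

    In the chat sample this would mean swapping the MY_UUID constant in BluetoothChatService for the SPP UUID above and making sure cancelDiscovery() runs before the connect.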

    Read the article

  • Using ASP.NET MVC, LINQ to SQL, and StructureMap causing DataContext to cache data

    - by Dragn1821
    I'll start by telling my project setup: ASP.NET MVC 1.0, StructureMap 2.6.1, VB. I've created a bootstrapper class, shown here:

        Imports StructureMap
        Imports DCS.Data
        Imports DCS.Services

        Public Class BootStrapper
            Public Shared Sub ConfigureStructureMap()
                ObjectFactory.Initialize(AddressOf StructureMapRegistry)
            End Sub

            Private Shared Sub StructureMapRegistry(ByVal x As IInitializationExpression)
                x.AddRegistry(New MainRegistry())
                x.AddRegistry(New DataRegistry())
                x.AddRegistry(New ServiceRegistry())
                x.Scan(AddressOf StructureMapScanner)
            End Sub

            Private Shared Sub StructureMapScanner(ByVal scanner As StructureMap.Graph.IAssemblyScanner)
                scanner.Assembly("DCS")
                scanner.Assembly("DCS.Data")
                scanner.Assembly("DCS.Services")
                scanner.WithDefaultConventions()
            End Sub
        End Class

    I've created a controller factory, shown here:

        Imports System.Web.Mvc
        Imports StructureMap

        Public Class StructureMapControllerFactory
            Inherits DefaultControllerFactory

            Protected Overrides Function GetControllerInstance(ByVal controllerType As System.Type) As System.Web.Mvc.IController
                Return ObjectFactory.GetInstance(controllerType)
            End Function
        End Class

    I've modified the Global.asax.vb as shown here:

        ...
        Sub Application_Start()
            RegisterRoutes(RouteTable.Routes)
            'StructureMap
            BootStrapper.ConfigureStructureMap()
            ControllerBuilder.Current.SetControllerFactory(New StructureMapControllerFactory())
        End Sub
        ...

    I've added a StructureMap registry file to each of my three projects: DCS, DCS.Data, and DCS.Services. Here is the DCS.Data registry:

        Imports StructureMap.Configuration.DSL

        Public Class DataRegistry
            Inherits Registry

            Public Sub New()
                'Data Connections.
                [For](Of DCSDataContext)() _
                    .HybridHttpOrThreadLocalScoped _
                    .Use(New DCSDataContext())

                'Repositories.
                [For](Of IShiftRepository)() _
                    .Use(Of ShiftRepository)()
                [For](Of IMachineRepository)() _
                    .Use(Of MachineRepository)()
                [For](Of IShiftSummaryRepository)() _
                    .Use(Of ShiftSummaryRepository)()
                [For](Of IOperatorRepository)() _
                    .Use(Of OperatorRepository)()
                [For](Of IShiftSummaryJobRepository)() _
                    .Use(Of ShiftSummaryJobRepository)()
            End Sub
        End Class

    Everything works great as far as loading the dependencies, but I'm having problems with the DCSDataContext class that was generated by the LINQ to SQL classes. I have a form that posts to a details page (/Summary/Details), which loads in some data from SQL. I then have a button that opens a dialog box in jQuery, which populates the dialog from a request to (/Operator/Modify). On the dialog box, the form has a combo box and an OK button that lets the user change the operator's name. Upon clicking OK, the form is posted to (/Operator/Modify), sent through the service and repository layers of my program, and updates the record in the database. Then RedirectToAction is called to send the user back to the details page (/Summary/Details), where there is a call to pull the data from SQL again, updating the details view. Everything works great, except the details view does not show the new operator that was selected. I can step through the code and see the DCSDataContext class being accessed to update the operator (which does actually change the database record), but when the DCSDataContext is accessed to reload the details objects, it pulls in the old value. I'm guessing that StructureMap is causing not only the DCSDataContext class but also the data to be cached? I have also tried adding the following to the Global.asax, but it just ends up crashing the program, telling me the DCSDataContext has been disposed:

        Private Sub MvcApplication_EndRequest(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.EndRequest
            StructureMap.ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects()
        End Sub

    Can someone please help?

    Read the article

  • .NET RegEx "Memory Leak" investigation

    - by Kevin Pullin
    I recently looked into some .NET "memory leaks" (i.e. unexpected, lingering GC-rooted objects) in a WinForms app. After loading and then closing a huge report, the memory usage did not drop as expected, even after a couple of gen2 collections. Assuming that the reporting control was being kept alive by a stray event handler, I cracked open WinDbg to see what was happening... Using WinDbg, the !dumpheap -stat command reported that a large amount of memory was consumed by string instances. Further refining this down with the !dumpheap -type System.String command, I found the culprit, a 90MB string used for the report, at address 03be7930. The last step was to invoke !gcroot 03be7930 to see which object(s) were keeping it alive. My expectations were incorrect - it was not an unhooked event handler hanging onto the reporting control (and report string); instead it was held on by a System.Text.RegularExpressions.RegexInterpreter instance, which itself is a descendant of a System.Text.RegularExpressions.CachedCodeEntry. Now, the caching of Regexes is (somewhat) common knowledge, as this helps to reduce the overhead of having to recompile the Regex each time it is used. But what then does this have to do with keeping my string alive? Based on analysis using Reflector, it turns out that the input string is stored in the RegexInterpreter whenever a Regex method is called. The RegexInterpreter holds onto this string reference until a new string is fed into it by a subsequent Regex method invocation. I'd expect similar behaviour from hanging onto Regex.Match instances and perhaps others. The call chain is something like this:

        Regex.Split / Regex.Match / Regex.Replace / etc.
        -> Regex.Run
        -> RegexScanner.Scan (RegexScanner is the base class; RegexInterpreter is the subclass described above)

    The offending Regex is only used for reporting, is rarely used, and is therefore unlikely to be used again to clear out the existing report string. And even if the Regex was used at a later point, it would probably be processing another large report. This is a relatively significant problem and just plain feels dirty. All that said, I found a few options on how to resolve, or at least work around, this scenario. I'll let the community respond first, and if no takers come forward I will fill in any gaps in a day or two.

    Read the article

  • ASP.NET MVC VirtualPathProvider views parse error

    - by madcapnmckay
    Hi, I am working on a plugin system for ASP.NET MVC 2. I have a DLL containing controllers and views as embedded resources. I scan the plugin DLLs for controllers using StructureMap, and I can then pull them out and instantiate them when requested. This works fine. I then have a VirtualPathProvider, which I adapted from this post:

        public class AssemblyResourceProvider : VirtualPathProvider
        {
            protected virtual string WidgetDirectory
            {
                get { return "~/bin"; }
            }

            private bool IsAppResourcePath(string virtualPath)
            {
                var checkPath = VirtualPathUtility.ToAppRelative(virtualPath);
                return checkPath.StartsWith(WidgetDirectory, StringComparison.InvariantCultureIgnoreCase);
            }

            public override bool FileExists(string virtualPath)
            {
                return (IsAppResourcePath(virtualPath) || base.FileExists(virtualPath));
            }

            public override VirtualFile GetFile(string virtualPath)
            {
                return IsAppResourcePath(virtualPath) ? new AssemblyResourceVirtualFile(virtualPath) : base.GetFile(virtualPath);
            }

            public override CacheDependency GetCacheDependency(string virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart)
            {
                return IsAppResourcePath(virtualPath) ? null : base.GetCacheDependency(virtualPath, virtualPathDependencies, utcStart);
            }
        }

        internal class AssemblyResourceVirtualFile : VirtualFile
        {
            private readonly string path;

            public AssemblyResourceVirtualFile(string virtualPath) : base(virtualPath)
            {
                path = VirtualPathUtility.ToAppRelative(virtualPath);
            }

            public override Stream Open()
            {
                var parts = path.Split('/');
                var resourceName = Path.GetFileName(path);
                var apath = HttpContext.Current.Server.MapPath(Path.GetDirectoryName(path));
                var assembly = Assembly.LoadFile(apath);
                return assembly != null
                    ? assembly.GetManifestResourceStream(assembly.GetManifestResourceNames().SingleOrDefault(s => string.Compare(s, resourceName, true) == 0))
                    : null;
            }
        }

    The VPP seems to be working fine also. The view is found and is pulled out into a stream. I then receive a parse error, "Could not load type 'System.Web.Mvc.ViewUserControl<dynamic>'", which I can't find mentioned in any previous example of pluggable views. Why would my view not compile at this stage? Thanks for any help, Ian

    EDIT: Getting closer to an answer, but not quite clear why things aren't compiling. Based on the comments, I checked the versions and everything is in V2; I believe dynamic was brought in at V2, so this is fine. I don't even have V3 installed, so it can't be that. I have, however, got the view to render, if I remove the <dynamic> altogether. So a VPP works, but only if the view is not strongly typed or dynamic. This makes sense for the strongly-typed scenario, as the type is in the dynamically loaded DLL, so the view engine will not be aware of it, even though the DLL is in the bin. Is there a way to load types at app start? Considering having a go with MEF instead of my bespoke StructureMap solution. What do you think?

    Read the article

  • Custom property editors do not work for request parameters in Spring MVC?

    - by dvd
    Hello, I'm trying to create a multi-action web controller using Spring annotations. This controller will be responsible for adding and removing user profiles and preparing reference data for the JSP page.

        @Controller
        public class ManageProfilesController {

            @InitBinder
            public void initBinder(WebDataBinder binder) {
                binder.registerCustomEditor(UserAccount.class, "account", new UserAccountPropertyEditor(userManager));
                binder.registerCustomEditor(Profile.class, "profile", new ProfilePropertyEditor(profileManager));
                logger.info("Editors registered");
            }

            @RequestMapping("remove")
            public void up(@RequestParam("account") UserAccount account,
                           @RequestParam("profile") Profile profile) {
                ...
            }

            @RequestMapping("")
            public ModelAndView defaultView(@RequestParam("account") UserAccount account) {
                logger.info("Default view handling");
                ModelAndView mav = new ModelAndView();
                logger.info(account.getLogin());
                mav.addObject("account", account);
                mav.addObject("profiles", profileManager.getProfiles());
                mav.setViewName(view);
                return mav;
            }
            ...
        }

    Here is the relevant part of my webContext.xml file:

        <context:component-scan base-package="ru.mirea.rea.webapp.controllers" />
        <context:annotation-config/>
        <bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
            <property name="mappings">
                <value>
                    ...
                    /home/users/manageProfiles=users.manageProfilesController
                </value>
            </property>
        </bean>
        <bean id="users.manageProfilesController" class="ru.mirea.rea.webapp.controllers.users.ManageProfilesController">
            <property name="view" value="home\users\manageProfiles"/>
        </bean>
        <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter" />

    However, when I open the mapped URL, I get an exception:

        java.lang.IllegalArgumentException: Cannot convert value of type [java.lang.String] to required type [ru.mirea.rea.model.UserAccount]: no matching editors or conversion strategy found

    I use Spring 2.5.6 and plan to move to Spring 3.0 in the not very distant future. However, according to this JIRA https://jira.springsource.org/browse/SPR-4182 it should already be possible in Spring 2.5.1. Debugging shows that the @InitBinder method is correctly called. What am I doing wrong?
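
    One detail that may matter here: registerCustomEditor(type, "account", editor) registers a field-scoped editor, and field-scoped editors are only consulted when binding a property with that exact name; when the handler adapter converts a bare @RequestParam, it may only fall back to editors registered by type alone. A hedged variant of the @InitBinder method, registering the same editors by type only (this is a guess worth trying, not a confirmed fix):

        @InitBinder
        public void initBinder(WebDataBinder binder) {
            // Registered by type only (no field name), so the editors are eligible
            // for plain @RequestParam conversion as well as form-field binding.
            binder.registerCustomEditor(UserAccount.class,
                    new UserAccountPropertyEditor(userManager));
            binder.registerCustomEditor(Profile.class,
                    new ProfilePropertyEditor(profileManager));
        }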

    Read the article

  • NoHostAvailableException With Cassandra & DataStax Java Driver If Large ResultSet

    - by hughj
    The setup:

    - 2-node Cassandra 1.2.6 cluster
    - replicas=2
    - very large CQL3 table with no secondary index
    - rowkey is a UUID.randomUUID().toString()
    - read consistency set to ONE
    - using DataStax Java driver 1.0

    The request: attempting to do a table scan with "SELECT some-col from schema.table LIMIT nnn;"

    The fail: once I go beyond a certain nnn LIMIT, I start to get NoHostAvailableExceptions from the driver. It reads like this:

        com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
            at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:64)
            at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:214)
            at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:169)
            at com.jpmc.es.rtm.storage.impl.EventExtract.main(EventExtract.java:36)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:601)
            at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
        Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
            at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:98)
            at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:165)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)

    Given: this is probably not the most enlightened thing to do to a large table with millions of rows, but this is how I learn what not to do, so I would really appreciate someone who could volunteer how this kind of error can be debugged. For example, when this happens, there are no indications that the nodes in the cluster ever had an issue with the request (there is nothing in the logs on either node that indicates any timeout or failure). Also, I enabled the trace on the driver, which gives you some nice autotrace (a la Oracle) info as long as the query succeeds. But in this case, the driver throws a NoHostAvailableException and no ExecutionInfo is available, so tracing has not provided any benefit in this case. I also find it interesting that this does not seem to be recorded as a timeout (my JMX consoles tell me no timeouts have occurred). So, I am left not understanding WHERE the failure is actually occurring. I am left with the idea that it is the driver that is having a problem, but I don't know how to debug it (and I would really like to). I have read several posts from folks stating that querying for result sets of more than 10000 rows is probably not a good idea, and I am willing to accept this, but I would like to understand what is causing the exception and where the exception is happening. FWIW, I also tried bumping the timeout properties in cassandra.yaml, but this made no difference whatsoever. I welcome any suggestions, anecdotes, insults, or monetary contributions for my registration in the house of moron-developers. Regards!!
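
    For reference, driver-side automatic paging only arrived with the 2.0 driver and the v2 native protocol; on Cassandra 1.2.6 with the 1.0 driver, a full-table read has to be chunked manually, typically by walking the table in token order so that each request is a bounded slice the coordinator can satisfy. A hedged sketch of that approach (the keyspace, table, and column names "ks", "tbl", "id", "col" are placeholders, not the asker's actual schema):

        import com.datastax.driver.core.Cluster;
        import com.datastax.driver.core.ResultSet;
        import com.datastax.driver.core.Row;
        import com.datastax.driver.core.Session;

        public class TokenRangeScan {
            public static void main(String[] args) {
                Cluster cluster = Cluster.builder().addContactPoint("10.181.13.239").build();
                Session session = cluster.connect("ks");

                String lastKey = null;
                final int page = 1000; // small enough to stay well under coordinator limits
                while (true) {
                    // Resume after the last-seen partition key using token(),
                    // instead of asking for one huge LIMIT in a single request.
                    String cql = (lastKey == null)
                            ? "SELECT id, col FROM tbl LIMIT " + page
                            : "SELECT id, col FROM tbl WHERE token(id) > token('" + lastKey + "') LIMIT " + page;
                    ResultSet rs = session.execute(cql);
                    int seen = 0;
                    for (Row row : rs) {
                        lastKey = row.getString("id");
                        seen++;
                        // process row.getString("col") here
                    }
                    if (seen < page) {
                        break; // final page
                    }
                }
                cluster.shutdown(); // 1.0-era API; later drivers use close()
            }
        }

    This keeps each individual request cheap, so a failure surfaces against a specific small slice rather than as an opaque NoHostAvailableException on one giant read.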

    Read the article

  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running with a PostgreSQL table that has a continuous rate of updates, deletes and inserts and that over time (a few days) sees significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings.

    - The table in our test currently starts out empty and grows to half a million rows.
    - It has a fairly large row (lots of text fields).
    - Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions).
    - The table is being used as a persistent store for a single process.
    - Using PostgreSQL on Windows with a Java client.

    I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work, and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuild to run at certain times, but I suspect the locking period for such an action would cause our query to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code.

    Additional information: table and index:

        CREATE TABLE icl_contacts
        (
            id bigint NOT NULL,
            campaignfqname character varying(255) NOT NULL,
            currentstate character(16) NOT NULL,
            xmlscheduledtime character(23) NOT NULL,
            ... 25 or so other fields, most of them fixed or varying character fields ...
            CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
        )
        WITH (OIDS=FALSE);
        ALTER TABLE icl_contacts OWNER TO postgres;

        CREATE INDEX icl_contacts_idx
            ON icl_contacts
            USING btree (xmlscheduledtime, currentstate, campaignfqname);

    Analyze:

        Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
          -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
               Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))

    And, yes, I am aware there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is about understanding how PostgreSQL is managing the index and query over time (understand why, not just fix). If it were to be done over or significantly refactored, there would be a lot of changes.
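
    One workaround in the "rebuild periodically" spirit that avoids the long lock a plain REINDEX would hold: build a replacement index side-by-side with CREATE INDEX CONCURRENTLY (available since PostgreSQL 8.2), then swap names. A hedged sketch from the Java client mentioned above; the connection details are placeholders, and the brief lock taken by the DROP/RENAME swap would still need to be scheduled around the 3-5 second real-time windows:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class RebuildIclContactsIndex {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://localhost/mydb", "postgres", "secret");
                conn.setAutoCommit(true); // CREATE INDEX CONCURRENTLY refuses to run inside a transaction
                Statement st = conn.createStatement();
                try {
                    // Queries keep using the old (bloated) index while this builds.
                    st.execute("CREATE INDEX CONCURRENTLY icl_contacts_idx_new ON icl_contacts "
                             + "USING btree (xmlscheduledtime, currentstate, campaignfqname)");
                    // The swap itself is quick; it only needs a short-lived lock.
                    st.execute("DROP INDEX icl_contacts_idx");
                    st.execute("ALTER INDEX icl_contacts_idx_new RENAME TO icl_contacts_idx");
                } finally {
                    st.close();
                    conn.close();
                }
            }
        }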

    Read the article

  • Controlling VirtualBox internet access?

    - by HandyGandy
    I am finally going through the process of moving my XP into a VBox (host: Linux). The thing is that I am migrating a virtually clean install. So, aside from the occasional antivirus scan, I want to make sure that my XP is not silently sending malware data out (keystroke logs, spam, etc.), having picked up some virus. To that end I want to limit XP to contacting my LAN and a few internet sites (mainly sites that require proprietary Windows-only software to access, AV sites, and Windows Update). I want XP to only access preapproved addresses. If it is trying to contact a non-approved address, I want it somehow logged and access restricted until I allow access. I also don't want to have to decide whether to allow access to a site at my leisure. To keep things clear, let me give an example: I start my VBox/XP (which I call MYXP) running on my Linux box (called MYLINUX, connecting to the net through a Linksys WRT54G), and it connects via Samba to my LAN (since my LAN seems to be possessed of every evil thing, its address is 192.168.666). At the moment my configuration is set so that I allow MYXP to access 192.168.666 and www.MYANTIVIRUS_UPDATES.com and www.MS_UPDATES.com. Then on the VM I start a program which tries to make a connection to www.playmygame.com. www.playmygame.com is on my preapproved list, so the connection goes through. Later I check attempted accesses and discover that it also tried to connect to www.mygame_high_scores.com. I figure this is OK, so I add www.mygame_high_scores.com to my approved list. Later, I again check addresses and discover that my VM/XP tried to access www.mygame_steals_your_identity.com. I do some checking and discover the address is registered to someone in Kiev, Nigeria. Since this doesn't sound kosher to me, I replace the MYXP VM with one that was backed up before I installed mygame. I remove www.playmygame.com and www.mygame_high_scores.com from my access list for MYXP. It should accomplish this with little overhead. When I am not running the VM, ideally it should not have any overhead. Suggestions?

    Read the article

  • How to build, sort and print a tree of a sort?

    - by Tuplanolla
    This is more of an algorithmic dilemma than a language-specific problem, but since I'm currently using Ruby I'll tag this as such. I've already spent over 20 hours on this and I would've never believed it if someone told me writing a LaTeX parser was a walk in the park in comparison. I have a loop to read hierarchies (that are prefixed with \m) from different files:

        art.tex: \m{Art}
        graphical.tex: \m{Art}{Graphical}
        me.tex: \m{About}{Me}
        music.tex: \m{Art}{Music}
        notes.tex: \m{Art}{Music}{Sheet Music}
        site.tex: \m{About}{Site}
        something.tex: \m{Something}
        whatever.tex: \m{Something}{That}{Does Not}{Matter}

    and I need to sort them alphabetically and print them out as a tree:

        About
            Me (me.tex)
            Site (site.tex)
        Art (art.tex)
            Graphical (graphical.tex)
            Music (music.tex)
                Sheet Music (notes.tex)
        Something (something.tex)
            That
                Does Not
                    Matter (whatever.tex)

    in (X)HTML:

        <ul>
          <li>About</li>
          <ul>
            <li><a href="me.tex">Me</a></li>
            <li><a href="site.tex">Site</a></li>
          </ul>
          <li><a href="art.tex">Art</a></li>
          <ul>
            <li><a href="graphical.tex">Graphical</a></li>
            <li><a href="music.tex">Music</a></li>
            <ul>
              <li><a href="notes.tex">Sheet Music</a></li>
            </ul>
          </ul>
          <li><a href="something.tex">Something</a></li>
          <ul>
            <li>That</li>
            <ul>
              <li>Doesn't</li>
              <ul>
                <li><a href="whatever.tex">Matter</a></li>
              </ul>
            </ul>
          </ul>
        </ul>

    using Ruby without Rails, which means that at least Array.sort and Dir.glob are available. All of my attempts were formed like this (as this part should work just fine):

        def fss_brace_array(ss_input) #a concise version of another function; converts {1}{2}...{n} into an array [1, 2, ..., n] or returns an empty array
          ss_output = ss_input[1].scan(%r{\{(.*?)\}})
        rescue
          ss_output = []
        ensure
          return ss_output
        end

        #define tree
        s_handle = File.join(:content.to_s, "*")
        Dir.glob("#{s_handle}.tex").each do |s_handle|
          File.open(s_handle, "r") do |f_handle|
            while s_line = f_handle.gets
              if s_all = s_line.match(%r{\\m\{(\{.*?\})+\}})
                s_all = s_all.to_a
                #do something with tree, fss_brace_array(s_all) and s_handle
                break
              end
            end
          end
        end
        #do something else with tree
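
    Since the question is framed as algorithmic rather than Ruby-specific, one common shape for the solution is: treat each \m{...}{...} chain as a path, insert every path into a tree whose children are kept in sorted order, and emit the nested list with a recursive walk. A hedged sketch of that idea in Java (the names and the HTML layout mirror the question; it is an illustration of the algorithm, not a drop-in replacement for the Ruby above):

        import java.util.Map;
        import java.util.TreeMap;

        public class TocTree {
            // One node per path segment; TreeMap keeps children sorted alphabetically.
            static final class Node {
                final Map<String, Node> children = new TreeMap<String, Node>();
                String file; // set only if a .tex file ends exactly at this segment
            }

            static void insert(Node root, String[] path, String file) {
                Node cur = root;
                for (String part : path) {
                    Node next = cur.children.get(part);
                    if (next == null) {
                        next = new Node();
                        cur.children.put(part, next);
                    }
                    cur = next;
                }
                cur.file = file;
            }

            // Emits the question's layout: a nested <ul> follows its parent's <li>.
            static void print(Node node, StringBuilder out) {
                if (node.children.isEmpty()) return;
                out.append("<ul>\n");
                for (Map.Entry<String, Node> e : node.children.entrySet()) {
                    Node child = e.getValue();
                    String label = (child.file == null)
                            ? e.getKey()
                            : "<a href=\"" + child.file + "\">" + e.getKey() + "</a>";
                    out.append("<li>").append(label).append("</li>\n");
                    print(child, out);
                }
                out.append("</ul>\n");
            }

            public static void main(String[] args) {
                Node root = new Node();
                insert(root, new String[] {"Art"}, "art.tex");
                insert(root, new String[] {"Art", "Graphical"}, "graphical.tex");
                insert(root, new String[] {"About", "Me"}, "me.tex");
                insert(root, new String[] {"Art", "Music"}, "music.tex");
                insert(root, new String[] {"Art", "Music", "Sheet Music"}, "notes.tex");
                insert(root, new String[] {"About", "Site"}, "site.tex");
                StringBuilder out = new StringBuilder();
                print(root, out);
                System.out.print(out);
            }
        }

    The sorted-map-of-children trick is what removes most of the bookkeeping: sorting happens on insert, so printing is a plain depth-first walk.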

    Read the article

  • BASH, multiple arrays and a loop.

    - by S1syphus
    At work, we have 7 or 8 hard drives we dispatch over the country, each with unique labels which are not sequential. Ideally, a drive is plugged into our desktop, which then gets the folders from the server that correspond to the drive name. Sometimes only one hard drive gets plugged in, sometimes multiples; possibly in the future more will be added. Each mounts to /Volumes/ under its identifier; so for example /Volumes/f00, where f00 is the identifier. What I want to happen: scan /Volumes to see if any of the drives are plugged in, then check the server to see if the folder exists, and if it does, copy the folder and recursive folders. Here is what I have so far; it checks if the drive exists in /Volumes:

        #!/bin/sh
        #Declare drives in the array
        ARRAY=( foo bar long )

        #Get the drives from the array
        DRIVES=${#ARRAY[@]}

        #Define base dir to check
        BaseDir="/Volumes"

        #Define shared server fold on local mount points
        #I plan to use AFP eventually, but for the sake of ease
        #using a local mount.
        ServerMount="BigBlue"

        #Define folder name for where files are to come from
        Dispatch="File-Dispatch"

        dir="$BaseDir/${ARRAY[${i}]}"

        #Loop through each item in the array and check if exists on /Volumes
        for (( i=0;i<$DRIVES;i++)); do
          dir="$BaseDir/${ARRAY[${i}]}"
          if [ -d "$dir" ]; then
            echo "$dir exists, you win."
          else
            echo "$dir is not attached."
          fi
        done

    What I can't figure out how to do is check the volumes for the server while looping through the hard drive mount points. So I could do something like:

        #!/bin/sh
        #Declare drives, and folder location in arrays
        ARRAY=( foo bar long )
        ARRAY1=($(ls ""$BaseDir"/"$ServerMount"/"$Dispatch""))

        #Get the drives from the array
        DRIVES=${#ARRAY[@]}
        SERVERFOLDER=${#ARRAY1[@]}

        #Define base dir to check
        BaseDir="/Volumes"

        #Define shared server fold on local mount points
        ServerMount="BigBlue

        #Define folder name for where files are to come from
        Dispatch="File-Dispatch"

        dir="$BaseDir/${ARRAY[${i}]}"

        #List the contents from server directory into array
        ARRAY1=($(ls ""$BaseDir"/"$ServerMount"/"$Dispatch""))
        echo ${list[@]}

        for (( i=0;i<$DRIVES;i++)); (( i=0;i<$SERVERFOLDER;i++)); do
          dir="$BaseDir/${ARRAY[${i}]}"
          ser="${ARRAY1[${i}]}"
          if [ "$dir" =~ "$sir" ]; then
            cp "$sir" "$dir"
          else
            echo "$dir is not attached."
          fi
        done

    I know that is pretty wrong... well, very, but I hope it gives you the idea of what I am trying to achieve. Any ideas or suggestions?

    Read the article

  • MySQL table organization and optimization (Rails)

    - by aguynamedloren
    I've been learning Ruby on Rails over the past few months with no prior programming experience. Lately, I've been thinking about database optimization and table organization. I know there are great books on the subject, but I typically learn by example / as I go. Here's a hypothetical situation: Let's say I am building a social network for a niche community with 250,000 members (users). The users have the ability to attend events. Let's say there are 50,000 past/present/future events. Much like Facebook events, a user can attend any number of events and an event can have any number of attendees. In the database, there would be a table for users and a table for events. Somehow I would have to create an association between the users and events. I could create an "events" column in the users table such that each user row would contain a hash of event IDs, or I could create an "attendees" column in the events table such that each event row would contain a hash of user IDs. Neither of these solutions seems ideal, however. On a user's profile page, I want to display the list of events they are associated with, which would require scanning the 50,000 event rows for the user ID of said user if I include an "attendees" column in the events table. Likewise, on an event page, I want to display a list of attendees for the event, which would require scanning the 250,000 user rows for the event ID of said event if I include an "events" column in the users table. Option 3 would be to create a third table that contains the attendee information for each and every event - but I don't see how this would solve any problems. Are these non-issues? Rails makes accessing all of this information easy, but I guess I'm worried about scale. It is entirely possible that I am under-estimating the speed and processing power of modern databases / servers / etc. How long would it take to scan 250,000 user rows for specific event IDs - 10ms? 100ms? 1,000ms? I guess that's not that bad. Am I just over-thinking this?
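
    For what it's worth, the conventional relational answer here is exactly the third option: a join table (call it attendances) with one row per user/event pair and an index leading with each side, so neither the 250,000 user rows nor the 50,000 event rows are ever scanned; both lookups become index seeks on the join table. In Rails this is what a has_many :through association maps to. A hedged sketch of the lookup via plain JDBC, where the schema and all names are illustrative:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class Attendances {
            // Assumed schema, one row per attendance:
            //   CREATE TABLE attendances (
            //     user_id  BIGINT NOT NULL,
            //     event_id BIGINT NOT NULL,
            //     PRIMARY KEY (user_id, event_id),   -- serves "events for a user"
            //     KEY idx_event (event_id)           -- serves "attendees of an event"
            //   );
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/social", "app", "secret");

                // Events a given user attends: an index seek, not a table scan.
                PreparedStatement eventsForUser = conn.prepareStatement(
                        "SELECT e.id, e.name FROM events e "
                      + "JOIN attendances a ON a.event_id = e.id WHERE a.user_id = ?");
                eventsForUser.setLong(1, 42L); // placeholder user id
                ResultSet rs = eventsForUser.executeQuery();
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " " + rs.getString(2));
                }
                rs.close();
                eventsForUser.close();
                conn.close();
            }
        }

    The mirror-image query (attendees of an event) joins the other way and is served by the secondary index on event_id, which is what makes the third table solve the scanning problem described above.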

    Read the article

  • Xcache - No difference after using it

    - by Charles Yeung
    Hi, I have installed Xcache on my site (using XAMPP). I have tested more than 10 times on several pages, and the result is the same as the default (no cache installed at all). Is something wrong with the configuration? Updated:

        [xcache-common]
        ;; install as zend extension (recommended), normally "$extension_dir/xcache.so"
        zend_extension = /usr/local/lib/php/extensions/non-debug-non-zts-xxx/xcache.so
        zend_extension_ts = /usr/local/lib/php/extensions/non-debug-zts-xxx/xcache.so
        ;; For windows users, replace xcache.so with php_xcache.dll
        zend_extension_ts = C:\xampp\php\ext\php_xcache.dll
        ;; or install as extension, make sure your extension_dir setting is correct
        ; extension = xcache.so
        ;; or win32:
        ; extension = php_xcache.dll

        [xcache.admin]
        xcache.admin.enable_auth = On
        xcache.admin.user = "mOo"
        ; xcache.admin.pass = md5($your_password)
        xcache.admin.pass = ""

        [xcache]
        ; ini only settings, all the values here is default unless explained
        ; select low level shm/allocator scheme implemenation
        xcache.shm_scheme = "mmap"
        ; to disable: xcache.size=0
        ; to enable : xcache.size=64M etc (any size > 0) and your system mmap allows
        xcache.size = 60M
        ; set to cpu count (cat /proc/cpuinfo |grep -c processor)
        xcache.count = 1
        ; just a hash hints, you can always store count(items) > slots
        xcache.slots = 8K
        ; ttl of the cache item, 0=forever
        xcache.ttl = 0
        ; interval of gc scanning expired items, 0=no scan, other values is in seconds
        xcache.gc_interval = 0
        ; same as aboves but for variable cache
        xcache.var_size = 4M
        xcache.var_count = 1
        xcache.var_slots = 8K
        ; default ttl
        xcache.var_ttl = 0
        xcache.var_maxttl = 0
        xcache.var_gc_interval = 300
        xcache.test = Off
        ; N/A for /dev/zero
        xcache.readonly_protection = Off
        ; for *nix, xcache.mmap_path is a file path, not directory.
        ; Use something like "/tmp/xcache" if you want to turn on ReadonlyProtection
        ; 2 group of php won't share the same /tmp/xcache
        ; for win32, xcache.mmap_path=anonymous map name, not file path
        xcache.mmap_path = "/dev/zero"
        ; leave it blank(disabled) or "/tmp/phpcore/"
        ; make sure it's writable by php (without checking open_basedir)
        xcache.coredump_directory = ""
        ; per request settings
        xcache.cacher = On
        xcache.stat = On
        xcache.optimizer = Off

        [xcache.coverager]
        ; per request settings
        ; enable coverage data collecting for xcache.coveragedump_directory and xcache_coverager_start/stop/get/clean() functions (will hurt executing performance)
        xcache.coverager = Off
        ; ini only settings
        ; make sure it's readable (care open_basedir) by coverage viewer script
        ; requires xcache.coverager=On
        xcache.coveragedump_directory = ""

    Thank you

    Read the article

< Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >