Search Results

Search found 6776 results on 272 pages for 'vm role'.


  • List all BPM Processes for a user

    - by kasriniv
    Hello, happy to start contributing to this blog. The title of the post is probably deceptively simple and warrants an elaboration. Customized BPM workspaces/user interfaces are a fairly common requirement. One of our marquee customers in the online stock trading business envisioned this user interaction for their BPM application:

    1. The user logs in to the internal portal.
    2. The user sees the list of roles he has been granted as a drop-down list.
    3. Once the user selects a role, a list of processes the user is part of appears; the logged-in user can be part of any swimlane role of any process.

    This is a fairly common and reasonable user-UI interaction pattern. Steps 1 and 2 are easily achievable, and hence the subject matter of this blog is the requirement in step 3. Objective: given a username and a role, list all the BPM processes that the user is part of, in any swimlane of any process. Here is a quick overview of the major steps/logic in the code (a sketch follows below):

    1. Initialize the workflow/BPM context as usual.
    2. Get a handle on InstanceQueryService (getInstanceQueryService), InstanceManagementService, ProcessMetadataService and ProcessModelService.
    3. List all processes for that BPM context (listProcessMetadataSummary) and get the roles granted to that user.
    4. For each of the processes [method getAccessibleProcesss(ProcessMetadataSummary, Set)], for each of the lanes in the process, check whether a role granted to the user matches the roleName for that swimlane. If so, add it to the output.

    Notes: the usual caveats apply, including that the BPM APIs are subject to change. JDeveloper method introspection is a better friend than the API documentation :-). (I am going to try to upload the source code, and if that doesn't work, I will follow this blog up with the corresponding source code.) Hope this helps. Ack: Yogesh K, BPM Dev team.
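
    Since the post's source-code upload may not have survived, here is a rough Java sketch of the core check in step 4. The service and method names above come from the post; the ProcessSummary/Lane stand-in types below are assumptions, since the exact Oracle BPM API types and signatures vary by release:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Set;

        public class AccessibleProcessFinder {

            /** Returns the names of processes in which any of the user's granted
             *  roles matches a swimlane role. ProcessSummary and Lane are
             *  hypothetical stand-ins for the real metadata types returned by
             *  listProcessMetadataSummary. */
            public static List<String> getAccessibleProcesses(
                    List<ProcessSummary> allProcesses, Set<String> grantedRoles) {
                List<String> accessible = new ArrayList<>();
                for (ProcessSummary process : allProcesses) {
                    for (Lane lane : process.getLanes()) {
                        if (grantedRoles.contains(lane.getRoleName())) {
                            accessible.add(process.getProcessName());
                            break; // one matching swimlane is enough
                        }
                    }
                }
                return accessible;
            }

            // Minimal stand-in types so the sketch compiles on its own.
            interface ProcessSummary { String getProcessName(); List<Lane> getLanes(); }
            interface Lane { String getRoleName(); }
        }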

    Read the article

  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by adelsors
    Imagine you have a Web application using an in-memory collection that changes occasionally, loading it from storage on the Application_Start global.asax event and updating it whenever it changes. If you want to deploy this application on Azure, keep in mind that more than one instance of the application can be running at any time, so you need some mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances is free of charge, a good solution is to maintain the information in Azure Storage Tables, read its contents on the Application_Start event, and propagate changes to all instances using the internal HTTP port available on Azure Web Roles. Follow these steps to leverage the internal HTTP endpoint available on Azure web roles:

    1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.
    2. Add a new WCF service to the Web Role, for example NotificationServices.svc.
    3. Add a method on the new service to receive notifications from other role instances.
    4. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the CreateServiceHost method to host the internal endpoint. Note that you can use SecurityMode.None because the internal endpoint is private to the instances of the service; this isolation is provided by the platform.
    5. Edit the markup of the service (right-click the .svc file and select "View markup") to set the new factory as the factory to be used to create the service.
    6. Now you can notify changes to other instances, along the lines of the sketch below (the original post's code screenshot is not reproduced here).
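
    As a stand-in for the post's missing code, here is a minimal C# sketch of steps 4 and 6. The endpoint name InternalHttpEndpoint comes from the post; the contract name, one-way operation, and .svc path are illustrative assumptions:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using Microsoft.WindowsAzure.ServiceRuntime;

        [ServiceContract]
        public interface INotificationService
        {
            [OperationContract(IsOneWay = true)]
            void NotifyChange(string changedKey);
        }

        // Step 4: host the service on the internal endpoint assigned by the fabric.
        public class InternalEndpointServiceHostFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                var endpoint = RoleEnvironment.CurrentRoleInstance
                    .InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint;
                var uri = new Uri(string.Format("http://{0}/NotificationServices.svc", endpoint));
                var host = new ServiceHost(serviceType, uri);
                // SecurityMode.None is acceptable here: internal endpoints are
                // reachable only by instances of this deployment.
                host.AddServiceEndpoint(typeof(INotificationService),
                    new BasicHttpBinding(BasicHttpSecurityMode.None), uri);
                return host;
            }
        }

        // Step 6: notify every other instance of the current role.
        public static class ChangeNotifier
        {
            public static void NotifyAllInstances(string changedKey)
            {
                foreach (var instance in RoleEnvironment.CurrentRoleInstance.Role.Instances)
                {
                    if (instance.Id == RoleEnvironment.CurrentRoleInstance.Id)
                        continue; // skip self
                    var endpoint = instance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint;
                    var address = new EndpointAddress(
                        string.Format("http://{0}/NotificationServices.svc", endpoint));
                    var factory = new ChannelFactory<INotificationService>(
                        new BasicHttpBinding(BasicHttpSecurityMode.None), address);
                    factory.CreateChannel().NotifyChange(changedKey);
                    factory.Close();
                }
            }
        }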

    Read the article

  • Secondment promotion promises

    - by user75460
    I'm a Java developer at a large FTSE 30 company. My line manager approached me and asked if I'd like to be the team's lead developer. I was keen to accept. Initially he said I'd be acting up for 3 months, then changed his tune and said I would be doing a 6-month secondment. During this time, he has got himself promoted and I have a new line manager. I have been very successful during this secondment and reviews have been overwhelmingly positive, both from my former line manager and my current line manager. However, six months on, no lead role has been created in the organization, and a new director has re-organised the structure of the team: two senior roles (senior Android and senior iOS) are going to be created. I feel a bit put out that my secondment has amounted to nothing. I could have just done nothing and then applied for the senior role 6 months later (roles which I feel aren't as marketable as lead developer). During my secondment I have basically become TA, senior developer, line manager and general go-to guy for all things (across Android and iOS). What do you think I should do, and has my company abused its position? I feel they have offered a secondment to a role that they never really planned to create. During this time I have received no financial benefit for doing a more senior role.

    Read the article

  • Windows Azure: Announcing release of Windows Azure SDK 2.2 (with lots of goodies)

    - by ScottGu
    Earlier today I blogged about a big update we made to Windows Azure, and some of the great new features it provides. Today I'm also excited to announce the release of the Windows Azure SDK 2.2. Today's SDK release adds even more great features, including:

    - Visual Studio 2013 Support
    - Integrated Windows Azure Sign-In support within Visual Studio
    - Remote Debugging Cloud Services with Visual Studio
    - Firewall Management support within Visual Studio for SQL Databases
    - Visual Studio 2013 RTM VM Images for MSDN Subscribers
    - Windows Azure Management Libraries for .NET
    - Updated Windows Azure PowerShell Cmdlets and Script Center

    The post below has more details on what's available in today's Windows Azure SDK 2.2 release. Also head over to Channel 9 to see the new episode of the Visual Studio Toolbox show that will be available shortly, which highlights these features in a video demonstration.

    Visual Studio 2013 Support

    Version 2.2 of the Windows Azure SDK is the first official version of the SDK to support the final RTM release of Visual Studio 2013. If you installed the 2.1 SDK with the Preview of Visual Studio 2013, we recommend that you upgrade your projects to SDK 2.2. SDK 2.2 also works side by side with the SDK 2.0 and SDK 2.1 releases on Visual Studio 2012.

    Integrated Windows Azure Sign-In within Visual Studio

    Integrated Windows Azure sign-in support within Visual Studio is one of the big improvements added with this Windows Azure SDK release. Integrated sign-in support enables developers to develop/test/manage Windows Azure resources within Visual Studio without having to download or use management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer inside Visual Studio and choose the "Connect to Windows Azure" context menu option to connect to Windows Azure. Doing this will prompt you to enter the email address of the account you wish to sign in with. You can use either a Microsoft Account (e.g. Windows Live ID) or an organizational account (e.g. Active Directory) as the email; the dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in, the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer, and you can start using them.

    With this new integrated sign-in experience you are now able to publish web apps, deploy VMs and cloud services, use Windows Azure diagnostics, and fully interact with your Windows Azure services within Visual Studio without the need for a management certificate. All of the authentication is handled using the Windows Azure Active Directory associated with your Windows Azure account (details on this can be found in my earlier blog post). Integrating authentication this way end-to-end across the Service Management APIs + dev tools + Management Portal + PowerShell automation scripts enables a much more secure and flexible security model within Windows Azure, and makes it much more convenient to securely manage multiple developers + administrators working on a project. It also allows organizations and enterprises to use the same authentication model in the cloud that they use for their developers on-premises, and it ensures that employees who leave an organization immediately lose access to their company's cloud-based resources once their Active Directory account is suspended.
    Filtering/Subscription Management

    Once you log in within Visual Studio, you can filter which Windows Azure subscriptions/regions are visible within the Server Explorer by right-clicking and using the "Filter Services" context menu within the Server Explorer. You can also use the "Manage Subscriptions" context menu to manage your Windows Azure subscriptions. Bringing up the "Manage Subscriptions" dialog allows you to see which accounts you are currently using, as well as which subscriptions are within them. The "Certificates" tab allows you to continue to import and use management certificates to manage Windows Azure resources as well. We have not removed any functionality with today's update: all of the existing scenarios that previously supported management certificates within Visual Studio continue to work just fine. The new integrated sign-in support provided with today's release is purely additive.

    Note: the SQL Database node and the Mobile Service node in Server Explorer do not support integrated sign-in at this time. Therefore, you will only see databases and mobile services under those nodes if you have a management certificate to authorize access to them. We will enable them with integrated sign-in in a future update.

    Remote Debugging Cloud Resources within Visual Studio

    Today's Windows Azure SDK 2.2 release adds support for remote debugging many types of Windows Azure resources. With live, remote debugging support from within Visual Studio, you now have more visibility than ever before into how your code is operating live in Windows Azure. Let's walk through how to enable remote debugging for a Cloud Service.

    Remote Debugging of Cloud Services

    To enable remote debugging for your cloud service, select Debug as the build configuration on the Common Settings tab of your Cloud Service's publish dialog wizard. Then click the Advanced Settings tab and check the Enable Remote Debugging for all roles checkbox. Once your cloud service is published and running live in the cloud, simply set a breakpoint in your local source code. Then use Visual Studio's Server Explorer to select the Cloud Service instance deployed in the cloud, and use the Attach Debugger context menu on the role or on a specific VM instance of it. Once the debugger attaches to the Cloud Service and a breakpoint is hit, you'll be able to use the rich debugging capabilities of Visual Studio to debug the cloud instance remotely, in real time, and see exactly how your app is running in the cloud.

    Today's remote debugging support is super powerful, and makes it much easier to develop and test applications for the cloud. Support for remote debugging Cloud Services is available as of today, and we'll also enable support for remote debugging Web Sites shortly.

    Firewall Management Support with SQL Databases

    By default we enable a security firewall around SQL Databases hosted within Windows Azure. This ensures that only your application (or IP addresses you approve) can connect to them, and helps make your infrastructure secure by default. This is great for protection at runtime, but can sometimes be a pain at development time (since by default you can't connect to or manage the database remotely within Visual Studio if the security firewall blocks your instance of VS from connecting to it). One of the cool features we've added with today's release is support that makes it easy to enable and configure the security firewall directly within Visual Studio.
    Now with the SDK 2.2 release, when you try to connect to a SQL Database using the Visual Studio Server Explorer and a firewall rule prevents access to the database from your machine, you will be prompted to add a firewall rule to enable access from your local IP address. You can simply click Add Firewall Rule and a new rule will be automatically added for you. In some cases, the logic to detect your local IP may not be sufficient (for example, you are behind a corporate firewall that uses a range of IP addresses) and you may need to set up a firewall rule for a range of IP addresses in order to gain access. The new Add Firewall Rule dialog also makes this easy to do. Once connected, you'll be able to manage your SQL Database directly within the Visual Studio Server Explorer. This makes it much easier to work with databases in the cloud.

    Visual Studio 2013 RTM Virtual Machine Images Available for MSDN Subscribers

    Last week we released the General Availability release of Visual Studio 2013 to the web. This is an awesome release with a ton of new features. With today's Windows Azure update we now have a set of pre-configured VM images of VS 2013 available within the Windows Azure Management Portal for use by MSDN customers. This enables you to create a VM in the cloud with VS 2013 pre-installed on it with only a few clicks. Windows Azure now provides the fastest and easiest way to get started doing development with Visual Studio 2013.

    Windows Azure Management Libraries for .NET (Preview)

    Having the ability to automate the creation, deployment, and tear-down of resources is a key requirement for applications running in the cloud. It also helps immensely when running dev/test scenarios and coded UI tests against pre-production environments. Today we are releasing a preview of a new set of Windows Azure Management Libraries for .NET. These new libraries make it easy to automate tasks using any .NET language (e.g. C#, VB, F#, etc). Previously this automation capability was only available through the Windows Azure PowerShell cmdlets or to developers who were willing to write their own wrappers for the Windows Azure Service Management REST API.

    Modern .NET Developer Experience

    We've worked to design easy-to-understand .NET APIs that still map well to the underlying REST endpoints, making sure to use and expose the modern .NET functionality that developers expect today:

    - Portable Class Library (PCL) support targeting applications built for any .NET platform (no platform restriction)
    - Shipped as a set of focused NuGet packages with minimal dependencies to simplify versioning
    - Support for async/await task-based asynchrony (with easy sync overloads)
    - Shared infrastructure for common error handling, tracing, configuration, HTTP pipeline manipulation, etc.
    - Factored for easy testability and mocking
    - Built on top of popular libraries like HttpClient and Json.NET

    Below is a list of a few of the management client classes that are shipping with today's initial preview release, along with the assets whose operations they support (and potentially more):

    - ManagementClient: locations, credentials, subscriptions, certificates
    - ComputeManagementClient: hosted services, deployments, virtual machines, virtual machine images and disks
    - StorageManagementClient: storage accounts
    - WebSiteManagementClient: web sites, web site publish profiles, usage metrics, repositories
    - VirtualNetworkManagementClient: networks, gateways

    Automating Creating a Virtual Machine using .NET

    Let's walk through an example of how we can use the new Windows Azure Management Libraries for .NET to fully automate creating a Virtual Machine. I'm deliberately showing a scenario with a lot of custom options configured, including VHD image gallery enumeration, attaching data drives, and network endpoints + firewall rules setup, to show off the full power and richness of what the new library provides (a code sketch follows at the end of this section). We begin with some code that demonstrates how to enumerate through the built-in Windows images within the standard Windows Azure VM Gallery: we search for the first VM image that has the word "Windows" in it and use that as our base image to build the VM from. We then create a cloud service container in the West US region to host it within. We can then customize some options on it, such as setting up a computer name, admin username/password, and hostname; we also open up a remote desktop (RDP) endpoint through its security firewall. We then specify the VHD host and data drives that we want to mount on the Virtual Machine, and specify the size of the VM we want to run it in. Once everything has been set up, the call to create the virtual machine is executed asynchronously. In a few minutes we then have a completely deployed VM running on Windows Azure with all of the settings (hard drives, VM size, machine name, username/password, network endpoints + firewall settings) fully configured and ready for us to use.

    Preview Availability via NuGet

    The Windows Azure Management Libraries for .NET are now available via NuGet. Because they are still in preview form, you'll need to add the -IncludePrerelease switch when you go to retrieve the packages. You can also install them within your .NET projects by right-clicking on the VS Solution Explorer and using the Manage NuGet Packages context menu command. Make sure to select the "Include Prerelease" drop-down for them to show up, and then you can install the specific management libraries you need for your particular scenarios.

    Open Source License

    The new Windows Azure Management Libraries for .NET make it super easy to automate management operations within Windows Azure, whether they are for Virtual Machines, Cloud Services, Storage Accounts, Web Sites, and more. Like the rest of the Windows Azure SDK, we are releasing the source code under an open source (Apache 2) license; it is hosted at https://github.com/WindowsAzure/azure-sdk-for-net/tree/master/libraries if you wish to contribute.
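
    As promised above, here is a rough C# sketch of the gallery-enumeration step. The client class name ComputeManagementClient comes from the post's table; the credential class and the image-list operation name are assumptions based on the shape of the preview library, so treat this as illustrative rather than exact:

        using System;
        using System.Linq;
        using System.Security.Cryptography.X509Certificates;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.Management.Compute;

        class GalleryImageDemo
        {
            static void Main()
            {
                // Pair a subscription ID with a management certificate
                // (both values here are placeholders).
                var credentials = new CertificateCloudCredentials(
                    "your-subscription-id",
                    new X509Certificate2(@"C:\secrets\management.pfx", "pfx-password"));

                using (var compute = new ComputeManagementClient(credentials))
                {
                    // Enumerate the platform VM image gallery, as in the post's
                    // walkthrough, and pick the first image mentioning "Windows".
                    // (Operation name is an assumption from the preview API.)
                    var image = compute.VirtualMachineOSImages.List()
                        .FirstOrDefault(i => i.Label != null && i.Label.Contains("Windows"));

                    Console.WriteLine(image != null
                        ? "Base image: " + image.Name
                        : "No Windows image found in the gallery.");
                }
            }
        }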
    PowerShell Enhancements and our New Script Center

    Today we are also shipping Windows Azure PowerShell 0.7.0 (which is a separate download). You can find the full change log here. Here are some of the improvements provided with it:

    - Windows Azure Active Directory authentication support
    - A Script Center providing many sample scripts to automate common tasks on Windows Azure
    - New cmdlets for Media Services and SQL Database

    Script Center

    Windows Azure enables you to script and automate a lot of tasks using PowerShell. People often ask for more pre-built samples of common scenarios so that they can use them to learn and tweak/customize. With this in mind, we are excited to introduce a new Script Center that we are launching for Windows Azure. You can learn how to get started scripting with Windows Azure in a getting-started article, and you can then find many sample scripts across different solutions, including infrastructure, data management, web, and more. All of the sample scripts are hosted on TechNet with links from the Windows Azure Script Center. Each script is complete with good code comments, detailed descriptions, and examples of usage.

    Summary

    Visual Studio 2013 and the Windows Azure SDK 2.2 make it easier than ever to get started developing rich cloud applications. Along with the Windows Azure Developer Center's growing set of .NET developer resources to guide your development efforts, today's Windows Azure SDK 2.2 release should make your development experience more enjoyable and efficient. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • log4j floods my console

    - by srikanth VM
    This is the log4j.properties that I have in my app:

        log4j.rootLogger=B C
        log4j.logger.A=INFO, A1
        log4j.debug=false
        log4j.appender.A1=org.apache.log4j.ConsoleAppender
        log4j.appender.A1.layout=org.apache.log4j.PatternLayout
        log4j.appender.A1.layout.ConversionPattern=%d [%t] %-5p %C - %m%n
        log4j.logger.B=INFO, A2
        log4j.debug=false
        log4j.appender.A2=org.apache.log4j.FileAppender
        log4j.appender.A2.file=PRIME-log.txt
        log4j.appender.A2.layout=org.apache.log4j.PatternLayout
        log4j.appender.A2.layout.ConversionPattern=%d [%t] %-5p %C - %m%n
        log4j.logger.C=INFO, A3
        log4j.appender.A3=org.apache.log4j.FileAppender
        log4j.appender.A3.file=employee_pass_regeneration-log.txt
        log4j.appender.A3.layout=org.apache.log4j.PatternLayout
        log4j.appender.A3.layout.ConversionPattern=%d [%t] %-5p %C - %m%n

    I only want the file appenders, so those are all I use, but somehow my console is always flooded with DEBUG messages that I never enabled:

        8704 [http-8080-2] DEBUG org.springframework.web.servlet.view.JstlView - Rendering view with name 'passIndex' with model null and static attributes {}

    I guess these are all framework messages, but with these debug messages it's really hard to debug; I cannot even find my own sysouts. I tried log4j.debug=false but I still get these messages.
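
    A hedged observation, not a confirmed answer: log4j.rootLogger=B C gives the root logger neither a valid level nor a valid appender (log4j falls back to DEBUG when it cannot parse a level), so any logger without explicit configuration, such as org.springframework.*, is enabled at DEBUG; the console output itself may also come from a second log4j.properties elsewhere on the classpath or from the container. Note too that log4j.debug only toggles log4j's own internal diagnostics, not application log levels. A configuration along these lines, assuming Spring is the noisy package, would keep file-only output:

        # Sketch: give the root logger an explicit level and a file appender only.
        log4j.rootLogger=INFO, A2

        # Raise the threshold for chatty framework packages (assumed culprit).
        log4j.logger.org.springframework=WARN

        log4j.appender.A2=org.apache.log4j.FileAppender
        log4j.appender.A2.File=PRIME-log.txt
        log4j.appender.A2.layout=org.apache.log4j.PatternLayout
        log4j.appender.A2.layout.ConversionPattern=%d [%t] %-5p %C - %m%n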

    Read the article

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit's end with file server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian knot of a technology that seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a logical solution.

    My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will be acting as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file system VM. Please note, these are each the secondary nodes of a cluster; the main nodes will be on the most powerful first machine.

    My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-jeebies, too bad. It has a 6TB (usable) RAID-10 array, and will (in the end) hold the primary nodes of both aforementioned clusters, but is right now holding all VMs. This is, right now: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually, I will be adding additional VMs to handle Exchange, SharePoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, "wait, what about a SAN or a NAS for the file servers?", well, too bad. What exists on the main machine is what I have to deal with.

    I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM host of the main machine, and attached one to each file server VM. The end result is that I want my FS01 to sit on the heavy lifter, along with its iSCSI "drive", and FS02 will sit on the secondary machine with its own iSCSI "drive" there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. As such, the clustered FS will utterly duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or the FS VM) goes toes-up, the other has a full copy of the data on its own iSCSI drive.

    My problem occurs when I try to apply the file server role within the Failover Cluster Manager. Actually, it is even before that: it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the entire process fails for a random disk of the pair. That is, one will come online, but the other will remain offline because it does not have the correct "owner node". I mean, really -- WTF? Of course it doesn't have the right owner node, both drives are showing the same node name!! I cannot seem to have one drive show up with one node name as owner, and the other drive show up with the other node name as owner. And because both drives are not "online", I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place!
I’ve got more to add, but my work is closing for the day and I have to wrap things up. I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, the storage on each machine, but a transparent failover in case one physical machine fails. Essentially, a failover FS that doesn’t care which machine fails -- the storage contents are replicated equally on each machine. Am I even heading in the right direction?
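
    (Editor's aside, not part of the original question: failover clustering only treats a disk as cluster-eligible if every node can see it, which can be checked from PowerShell on any node. This is a generic diagnostic sketch, not a confirmed fix for the scenario above.)

        # FailoverClusters module, Windows Server 2012 and later:
        Import-Module FailoverClusters

        # Disks the cluster considers eligible shared storage; a disk visible
        # to only one node will not appear here, matching the symptom above.
        Get-ClusterAvailableDisk

        # Add every eligible disk to the cluster's storage.
        Get-ClusterAvailableDisk | Add-ClusterDisk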

    Read the article

  • Can't update scala on Gentoo

    - by xhochy
    As I wanted to test Scala 2.9.2 on my Gentoo system, I tried updating the package but ended up with this error. I can't figure out where the problem may be:

        Calculating dependencies ...... done!
        >>> Verifying ebuild manifests
        >>> Jobs: 0 of 1 complete, 1 running            Load avg: 0.23, 0.16, 0.20
        >>> Emerging (1 of 1) dev-lang/scala-2.9.2
        >>> Failed to emerge dev-lang/scala-2.9.2, Log file:
        >>>  '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'
        >>> Jobs: 0 of 1 complete, 1 running, 1 failed  Load avg: 0.23, 0.16, 0.20
        >>> Jobs: 0 of 1 complete, 1 failed             Load avg: 0.23, 0.16, 0.20
         * Package:    dev-lang/scala-2.9.2
         * Repository: gentoo
         * Maintainer: [email protected]
         * USE:        amd64 elibc_glibc kernel_linux multilib userland_GNU
         * FEATURES:   sandbox
        !!! ERROR: Couldn't find suitable VM. Possible invalid dependency string.
        Due to jdk-with-com-sun requiring a target of 1.7 but the virtual machines
        constrained by virtual/jdk-1.6 and/or this package requiring virtual(s) jdk-with-com-sun
         * Unable to determine VM for building from dependencies:
        NV_DEPEND: virtual/jdk:1.6 java-virtuals/jdk-with-com-sun !binary? ( dev-java/ant-contrib:0 ) app-arch/xz-utils >=dev-java/java-config-2.1.9-r1 source? ( app-arch/zip ) >=dev-java/ant-core-1.7.0 dev-java/ant-nodeps >=dev-java/javatoolkit-0.3.0-r2 >=dev-lang/python-2.4
         * ERROR: dev-lang/scala-2.9.2 failed (setup phase):
         *   Failed to determine VM for building.
         *
         * Call stack:
         *   ebuild.sh, line 93:            Called pkg_setup
         *   scala-2.9.2.ebuild, line 43:   Called java-pkg-2_pkg_setup
         *   java-pkg-2.eclass, line 53:    Called java-pkg_init
         *   java-utils-2.eclass, line 2187: Called java-pkg_switch-vm
         *   java-utils-2.eclass, line 2674: Called die
         * The specific snippet of code:
         *   die "Failed to determine VM for building."
         *
         * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`,
         * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`.
        !!! When you file a bug report, please include the following information:
        GENTOO_VM= CLASSPATH="" JAVA_HOME="" JAVACFLAGS="" COMPILER=""
        and of course, the output of emerge --info
         * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'.
         * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'.
         * Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2'
         * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources'
         * Messages for package dev-lang/scala-2.9.2: (the same error block as above is repeated here)

    The following eix output may help:

        % eix java-virtuals/jdk-with-com-sun
        [I] java-virtuals/jdk-with-com-sun
             Available versions:  20111111 {{ELIBC="FreeBSD"}}
             Installed versions:  20111111(16:08:51 18/04/12)(ELIBC="-FreeBSD")
             Homepage:            http://www.gentoo.org
             Description:         Virtual ebuilds that require internal com.sun classes from a JDK

    Both virtual JDKs 1.6 and 1.7 are installed:

        % eix virtual/jdk
        [I] virtual/jdk
             Available versions:  (1.4) ~1.4.2-r1[1] (1.5) 1.5.0 ~1.5.0-r3[1] (1.6) 1.6.0 1.6.0-r1 (1.7) (~)1.7.0
             Installed versions:  1.6.0-r1(1.6)(23:22:48 10/11/12) 1.7.0(1.7)(23:21:09 10/11/12)
             Description:         Virtual for JDK

        [1] "java-overlay" /var/lib/layman/java-overlay
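
    (Editor's aside, not from the post: when a Gentoo ebuild cannot find a suitable build VM, the usual first step is to check which JDKs java-config knows about and which one is the system VM. A sketch; the VM number/name depends on your system:)

        # List the VMs Portage/java-config can see, and the current system VM
        java-config --list-available-vms
        eselect java-vm list

        # Switch the system/build VM to the installed 1.7 JDK
        # (replace <N> with its number from the list above)
        eselect java-vm set system <N>

        # Retry the build
        emerge --oneshot dev-lang/scala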

    Read the article

  • Git to svn: Adding commit date to log messages

    - by Arnauld VM
    What should I do to have the author (or committer) name/date added to the log message when "dcommitting" to svn? For example, if the log message in Git is:

        This is a nice modif

    I'd like the message in svn to be something like:

        This is a nice modif
        -----
        Author:    John Doo <[email protected]> 2010-06-10 12:38:22
        Committer: Nice Guy <[email protected]> 2010-06-10 14:05:42

    (Note that I'm mainly interested in the date, since I have already mapped svn users in .svn-authors.) Is there any simple way? Is a hook needed? Any other suggestion? (See also: http://article.gmane.org/gmane.comp.version-control.git/148861) Thank you in advance. Yours faithfully, -- Arnauld Van Muysewinkel
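
    (Editor's aside, not from the post: as far as I know git-svn has no built-in option for this, but one possible approach is to rewrite the commit messages just before running git svn dcommit, using the per-commit environment variables that git filter-branch exports. The trunk..HEAD range is an assumption; use whatever range you are about to dcommit:)

        # Append author/committer name and date to every commit message
        # about to be dcommitted. Note: filter-branch rewrites history,
        # and the dates appear in Git's raw "@<unixtime> <tz>" format.
        git filter-branch --msg-filter '
            cat
            echo "-----"
            echo "Author:    $GIT_AUTHOR_NAME <$GIT_AUTHOR_EMAIL> $GIT_AUTHOR_DATE"
            echo "Committer: $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> $GIT_COMMITTER_DATE"
        ' trunk..HEAD

        git svn dcommit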

    Read the article

  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally published October 2012.

    One frequent question is "what type of domain should I use to run applications?" There used to be a simple answer, "run applications in guest domains in almost all cases", but now there are more things to consider. Enhancements to Oracle VM Server for SPARC, and the introduction of systems like the current SPARC servers, including the T4 and T5 systems, the Oracle SuperCluster T5-8 and the Oracle SuperCluster M6-32, provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity, and memory sizes are much larger now, and far more demanding applications are now being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases", meaning "use virtual I/O rather than physical I/O", unless there is a specific reason to use the other domain types. The sections below discuss the criteria for choosing between domain types.

    Review: division of labor and types of domain

    Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with its own role:

    Control domain - the management control point for the server; it runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on a power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason not to leverage it for virtual I/O services. There is one control domain per T-series system, and one per Physical Domain (PDom) on an M5-32 or M6-32 system. M5 and M6 systems can be physically domained, with logical domains within the physical ones.

    I/O domain - a domain that has been assigned physical I/O devices. The devices may be:
    - one or more PCIe root complexes (in which case the domain is also called a root complex domain); the domain has native access to all the devices on the assigned PCIe buses, and the devices can be any device type supported by Solaris on the hardware platform;
    - an SR-IOV (Single Root I/O Virtualization) function. SR-IOV lets a physical device (also called a physical function, or PF) be subdivided into multiple virtual functions (VFs) which can be individually assigned directly to domains. SR-IOV devices currently can be Ethernet or InfiniBand devices;
    - direct I/O ownership of one or more PCI devices residing in a PCIe bus slot, giving the domain direct access to the individual devices.
    An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices.

    Service domain - a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. It usually is an I/O domain as well, in order for it to have devices to virtualize and serve out.
    Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.

    Device considerations

    Consider the following when choosing between virtual devices and physical devices:

    - Virtual devices provide the best flexibility: they can be dynamically added to and removed from a running domain, and you can have a large number of them, up to a per-domain device limit.
    - Virtual devices are compatible with live migration: domains that exclusively have virtual devices can be live migrated between servers supporting domains.

    On the other hand:

    - Physical devices provide the best performance; in fact, native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially with virtual network devices that can now saturate 10GbE links, but physical devices are still faster.
    - Physical I/O devices do not add load to service domains: all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided sufficient CPU and memory capacity.
    - Physical I/O devices can be other than network and disk: we virtualize network, disk, and serial console, but physical devices can be any of the wide range of attachable certified devices, including things like tape and CDROM/DVD devices.

    In some cases the lines are now blurred. Virtual devices have better performance than previously: starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance. There is more flexibility with physical devices than before: SR-IOV devices can now be dynamically reconfigured on domains. Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O, and you can have the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just like with virtual devices.

    Typical deployment

    A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain, in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain, so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the one I/O and service domain, and some number of guest domains using virtual I/O.

    In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure. Guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or I/O path or device does not result in an application outage. This also permits "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.
    Changing application deployment patterns

    The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously: in those engineered systems, I/O domains are used for high-performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which makes it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCIe root complexes, and there are increasingly more I/O domains that are not service domains; they use their I/O connectivity for the performance of their own applications.

    However, there are some limitations and considerations. At this time, a domain using physical I/O cannot be live migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC.

    To recap:

    - Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain. (A minimal example of setting one up follows at the end of this post.)
    - I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses (for example, 16 buses on the T5-8), making it possible to configure multiple I/O domains, each owning its own bus.
    - Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so interruption of service in one service domain does not cause an application outage.
    - The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception: it's not a general-purpose environment, it's an engineered system with specifically configured applications and optimization for optimal performance.

    These are recommended "best practices" based on conversations with a number of Oracle architects.
    Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.

    Summary

    Higher-capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server means that the default choice should still be guest domains with virtual I/O.
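
    As promised above, here is a minimal sketch of creating a guest domain with purely virtual I/O, using the ldm command from the control domain. The virtual switch and disk server names (primary-vsw0, primary-vds0), the backend device path, and the CPU/memory sizes are placeholders, and exact syntax varies by release:

        # From the control domain: define a guest with CPU and memory.
        ldm add-domain guest1
        ldm set-vcpu 8 guest1
        ldm set-memory 16G guest1

        # Virtual network device, served by the service domain's virtual switch.
        ldm add-vnet vnet0 primary-vsw0 guest1

        # Export a backend from the service domain's virtual disk server,
        # then attach it to the guest as a virtual disk.
        ldm add-vdsdev /dev/dsk/c0t1d0s2 vol1@primary-vds0
        ldm add-vdisk vdisk0 vol1@primary-vds0 guest1

        # Bind resources and start the domain.
        ldm bind-domain guest1
        ldm start-domain guest1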

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premise physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server.

    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:

    1. Perform an inventory
    2. View the Windows Azure VM Readiness results and report
    3. Collect performance data for determining VM sizing
    4. View the Windows Azure Capacity results and report

    Perform an inventory:

    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard.

    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.

    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of discovery methods:

    - Use Active Directory Domain Services: this method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
    - Windows networking protocols: this method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0-based domains. If the computers on the network are not joined to an Active Directory domain, use only this option to find computers.
    - System Center Configuration Manager (SCCM): this method enables you to inventory computers managed by System Center Configuration Manager. You need to provide credentials to the SCCM server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
    - Scan an IP address range: this method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: this option can perform poorly if many IP addresses aren't being used within the range.
    - Manually enter computer names and credentials: use this method if you want to inventory a small number of specific computers.
    - Import computer names from a file: using this method, you can create a text file with a list of computer names that will be inventoried.
    4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it needs to be a local administrator. I have entered my domain account, which is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect and inventory computers in your environment, you have to configure your machines for inventory through WMI and also allow your firewall to enable remote access through WMI (a sketch of this follows below). The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.

    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next.

    6. On the Enter Computers Manually page, click Create to pull up a dialog to enter one or more computer names.

    7. On the Summary page, confirm your settings and then click Finish. After clicking Finish, the inventory process will start.

    Windows Azure Readiness results and report

    After the inventory has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the operating system and SQL Server assessment, and provides a recommendation on how to resolve the issue if a component is not supported.

    Collect performance data

    Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.

    Windows Azure Capacity results and report

    After the performance metrics are collected, the Azure VM Capacity tile will display the number of virtual machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the suggested virtual machine sizes.

    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Useful references:

    - Windows Azure Homepage
    - How-to guides for Windows Azure Virtual Machines
    - Provisioning a SQL Server Virtual Machine on Windows Azure
    - Windows Azure Pricing

    Peter Saddow
    Senior Program Manager - MAP Toolkit Team
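
    The post notes that WMI must be reachable through the firewall on every machine being inventoried, but gives no commands. As a hedged illustration, on recent Windows versions the built-in firewall rule group shown below is the usual way to allow this (run in an elevated command prompt on each target machine):

        rem Allow inbound WMI connections (built-in Windows Firewall rule group).
        netsh advfirewall firewall set rule group="windows management instrumentation (wmi)" new enable=yes

        rem Remote registry access is also required for some assessments.
        sc config RemoteRegistry start= auto
        sc start RemoteRegistry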

    Read the article

  • How to solve EXCEPTION_PRIV_INSTRUCTION exception while running desktop project? [on hold]

    - by Haritha
    While running my desktop project I am getting EXCEPTION_PRIV_INSTRUCTION. How do I solve this? While running, this page comes up:

        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  EXCEPTION_PRIV_INSTRUCTION (0xc0000096) at pc=0x02f5a92b, pid=3012, tid=3104
        #
        # JRE version: 7.0-b147
        # Java VM: Java HotSpot(TM) Client VM (21.0-b17 mixed mode, sharing windows-x86 )
        # Problematic frame:
        # C  0x02f5a92b
        #
        # Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
        #
        # If you would like to submit a bug report, please visit:
        #   http://bugreport.sun.com/bugreport/crash.jsp
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.
        #

        ---------------  T H R E A D  ---------------

        Current thread (0x02f5a800):  JavaThread "LWJGL Application" [_thread_in_native, id=3104, stack(0x076f0000,0x07740000)]

        siginfo: ExceptionCode=0xc0000096

        Registers:
        EAX=0x000df4f0, EBX=0x32afc180, ECX=0x000df4f0, EDX=0x00000020
        ESP=0x0773f768, EBP=0x0773f790, ESI=0x32afc180, EDI=0x02f5a800
        EIP=0x02f5a92b, EFLAGS=0x00010206

        Top of Stack: (sp=0x0773f768)
        0x0773f768:   02bd429c 02bd429c 0773f770 32afc180
        0x0773f778:   0773f7b8 32b022c8 00000000 32afc180
        0x0773f788:   00000000 0773f7a0 0773f7dc 00943187
        0x0773f798:   229ec1c0 00948839 69081736 00000000
        0x0773f7a8:   089b0048 00000000 00000014 00001406
        0x0773f7b8:   00000002 0773f7bc 32afbeb0 0773f7f8
        0x0773f7c8:   32b022c8 00000000 32afbf00 0773f7a0
        0x0773f7d8:   0773f7f0 0773f81c 00943187 69081736

        Instructions: (pc=0x02f5a92b)
        0x02f5a90b:   00 43 00 00 00 00 f0 bc 02 e8 00 e9 22 40 f7 73
        0x02f5a91b:   07 85 a5 94 00 90 f7 73 07 50 cc a0 6d d8 49 c0
        0x02f5a92b:   6d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        0x02f5a93b:   00 00 00 00 00 00 00 00 00 08 80 3d 37 00 00 00

        Register to memory mapping:
        EAX=0x000df4f0 is an unknown value
        EBX=0x32afc180 is an oop {method} - klass: {other class}
        ECX=0x000df4f0 is an unknown value
        EDX=0x00000020 is an unknown value
        ESP=0x0773f768 is pointing into the stack for thread: 0x02f5a800
        EBP=0x0773f790 is pointing into the stack for thread: 0x02f5a800
        ESI=0x32afc180 is an oop {method} - klass: {other class}
        EDI=0x02f5a800 is a thread

        Stack: [0x076f0000,0x07740000], sp=0x0773f768, free space=317k
        Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
        C  0x02f5a92b
        j  org.lwjgl.opengl.GL11.glVertexPointer(IILjava/nio/FloatBuffer;)V+48
        j  com.badlogic.gdx.backends.lwjgl.LwjglGL10.glVertexPointer(IIILjava/nio/Buffer;)V+53
        j  com.badlogic.gdx.graphics.glutils.VertexArray.bind()V+149
        j  com.badlogic.gdx.graphics.Mesh.bind()V+25
        j  com.badlogic.gdx.graphics.Mesh.render(IIIZ)V+32
        j  com.badlogic.gdx.graphics.Mesh.render(III)V+8
        j  com.badlogic.gdx.graphics.g2d.SpriteBatch.flush()V+197
        j  com.badlogic.gdx.graphics.g2d.SpriteBatch.switchTexture(Lcom/badlogic/gdx/graphics/Texture;)V+1
        j  com.badlogic.gdx.graphics.g2d.SpriteBatch.draw(Lcom/badlogic/gdx/graphics/Texture;FFFF)V+33
        j  sevenseas.game.WorldRenderer.drawBob()V+54
        j  sevenseas.game.WorldRenderer.render()V+12
        j  sevenseas.game.GameClass.render(F)V+38
        j  com.badlogic.gdx.Game.render()V+19
        j  com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop()V+642
        j  com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run()V+27
        v  ~StubRoutines::call_stub
        V  [jvm.dll+0x122c7e]
        V  [jvm.dll+0x1c9c0e]
        V  [jvm.dll+0x122e73]
        V  [jvm.dll+0x122ed7]
        V  [jvm.dll+0xccd1f]
        V  [jvm.dll+0x14433f]
        V  [jvm.dll+0x171549]
        C  [msvcr100.dll+0x5c6de]  endthreadex+0x3a
        C  [msvcr100.dll+0x5c788]  endthreadex+0xe4
        C  [kernel32.dll+0xb713]  GetModuleFileNameA+0x1b4

        Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
        j  org.lwjgl.opengl.GL11.nglVertexPointer(IIIJJ)V+0
        j  org.lwjgl.opengl.GL11.glVertexPointer(IILjava/nio/FloatBuffer;)V+48
        j  com.badlogic.gdx.backends.lwjgl.LwjglGL10.glVertexPointer(IIILjava/nio/Buffer;)V+53
        j  com.badlogic.gdx.graphics.glutils.VertexArray.bind()V+149
        j  com.badlogic.gdx.graphics.Mesh.bind()V+25
        j  com.badlogic.gdx.graphics.Mesh.render(IIIZ)V+32
        j  com.badlogic.gdx.graphics.Mesh.render(III)V+8
        j  com.badlogic.gdx.graphics.g2d.SpriteBatch.flush()V+197
        j  com.badlogic.gdx.graphics.g2d.SpriteBatch.switchTexture(Lcom/badlogic/gdx/graphics/Texture;)V+1
        j  com.badlogic.gdx.graphics.g2d.SpriteBatch.draw(Lcom/badlogic/gdx/graphics/Texture;FFFF)V+33
        j  sevenseas.game.WorldRenderer.drawBob()V+54
        j  sevenseas.game.WorldRenderer.render()V+12
        j  sevenseas.game.GameClass.render(F)V+38
        j  com.badlogic.gdx.Game.render()V+19
        j  com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop()V+642
        j  com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run()V+27
        v  ~StubRoutines::call_stub

        ---------------  P R O C E S S  ---------------

        Java Threads: ( => current thread )
          0x003d6c00 JavaThread "DestroyJavaVM" [_thread_blocked, id=3240, stack(0x008c0000,0x00910000)]
        =>0x02f5a800 JavaThread "LWJGL Application" [_thread_in_native, id=3104, stack(0x076f0000,0x07740000)]
          0x02bcf000 JavaThread "Service Thread" daemon [_thread_blocked, id=2612, stack(0x02e00000,0x02e50000)]
          0x02bc1000 JavaThread "C1 CompilerThread0" daemon [_thread_blocked, id=2776, stack(0x02db0000,0x02e00000)]
          0x02bbf400 JavaThread "Attach Listener" daemon [_thread_blocked, id=2448, stack(0x02d60000,0x02db0000)]
          0x02bbe000 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=1764, stack(0x02d10000,0x02d60000)]
          0x02bb8000 JavaThread "Finalizer" daemon [_thread_blocked, id=3864, stack(0x02cc0000,0x02d10000)]
          0x02bb3400 JavaThread "Reference Handler" daemon [_thread_blocked, id=2424, stack(0x02c70000,0x02cc0000)]

        Other Threads:
          0x02bb1800 VMThread [stack: 0x02c20000,0x02c70000] [id=3076]
          0x02bd1000 WatcherThread [stack: 0x02e50000,0x02ea0000] [id=3276]

        VM state: not at safepoint (normal execution)

        VM Mutex/Monitor currently owned by a thread: None

        Heap
         def new generation   total 4928K, used 2571K [0x229c0000, 0x22f10000, 0x27f10000)
          eden space 4416K,  46% used [0x229c0000, 0x22bc2e38, 0x22e10000)
          from space 512K, 100% used [0x22e90000, 0x22f10000, 0x22f10000)
          to   space 512K,   0% used [0x22e10000, 0x22e10000, 0x22e90000)
         tenured generation   total 10944K, used 634K [0x27f10000, 0x289c0000, 0x329c0000)
           the space 10944K,   5% used [0x27f10000, 0x27faea60, 0x27faec00, 0x289c0000)
         compacting perm gen  total 12288K, used 1655K [0x329c0000, 0x335c0000, 0x369c0000)
           the space 12288K,  13% used [0x329c0000, 0x32b5dc58, 0x32b5de00, 0x335c0000)
            ro space 10240K,  42% used [0x369c0000, 0x36dfc660, 0x36dfc800, 0x373c0000)
            rw space 12288K,  53% used [0x373c0000, 0x37a38180, 0x37a38200, 0x37fc0000)

        Code Cache  [0x00940000, 0x009d8000, 0x02940000)
         total_blobs=305 nmethods=80 adapters=158 free_code_cache=32183Kb largest_free_block=32955904

        Dynamic libraries:
        0x00400000 - 0x0042f000   C:\Program Files\Java\jre7\bin\javaw.exe
        0x7c900000 - 0x7c9af000   C:\WINDOWS\system32\ntdll.dll
        0x7c800000 - 0x7c8f6000   C:\WINDOWS\system32\kernel32.dll
        0x77dd0000 - 0x77e6b000   C:\WINDOWS\system32\ADVAPI32.dll
        0x77e70000 - 0x77f02000   C:\WINDOWS\system32\RPCRT4.dll
        0x77fe0000 - 0x77ff1000   C:\WINDOWS\system32\Secur32.dll
        0x7e410000 - 0x7e4a1000   C:\WINDOWS\system32\USER32.dll
        0x77f10000 - 0x77f59000   C:\WINDOWS\system32\GDI32.dll
        0x773d0000 - 0x774d3000   C:\WINDOWS\WinSxS\x86_Microsoft.Windows.Common-Controls_6595b64144ccf1df_6.0.2600.5512_x-ww_35d4ce83\COMCTL32.dll
        0x77c10000 - 0x77c68000   C:\WINDOWS\system32\msvcrt.dll
        0x77f60000 - 0x77fd6000   C:\WINDOWS\system32\SHLWAPI.dll
        0x76390000 - 0x763ad000   C:\WINDOWS\system32\IMM32.DLL
        0x629c0000 - 0x629c9000   C:\WINDOWS\system32\LPK.DLL
        0x74d90000 - 0x74dfb000   C:\WINDOWS\system32\USP10.dll
        0x78aa0000 - 0x78b5e000   C:\Program Files\Java\jre7\bin\msvcr100.dll
        0x6d940000 - 0x6dc61000   C:\Program Files\Java\jre7\bin\client\jvm.dll
        0x71ad0000 - 0x71ad9000   C:\WINDOWS\system32\WSOCK32.dll
        0x71ab0000 - 0x71ac7000   C:\WINDOWS\system32\WS2_32.dll
        0x71aa0000 - 0x71aa8000   C:\WINDOWS\system32\WS2HELP.dll
        0x76b40000 - 0x76b6d000   C:\WINDOWS\system32\WINMM.dll
        0x76bf0000 - 0x76bfb000   C:\WINDOWS\system32\PSAPI.DLL
        0x6d8d0000 - 0x6d8dc000   C:\Program Files\Java\jre7\bin\verify.dll
        0x6d370000 - 0x6d390000   C:\Program Files\Java\jre7\bin\java.dll
        0x6d920000 - 0x6d933000   C:\Program Files\Java\jre7\bin\zip.dll
        0x6cec0000 - 0x6cf42000   C:\Documents and Settings\7stl0225\Local Settings\Temp\libgdx7stl0225\37fe1abc\gdx.dll
        0x10000000 - 0x1004c000   C:\Documents and Settings\7stl0225\Local Settings\Temp\libgdx7stl0225\52d76f2b\lwjgl.dll
        0x5ed00000 - 0x5edcc000   C:\WINDOWS\system32\OPENGL32.dll
        0x68b20000 - 0x68b40000   C:\WINDOWS\system32\GLU32.dll
        0x73760000 - 0x737ab000   C:\WINDOWS\system32\DDRAW.dll
        0x73bc0000 - 0x73bc6000   C:\WINDOWS\system32\DCIMAN32.dll
        0x77c00000 - 0x77c08000   C:\WINDOWS\system32\VERSION.dll
        0x070b0000 - 0x07115000   C:\DOCUME~1\7stl0225\LOCALS~1\Temp\libgdx7stl0225\52d76f2b\OpenAL32.dll
        0x7c9c0000 - 0x7d1d7000   C:\WINDOWS\system32\SHELL32.dll
        0x774e0000 - 0x7761d000   C:\WINDOWS\system32\ole32.dll
        0x5ad70000 - 0x5ada8000   C:\WINDOWS\system32\uxtheme.dll
        0x76fd0000 - 0x7704f000   C:\WINDOWS\system32\CLBCATQ.DLL
        0x77050000 - 0x77115000   C:\WINDOWS\system32\COMRes.dll
        0x77120000 - 0x771ab000   C:\WINDOWS\system32\OLEAUT32.dll
        0x73f10000 - 0x73f6c000   C:\WINDOWS\system32\dsound.dll
        0x76c30000 - 0x76c5e000   C:\WINDOWS\system32\WINTRUST.dll
        0x77a80000 - 0x77b15000   C:\WINDOWS\system32\CRYPT32.dll
        0x77b20000 - 0x77b32000   C:\WINDOWS\system32\MSASN1.dll
        0x76c90000 - 0x76cb8000   C:\WINDOWS\system32\IMAGEHLP.dll
        0x72d20000 - 0x72d29000   C:\WINDOWS\system32\wdmaud.drv
        0x72d10000 - 0x72d18000   C:\WINDOWS\system32\msacm32.drv
        0x77be0000 - 0x77bf5000   C:\WINDOWS\system32\MSACM32.dll
        0x77bd0000 - 0x77bd7000   C:\WINDOWS\system32\midimap.dll
        0x73ee0000 - 0x73ee4000   C:\WINDOWS\system32\KsUser.dll
        0x755c0000 - 0x755ee000   C:\WINDOWS\system32\msctfime.ime
        0x69000000 - 0x691a9000   C:\WINDOWS\system32\sisgl.dll
        0x73b30000 - 0x73b45000   C:\WINDOWS\system32\mscms.dll
        0x73000000 - 0x73026000   C:\WINDOWS\system32\WINSPOOL.DRV
        0x66e90000 - 0x66ed1000   C:\WINDOWS\system32\icm32.dll
        0x07760000 - 0x0778d000   C:\Program Files\WordWeb\WHook.dll
        0x74c80000 - 0x74cac000   C:\WINDOWS\system32\OLEACC.dll
        0x76080000 - 0x760e5000   C:\WINDOWS\system32\MSVCP60.dll

        VM Arguments:
        jvm_args: -Dfile.encoding=Cp1252
        java_command: sevenseas.game.MainDesktop
        Launcher Type: SUN_STANDARD

        Environment Variables:
        PATH=C:/Program Files/Java/jre7/bin/client;C:/Program Files/Java/jre7/bin;C:/Program Files/Java/jre7/lib/i386;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\Java\jdk1.7.0\bin;C:\eclipse;
        USERNAME=7stl0225
        OS=Windows_NT
        PROCESSOR_IDENTIFIER=x86 Family 15 Model 4 Stepping 1, GenuineIntel

        ---------------  S Y S T E M  ---------------

        OS: Windows XP Build 2600 Service Pack 3

        CPU: total 1 (1 cores per cpu, 1 threads per core) family 15 model 4 stepping 1, cmov, cx8, fxsr, mmx, sse, sse2, sse3

        Memory: 4k page, physical 2031088k(939252k free), swap 3969920k(3011396k free)

        vm_info: Java HotSpot(TM) Client VM (21.0-b17) for windows-x86 JRE (1.7.0-b147), built on Jun 27 2011 02:25:52 by "java_re" with unknown MS VC++:1600

        time: Sat Oct 26 12:35:14 2013
        elapsed time: 0 seconds

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premises physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server. MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback are very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following: perform an inventory, view the Windows Azure VM Readiness results and report, collect performance data to determine VM sizing, and view the Windows Azure Capacity results and report.

    Perform an inventory:

    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below:

    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.

    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of discovery methods:

    Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.

    Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0–based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.

    System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.

    Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly if many IP addresses aren't being used within the range.

    Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.

    Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.

    4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it needs to be a local administrator. I have entered my domain account that is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect and inventory computers in your environment, you have to configure your machines for inventory through WMI and also configure your firewall to allow remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.

    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next.

    6. On the Enter Computers Manually page, click Create to pull up a dialog to enter one or more computer names.

    7. On the Summary page, confirm your settings and then click Finish. After clicking Finish the inventory process will start, as shown below:

    Windows Azure Readiness results and report

    After the inventory has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open up the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the Operating System and SQL Server assessment and provides a recommendation on how to resolve it if a component is not supported.

    Collect Performance Data

    Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.

    Windows Azure Capacity results and report

    After the performance metrics are collected, the Azure VM Capacity tile will display the number of Virtual Machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the suggested Virtual Machine sizes.

    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback are very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Useful references:
    Windows Azure Homepage
    How to guides for Windows Azure Virtual Machines
    Provisioning a SQL Server Virtual Machine on Windows Azure
    Windows Azure Pricing

    Peter Saddow, Senior Program Manager – MAP Toolkit Team
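    Since WMI connectivity is the most common stumbling block in step 4, here is a small hedged C# sketch (not part of the MAP Toolkit; the machine name is a placeholder) showing the kind of remote WMI query the toolkit performs. If this runs against a target machine, the inventory should be able to connect too. It requires a reference to System.Management:

        using System;
        using System.Management;

        class WmiProbe
        {
            static void Main()
            {
                // "targetMachine" is a placeholder for a computer you plan to inventory.
                var scope = new ManagementScope(@"\\targetMachine\root\cimv2");
                scope.Connect(); // throws if WMI, the firewall, or permissions block access

                var query = new ObjectQuery("SELECT Caption FROM Win32_OperatingSystem");
                using (var searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject os in searcher.Get())
                        Console.WriteLine(os["Caption"]); // e.g. the remote OS name
                }
            }
        }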

    Read the article

  • C# XDocument Attribute Performance Concerns

    - by Dested
    I have a loaded XDocument from which I need to efficiently grab all elements of a certain name whose attribute equals a certain value. My current code:

    IEnumerable<XElement> vm;
    if (!cacher2.TryGetValue(name, out vm))
    {
        vm = project.Descendants(XName.Get(name));
        cacher2.Add(name, vm);
    }
    XElement[] abdl = vm.Where(a => a.Attribute(attribute).Value == ab).ToArray();

    cacher2 is a Dictionary<string, IEnumerable<XElement>>. The ToArray is so I can evaluate the expression now; I don't think this causes any real speed concerns. The problem is the Where itself. I am searching through anywhere from 1 to 10k items. Any help?
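    One direction worth considering (a sketch of mine, not from the original post): caching the lazy Descendants enumerable means the tree is re-walked on every query, so materializing the elements once and indexing them by attribute value turns each subsequent query into a lookup. This assumes the document is already loaded and read-only:

        using System.Linq;
        using System.Xml.Linq;

        // Build once per (name, attribute) pair and cache; each query is then a lookup,
        // not a fresh scan of up to 10k elements.
        ILookup<string, XElement> index = project
            .Descendants(XName.Get(name))
            .Where(e => e.Attribute(attribute) != null)
            .ToLookup(e => e.Attribute(attribute).Value);

        XElement[] abdl = index[ab].ToArray();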

    Read the article

  • Caspol, VMs, Mapped Drives, VS2010

    - by Simon Woods
    Hi. I have a VM (Win7 32 bit) with VS2010 installed. I have a drive mapped into it from the host machine (64 bit), where I keep some of my VS2010 projects and where I am building them. One of my projects is looking to load an assembly. If I copy that assembly to a local drive, the program runs fine. If I leave it on the mapped drive, then I get an error. The exception is: FileLoadException - Could not load file or assembly 'file:///Z:\BusinessTier\bin\Debug\BusinessTier.dll'. I am unsure whether or not I need to run Caspol. Another post on SO pointed me to a post which indicated that VS2008 SP1+ removed the need for caspol with respect to network drives, but I wondered if I still needed it because I am in a VM. I have tried running the following on the host machine in an attempt to give permissions to VS inside the VM, but to no avail:

    C:\Windows\Microsoft.NET\Framework\v4.0.30128>caspol -m -ag 1.2 -url file://g:\* FullTrust

    where g:\* is the drive being mapped into the VM (as drive z:). What am I missing (apart from understanding!)? Thx Simon
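    Two hedged suggestions, neither verified against this exact setup. First, code-access security policy is evaluated on the machine where the code actually runs, so the caspol command would need to be run inside the VM, against the path as the VM sees it:

        caspol -m -ag 1.2 -url file://Z:/* FullTrust

    Second, since this is VS2010: .NET 4 deprecates CAS policy entirely, and a FileLoadException for an assembly on a network share is often governed by the loadFromRemoteSources switch instead, which can be enabled in the application's config file:

        <configuration>
          <runtime>
            <loadFromRemoteSources enabled="true" />
          </runtime>
        </configuration>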

    Read the article

  • VirtualBox snapshots

    - by CodeMedic
    Here's what happened. I had a snapshot on which I was working from within a Linux VM. A friend requested a clean VM as a clone of mine. So I shut down my running VM and made a copy of Disk1.vdi along with the snapshots ({uuid}.vdi). Then I restarted the VM, merged the snapshots, deleted my home directory and made a tar+bz2 for my friend. Then, after I restored my backups, I am not able to mount my snapshot. The VM seems to boot from the version before the snapshot. I can't seem to find a way to mount my snapshot back. Any idea how to make VirtualBox see the snapshot and mount it?
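    One direction worth trying, as a hedged sketch only (the VM name is hypothetical, and this assumes both the base Disk1.vdi and the {uuid}.vdi differencing file are still registered with VirtualBox): list what VirtualBox knows about the snapshots, then clone the differencing disk into a standalone image that can be attached to a VM:

        VBoxManage snapshot "MyLinuxVM" list
        VBoxManage clonehd "{uuid}.vdi" "recovered.vdi"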

    Read the article

  • Enabling Http caching and compression in IIS 7 for asp.net websites

    - by anil.kasalanati
    Caching – There are two ways to set HTTP caching: 1) the max-age property, or 2) the Expires header.

    Doing the changes via the IIS console –

    1. Select the website for which you want to enable caching and then select HTTP Response Headers in the features tab.
    2. Select the "Expire Web content" setting; by changing the "After" value you generate the max-age property for the Cache-Control header.
    3. Following is the screenshot of the headers.

    Then you can use a tool like Fiddler and see the 304 (Not Modified) response coming from the server.

    Doing it the web.config way – we can add a staticContent section in the system.webServer section:

    <system.webServer>
      <staticContent>
        <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
      </staticContent>
    </system.webServer>

    Compression – By default static compression is enabled on IIS 7.0, but the only thing which falls under that category is CSS, and this is not enough for most websites using lots of JavaScript. If you thought that just enabling dynamic compression would fix this, you are wrong, so please follow these steps.

    On some machines dynamic compression is not enabled; following are the steps to enable it:

    1. Open Server Manager.
    2. Roles > Web Server (IIS).
    3. Role Services (scroll down) > Add Role Services.
    4. Add the desired role (Web Server > Performance > Dynamic Content Compression).
    5. Next, Install, Wait… Done!

    Then enable it:

    1. Open Server Manager.
    2. Roles > Web Server (IIS) > Internet Information Services (IIS) Manager.
    3. Next pane: Sites > Default Web Site > Your Web Site.
    4. Main pane: IIS > Compression.

    Then comes the custom configuration for compressing JavaScript resources. The problem is that compression in IIS 7 works entirely on MIME types, and by default there is a mismatch in the MIME types. Go to C:\Windows\System32\inetsrv\config and open applicationHost.config. The mime map is as follows:

    <mimeMap fileExtension=".js" mimeType="application/javascript" />

    So the corresponding entry in the staticTypes section should be changed to:

    <add mimeType="application/javascript" enabled="true" />

    Doing it the web.config way – we can add the following section in the system.webServer section:

    <system.webServer>
      <urlCompression doDynamicCompression="false" doStaticCompression="true" />
    </system.webServer>

    More information/references –
    http://weblogs.asp.net/owscott/archive/2009/02/22/iis-7-compression-good-bad-how-much.aspx
    http://www.west-wind.com/weblog/posts/98538.aspx
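    Putting the caching and compression settings together, here is a minimal consolidated web.config sketch (unverified on a live server; note that the httpCompression/staticTypes list is locked to applicationHost.config by default, so the MIME type change above still has to be made there):

        <system.webServer>
          <staticContent>
            <!-- One-year max-age for static content -->
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
          </staticContent>
          <!-- Static compression covers .js once its MIME type is listed in staticTypes -->
          <urlCompression doDynamicCompression="false" doStaticCompression="true" />
        </system.webServer>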

    Read the article

  • Scrum for a single programmer?

    - by Rob Perkins
    I'm billed as the "Windows Expert" in my very small company, which consists of myself, a mechanical engineer working in a sales and training role, and the company's president, working in a design, development, and support role. My role is equally as general, but primarily I design and implement whatever programming on our product needs to get done in order for our stuff to run on whichever versions of Windows are current. I just finished watching a high-level overview of the Scrum paradigm, given in a webcast. My question is: Is it worth my time to learn more about this approach to product development, given that my development work items are usually given at a very high level, such as "internationalize and localize the product". If it is, how would you suggest adapting Scrum for the use of just one programmer? What tools, cloud-based or otherwise, would be useful to that end? If it is not, what approach would you suggest for a single programmer to organize his efforts from day to day? (Perhaps the question reduces to that simple question.)

    Read the article

  • Membership in ASP.Net applications - part 4

    - by nikolaosk
    This is the fourth post in a series of posts regarding ASP.Net's built-in membership functionality, providers, and controls. You can read the first one here. You can read the second post here. You can read the third post here. In this post I will show you how to add users programmatically to a role. In the third post we saw how to get the users in a specific role. I will also show you how to delete a user and a role programmatically. 1) Launch Visual Studio 2005, 2008, or 2010. Express editions will work fine....(read more)
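    As a taste of what the post covers, the relevant APIs live in System.Web.Security; here is a minimal sketch, assuming the default membership and role providers are configured (the user and role names are made up):

        using System.Web.Security;

        // Add users to a role.
        Roles.AddUserToRole("mary", "administrators");
        Roles.AddUsersToRole(new[] { "joe", "ann" }, "editors");

        // Delete a user, then delete a role (false = don't throw if it still has members).
        Membership.DeleteUser("joe");
        Roles.DeleteRole("editors", false);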

    Read the article

  • June is going to be a busy month!

    - by Monica Kumar
    Who says things slow down in summer? Well, maybe for school kids, but certainly not for Oracle's Virtualization team! June is turning out to be one of the busiest months for us. We are going to be participating in a number of industry events. If you happen to be at any of these, please stop by the Oracle booth and our session/s. Let's go through a rundown of these events.

    1. 13th Annual Call Center Week, June 4-8, Caesars Palace, Las Vegas. Event website. You're now wondering why we are at this call center show. It's really simple: Oracle's Desktop Virtualization solutions offer the best way for call centers to reliably and securely access enterprise apps using a variety of endpoint devices such as an iPad or a Sun Ray Client. Provisioning new employees becomes a breeze. We'll be jointly showcasing our solution with Oracle's CRM team. Come check us out.

    2. Gartner Infrastructure & Management, June 5-7, Orlando, FL. Event website. Oracle is a Premier sponsor of the Gartner IOM Summit this June 5-7, 2012 in Orlando, FL. Attendees will have the opportunity to meet with Oracle experts in a variety of sessions, including demonstrations during the showcase receptions.

    3. Cloud Expo East. Check out our website for details of our participation. Stop by booth 511 to talk to our Cloud, Virtualization and Big Data experts. In addition, we're delivering a number of sessions at Cloud Expo. The one I want to highlight is the following:

    Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder. Abstract: As virtualization adoption progresses beyond server consolidation, it is also transforming how enterprise applications are deployed and managed in an agile environment. The traditional method of business-critical application deployment, in which administrators contend with an array of unrelated tools and custom scripts to deploy and manage applications, OS and VM instances in a fast-changing cloud computing environment, can no longer scale effectively to achieve the desired response time and efficiency. Oracle VM and Oracle Virtual Assembly Builder allow applications, associated components, deployment metadata, management policies and best practices to be encapsulated into ready-to-run VMs for rapid, repeatable deployment and ease of management. Join us in this Cloud Expo session to see how Oracle VM and Oracle Virtual Assembly Builder allow you to deploy complex multi-tier applications in minutes and enable you to easily onboard existing applications to cloud environments.

    Get your free Cloud Expo pass now! We're offering complimentary VIP Gold Passes. Go to https://www.blueskyz.com/v3/Login.aspx?ClientID=19&EventID=56&sg=177, click "Continue" if you are a New User or log in if you have already created an account. Once there, you can view the Agenda or register for Cloud Expo. To register, fill out the basic business card questions and then enter oracleVIPgold in the Priority Code field to change the price from $2,000 to $0.

    4. CiscoLive 2012, June 10-14, San Diego, CA. Event website. Our Oracle VM and Oracle Linux experts will talk about our joint collaboration with Cisco on UCS. We'll also highlight customer use cases.

    5. Gartner Infrastructure & Operations Management Summit, EMEA, June 11-12, Frankfurt, Germany. Event website. Meet experts from our Virtualization and Linux team in EMEA. Stop by our booth and find out what's new in Oracle VM Server for x86 and Oracle Linux.

    June is going to be busy.

    Read the article

  • Azure Full trust permissions

    - by kaleidoscope
    Under Windows Azure full trust, your role has access to a variety of system resources that are not available under partial trust.

    File System Resources

    A role running in Windows Azure has permissions to read and write to certain file, directory, and volume resources on the server. These permissions are outlined in the following table.

    File system resource                          Permission
    System root directory                         No access
    Subdirectories of the system root directory   No access
    Windows directory                             Read access only
    Machine configuration files                   No access
    Service configuration file                    Read access only
    Local storage resource                        Full access

    Registry Resources

    The following table outlines permissions available to the role when accessing the registry while running in Windows Azure.

    Registry resource      Permission
    HKEY_CLASSES_ROOT      Read access
    HKEY_CURRENT_USER      No access
    HKEY_LOCAL_MACHINE     Read access
    HKEY_USERS             Read access
    HKEY_CURRENT_CONFIG    Read access

    More details can be found at: http://msdn.microsoft.com/en-us/library/dd573363.aspx

    Amit, S
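    For example, the one location with full access is the local storage resource. A minimal C# sketch (assuming a local storage resource named "scratch" has been declared in the service definition; the file name is illustrative):

        using System.IO;
        using Microsoft.WindowsAzure.ServiceRuntime;

        // Resolve the role's local storage resource and write a file into it.
        LocalResource scratch = RoleEnvironment.GetLocalResource("scratch");
        string path = Path.Combine(scratch.RootPath, "notes.txt");
        File.WriteAllText(path, "full trust grants full access here");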

    Read the article

  • Why does tomcat-admin install require adding admin and manager to tomcat-users.xml manually?

    - by J G
    I installed tomcat6 on lucid using apt-get. All working. I installed tomcat-admin. Not working. I amended the /etc/tomcat6/tomcat-users.xml file to uncomment the users and roles (from the default) to be like the following:

    <role rolename="tomcat"/>
    <role rolename="role1"/>
    <user username="tomcat" password="password" roles="tomcat"/>
    <user username="both" password="password" roles="tomcat,role1"/>
    <user username="role1" password="password" roles="role1"/>

    This still didn't work. Then from the following page I added:

    <role rolename="manager"/>
    <user username="admin" password="secret" roles="manager"/>

    Then it worked. Why doesn't this occur as part of the install? (Why isn't this in the Ubuntu manual on Tomcat?)

    Read the article

  • Is there a title for someone who is both a Software Developer and a Business Analyst?

    - by gyin
    I usually see two job titles in the IT industry. My understanding of their most commonly accepted usage (simplified):

    Business Analyst: main role is eliciting the users' needs.
    Software Developer: main role is to design, build, and test a software solution answering those needs.

    I'm wondering: how should we refer to somebody whose role is to do both of the above? Is this a common job title? And is trying to find people with this breadth of skills realistic, or difficult?

    EDIT: I'm specifically interested in a name for Lead Software Developers who have also learned and can apply business analysis techniques.

    Read the article

  • Boost.Program_options fixed number of tokens

    - by kloffy
    Boost.Program_options provides a facility to pass multiple tokens via command line arguments as follows:

    std::vector<int> nums;
    po::options_description desc("Allowed options");
    desc.add_options()
        ("help", "Produce help message.")
        ("nums", po::value< std::vector<int> >(&nums)->multitoken(), "Numbers.")
    ;
    po::variables_map vm;
    po::store(po::parse_command_line(argc, argv, desc), vm);
    po::notify(vm);

    However, what is the preferred way of accepting only a fixed number of arguments? The only solution I could come up with is to manually assign the values:

    int nums[2];
    po::options_description desc("Allowed options");
    desc.add_options()
        ("help", "Produce help message.")
        ("nums", "Numbers.")
    ;
    po::variables_map vm;
    po::store(po::parse_command_line(argc, argv, desc), vm);
    if (vm.count("nums")) {
        // Assign nums
    }

    This feels a bit clumsy. Is there a better solution?
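    One possible answer (a sketch of mine, not a built-in Program_options feature): keep multitoken() from the first snippet, so the parser still does the type conversion, and enforce the fixed count yourself after notify. std::to_string assumes C++11; an ostringstream works otherwise:

        #include <boost/program_options.hpp>
        #include <string>
        #include <vector>

        namespace po = boost::program_options;

        // Throws a program_options error unless the option holds exactly n values.
        void require_exactly(const po::variables_map& vm, const char* name, std::size_t n)
        {
            if (vm.count(name) && vm[name].as< std::vector<int> >().size() != n)
                throw po::error(std::string(name) + " expects exactly "
                                + std::to_string(n) + " values");
        }

        // usage, after po::notify(vm):  require_exactly(vm, "nums", 2);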

    Read the article

  • Refactoring Bloated ViewModel

    - by Holy Christ
    Hi, I am writing a PRISM/MVVM/WPF application. It's a LOB application, so there are a lot of complicated rules. I've noticed the view model is starting to get bloated. There are two main issues. One is that to maintain MVVM, I'm doing a lot of things that feel hacky, like adding a bunch of properties to my VM. The view binds to those properties to keep track of what feels like view-specific information. For example, a boolean in the VM keeping track of the status of a long-running process, so the view can disable some of its controls while the process is working. I've read that this issue could be solved with attached behaviors. I'll look more into that. In the example MVVM apps you see online, this isn't a big deal because they are over-simplified. The other issue is the number of commands in my VM. Right now there are four commands. I'm defining the commands in the VM using Josh Smith's RelayCommand (basically the DelegateCommand in PRISM) so all the business logic lives in the VM. I considered moving each command into a separate unit of work. I'm not sure of the best way to do this. Which patterns are you guys using to keep your VMs clean? I can already feel someone responding with "your view and VM are too complicated, you should break them into many views/VMs". It is certainly not too complicated from a Ux perspective - there are 2 buttons, a combobox, and a listbox. Also, from a logical perspective, it is one cohesive domain. Having said that, I'm very interested in hearing how others are dealing with this type of issue. Thanks for your input.
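    One refactoring direction for the command bloat (a sketch with made-up names, not the only pattern): give each command its own ICommand class with its dependencies injected, so the view model shrinks to wiring them up and the business logic becomes independently testable:

        using System;
        using System.Windows.Input;

        public interface IOrderService { void Save(object order); } // hypothetical service

        public sealed class SaveOrderCommand : ICommand
        {
            private readonly IOrderService _orders;

            public SaveOrderCommand(IOrderService orders) { _orders = orders; }

            public event EventHandler CanExecuteChanged;

            public bool CanExecute(object parameter) { return parameter != null; }

            // All of the "save" business logic lives here, not in the view model.
            public void Execute(object parameter) { _orders.Save(parameter); }
        }

        // In the view model: Save = new SaveOrderCommand(orderService);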

    Read the article

  • Another boost error

    - by user1676605
    With this code I get the enormous error below:

    static void ParseTheCommandLine(int argc, char *argv[])
    {
        int count;
        int seqNumber;

        namespace po = boost::program_options;
        std::string appName = boost::filesystem::basename(argv[0]);

        po::options_description desc("Generic options");
        desc.add_options()
            ("version,v", "print version string")
            ("help", "produce help message")
            ("sequence-number", po::value<int>(&seqNumber)->default_value(0), "sequence number")
            ("pem-file", po::value< vector<string> >(), "pem file")
        ;

        po::positional_options_description p;
        p.add("pem-file", -1);

        po::variables_map vm;
        po::store(po::command_line_parser(argc, argv).options(desc).positional(p).run(), vm);
        po::notify(vm);

        if (vm.count("pem-file"))
        {
            cout << "Pem files are: " << vm["pem-file"].as< vector<string> >() << "\n";
        }
        cout << "Sequence number is " << seqNumber << "\n";
        exit(1);
    }

    The error:

    ../../../FIXMarketDataCommandLineParameters/FIXMarketDataCommandLineParameters.hpp|98|error: no match for 'operator<<' in 'std::operator<< [with _Traits = std::char_traits](((std::basic_ostream &)(& std::cout)), ((const char*)"Pem files are: ")) << ((const boost::program_options::variable_value*)vm.boost::program_options::variables_map::operator[](((const std::string&)(& std::basic_string, std::allocator (((const char*)"pem-file"), ((const std::allocator&)((const std::allocator*)(& std::allocator()))))))))-boost::program_options::variable_value::as with T = std::vector, std::allocator , std::allocator, std::allocator'|
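    The compiler is complaining because the standard library defines no operator<< for std::vector, so the cout line cannot compile. (The original code also checked vm.count("pem file") with a space; the option is registered as "pem-file", so the count check above uses the registered name.) A minimal sketch of one fix, printing the elements explicitly:

        #include <algorithm>
        #include <iostream>
        #include <iterator>
        #include <string>
        #include <vector>

        // Stream each file name followed by a space.
        void print_pem_files(const std::vector<std::string>& files)
        {
            std::copy(files.begin(), files.end(),
                      std::ostream_iterator<std::string>(std::cout, " "));
            std::cout << "\n";
        }

        // usage: if (vm.count("pem-file"))
        //            print_pem_files(vm["pem-file"].as< std::vector<std::string> >());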

    Read the article

< Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >