Search Results

Search found 2185 results on 88 pages for 'uac virtualization'.


  • Can't delete files on Windows 7 (permission)

    - by lajos
    I updated my laptop from XP to Windows 7. There are some leftover XP files on the computer now; when I try deleting them I get an error: "You need permission to perform this action. You require permission from S-1-.... to make changes to this folder." What's weird is that I am logged in with the only user account on this machine and I have administrator privileges. I tried turning UAC off, but I still can't delete the files. How can I force removal of these files?
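
    A possible fix, assuming the leftovers are still owned by the old XP account (run from an elevated command prompt; the path is a placeholder):

        takeown /f "C:\LeftoverXpFolder" /r /d y
        icacls "C:\LeftoverXpFolder" /grant Administrators:F /t
        rd /s /q "C:\LeftoverXpFolder"

    takeown reassigns ownership recursively, icacls then grants Administrators full control, and at that point the delete should no longer be blocked by the orphaned SID.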

  • Enable (or work around) Administrative shares in Windows 8

    - by Brado
    In using Windows 8, I've discovered that the administrative shares are disabled, and there seems to be no easy way to get them re-enabled. Is anyone aware of a workaround or solution? I did not have this issue with Windows 7 after disabling UAC; in Windows 8, however, that no longer works. This is all I could find, and I am not satisfied with the information provided: http://www.computerperformance.co.uk/win8/windows8-administrative-shares.htm http://www.tomsitpro.com/articles/windows_8-file_sharing-windows_administrative_shares,2-195.html
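
    One commonly cited workaround (a sketch, not verified here): remote access to the administrative shares with a local account is blocked by UAC's remote token filtering rather than by the shares themselves being removed, and setting this value from an elevated prompt re-enables it:

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

    A reboot (or at least a logoff) may be needed before the change takes effect.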

  • Windows 8 Permissions

    - by Dane Balia
    Two days ago I completed a standard Windows 8 installation. It was a fresh install, though Windows 7 was on the disk before and was migrated to Windows.old. For some strange reason, I am now struggling to write to my disk from .NET applications: none of my self-written .NET applications can write to the log files they create on startup. I have disabled UAC and given my user Full Control over drive C:, but no luck. I keep getting the error: "A required privilege is not held by the client." I googled and tried some online tutorials, but no luck. Thanks
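
    Whatever the root cause turns out to be, a common way to sidestep ACL trouble entirely is to keep logs out of protected locations such as the install directory. A hedged sketch (not the poster's code; "MyApp" is a placeholder):

        // Writes logs under %ProgramData%\MyApp\logs, which a per-machine
        // installer can pre-create with a permissive ACL; per-user data
        // could use SpecialFolder.LocalApplicationData instead.
        using System;
        using System.IO;

        class LogPath
        {
            static string GetLogFile()
            {
                string dir = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
                    "MyApp", "logs");
                Directory.CreateDirectory(dir);   // no-op if it already exists
                return Path.Combine(dir, "app.log");
            }

            static void Main()
            {
                File.AppendAllText(GetLogFile(), DateTime.Now + " started" + Environment.NewLine);
            }
        }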

  • Seeking easy to use Setup creator with logic and ability to write registry keys

    - by Mawg
    I want to create a Setup to install my application. It has to be able to do the following: ask a question (maybe two radio buttons?) and, depending on the reply, copy one of two DLLs to the Windows directory and invoke regsvr32 to register the DLL; and write some registry keys. I would code my own program to do this, but I don't have enough knowledge of the different versions of Windows (XP, Vista, 7, etc.), 32/64 bit, registry layout, or permissions like UAC. So it seems to me that it would be easier to use a Setup generator which has been around for a while and already handles all of this. I went to my favorite site for free stuff and found this page. However, the programs mentioned there are either too simplistic or have too steep a learning curve for me. Can anyone recommend a Goldilocks solution which does what I mentioned in those two bullet points while taking into account all Windows versions, 32/64 bit, non-admin accounts, etc.?

  • Prevent applications from being run as administrator

    - by Unsigned
    Background: Most installation toolkits can launch, automatically or otherwise, external programs after installation. This often appears in the installer via options such as "Show readme" or "Start program". Issue: The problem is that many of these installers are poorly coded and do not drop permissions appropriately; for example, opening the application's homepage launches the browser with the installer's administrative privileges, or a "High" UAC integrity level. This has the potential to open security holes, since a webpage (and possibly browser add-ons) is now running with elevated permissions. Question: Is there a way to prevent certain applications (such as a web browser) from ever being launched with administrative privileges, i.e., an automatic drop-privilege?
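
    Not a full answer, but one built-in mitigation worth knowing (a sketch; the exact behavior should be verified): runas can launch a program with a restricted token that strips administrative group memberships:

        runas /showtrustlevels
        runas /trustlevel:0x20000 "C:\Program Files\Internet Explorer\iexplore.exe"

    0x20000 is the "Basic User" level reported by /showtrustlevels. It only helps when the launcher opts in, though - it cannot force a badly written installer to drop privileges on its own.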

  • Multiple PCs - Single profile

    - by martixy
    Or: how to create an AD (or similar) environment at home? So... I work at this big IT company (they have an AD environment, of course) and I've seen first hand how nice integration and log-in-from-anywhere (roaming?) profiles can be. I want to create the same setup at home, where I have 3 machines: mostly logging in with the same profile and preserving the settings (things like UAC being off, a few group policies I've customized, things like that). How can I achieve something like this?

  • Windows 7: How to stop/start service from commandline (like services.msc does it)?

    - by john
    I have developed a program in Java that relies on a local SQL Server instance to store its data. On some installations the SQL Server instance is sometimes not running. Users can fix this by manually starting the SQL Server instance (via services.msc). I am thinking about automating this task: the software would check whether the database server is reachable and, if not, try to (re)start it. The problem is that under the same user account the services can be stopped/started via services.msc (without any UAC prompt), but not via a non-elevated command line. The operating system seems to treat services.msc differently:

        c:\>sc start mssql$db1
        [SC] StartService: OpenService FEHLER 5: Zugriff verweigert (Access denied)

        c:\>net start mssql$db1
        Systemfehler 5 aufgetreten. Zugriff verweigert (Access denied)

    So the question is: how can I stop/start the service from a Java program or the command line without having my users go through services.msc (preferably with on-board tools)?
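
    One on-board approach, heavily hedged (the SDDL below is illustrative only - dump the real descriptor first and test on a non-production machine): grant non-admin users the start/stop rights on just that service by editing its security descriptor with sc. In service SDDL, RP = start, WP = stop, DT = pause/resume:

        sc sdshow mssql$db1
        sc sdset mssql$db1 "D:(A;;RPWPDTLOCRRC;;;AU)(...existing ACEs copied from sdshow...)"

    Once Authenticated Users (AU) hold the start right, a plain "net start mssql$db1" succeeds without elevation, and the Java program can simply shell out to it via Runtime.exec.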

  • Windows Security Videos on Channel 9

    - by Your DisplayName here!
    A few weeks ago I recorded three videos with Lori on Windows security for developers - they are now online. The first part covers the absolute basics of Windows security: What is an account? What is a SID? What is a Windows token? It also shows how these fundamental Windows facilities can be programmed against from managed code. The talk closes with a brief look at how UAC works and how it can be used programmatically. http://channel9.msdn.com/Blogs/Lori/Windows-Security-fr-Developers-Teil-1 Part two deals with access control lists and how they can be read and written from .NET code. It also covers the two related concepts of logon sessions and impersonation: both produce a new token, but they differ fundamentally in their use cases. http://channel9.msdn.com/Blogs/Lori/Windows-Security-fr-Developers-Teil-2 Part three introduces the Kerberos network authentication protocol. Since this protocol is used by default in Active Directory, it is worth knowing its basics. Kerberos, of course, can also be used from managed code - the closing demo shows how. http://channel9.msdn.com/Blogs/Lori/Windows-Security-fr-Developers-Teil-3 ...and a short interview as well: http://channel9.msdn.com/Blogs/Lori/Interview-mit-Dominick-Baier Enjoy ;)

  • Is it possible to virtualize war file execution without separate J2EE container deployments?

    - by Smith
    Let's say I want to allow my developers to upload their war files to a web app (not the application server itself) running on our intranet, and have that web app then run those wars as if they were separate apps deployed individually in our J2EE container. In other words, we are not actually deploying the wars as separate apps in the container - they are simply running side by side inside this one web app, which acts like a J2EE container. Is that possible? Something like a war virtualization app?

  • How can I prevent ListBox from selecting an item when I right-click it?

    - by chaiguy
    The tricky part is that each item has a ContextMenu that I still want to open when the item is right-clicked (I just don't want right-clicking to select it). In fact, if it makes things any easier, I don't want any automatic selection at all, so if there's some way I can disable it entirely, that would be just fine. I'm actually thinking of just switching to an ItemsControl, as long as I can get virtualization and scrolling to work with it.
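
    A common workaround (a sketch, not from the question): in a WPF ListBox, selection happens on the right-button-down event, while the ContextMenu opens on the later up event, so swallowing the down event in the item container suppresses selection but keeps the menu:

        // Attach in the ListBox's ItemContainerStyle, e.g.:
        // <EventSetter Event="PreviewMouseRightButtonDown"
        //              Handler="Item_PreviewMouseRightButtonDown"/>
        private void Item_PreviewMouseRightButtonDown(object sender, MouseButtonEventArgs e)
        {
            e.Handled = true; // stops selection; the ContextMenu still opens on mouse-up
        }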

  • Attempting to load a 64-bit application, however this CPU is not compatible with 64-bit mode

    - by Brandon Michael Hunter
    I have a Dell Studio 540 running 64-bit Windows Home Premium. My CPU supports Intel's virtualization technology, but I don't know how to enable it on my machine. I read that you can do it via the BIOS, but I didn't see this option when going through my BIOS. Is there another way to enable this feature? Please let me know. I'm trying to install Windows Server 2008 via Virtual PC 2007. Thank you.

  • Best way to use Photoshop CS3 in Linux

    - by Abe
    I don't want to use Mac or Windows at work, but I have a lot of Photoshop work when I have to create an HTML page from a Photoshop design. What is the best way to use Photoshop CS3 on Linux: Wine, virtualization, ...?

  • The Proper Use of the VM Role in Windows Azure

    - by BuckWoody
    At the Professional Developer's Conference (PDC) in 2010 we announced an addition to the Computational Roles in Windows Azure, called the VM Role. This new feature allows a great deal of control over the applications you write, but some have confused it with our full infrastructure offering in Windows Hyper-V. There is a proper architecture pattern for both of them.

    Virtualization

    Virtualization is the process of taking all of the hardware of a physical computer and replicating it in software alone. This means that a single computer can "host" or run several "virtual" computers. These virtual computers can run anywhere - including at a vendor's location. Some companies refer to this as Cloud Computing, since the hardware is operated and maintained elsewhere.

    IaaS

    The more detailed definition of this type of computing is called Infrastructure as a Service (IaaS), since it removes the need for you to maintain hardware at your organization. The operating system, drivers, and all the other software required to run an application are still under your control and your responsibility to license, patch, and scale. Microsoft has an offering in this space called Hyper-V, which runs on the Windows operating system. Combined with a hardware hosting vendor and the System Center software to create and deploy Virtual Machines (a process referred to as provisioning), you can create a Cloud environment with full control over all aspects of the machine, including multiple operating systems if you like. Hosting machines and provisioning them at your own buildings is sometimes called a Private Cloud, and hosting them somewhere else is often called a Public Cloud.

    State-ful and Stateless Programming

    This paradigm does not create a new, scalable way of computing. It simply moves the hardware away. The reason is that when you limit the Cloud efforts to a Virtual Machine, you are in effect limiting the computing resources to what that single system can provide. This is because much of the software developed in this environment maintains "state" - and that requires a little explanation. "State-ful programming" means that all parts of the computing environment stay connected to each other throughout a compute cycle. The system expects the memory, CPU, storage and network to remain in the same state from the beginning of the process to the end. You can think of this as a telephone conversation - you expect that the other person picks up the phone, listens to you, and talks back all in a single unit of time. In "Stateless" computing the system is designed to allow the different parts of the code to run independently of each other. You can think of this like an e-mail exchange. You compose an e-mail from your system (it has the state when you're doing that) and then you walk away for a bit to make some coffee. A few minutes later you click the "send" button (the network has the state) and you go to a meeting. The server receives the message and stores it on a mail program's database (the mail server has the state now) and continues working on other mail. Finally, the other party logs on to their mail client, reads the mail (the other user has the state), responds to it, and so on. These events might be separated by milliseconds or even days, but the system continues to operate. The entire process doesn't maintain the state; each component does. This is the exact concept behind coding for Windows Azure.

    The stateless programming model allows amazing rates of scale, since the message (think of the e-mail) can be broken apart by multiple programs and worked on in parallel (like when the e-mail goes to hundreds of users), and only the order of re-assembling the work is important to consider. For the exact same reason, if the system makes copies of those running programs, as Windows Azure does, you have built-in redundancy and recovery. It's just built into the design.

    The Difference Between Infrastructure Designs and Platform Designs

    When you simply take a physical server running software and virtualize it, either privately or publicly, you haven't done anything to allow the code to scale or have recovery. That all has to be handled by adding more code and more Virtual Machines, which have a slight lag in maintaining the running state of the system. Add more machines and you get more lag, so the scale is limited. This is the primary limitation with IaaS. It's also not as easy to deploy these VMs, and more importantly, you're often charged on a longer basis to remove them. Your agility in IaaS is more limited. Windows Azure is a Platform - meaning that you get objects you can code against. The code you write runs on multiple nodes with multiple copies, and it all works because of the magic of Stateless programming. You don't worry, or even care, about what is running underneath. It could be Windows (and it is in fact a type of Windows Server), Linux, or anything else - but that isn't what you want to manage, monitor, maintain or license. You don't want to deploy an operating system - you want to deploy an application. You want your code to run, and you don't care how it does that. Another benefit to PaaS is that you can ask for hundreds or thousands of new nodes of computing power - there's no provisioning, it just happens. And you can stop using them quicker - and the base code for your application does not have to change to make this happen.

    Windows Azure Roles and Their Use

    If you need your code to have a user interface, in Visual Studio you add a Web Role to your project, and if the code needs to do work that doesn't involve a user interface you can add a Worker Role. They are just containers that act a certain way. I'll provide more detail on those later. Note: that's a general description, so it's not entirely accurate, but it's accurate enough for this discussion. So now we're back to that VM Role. Because of the name, some have mistakenly thought that you can take a Virtual Machine running, say, Linux, and deploy it to Windows Azure using this Role. But you can't. That's not what it is designed for at all. If you do need that kind of deployment, you should look into Hyper-V and System Center to create the Private or Public Infrastructure as a Service. What the VM Role is actually designed to do is to allow you to have a great deal of control over the system where your code will run. Let's take an example. You've heard about Windows Azure and Platform programming. You're convinced it's the right way to code. But you have a lot of things you've written in another way at your company. Re-writing all of your code to take advantage of Windows Azure will take a long time. Or perhaps you have a certain version of Apache Web Server that you need for your code to work. In both cases, you think you can (or already have) coded the software to be "Stateless"; you just need more control over the place where the code runs. That's the place where a VM Role makes sense.
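
    To make the Web/Worker Role idea concrete, a minimal hedged sketch (not from the original post, assuming the Azure SDK of that era) of a worker role entry point:

        // The smallest useful worker role. The platform calls Run() on each
        // node; because the loop keeps no local state between iterations,
        // Azure can run many identical copies for scale and redundancy.
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                while (true)
                {
                    // Pull a message from a queue, process it, repeat.
                    Thread.Sleep(1000);
                }
            }
        }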
    Recap

    Virtualizing servers alone has limitations of scale, availability and recovery. Microsoft's offering in this area is Hyper-V and System Center, not the VM Role. The VM Role is still used for running Stateless code, just like the Web and Worker Roles, with the exception that it allows you more control over the environment where that code runs.

  • Running Solaris 11 as a control domain on a T2000

    - by jsavit
    There is increased adoption of Oracle Solaris 11, and many customers are deploying it on systems that previously ran Solaris 10. That includes older T1-processor based systems like T1000 and T2000. Even though they are old (from 2005) and don't have the performance of current SPARC servers, they are still functional, stable servers that customers continue to operate. One reason to install Solaris 11 on them is that older machines are attractive for testing OS upgrades before updating current, production systems. Normally this does not present a challenge, because Solaris 11 runs on any T-series or M-series SPARC server. One scenario adds a complication: running Solaris 11 in a control domain on a T1000 or T2000 hosting logical domains.

    Solaris 11 pre-installed Oracle VM Server for SPARC incompatible with T1

    Unlike Solaris 10, Solaris 11 comes with Oracle VM Server for SPARC preinstalled. The ldomsmanager package contains the logical domains manager for Oracle VM Server for SPARC 2.2, which requires a SPARC T2, T2+, T3, or T4 server. It does not work with T1-processor systems, which are only supported by LDoms Manager 1.2 and earlier. The following output shows what happens if you try to use Oracle VM Server for SPARC 2.x commands in a Solaris 11 control domain. The commands were issued in a control domain on a T2000 that previously ran Solaris 10. We also display the version of the logical domains manager installed in Solaris 11:

        root@t2000# psrinfo -vp
        The physical processor has 4 virtual processors (0-3)
          UltraSPARC-T1 (chipid 0, clock 1200 MHz)
        # prtconf | grep T
        SUNW,Sun-Fire-T200
        # ldm -V
        Failed to connect to logical domain manager: Connection refused
        # pkg info ldomsmanager
                  Name: system/ldoms/ldomsmanager
               Summary: Logical Domains Manager
           Description: LDoms Manager - Virtualization for SPARC T-Series
              Category: System/Virtualization
                 State: Installed
             Publisher: solaris
               Version: 2.2.0.0
         Build Release: 5.11
                Branch: 0.175.0.8.0.3.0
        Packaging Date: May 25, 2012 10:20:48 PM
                  Size: 2.86 MB
                  FMRI: pkg://solaris/system/ldoms/[email protected],5.11-0.175.0.8.0.3.0:20120525T222048Z

    The 2.2 version of the logical domains manager will have to be removed, and 1.2 installed, in order to use this as a control domain.

    Preparing to change - create a new boot environment

    Before doing anything else, let's create a new boot environment:

        # beadm list
        BE        Active Mountpoint Space Policy Created
        --        ------ ---------- ----- ------ -------
        solaris   NR     /          2.14G static 2012-09-25 10:32
        # beadm create solaris-1
        # beadm activate solaris-1
        # beadm list
        BE        Active Mountpoint Space Policy Created
        --        ------ ---------- ----- ------ -------
        solaris   N      /          4.82M static 2012-09-25 10:32
        solaris-1 R      -          2.14G static 2012-09-29 11:40
        # init 0

    Normally an init 6 to reboot would have been sufficient, but in the next step I reset the system anyway in order to put the system in factory default mode for a "clean" domain configuration.

    Preparing to change - reset to factory default

    There was a leftover domain configuration on the T2000, so I reset it to the factory install state. Since the ldm command isn't working yet, it can't be done from the control domain, so I did it by logging on to the service processor:

        $ ssh -X admin@t2000-sc
        Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
        Oracle Advanced Lights Out Manager CMT v1.7.9
        Please login: admin
        Please Enter password: ********
        sc> showhost
        Sun-Fire-T2000 System Firmware 6.7.10 2010/07/14 16:35
        Host flash versions:
          OBP 4.30.4.b 2010/07/09 13:48
          Hypervisor 1.7.3.c 2010/07/09 15:14
          POST 4.30.4.b 2010/07/09 14:24
        sc> bootmode config="factory-default"
        sc> poweroff
        Are you sure you want to power off the system [y/n]? y
        SC Alert: SC Request to Power Off Host.
        SC Alert: Host system has shut down.
        sc> poweron
        SC Alert: Host System has Reset

    At this point I rebooted into the new Solaris 11 boot environment, and Solaris commands showed it was running on the factory default configuration of a single domain owning all 32 CPUs and 32GB of RAM (that's what it looked like in 2005):

        # psrinfo -vp
        The physical processor has 8 cores and 32 virtual processors (0-31)
          The core has 4 virtual processors (0-3)
          The core has 4 virtual processors (4-7)
          The core has 4 virtual processors (8-11)
          The core has 4 virtual processors (12-15)
          The core has 4 virtual processors (16-19)
          The core has 4 virtual processors (20-23)
          The core has 4 virtual processors (24-27)
          The core has 4 virtual processors (28-31)
            UltraSPARC-T1 (chipid 0, clock 1200 MHz)
        # prtconf | grep Mem
        Memory size: 32640 Megabytes

    Note that the older processor has 4 virtual CPUs per core, while current processors have 8 per core.

    Remove ldomsmanager 2.2 and install the 1.2 version

    The Solaris 11 pkg command is now used to remove the 2.2 version that shipped with Solaris 11:

        # pkg uninstall ldomsmanager
                    Packages to remove:   1
                Create boot environment:  No
         Create backup boot environment:  No
                     Services to change:   2
        PHASE                        ACTIONS
        Removal Phase                130/130
        PHASE                          ITEMS
        Package State Update Phase       1/1
        Package Cache Update Phase       1/1
        Image State Update Phase         2/2

    Finally, LDoms 1.2 is installed via its install script, the same way it was done years ago:

        # unzip LDoms-1_2-Integration-10.zip
        # cd LDoms-1_2-Integration-10/Install/
        # ./install-ldm
        Welcome to the LDoms installer.
        You are about to install the Logical Domains Manager package that will
        enable you to create, destroy and control other domains on your system.
        Given the capabilities of the LDoms domain manager, you can now change
        the security configuration of this Solaris instance using the Solaris
        Security Toolkit.
        ... normal install messages omitted ...

    The Solaris Security Toolkit applies to Solaris 10 and cannot be used in Solaris 11 (in which several things hardened by the Toolkit are already hardened by default), so answer b in the choice below:

        Select a security profile from this list:
        a) Hardened Solaris configuration for LDoms (recommended)
        b) Standard Solaris configuration
        c) Your custom-defined Solaris security configuration profile
        Enter a, b, or c [a]: b
        ... other install messages omitted for brevity ...

    After the install I ensure that the necessary services are enabled, and verify the version of the installed LDoms Manager:

        # svcs ldmd
        STATE    STIME    FMRI
        online   22:00:36 svc:/ldoms/ldmd:default
        # svcs vntsd
        STATE    STIME    FMRI
        disabled Aug_19   svc:/ldoms/vntsd:default
        # ldm -V
        Logical Domain Manager (v 1.2-debug)
          Hypervisor control protocol v 1.3
          Using Hypervisor MD v 1.1
        System PROM:
          Hypervisor v. 1.7.3. @(#)Hypervisor 1.7.3.c 2010/07/09 15:14
          OpenBoot   v. 4.30.4. @(#)OBP 4.30.4.b 2010/07/09 13:48

    Set up control domain and domain services

    At this point we have a functioning LDoms 1.2 environment that can be configured in the usual fashion. One difference is that LDoms 1.2 exhibited the expected 'delayed configuration' mode during initial configuration, before the control domain is rebooted. Another minor difference with a Solaris 11 control domain is that you define virtual switches using the 'vanity name' of the network interface, rather than the hardware driver name as in Solaris 10.

        # ldm list
        ------------------------------------------------------------------------------
        Notice: the LDom Manager is running in configuration mode. Configuration and
        resource information is displayed for the configuration under construction;
        not the current active configuration. The configuration being constructed
        will only take effect after it is downloaded to the system controller and
        the host is reset.
        ------------------------------------------------------------------------------
        NAME    STATE  FLAGS  CONS VCPU MEMORY UTIL UPTIME
        primary active -n-c-- SP   32   32640M 3.2% 4d 2h 50m
        # ldm add-vdiskserver primary-vds0 primary
        # ldm add-vconscon port-range=5000-5100 primary-vcc0 primary
        # ldm add-vswitch net-dev=net0 primary-vsw0 primary
        # ldm set-mau 2 primary
        # ldm set-vcpu 8 primary
        # ldm set-memory 4g primary
        # ldm add-config initial
        # ldm list-spconfig
        factory-default
        initial [current]

    That's it, really. After reboot, we are ready to install guest domains.

    Summary - new wine in old bottles

    This example shows that (new) Solaris 11 can be installed on (old) T2000 servers and used as a control domain. The main activity is to remove the preinstalled Oracle VM Server for SPARC 2.2 and install Logical Domains 1.2 - the last version of LDoms to support T1-processor systems. I tested Solaris 10 and Solaris 11 guest domains running on this server and they worked without any surprises. This is a viable way to get further into Solaris 11 adoption, even on older T-series equipment.

  • Slicing the EDG

    - by Antony Reynolds
    Different SOA Domain Configurations

    In this blog entry I would like to introduce three different configurations for a SOA environment. I have omitted load balancers and OTD/OHS as they introduce a whole new round of discussion. For each possible deployment architecture I have identified some of the advantages.

    Super Domain

    This is a single EDG-style domain for everything needed for SOA/OSB. It extends the standard EDG slightly but otherwise assumes a single "super" domain. This is basically the SOA EDG. I have broken out JMS servers and Coherence servers to improve scalability and reduce dependencies.

    Key Points
    - Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if the rest of the domain is unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from OSB servers.
    - Use of Coherence by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.

    Benefits
    - Single administration point (1 Admin Server).
    - Closely follows the EDG, with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.

    Drawbacks
    - Patching is an all-or-nothing affair.
    - Startup time for SOA may be slow if a large number of composites are deployed.

    Multiple Domains

    This extends the EDG into multiple domains, allowing separate management and update of these domains. I see this type of configuration quite often with customers, although some don't have OWSM, others don't have separate Coherence, etc. SOA and BAM are kept in the same domain, as little benefit is obtained by separating them.

    Key Points
    - Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from OSB servers.
    - Use of Coherence by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.

    Benefits
    - Follows the EDG, but in separate domains and with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.
    - The patch lifecycles of OSB/SOA/JMS are no longer lock-stepped.
    - JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
    - OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
    - All domains use the same OWSM policy store (MDS-WSM).

    Drawbacks
    - Multiple domains to manage and configure.
    - Multiple Admin Servers (a single view requires use of Grid Control).
    - Multiple Admin Servers/WSM clusters waste resources.
    - Additional homes are needed to enjoy the benefits of separate patching.
    - Cross-domain trust needs setting up to simplify cross-domain interactions.
    - Startup time for SOA may be slow if a large number of composites are deployed.

    Shared Service Environment

    This model extends the previous multiple-domain arrangement to provide a true shared service environment. It allows multiple additional SOA domains and/or other domains to take advantage of the shared services. Only one non-shared domain is shown, but there could be multiple, allowing groups of applications to share patching independently of other application groups.

    Key Points
    - Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from OSB servers.
    - Use of Coherence by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.
    - The shared SOA domain hosts Human Workflow tasks, BAM, and common "utility" composites.
    - A single OSB domain provides the "Enterprise Service Bus".
    - All domains use the same OWSM policy store (MDS-WSM).

    Benefits
    - Follows the EDG, but in separate domains and with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.
    - The patch lifecycles of OSB/SOA/JMS are no longer lock-stepped.
    - JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
    - OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
    - All domains use the same OWSM policy store (MDS-WSM).
    - Supports large numbers of deployed composites in multiple domains.
    - Single URL for Human Workflow end users.
    - Single URL for BAM end users.

    Drawbacks
    - Multiple domains to manage and configure.
    - Multiple Admin Servers (a single view requires use of Grid Control).
    - Multiple Admin Servers/WSM clusters waste resources.
    - Additional homes are needed to enjoy the benefits of separate patching.
    - Cross-domain trust needs setting up to simplify cross-domain interactions.
    - Human Workflow needs to be specially configured to point to the shared services domain.

    Summary

    The alternatives in this blog allow patching to have different impacts, depending on the model chosen. Each organization must decide the tradeoffs for itself. One extreme is to go for the shared services model and have one domain per SOA application; this requires a lot of administration of the multiple domains. The other extreme is to have a single super domain; this makes the entire enterprise susceptible to an outage at the same time due to patching or other domain-level changes. Hopefully this blog will help your organization choose the right model for you.

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog "Clouds, Clouds Everywhere But not a Drop of Rain". In the first part I shared with you how a broad-based transformation makes cloud more than a technology initiative; in this section I will describe how it requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology.

    People: Most IT organizations have a fairly complex organizational structure. There are different groups managing different pieces of the puzzle, and yet they don't always work together. Provisioning a new application therefore may require a request to float endlessly through the worlds of system administrators, DBAs and middleware admins - resulting in long delays and constant finger pointing. Cloud users expect end-to-end automation, which requires these silos to be greatly simplified, if not completely eliminated. Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages ago will not only impede cloud adoption, it also risks making IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.

    Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging to outdated manual approval processes. While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only does this cause delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that the administrators can define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources.

    Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it enables a much higher degree of automation, thereby providing users the required agility while minimizing operational costs. Obviously, automation is the key to cloud. Unfortunately it hasn't received as much attention within enterprises as it should have. Many organizations are just now waking up to the criticality of automation, and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of the cloud management software extremely critical. For cloud management software to be effective in an enterprise environment, it must meet the following qualifications:

    Broad and Deep Solution: It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about. Its footprint must cover physical and virtual systems, as well as infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher-level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise.

    Run and Manage Your Mission-Critical Applications Efficiently: It should not only be able to run your mission-critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications - and not the other way around.

    Automated, Integrated Set of Cloud Management Capabilities: At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.

    At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry, covering public, private and hybrid clouds across infrastructure, platform and applications. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies. I will dedicate the third and final part of the blog "Clouds, Clouds Everywhere But not a Drop of Rain" to Oracle Cloud technology building blocks and how they map onto our vision of the enterprise cloud. Stay tuned.

  • Win32Exception: The directory name is invalid

    - by Mohammadreza
    I'm trying to run a process as a different user that has Administrator privileges on two different computers running Vista with UAC enabled, but on one of them I get a Win32Exception that says "The directory name is invalid". Can anyone tell me what is wrong with my code?

        var myFile = "D:\\SomeFolder\\MyExecutable.exe";
        var workingFolder = "D:\\SomeFolder";

        var pInfo = new System.Diagnostics.ProcessStartInfo();
        pInfo.FileName = myFile;
        pInfo.WorkingDirectory = workingFolder;
        pInfo.Arguments = myArgs;
        pInfo.LoadUserProfile = true;
        pInfo.UseShellExecute = false;
        pInfo.UserName = {UserAccount};
        pInfo.Password = {SecureStringPassword};
        pInfo.Domain = ".";
        System.Diagnostics.Process.Start(pInfo);

    UPDATE: The application that executes the above code has the requireAdministrator execution level. I even set the working folder to Path.GetDirectoryName(myFile) and to new System.IO.FileInfo(myFile).DirectoryName.

  • DLLImport error: System.AccessViolationException with Manifest file and c#

    - by RP
    When trying to call (via DllImport) an external C++ DLL from a .NET application that has a manifest file with requireAdministrator, I get this error when calling a function from the C++ DLL on Windows 7 with UAC enabled. The method I am calling is EnCrypts, and the exception is: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

        public class BlowFish
        {
            [DllImport("BlowfishTool.dll", CharSet = CharSet.Auto)]
            public static extern String EnCrypt(String strData, String strPassword);

            [DllImport("BlowfishTool.dll", CharSet = CharSet.Auto)]
            public static extern String EnCrypt(String strData, String strPassword, bool doNotUsePassChecking);

            [DllImport("BlowfishTool.dll", CharSet = CharSet.Auto)]
            public static extern String DeCrypt(String strData, String strPassword, bool doNotUsePassChecking);

            [DllImport("BlowfishTool.dll", CharSet = CharSet.Auto)]
            public static extern String DeCrypt(String strData, String strPassword);

            public static String EnCrypts(String strData, String strPassword)
            {
                return EnCrypt(strData, strPassword, true);
            }
        }
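
    One frequent cause of an AccessViolationException with P/Invoke signatures like these (independent of UAC) is declaring the native return value as String: the marshaller assumes it owns the returned buffer and tries to free it. A hedged sketch of the usual fix - EnCryptPtr is a hypothetical alias for the same native entry point:

        // Declare the return as IntPtr so the interop layer does not try to
        // free memory the native side still owns, then copy the characters out.
        using System.Runtime.InteropServices;

        [DllImport("BlowfishTool.dll", CharSet = CharSet.Auto, EntryPoint = "EnCrypt")]
        public static extern IntPtr EnCryptPtr(string strData, string strPassword, bool doNotUsePassChecking);

        public static string EnCryptSafe(string data, string password)
        {
            return Marshal.PtrToStringAuto(EnCryptPtr(data, password, true));
        }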

  • "Requested registry access is not allowed." on Windows 7 / Vista

    - by Trainee4Life
    I'm attempting to write a key to the registry. It works on Windows XP but fails on Windows 7 / Vista: the code below throws a SecurityException with the description "Requested registry access is not allowed."

        RegistryKey regKey = Registry.LocalMachine.OpenSubKey("SOFTWARE\\App_Name\\" + subKey, true);

    I realise that this has to do with the UAC settings, but I couldn't figure out an ideal workaround. I don't want to fork out another process, and I may not even want to request any credentials; I just want it to work the same way as on Windows XP. I have modified the manifest file and removed the requestedExecutionLevel node, which seems to do the trick. Is there any other possible workaround, and are there any serious flaws with the "manifest" solution?
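
    Worth knowing about the manifest trick (a hedged sketch of the side effect, not from the question): with no requestedExecutionLevel in the manifest, UAC registry virtualization kicks in for 32-bit interactive processes, and writes aimed at HKLM are silently redirected to a per-user VirtualStore hive, so the data never reaches the real key:

        // Checks where the "successful" write actually landed.
        // subKey is a placeholder for the same value used above.
        using Microsoft.Win32;

        class VirtualStoreCheck
        {
            static void Main()
            {
                const string subKey = "Settings";
                using (RegistryKey k = Registry.CurrentUser.OpenSubKey(
                    @"Software\Classes\VirtualStore\MACHINE\SOFTWARE\App_Name\" + subKey))
                {
                    System.Console.WriteLine(k == null ? "not virtualized" : "virtualized copy exists");
                }
            }
        }

    A more robust fix is to keep user-scoped data under HKEY_CURRENT_USER, or to create the HKLM key with a permissive ACL at install time, when the installer runs elevated anyway.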

  • WIX/DTF - How do I elevate deferred managed custom actions?

    - by Jarrod
    We are working on creating an installer using WiX. We have a couple of custom actions that will require elevation to run on Vista. Our MSI obviously elevates itself, because I get a UAC prompt and it writes keys to HKLM. Our managed DTF custom actions, however, are apparently not running elevated: if I start the MSI from an elevated command prompt, the installation runs fine. The only workaround I have seen suggested (link) is to use another exe with a manifest as a bootstrapper to run the MSI. This is not a good solution, as it doesn't allow the user to modify the installation by selecting "Change" from the Add/Remove Programs dialog. Our install has multiple features that can be customized, so we want this functionality. Does anybody know how to do this?
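
    For what it's worth, a sketch of the usual WiX arrangement (all names are placeholders, not from the question): a custom action scheduled with Execute="deferred" and Impersonate="no" runs in the elevated execute sequence under the LocalSystem account, with no bootstrapper needed, so Add/Remove "Change" keeps working:

        <Binary Id="MyManagedCA" SourceFile="MyManagedCA.CA.dll" />
        <CustomAction Id="DoPrivilegedWork"
                      BinaryKey="MyManagedCA"
                      DllEntry="DoPrivilegedWork"
                      Execute="deferred"
                      Impersonate="no"
                      Return="check" />
        <InstallExecuteSequence>
          <Custom Action="DoPrivilegedWork" After="InstallFiles">NOT Installed</Custom>
        </InstallExecuteSequence>

    Immediate custom actions always run with the invoking user's unelevated token, so any work that needs admin rights has to be pushed into deferred actions like this; note that deferred actions can only read CustomActionData, not arbitrary installer properties.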
