Search Results

Search found 3831 results on 154 pages for 'w2k8 r2'.


  • Partner Blog Series: PwC Perspectives - "Is It Time for an Upgrade?"

    - by Tanu Sood
    Is your organization debating its next step with regard to Identity Management? While all the stakeholders are well aware that one-size-fits-all doesn't apply to identity management, it is just as true that no two identity management implementations are alike. Oracle's recent release of Identity Governance Suite 11g Release 2 has innovative features such as a customizable user interface, a shopping-cart-style request catalog, and more. However, only a close look at the use cases can help you determine if and when an upgrade to the latest R2 release makes sense for your organization. This post describes a few of the situations that PwC has helped our clients work through.

    "Should I be considering an upgrade?" If your organization has an existing identity management implementation, the questions below are a good start to assessing your current solution and whether you need to begin planning for an upgrade:

    - Does the current solution scale and meet your projected identity management needs?
    - Does the current solution have a customer-friendly user interface?
    - Are you completely meeting your compliance objectives?
    - Are you still using spreadsheets?
    - Does the current solution have the features you need?
    - Is your total cost of ownership in line with well-performing, similar-sized companies in your industry?
    - Can your organization support your existing identity solution?
    - Is your current product-based solution well positioned to support your organization's tactical and strategic direction?

    Existing Oracle IDM customers: Several existing Oracle clients are looking to move to R2 in 2013. If your organization is on Sun Identity Manager (SIM) or Oracle Identity Manager (OIM), and your current assessment suggests that you need to upgrade, you should strongly consider OIM 11gR2. Oracle provides upgrade paths to Oracle Identity Manager 11gR2 from SIM 7.x/8.x as well as Oracle Identity Manager 10g/11gR1. The following are some of the considerations for migration:

    - Check the end-of-product-support schedule (for Sun or legacy OIM)
    - Several new features are available in R2 (including common Helpdesk scenarios, profiling of disconnected applications, increased scalability, custom connectors, browser-based UI configuration, portability of configurations during future upgrades, etc.)
    - Cost of ownership (for SIM customers)
    - Customizations that need to be maintained during the upgrade
    - Time/cost to migrate now vs. waiting for the next version

    If you are already on an older version of Oracle Identity Manager and actively maintaining your support contract with Oracle, you might be eligible for a free upgrade to OIM 11gR2. Check with your Oracle sales rep for more details.

    Existing IDM infrastructure in place: In the past year and a half, we have seen a surge in IDM upgrades from non-Oracle infrastructure to Oracle. If your organization is looking to improve the end-user experience around identity management functions, the shopping-cart-style access request model and browser-based personalization features may come in handy. Additionally, organizations that have a large number of applications (e-commerce, LDAP stores, databases, UNIX systems, mainframes) as well as a high frequency of user identity changes and access requests will value the high scalability of the OIM reconciliation and provisioning engine. Furthermore, our clients like OIM's out-of-the-box (OOB) support for multiple authoritative sources.

    For organizations looking to integrate applications that do not have an exposed API, the Generic Technology Connector framework supported by OIM will be helpful in quickly generating a custom connector using the OOB wizard. Similarly, organizations that need not only flexible on-boarding of disconnected applications but also strict access management to these applications via approval flows will find the flexible disconnected-application profiling feature an extremely useful tool that delivers a high degree of time savings. Organizations looking to develop custom connectors for home-grown or industry-specific applications will likewise find that the Identity Connector Framework support in OIM allows them to build and test a custom connector independently before integrating it with OIM. Lastly, most of our clients considering an upgrade to OIM 11gR2 have also expressed interest in the browser-based configuration feature that allows an administrator to quickly customize the user interface without adding any custom code. Better yet, code customizations, if any, made to the product are portable across future upgrades, which most of our clients view as a big time and money saver.

    Below are some upgrade methodologies we adopt based on client priorities and the scale of the implementation. For illustration purposes, we have assumed that the client is currently on Oracle Waveset (formerly Sun Identity Manager).

    - Integrated deployment: Typically used where a client wants to split the implementation so that the current IDM continues to handle the front-end workflows while OIM takes over the back-office operations incrementally. Once all the back-office operations have moved completely to OIM, the front-end workflows are migrated to OIM.
    - Parallel deployment: Typically used where a distinct line can be drawn between the functionality each platform supports. For example, the current IDM implementation handles password reset while OIM takes over access provisioning and RBAC functions.
    - Cutover deployment: Typically recommended where a client has smaller, less complex implementations and it makes sense to leverage the migration tools to move them over immediately.

    What does this mean for YOU? There are many variables to consider when making upgrade decisions. For most customers, there is no 'easy' button. Organizations looking to upgrade or considering a new vendor should start by mapping their requirements to product features. The recommended approach is to take stock of both the short-term and long-term objectives; understand product features, the future roadmap, maturity, and the level of commitment from R&D; and build the implementation plan accordingly. As we said in the beginning, there is no one-size-fits-all with identity management. So arm yourself with the knowledge, engage in industry discussions, bring in business stakeholders, and start building your implementation roadmap. In the next post we will discuss best practices for R2 implementations, covering the dos and don'ts and sharing our thoughts on making implementations successful.

    Meet the writers:

    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium- to large-scale Identity Management solutions across multiple industries, including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, of which he has spent the past 8 implementing Identity Management solutions.

    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries, including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

    Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, of which she has spent the past year and a half implementing Identity Management solutions.

    Read the article

  • Clustering for Mere Mortals (Pt2)

    - by Geoff N. Hiten
    Planning. I could stop there and let that be the entirety of post #2 in this series. Planning is the single most important element in building a cluster, and the Laptop Demo Cluster is no exception. One of the more awkward parts of actually creating a cluster is coordinating information between Windows Clustering and SQL Clustering. The dialog boxes show up hours apart, but still have to have matching and consistent information. Excel seems to be a good tool for tracking these settings. My workbook has four pages: Systems, Storage, Network, and Service Accounts.

    The Systems page looks like this:

    | Name         | Role                                        | Software                                   | Location  |
    |--------------|---------------------------------------------|--------------------------------------------|-----------|
    | East         | Physical Cluster Node 1                     | Windows Server 2008 R2 Enterprise          | Laptop VM |
    | West         | Physical Cluster Node 2                     | Windows Server 2008 R2 Enterprise          | Laptop VM |
    | North        | Physical Cluster Node 3 (Future Reserved)   | Windows Server 2008 R2 Enterprise          | Laptop VM |
    | MicroCluster | Cluster Management Interface                | N/A                                        | Laptop VM |
    | SQL01        | High-Performance High-Security Instance     | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM |
    | SQL02        | High-Performance Standard-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM |
    | SQL03        | Standard-Performance High-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM |

    Note that everything that has a computer name is listed here, whether physical or virtual.

    Storage looks like this:

    | Storage Name | Instance     | Purpose         | Volume      | Path                      | Size (GB) | LUN ID | Speed |
    |--------------|--------------|-----------------|-------------|---------------------------|-----------|--------|-------|
    | Quorum       | MicroCluster | Cluster Quorum  | Quorum      | Q:                        | 2         |        |       |
    | SQL01Anchor  | SQL01        | Instance Anchor | SQL01Anchor | L:                        | 2         |        |       |
    | SQL02Anchor  | SQL02        | Instance Anchor | SQL02Anchor | M:                        | 2         |        |       |
    | SQL01Data1   | SQL01        | SQL Data        | SQL01Data1  | L:\MountPoints\SQL01Data1 | 2         |        |       |
    | SQL02Data1   | SQL02        | SQL Data        | SQL02Data1  | M:\MountPoints\SQL02Data1 |           |        |       |

    Starting at the left is the name used in the storage array. It is important to rename resources at each level, whether it is Storage, LUN, Volume, or disk folder. Otherwise, troubleshooting gets complex and difficult. You want to be able to glance at a resource at any level and see where it comes from and what it is connected to.

    Networking is the same way:

    | System | Network   | VLAN     | IP                 | Subnet Mask   | Gateway     | DNS1        | DNS2        |
    |--------|-----------|----------|--------------------|---------------|-------------|-------------|-------------|
    | East   | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1 |
    | East   | Heartbeat | Cluster2 |                    | 255.255.255.0 |             |             |             |
    | West   | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1 |
    | West   | Heartbeat | Cluster2 |                    | 255.255.255.0 |             |             |             |
    | North  | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1 |
    | North  | Heartbeat | Cluster2 |                    | 255.255.255.0 |             |             |             |
    | SQL01  | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 |             |             |             |
    | SQL02  | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 |             |             |             |

    One hallmark of a poorly planned and implemented cluster is a bunch of "Local Network Connection #n" entries in the network settings page. That lets me know that somebody didn't care about the long-term supportability of the cluster. This can be critically important with Hyper-V clusters and their high NIC counts.

    Final page:

    | Instance             | Service Name | Account       | Password   | Domain  | OU               |
    |----------------------|--------------|---------------|------------|---------|------------------|
    | SQL01                | SQL Server   | SVCSQL01      | Baseline22 | MicroAD | Service Accounts |
    | SQL01                | SQL Agent    | SVCSQL01      | Baseline22 | MicroAD | Service Accounts |
    | SQL02                | SQL Server   | SVC_SQL02     | Baseline22 | MicroAD | Service Accounts |
    | SQL02                | SQL Agent    | SVC_SQL02     | Baseline22 | MicroAD | Service Accounts |
    | SQL03 (Future)       | SQL Server   | SVC_SQL03     | Baseline22 | MicroAD | Service Accounts |
    | SQL03 (Future)       | SQL Agent    | SVC_SQL03     | Baseline22 | MicroAD | Service Accounts |
    | Installation Account |              | administrator |            |         |                  |

    Yes, I write down the account information. I secure the file via NTFS, but I don't want to fumble around looking for passwords when it comes time to rebuild a node.

    Always fill out the workbook COMPLETELY before installing anything. The whole point is to have everything you need at your fingertips before you begin. The install experience is so much better and more productive with this information in place.
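
    For what it's worth, the "fill it out completely" rule can even be scripted once a workbook page is exported to CSV. Below is a minimal Python sketch, not part of the original post; the file name and column headings are assumptions based on the Systems page above.

        # Hedged sketch: flag blank required cells in a cluster-planning page
        # exported to CSV. "systems.csv" and the column names are assumptions.
        import csv

        def find_gaps(csv_path, required_columns):
            """Return (row_number, column) pairs for every required cell left blank."""
            gaps = []
            with open(csv_path, newline="") as f:
                # Row 1 is the header, so data rows start at 2
                for row_num, row in enumerate(csv.DictReader(f), start=2):
                    for col in required_columns:
                        if not (row.get(col) or "").strip():
                            gaps.append((row_num, col))
            return gaps

        # Every system needs a name, role, software build, and location
        for gap in find_gaps("systems.csv", ["Name", "Role", "Software", "Location"]):
            print("Missing value at row %d, column %r" % gap)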

    Read the article

  • Oracle Database 11gR2 11.2.0.3 Certified with E-Business Suite on Windows

    - by John Abraham
    As a follow-up to our original certification announcement, Oracle Database 11g Release 2 (11.2.0.3) is now certified with Oracle E-Business Suite Release 11i and Release 12 on the following Microsoft Windows operating systems:

    Release 12.1 (12.1.1 and higher)
    - Microsoft Windows Server (32-bit) (2003, 2008)
    - Microsoft Windows x64 (64-bit) (2003 [1], 2008 [1], 2008 R2 [2])

    Release 12.0 (12.0.4 and higher)
    - Microsoft Windows Server (32-bit) (2003)
    - Microsoft Windows x64 (64-bit) (2003, 2008) [1]

    Release 11i (11.5.10.2 + ATG PF.H RUP 6 and higher)
    - Microsoft Windows Server (32-bit) (2003, 2008 [1])
    - Microsoft Windows x64 (64-bit) (2003, 2008, 2008 R2) [1]

    Notes
    1. This OS is a 'database tier only' or 'split tier configuration' platform where the application tier must be on a fully certified E-Business Suite platform.
    2. This OS is a 'database tier only' platform for Release 11i. For 12.1.1 or higher, it is also supported on the application tier via the migration process outlined in My Oracle Support Document 1188535.1.

    Pending Certification: E-Business Suite 12.0 with 11.2.0.3 Split Tier Certification on Microsoft Windows x64 (64-bit) (2008 R2) is in progress and will be announced separately.

    This announcement for Oracle E-Business Suite 11i and R12 includes:
    - Real Application Clusters (RAC)
    - Oracle Database Vault
    - Transparent Data Encryption (Column Encryption)
    - TDE Tablespace Encryption
    - Advanced Security Option (ASO)/Advanced Networking Option (ANO)
    - Export/Import Process for Oracle E-Business Suite Release 11i and Release 12 Database Instances
    - Transportable Database and Transportable Tablespaces Data Migration Processes for Oracle E-Business Suite Release 11i and Release 12

    References
    - MOS Document 881505.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0)
    - MOS Document 1058763.1 - Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0)
    - MOS Document 1091086.1 - Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2
    - MOS Document 1091083.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2
    - MOS Document 216205.1 - Database Initialization Parameters for Oracle E-Business Suite 11i
    - MOS Document 396009.1 - Database Initialization Parameters for Oracle Applications Release 12
    - MOS Document 823586.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i
    - MOS Document 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12
    - MOS Document 403294.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 11i
    - MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12
    - MOS Document 828223.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 11i
    - MOS Document 828229.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 12
    - MOS Document 391248.1 - Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option
    - MOS Document 557738.1 - Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
    - MOS Document 741818.1 - Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
    - MOS Document 1366265.1 - Using Transportable Tablespaces to Migrate Oracle Applications 11i Using Oracle Database 11g Release 2
    - MOS Document 1311487.1 - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2
    - MOS Document 729309.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 11i Using Oracle Database 10g Release 2 or 11g
    - MOS Document 734763.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 10g Release 2 or 11g
    - MOS Document 1188535.1 - Migrating Oracle E-Business Suite R12 to Microsoft Windows Server 2008 R2

    Please also review the platform-specific Oracle Database Installation Guides for operating system and other prerequisites.

    Related Articles
    - Database 11.2.0.2 Certified with EBS R12 on IBM: Linux on System z
    - EBS R12 Certified with Database 11gR2 on SLES 11
    - 11gR2 11.2.0.3 Database Certified with E-Business Suite

    Read the article

  • Windows Azure SDK 1.3 addresses early adopter feedback

    - by Eric Nelson
    At the end of November 2010 we released a new version of the Windows Azure SDK, which contains many new features driven by the great feedback of early adopters, plus a shiny new portal.

    New portal implemented in Silverlight: The new portal is implemented using Silverlight and replaces the (IMHO rather clunky) original HTML + JavaScript portal. It is 100% better, although it does still have a few bugs. Enjoy! P.S. You can, if you wish, still use the old portal.

    New runtime functionality: The following functionality is now generally available through the Windows Azure SDK and Windows Azure Tools for Visual Studio and the new Windows Azure Management Portal:

    - Elevated Privileges and Full IIS. You can now run a portion or all of your code in Web and Worker roles with elevated administrator privileges. The Web role now provides full IIS functionality, which enables multiple IIS sites per Web role and the ability to install IIS modules.
    - Remote Desktop functionality enables you to connect to a running instance of your application or service in order to monitor activity and troubleshoot common problems.
    - Windows Server 2008 R2 roles: Windows Azure now supports Windows Server 2008 R2 in its Web, Worker and VM roles. This new support enables you to take advantage of the full range of Windows Server 2008 R2 features such as IIS 7.5, AppLocker, and enhanced command-line and automated management using PowerShell 2.0.

    New runtime functionality, in beta:

    - Windows Azure Virtual Machine Role: Support for more types of new and existing Windows applications will soon be available with the introduction of the Virtual Machine (VM) role. You can move more existing applications to Windows Azure, reducing the need to make costly code or deployment changes.
    - Extra Small Windows Azure Instance, which is priced at $0.05 per compute hour, provides developers with a cost-effective training and development environment. Developers can also use the Extra Small instance to prototype cloud solutions at a lower cost.
    - Windows Azure Connect (formerly Project Sydney), which enables a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources, is the first Windows Azure Virtual Network feature that we're making available as a CTP.

    You can sign up for any of the betas via the Windows Azure Management Portal.

    Improved processes and simplified operations:

    - New portal! (see above)
    - Access to new diagnostic information, including the ability to click on a role to see role type, deployment time and last reboot time
    - A new sign-up process that dramatically reduces the number of steps needed to sign up for Windows Azure
    - New scenario-based Windows Azure Platform forums to help answer questions and share knowledge more efficiently
    - Multiple Service Administrators: Windows Azure now supports multiple Windows Live IDs with administrator privileges on the same Windows Azure account. The objective is to make it easy for a team to work on the same Windows Azure account while using their individual Windows Live IDs.

    Related links:

    - Please let us know through Microsoft Platform Ready if and when you intend to build an application using the Windows Azure Platform, or indeed if you already have (well done!). You will get access to some great benefits if you do (more on that in a future post). It also really helps us better understand the demand out there, which directly impacts how we will plan the next six months of activities around the Windows Azure Platform. Visit Microsoft Platform Ready to tell us about your plans for your applications.
    - UK based? Interested in the Windows Azure Platform? Join http://ukazure.ning.com
    - Get started with the Windows Azure Platform: http://bit.ly/startazure

    Read the article

  • Help/Questions About New Team Foundation Server 2010 Installation

    - by user579218
    Hello. Before starting down the TFS 2010 installation process, I have a few questions I'm hoping the community can help me with. We're planning on a single-server installation of TFS 2010. Initially, we want version/source control and build services, but not reporting or SharePoint. We may add reporting and SharePoint capabilities later. Our environment will be Windows Server 2008 R2 (x64), SQL Server 2008 R2 (x64), Office 2010 (x86), Visual Studio 6 and 2010, and, of course, Team Foundation Server 2010.

    1. Can I install TFS2010 on a server that is on our domain? It's not a domain controller, it's just a member server on the domain. Should I install TFS2010 before or after putting the server on the domain?
    2. We have six developers that will be logging into their local development computers (which are also on the same domain) using their domain user accounts. Do I add each domain user to the TFS2010 server's security groups? If so, which one(s)?
    3. Can I, or should I, use a domain user account as the TFS2010 service account? Or should I just use Network Service? The TFS2010 install guide notes that none of the service accounts should belong to the Administrators security group, so which security group(s) are recommended for the service account(s)?
    4. We're planning on using a local instance of SQL Server 2008 R2 Standard with TFS2010. What service account should we use? Should we use the same domain account as TFS2010, or Local System, or something else? The TFS2010 install guide isn't very specific on this.
    5. Since we're planning on this server being both the version/source control and build server, should we install our development environments (VS6, VS2010, Access 2010) before installing TFS2010? Or does it matter?

    Thanks in advance for answering these questions.

    Read the article

  • Building ARM assembler vorbis decoder lib 'Tremolo' for iPhone

    - by Joachim Bengtsson
    I'm trying to compile Tremolo for iPhone. I've pulled in the files

        bitwise.c bitwiseARM.s codebook.c dpen.s dsp.c floor0.c floor1.c
        floor1ARM.s floor_lookup.c framing.c info.c mapping0.c mdct.c
        mdctARM.s misc.c res012.c

    into a new target, and added the following custom settings:

        GCC_PREPROCESSOR_DEFINITIONS = _ARM_ASSEM_
        GCC_C_LANGUAGE_STANDARD = gnu99
        GCC_THUMB_SUPPORT = YES

    ...but as soon as Xcode reaches the first assembler file, bitwiseARM.s, I get errors like these:

        /tremolo/bitwiseARM.s:3:Unknown pseudo-op: .global
        /tremolo/bitwiseARM.s:3:Rest of line ignored. 1st junk character valued 111 (o).
        /tremolo/bitwiseARM.s:4:Unknown pseudo-op: .global
        /tremolo/bitwiseARM.s:4:Rest of line ignored. 1st junk character valued 111 (o).
        /tremolo/bitwiseARM.s:5:Unknown pseudo-op: .global
        /tremolo/bitwiseARM.s:5:Rest of line ignored. 1st junk character valued 111 (o).
        /tremolo/bitwiseARM.s:6:Unknown pseudo-op: .global
        /tremolo/bitwiseARM.s:6:Rest of line ignored. 1st junk character valued 111 (o).
        /tremolo/bitwiseARM.s:11:bad instruction `STMFD r13!,{r10,r11,r14}'
        /tremolo/bitwiseARM.s:12:bad instruction `LDMIA r0,{r2,r3,r12}'
        /tremolo/bitwiseARM.s:16:bad instruction `SUBS r2,r2,r1'
        /tremolo/bitwiseARM.s:17:bad instruction `BLT look_slow'
        /tremolo/bitwiseARM.s:19:bad instruction `LDR r10,[r3]'

    The first error I could google, and changing .global to .globl fixed the first errors, but I still get the bad instructions, and I don't get why. Googling for the ARM instruction set, the above instructions look valid to me. I've tried toggling Thumb support, and building for just armv7 instead of armv6, but neither helped.
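
    A side note on the .global/.globl rename mentioned above: since the directive appears many times across Tremolo's assembler sources, the fix can be scripted rather than applied by hand. A minimal Python sketch, assuming the sources live under tremolo/ as in the error paths above:

        # Hedged sketch: batch-convert GNU-style ".global" directives to ".globl"
        # across the assembler sources. The tremolo/ path is an assumption.
        import glob
        import re

        for path in glob.glob("tremolo/*.s"):
            with open(path) as f:
                src = f.read()
            # Rewrite the directive only where it starts a (possibly indented) line;
            # \b prevents touching an already-correct ".globl".
            patched = re.sub(r"(?m)^(\s*)\.global\b", r"\1.globl", src)
            if patched != src:
                with open(path, "w") as f:
                    f.write(patched)
                print("patched", path)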

    Read the article

  • Macro access to members of object where macro is defined

    - by Marc Grue
    Say I have a trait Foo that I instantiate with an initial value:

        val foo = new Foo(6)    // class Foo(i: Int)

    and I later call a second method that in turn calls myMacro:

        foo.secondMethod(7)     // def secondMethod(j: Int) = macro myMacro

    then, how can myMacro find out what my initial value of i (6) is? I didn't succeed with normal compile-time reflection using c.prefix, c.eval(...) etc., but instead found a two-project solution:

    Project B:

        object CompilationB {
          def resultB(x: Int, y: Int) = macro resultB_impl
          def resultB_impl(c: Context)(x: c.Expr[Int], y: c.Expr[Int]) =
            c.universe.reify(x.splice * y.splice)
        }

    Project A (depends on project B):

        trait Foo {
          val i: Int
          // Pass through `i` to compilation B:
          def apply(y: Int) = CompilationB.resultB(i, y)
        }

        object CompilationA {
          def makeFoo(x: Int): Foo = macro makeFoo_impl
          def makeFoo_impl(c: Context)(x: c.Expr[Int]): c.Expr[Foo] =
            c.universe.reify(new Foo {val i = x.splice})
        }

    We can create a Foo and set the i value either with normal instantiation or with a macro like makeFoo. The second approach allows us to customize a Foo at compile time in the first compilation and then, in the second compilation, further customize its response to input (i in this case)! In some way we get "meta-meta" capabilities (or "pataphysic" capabilities ;-)

    Normally we would need to have foo in scope to introspect i (with, for instance, c.eval(...)). But by saving the i value inside the Foo object we can access it anytime, and we could instantiate Foo anywhere:

        object Test extends App {
          import CompilationA._

          // Normal instantiation
          val foo1 = new Foo {val i = 7}
          val r1 = foo1(6)

          // Macro instantiation
          val foo2 = makeFoo(7)
          val r2 = foo2(6)

          // "Curried" invocation
          val r3 = makeFoo(6)(7)

          println(s"Result 1 2 3: $r1 $r2 $r3")
          assert((r1, r2, r3) == (42, 42, 42))
        }

    My question: Can I find i inside my example macros without this double-compilation hackery?

    Read the article

  • Replicating SQL's 'Join' in Python

    - by Daniel Mathews
    I'm in the process of trying to switch from R to Python (mainly for issues around general flexibility). With NumPy, matplotlib and IPython, I've been able to cover all my use cases save for merging 'datasets'. I would like to simulate SQL's JOIN clause (inner, outer, full) purely in Python. R handles this with the 'merge' function. I've tried numpy.lib.recfunctions' join_by, but it has critical issues with duplicates along the 'key':

        join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
                defaults=None, usemask=True, asrecarray=False)

        Join arrays r1 and r2 on key key. The key should be either a string or a
        sequence of strings corresponding to the fields used to join the array.
        An exception is raised if the key field cannot be found in the two input
        arrays. Neither r1 nor r2 should have any duplicates along key: the
        presence of duplicates will make the output quite unreliable. Note that
        duplicates are not looked for by the algorithm.

    source: http://presbrey.mit.edu:1234/numpy.lib.recfunctions.html

    Any pointers or help will be most appreciated!
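
    For reference, SQL-style inner-join semantics that tolerate duplicate keys (which join_by does not) can be sketched in pure Python with a dictionary index. A minimal sketch; the record layout and field names below are illustrative assumptions, not from the question:

        # Hedged sketch: duplicate-tolerant inner join of two lists of dicts on one key.
        from collections import defaultdict

        def inner_join(left, right, key):
            """Join on `key`, emitting one merged dict per matching (left, right) pair."""
            index = defaultdict(list)
            for row in right:                  # index the right side by key
                index[row[key]].append(row)
            joined = []
            for lrow in left:
                for rrow in index.get(lrow[key], []):  # every match, duplicates included
                    merged = dict(lrow)
                    merged.update(rrow)
                    joined.append(merged)
            return joined

        r1 = [{"id": 1, "x": 10.0}, {"id": 1, "x": 11.0}, {"id": 2, "x": 12.0}]
        r2 = [{"id": 1, "y": "a"}, {"id": 3, "y": "b"}]
        # Both id=1 rows on the left match, so two result rows; id=2 and id=3 drop out.
        print(inner_join(r1, r2, "id"))

    Left/outer/full variants would follow the same shape, emitting a placeholder row (e.g. with None fields) for unmatched keys instead of skipping them.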

    Read the article

  • Working with ieee format numbers in ARM

    - by Jake Sellers
    I'm trying to write an ARM program that will convert an IEEE number to a TNS format number. TNS is a format used by some supercomputers, and is similar to IEEE but different. I'm trying to use several masks to place the three different "parts" of the IEEE number in separate registers so I can move them around accordingly. Here is my unpack subroutine:

        UnpackIEEE
                LDR  r1, SMASK      ;load the sign bit mask into r1
                LDR  r2, EMASK      ;load the exponent mask into r2
                LDR  r3, GMASK      ;load the significand mask into r3
                AND  r4, r0, r1     ;apply sign mask to IEEE and save into r4
                AND  r5, r0, r2     ;apply exponent mask to IEEE and save into r5
                AND  r6, r0, r3     ;apply significand mask to IEEE and save into r6
                MOV  pc, r14        ;return

    And here are the masks and number declarations so you can understand:

        IEEE    DCD  0x40300000    ;2.75 decimal or 01000000001100000000000000000000 binary
        SMASK   DCD  0x80000000    ;Sign bit mask
        EMASK   DCD  0x7F800000    ;Exponent mask
        GMASK   DCD  0x007FFFFF    ;Significand mask

    When I step through with the debugger, the results I get are not what I expect after working through it on paper. EDIT: What I mean is that after the subroutine runs, registers 4, 5, and 6 all remain 0. I can't figure out why the masks are not working. I think I do not fully understand how the number is being stored in the register, or I am using the masks wrong. Any help appreciated. If you need more info just ask.

    EDIT: Entry point. Very simple, just trying to get these subroutines working:

        ENTRY   LDR  r1, IEEE      ;load IEEE num into r1
                BL   UnpackIEEE    ;call unpack sub
                SWI  SWI_Exit      ;finish
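
    As a cross-check, the three masks can be applied to the example word 0x40300000 outside of ARM; if the expected values below match, the masks themselves are sound and the problem lies elsewhere in the program. A Python sketch of the same unpacking:

        # The three masks from the question applied to 0x40300000 (2.75 decimal),
        # mirroring what UnpackIEEE intends to leave in r4, r5 and r6.
        ieee = 0x40300000
        SMASK, EMASK, GMASK = 0x80000000, 0x7F800000, 0x007FFFFF

        sign        = ieee & SMASK   # expected 0x00000000 (positive)
        exponent    = ieee & EMASK   # expected 0x40000000 (biased exponent 128)
        significand = ieee & GMASK   # expected 0x00300000

        print("%#010x %#010x %#010x" % (sign, exponent, significand))

        # Reassemble to confirm: value = 1.significand * 2**(biased_exp - 127)
        value = (1 + significand / 2**23) * 2 ** ((exponent >> 23) - 127)
        print(value)  # 2.75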

    Read the article

  • DQL delete from multiple tables (doctrine)

    - by singer
    Need to perform a DQL delete from multiple related tables. In SQL it is something like this:

        DELETE r1, r2
        FROM ComRealty_objects r1, com_realty_objects_phones r2
        WHERE r1.id IN (10,20)
        AND r2.id_object IN (10,20)

    I need to perform this statement using DQL, but I'm stuck on this :(

        <?php
        $dql = Doctrine_Query::create()
          ->delete('phones, comrealtyobjects')
          ->from('ComRealtyObjects comrealtyobjects')
          ->from('ComRealtyObjectsPhones phones')
          ->whereIn("comrealtyobjects.id", $ids)
          ->whereIn("phones.id_object", $ids);
        echo($dql->getSqlQuery());
        ?>

    But the DQL parser gives me this result:

        DELETE FROM `com_realty_objects_phones`, `ComRealty_objects`
        WHERE (`id` IN (?) AND `id_object` IN (?))

    Searching Google and Stack Overflow I found this useful topic: http://stackoverflow.com/questions/2247905/what-is-the-syntax-for-a-multi-table-delete-on-a-mysql-database-using-doctrine But this is not exactly my case; there, the delete was from a single table. Is there a way to override the DQL parser behaviour? Or maybe some other way to delete records from multiple tables using Doctrine?

    Note: If you are using Doctrine behaviours (Doctrine_Record_Generator) you need first to initialize those tables using Doctrine_Core::initializeModels() to perform DQL operations on them.

    Read the article

  • How to pass multiple different records (not classes, due to Delphi limitations) to a function?

    - by mingo
    Hi to all. I have a number of records I cannot convert to classes due to a Delphi limitation (all of them use class operators to implement comparisons). But I have to store them in a class without knowing which record type I'm using. Something like this:

        type R1 = record
          x: Mytype;
          class operator Equal(a, b: R1): Boolean;
        end;

        type R2 = record
          y: Mytype;
          class operator Equal(a, b: R2): Boolean;
        end;

        type Rn = record
          z: Mytype;
          class operator Equal(a, b: Rn): Boolean;
        end;

        type TC = class
          x: TObject;
          y: Mytype;
          function payload(n: TObject);
        end;

        function TC.payload(n: TObject);
        begin
          x := n;
        end;

        // program:
        c: TC;
        x: R1;
        y: R2;
        ...
        c := TC.Create();
        n := TObject(x);
        c.payload(n);

    Now, Delphi does not accept a typecast from record to TObject, and I cannot make them classes due to the Delphi limitation. Does anyone know a way to pass different records to a function and recognize their type when needed, as we do with classes: if x is TMyClass then TMyClass(x) ... ???

    Read the article

  • Using Mercurial and Beyond Compare 3 (bc3) as the diff tool? Help needed

    - by mhd
    Hi, in Windows I am able to use WinMerge as the external diff tool for hg using mercurial.ini, etc., using some option switches that you can find on the web (I think it's a Japanese website). Anyway, here for example:

        hg winmerge -r1 -r2

    will list the file(s) changed between rev1 and rev2 in WinMerge, and I can just click which file to diff. But for bc3:

        hg bcomp -r1 -r2

    will make bc3 open a dialog which states that a temp dir can't be found. The most I can do using bc3 and hg is

        hg bcomp -r1 -r2 myfile.cpp

    which will open a diff between rev1 and rev2 of myfile.cpp. So, it seems that hg + bc3 can't successfully acknowledge all file changes between revisions; it is only able to diff one file at a time. Can anyone use bc3 + hg better?

    edit: Problem solved! Got the solution from the Scooter support page: I have to use bcompare instead of bcomp. Here's a snippet of my mercurial.ini:

        [extensions]
        hgext.win32text =
        ;mhd adds
        hgext.extdiff =

        ;mhd adds for bc
        [extdiff]
        cmd.bc3 = bcompare
        opts.bc3 = /ro

        ;mhd adds for winmerge
        ;[extdiff]
        ;cmd.winmerge = WinMergeU
        ;opts.winmerge = /r /e /x /ub

    Read the article

  • Create a model that switches between two different states using Temporal Logic?

    - by NLed
    I'm trying to design a model that can manage different requests for different water sources. Platform: Mac OS X, using the latest Python with the TuLiP module installed.

    Definitions:
    - Two water sources: w1 and w2
    - Three different requests: r1, r2, and r3

    Specifications:
    - Water 1 (w1) is preferred, but w2 will be used if w1 is unavailable. Water 2 is only used if w1 is depleted.
    - r1 has the maximum priority. If all entities request simultaneously, r1's supply must not fall below 50%.

    The water sources are not discrete but rather continuous, which will increase the difficulty of creating the model. I can do a crude discretization of the water levels, but I prefer finding a model for the continuous state first. So how do I start doing that?

    Some of my thoughts: create a matrix W where w1, w2 ∈ W, and a matrix R where r1, r2, r3 ∈ R, or leave all variables singular without putting them in a matrix. I'm not an expert in coding, so that's why I need help. Not sure what is the best way to start tackling this problem. I am only interested in the model, or a code sample of how this can be put together.

    edit: Now imagine I do a crude discretization of the water sources to have w1 = [0...4] and w2 = [0...4] for 0, 25, 50, 75, 100 percent respectively (== means implies).

    Usage of water sources:

        if w1[0] == w2[4]   -- if water source 1 has 0%, then use 100% of water source 2, etc.
        if w1[1] == w2[3]
        if w1[2] == w2[2]
        if w1[3] == w2[1]
        if w1[4] == w2[0]

        r1 = r2 = r3 = [0,1]   -- 0 means request OFF and 1 means request ON

    Now what model can be designed that will give each request 100% water depending on the values of w1 and w2? (The w1 and w2 values are uncontrollable, so I cannot define specific values; 0...4 is used for simplicity.)
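
    As one possible starting point for the continuous state, the stated supply rules (prefer w1, spill to w2 only as w1 depletes, serve r1 first) can be sketched as a greedy, priority-ordered allocator before any temporal-logic machinery is added. This is a minimal Python sketch; the function shape and the example numbers are assumptions, not from the question:

        def allocate(w1, w2, demands):
            """Serve requests in priority order r1 > r2 > r3, preferring w1
            (spec: w2 is drawn only once w1 is depleted). Returns the supplied
            amounts and the remaining source levels."""
            supplied = {}
            for req in ("r1", "r2", "r3"):               # r1 has maximum priority
                need = demands.get(req, 0.0)
                take1 = min(need, w1); w1 -= take1        # prefer water source 1
                take2 = min(need - take1, w2); w2 -= take2  # fall back to source 2
                supplied[req] = take1 + take2
            return supplied, w1, w2

        # Because r1 is served first, it is never starved by r2/r3; the 50% floor
        # for simultaneous requests would be verified (or enforced) on top of this.
        print(allocate(60.0, 40.0, {"r1": 50.0, "r2": 40.0, "r3": 30.0}))
        # -> ({'r1': 50.0, 'r2': 40.0, 'r3': 10.0}, 0.0, 10.0)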

    Read the article

  • RRAS won’t start with 8007042a or event ID 7024, aka the “routing remote access unable to load Iprtrmgr.dll”

    - by KCotreau
    History: The history of this error, which has mostly gone unsolved, dates back to Windows 2000.

    Platforms affected: Windows Server 2008 R2, Server 2008, Server 2003 R2, Server 2003, Server 2000 (both 32-bit and 64-bit installs are affected).

    Error messages:

    - Event ID 7024: The Routing and Remote Access service terminated with service-specific error 2 (0x2).
    - Event ID 7024: The Routing and Remote Access service terminated with service-specific error 31 (0x1F).
    - Event ID 7024: The Routing and Remote Access service terminated with service-specific error 20205 (0x4EED).
    - Event ID 7024: The Routing and Remote Access service terminated with service-specific error 193 (0xC1).
    - Event ID 20103: Unable to load C:\WINDOWS\System32\iprtrmgr.dll (32-bit installs).
    - Event ID 20103: Unable to load C:\WINDOWS\SysWOW64\iprtrmgr.dll (64-bit installs).

    Read the article

  • Windows 2008 DFS Replication Issue

    - by e0594cn
    We have two Server 2008 R2 servers running DFS-R (named dfs01 and dfs02) in a 2008 R2 domain. Today I found that files on server dfs01 were not being replicated to dfs02, so I used the command

        dfsrdiag backlog /rgname:<group> /rfname:<folder> /sendingmember:dfs01 /receivingmember:dfs02

    to check the backlog. After executing the command, I get the following error:

        Failed to execute GetVersionVector method. Err: -2147217406 (0x80041002): Operation Failed

    How can I resolve this?

    Read the article

  • How to sysprep SQL Server Express?

    - by Jim
    We plan to deploy a Hyper-V VHD with Windows Server 2008 R2 and SQL Server 2012 Express installed to multiple hosts. From my understanding, the correct way to do this is to install SQL Server in preparation mode, sysprep Windows, then complete the SQL Server installation when the VHD is deployed. I mostly followed the process in this blog post: http://sethusrinivasan.com/category/sysprep/ However, after the VHD is deployed, I'm unable to complete the SQL Server installation. It keeps saying "Upgrade matrix is incorrect". It seems that it's trying to upgrade itself to Enterprise edition (I was asked for a product key during install, but I skipped it). Could anyone share their experience in deploying VHDs with SQL Server (we're fine with either SQL Server 2008 R2 or 2012)? I think the source of my issue is that I can't select "Express Edition" when entering the product key at the completion stage, so the installation is trying to do an upgrade to Enterprise Edition. I have no idea why the drop-down list is empty.

    Read the article

  • vlan change via scvmm not working

    - by HarryMud
    Hi, I have a Windows 2008 R2 Hyper-V server which is managed via SCVMM 2008 R2. The guest OS running on the Hyper-V server is Windows XP with SP3. If I change the VLAN ID via SCVMM, the guest doesn't get an IP via DHCP. I looked in the properties of the VM with the Hyper-V MMC and can see the VLAN changed correctly, but there is no IP connectivity, either static or DHCP. A reboot of the guest OS didn't help. If I change the VLAN ID via the Hyper-V MMC, the guest gets an IP via DHCP. SCVMM seems to forget something after the VLAN change. Has anyone a clue how to resolve this? Thx

    Read the article

  • Reboot loop after sysprep of AD machine

    - by rboarman
    Major screw-up here and I need to find out how much trouble I am in. I have an AD machine that is running Server 2008 R2, Hyper-V, DHCP and DNS. On the Hyper-V machine, I have a backup AD instance running along with a handful of other Server 2008 instances. Sysprep was run on the Hyper-V machine instead of one of the instances. I am attempting to bring the machine back up so I can try a system restore. When I boot the Hyper-V machine, I get an error that says "Windows could not complete the installation. To install Windows on this computer, restart the installation." This message occurs in safe mode, AD restore mode and in last-known-good-configuration mode. How can I get my OS to boot at this point? Do I need to reinstall 2008 R2 from scratch?

    Read the article

  • Is there a way to add AD LDS users to an AD Domain Group or allow them domain security rights?

    - by Tom
    I have a web application in which our outside customers need access to run transactions (stored procs on SQL Server) on our domain. We have looked into AD LDS to keep these users separate from our domain. The problem we are having is granting the LDS users the AD security rights to access these stored procs. For administration purposes we would like to use an AD group for each transaction (stored proc) that has access to execute it. Is there a way to add LDS users to such an AD group, or otherwise allow them the security rights to do this? We have set up LDS and can authenticate an AD user through it to run these transactions. LDS is running on Server 08 R2. AD is also Server 08 R2. Thanks.

    Read the article

  • How to fix? => Your system administrator does not allow the use of saved credentials to log on to the remote computer

    - by Pure.Krome
    At our office, any of our Windows 7 clients get this error message when we try to RDP to a remote W2K8 server outside of the office:

        Your system administrator does not allow the use of saved credentials to log
        on to the remote computer XXX because its identity is not fully verified.
        Please enter new credentials.

    A quick Google search leads to some posts which all suggest I edit group policy, etc. I'm under the impression that the common fix for this is to follow those instructions per Windows 7 machine. Ack :( Is there any way I can do something via our office Active Directory .. which auto-updates all Windows 7 clients in the office LAN?

    Read the article

  • SharePoint 2010 Server Configuration Error -> "Cannot connect to database master"

    - by Chrish Riis
    I receive the following error when I try to configure SharePoint 2010 Server:

        Cannot connect to the database master at SQL server at [computer.domain].
        The database might not exist, or the current user does not have permission
        to connect to it.

    I run the following setup:
    - Windows Server 2008 R2 Standard with SP1 and all the updates
    - SQL Server 2008 R2 with SP1
    - SharePoint Server 2010 with SP1
    Everything is installed on the same server (it's a test server).

    I have tried the following:
    - Rebooting the server
    - Checking the install account's DB rights (dbcreator, securityadmin; I even let it have sysadmin)
    - Opened up the firewall on ports 1433 and 1434
    - Uninstalled both SQL and SP, then reinstalled them both
    - Enabled all client protocols in SQL Server Configuration
    - Made sure I used the correct account for installing SharePoint (local admin)

    Useful links:
    - TCP/IP settings: http://blog.vanmeeuwen-online.nl/2010/10/cannot-connect-to-database-master-at.html
    - http://ybbest.wordpress.com/2011/04/22/cannot-connect-to-database-master-at-sql-server-at-sql2008r2/
    - Wrong slash: http://yakimadev.com/2010/11/cannot-connect-to-database-master-at-sql-server-at-serverdbname-error-during-sharepoint-2010-products-configuration-wizard-and-installation/
    - Port error: http://www.knowsharepoint.com/2011/08/error-connecting-to-database-server.html

    Read the article

  • How to Grant IIS 7.5 access to a certificate in certificate store?

    - by thames
    In Windows 2003 it was simple to do: one could use winhttpcertcfg.exe (download) to give the "NETWORK SERVICE" account access to a certificate. I'm now using Windows Server 2008 R2 with IIS 7.5 and I am unable to find where and how to set access permissions to a certificate in the certificate store. This post showed how to do it in Vista, noting that the winhttpcertcfg features were added into the Certificates MMC; however, that doesn't seem to work with imported certificates, or doesn't work anymore on Server 2008 R2. So, does anyone have any idea how to give IIS 7.5 the correct permissions to read a certificate from the certificate store? And also, which account from IIS 7.5 needs the permission?

    Read the article

  • NetApp erroring with: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT

    - by Sobrique
    Since a sitewide upgrade to Windows 7 on the desktop, I've started having a problem with virus checking, specifically when doing a rename operation on a (filer-hosted) CIFS share. The virus checker seems to be triggering a set of messages on the filer:

        [filerB: auth.trace.authenticateUser.loginTraceIP:info]: AUTH: Login attempt by user server-wk8-r2$ of domain MYDOMAIN from client machine 10.1.1.20 (server-wk8-r2).
        [filerB: auth.dc.trace.DCConnection.statusMsg:info]: AUTH: TraceDC- attempting authentication with domain controller \\MYDC.
        [filerB: auth.trace.authenticateUser.loginRejected:info]: AUTH: Login attempt by user rejected by the domain controller with error 0xc0000199: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT.
        [filerB: auth.trace.authenticateUser.loginTraceMsg:info]: AUTH: Delaying the response by 5 seconds due to continuous failed login attempts by user server-wk8-r2$ of domain MYDOMAIN from client machine 10.1.1.20.

    This seems to trigger specifically on a rename, so what we think is going on is that the virus checker is seeing a 'new' file and trying to do an on-access scan. The virus checker, previously running as LocalSystem and thus sending null as its authentication request, now looks rather like a DoS attack, causing the filer to temporarily blacklist the client. This 5-second lockout on each 'access attempt' is a minor nuisance most of the time, but really quite significant for some operations, e.g. large file transfers, where every file takes 5 seconds.

    Having done some digging, this seems to be related to NTLM authentication:

        Symptoms
        Error message: System error 1808 has occurred. The account used is a
        computer account. Use your global user account or local user account to
        access this server. A packet trace of the failure will show the error as:
        STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT (0xC0000199)

        Cause
        Microsoft has changed the functionality of how a Local System account
        identifies itself during NTLM authentication. This only impacts NTLM
        authentication. It does not impact Kerberos authentication.

        Solution
        On the host, please set the following group policy entry and reboot the host.
        Network Security: Allow Local System to use computer identity for NTLM: Disabled
        Defining this group policy makes Windows Server 2008 R2 and Windows 7
        function like Windows Server 2008 SP1.

    So we've now got a couple of workarounds, neither particularly nice: one is to change this security option; the other is to disable virus checking, or otherwise exempt part of the infrastructure.

    And here's where I come to my request for assistance from Server Fault: what is the best way forward? I lack the Windows experience to be sure of what I'm seeing. I'm not entirely sure why NTLM is part of this picture in the first place; I thought we were using Kerberos authentication. I'm not sure how to start diagnosing or troubleshooting this. (We are going cross-domain: workstation machine accounts are in a separate AD and DNS domain to my filer. Normal user authentication works fine, however.)

    And failing that, can anyone suggest other lines of enquiry? I'd like to avoid a site-wide security option change, or if I do go that way I'll need to be able to supply detailed reasoning. Likewise, disabling virus checking works as a short-term workaround, and applying exclusions may help... but I'd rather not, and I don't think that solves the underlying problem.

    EDIT: Filers in AD LDAP have SPNs for:

        nfs/host.fully.qualified.domain
        nfs/host
        HOST/host.fully.qualified.domain
        HOST/host

    (Sorry, I have to obfuscate those.) Could it be that without a 'cifs/host.fully.qualified.domain' it's not going to work? (Or some other SPN?)

    EDIT: As part of the searching I've been doing, I've found http://itwanderer.wordpress.com/2011/04/14/tread-lightly-kerberos-encryption-types/ which suggests that several encryption types were disabled by default in Win7/2008R2. This might be pertinent, as we've definitely had a similar problem with Kerberized NFSv4. There is a hidden option which may help some future Kerberos users:

        options nfs.rpcsec.trace on

    (This hasn't given me anything yet though, so it may just be NFS-specific.)

    EDIT: Further digging has me tracking it back to cross-domain authentication. It looks like my Windows 7 workstation (in one domain) is not getting Kerberos tickets for the other domain, in which my NetApp filer is CIFS-joined. I've tried this separately against a standalone server (Win2003 and Win2008) and didn't get Kerberos tickets for those either. Which means I think Kerberos might be broken, but I've no idea how to troubleshoot further.

    EDIT: A further update: it looks like this may be down to Kerberos tickets not being issued cross-domain. This then triggers NTLM fallback, which then runs into this problem (since Windows 7). First port of call will be to investigate the Kerberos side of things, but in neither case do we have anything pointing at the filer as the root cause. As such, as the storage engineer, it's out of my hands. However, if anyone can point me in the direction of troubleshooting Kerberos spanning two Windows AD domains (Kerberos realms), that would be appreciated.

    Options we're going to be considering for resolution:
    - Amend the policy option on all workstations via GPO (as above).
    - Talk to the AV vendor about the rename triggering scanning.
    - Talk to the AV vendor about running AV as a service account.
    - Investigate Kerberos authentication (why it's not working, whether it should be).

    Read the article
