Search Results

Search found 10463 results on 419 pages for 'required'.


  • How can I install Cinnamon on Ubuntu 12.04 and eliminate the following errors:

    - by jaorizabal
    $ sudo apt-get install cinnamon cinnamon-session cinnamon-settings
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Note, selecting 'cinnamon' instead of 'cinnamon-session'
    Note, selecting 'cinnamon' instead of 'cinnamon-settings'
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help resolve the situation:
    The following packages have unmet dependencies:
     cinnamon : Depends: gir1.2-muffin-3.0 but it is not going to be installed
                Depends: libcogl5 (>= 1.7.4) but it is not installable
                Depends: libmuffin0 (>= 1.0.0-0ubuntu1~precise) but it is not going to be installed
                Recommends: gnome-themes-standard but it is not going to be installed
                Recommends: gnome-session-fallback but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.

    I added this PPA: sudo add-apt-repository ppa:merlwiz79/cinnamon-ppa

    Then ran the following command: sudo apt-get update && sudo apt-get install cinnamon cinnamon-session cinnamon-settings

    How can I install the latest Cinnamon desktop? How can I fix this error?

    Read the article

  • COM+, DTC, and 80070422

    - by Chris Miller
    One of our "packaged" software bits that accesses my servers is going through an upgrade right now. Apparently this software requires DTC to be installed on my SQL Server and able to accept remote connections. So I look up how to do that in the knowledge base: http://support.microsoft.com/?kbid=555017

    And immediately hit a roadblock. The DTC components aren't showing up in my Component Services console. The entire console's acting weird (well, weirder than usual): when I go into the console and click "Options" it insists on having a timeout entered, and when I enter one, close the box, and go back, the setting's gone again and I'm required to re-enter it. Lots of weirdness, and no DTC tab. If you open the COM+ folders, you immediately get error 80070422.

    After a lot of searching I was looking through the Services listing on the box (after restarting DTC for the twelfth time) and saw that "COM+ System Application" was disabled. I set it to manual, rebooted the box (test server) and everything started working.

    So, if you're trying to follow those instructions and discover that the Component Services tool is acting odder than usual, make sure that service isn't disabled.

    Read the article

  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
    Sometimes an application needs every last drop of performance it can get; other times, not so much. We're in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was "local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess." What that means is you should avoid code like this in your stored procs, even though it seems like such an intuitively good idea:

    DECLARE @StartDate datetime
    SET @StartDate = '20091125'
    SELECT * FROM Orders WHERE OrderDate = @StartDate

    Instead, you should reference the value directly in the WHERE clause so the Query Optimizer can create a better execution plan:

    SELECT * FROM Orders WHERE OrderDate = '20091125'

    My first thought about this one was that we reference variables in the form of passed-in parameters in WHERE clauses in many of our stored procs. Not to worry, though, because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor's session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|
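    As a hedged illustration of that last point (the procedure name below is hypothetical, not from Yavor's session): passing the date as a stored procedure parameter, rather than assigning it to a local variable inside the body, keeps the actual value visible to the Query Optimizer when the plan is compiled.

    -- Sketch only: the optimizer can "sniff" @StartDate because it is a
    -- parameter, unlike a local variable declared inside the procedure body.
    CREATE PROCEDURE GetOrdersByDate
        @StartDate datetime
    AS
    BEGIN
        SELECT * FROM Orders WHERE OrderDate = @StartDate;
    END
    GO

    -- Usage:
    EXEC GetOrdersByDate @StartDate = '20091125';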

    Read the article

  • Instructor Insight: Using the Container Database in Oracle Database 12c

    - by Breanne Cooley
    The first time I examined the Oracle Database 12c architecture, I wasn't quite sure what I thought about the Container Database (CDB). In the current release of the Oracle RDBMS, the administrator now has a choice of whether or not to employ a CDB.

    Bundling Databases Inside One Container
    In today's IT industry, consolidation is a common challenge. With potentially hundreds of databases to manage and maintain, an administrator will require a great deal of time and resources to upgrade and patch software. Why not consider deploying a container database to streamline this activity? By "bundling" several databases together inside one container, in the form of a pluggable database, we can save on overhead process resources and CPU time. Furthermore, we can reduce the human effort required for periodically patching and maintaining the software.

    Minimizing Storage
    Most IT professionals understand the concept of storage, as in solid state or non-rotating. Let's take one-to-many databases and "plug" them into ONE designated container database. We can minimize many redundant pieces that would otherwise require separate storage and architecture, as was the case in previous releases of the Oracle RDBMS. The data dictionary can be housed and shared in one CDB, with individual metadata content for each pluggable database. We also won't need as many background processes either, thus reducing the overhead cost of the CPU resource.

    Improve Security Levels within Each Pluggable Database
    We can now segregate the CDB-administrator role from that of the pluggable-database administrator as well, achieving improved security levels within each pluggable database and within the CDB. And if the administrator chooses to use the non-CDB architecture, everything is backwards compatible, too.

    The bottom line: it's a good idea to at least consider using a CDB.

    -Christopher Andrews, Senior Principal Instructor, Oracle University
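    To make the "plugging" idea concrete, here is a minimal sketch (the database name, admin user and file paths are hypothetical) of creating and opening a pluggable database inside an existing 12c container database:

    -- Run from the CDB root as a common user with the
    -- CREATE PLUGGABLE DATABASE privilege.
    CREATE PLUGGABLE DATABASE sales_pdb
      ADMIN USER sales_admin IDENTIFIED BY "Change_Me_1"
      FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/cdb1/pdbseed/',
                           '/u01/app/oracle/oradata/cdb1/sales_pdb/');

    -- A new PDB starts in MOUNTED mode; open it before use.
    ALTER PLUGGABLE DATABASE sales_pdb OPEN;

    The new PDB shares the container's background processes and common data dictionary, which is exactly the overhead saving described above.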

    Read the article

  • Running 64 bit Ubuntu distribution from 32 bit Ubuntu

    - by csg
    Related to the question "How do I run qemu with 64bit processor on a 64bit machine?", I'm trying to run the latest Ubuntu 11.10 64-bit distribution under Ubuntu 11.04 32-bit using qemu on a Core 2 Duo (64-bit CPU) machine, using the following qemu parameters, with no success.

    Error under qemu: "This kernel required an x86-64 CPU, but only detected an i686 CPU. Unable to boot - please use a kernel appropiate for your CPU"

    Isn't qemu supposed to emulate a 64-bit machine? I think I'm missing something, but I can't figure it out.

    qemu -cpu (kvm64|core2duo|qemu64) -boot d -cdrom ubuntu-11.10-desktop-amd64.iso
    qemu-system-x86_64 -boot d -cdrom ubuntu-11.10-desktop-amd64.iso

    Here is my uname -m:

    i686

    Here is my /proc/cpuinfo:

    processor : 1
    vendor_id : GenuineIntel
    cpu family : 6
    model : 23
    model name : Intel(R) Core(TM)2 Duo CPU P8400 @ 2.26GHz
    stepping : 6
    cpu MHz : 800.000
    cache size : 3072 KB
    physical id : 0
    siblings : 2
    core id : 1
    cpu cores : 2
    apicid : 1
    initial apicid : 1
    fdiv_bug : no
    hlt_bug : no
    f00f_bug : no
    coma_bug : no
    fpu : yes
    fpu_exception : yes
    cpuid level : 10
    wp : yes
    flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm dts tpr_shadow vnmi flexpriority
    bogomips : 4522.45
    clflush size : 64
    cache_alignment : 64
    address sizes : 36 bits physical, 48 bits virtual
    power management:

    Read the article

  • November EPM Patch Set Updates released

    - by p.anda
    (in via Greg) Greg has provided us an updated listing of current patches for the EPM system. These follow on from our previous blog post last month in October [link].

    17320505 - Oracle Hyperion Reporting and Analysis for Foundation - PSU 11.1.2.1.136
    17413112 - Oracle Hyperion Planning, Fusion Edition - PSU 11.1.2.2.305
    16345450 - Oracle Hyperion Reporting and Analysis for Financial Reporting - PSU 11.1.2.1.134
    17609530 - Hyperion Essbase RTC - PSU 11.1.2.3.003
    17609535 - Hyperion Essbase Server - PSU 11.1.2.3.003
    17609533 - Hyperion Essbase Client - PSU 11.1.2.3.003
    17609539 - Hyperion Essbase Client MSI - PSU 11.1.2.3.003
    17609518 - Hyperion Essbase Administration Services Server - PSU 11.1.2.3.003
    17609497 - Hyperion Essbase Administration Services Console MSI - PSU 11.1.2.3.003
    17609493 - Hyperion Analytic Provider Services - PSU 11.1.2.3.003
    16692973 - Oracle Hyperion Enterprise Performance Management - PSU 11.1.2.2.301
    16984944 - Oracle Hyperion Financial Close Management - PSU 11.1.2.2.352
    16989110 - Oracle Hyperion Financial Close Management - PSU 11.1.2.3.100
    17636270 - Hyperion Strategic Finance - PSU 11.1.2.1.103

    Be sure to review the related Readme files available per Patch Set Update. These describe the defects fixed and/or updates included, along with requirements and instructions for applying the patch. To access them, simply click on the "Read Me" button when accessing the PSU via My Oracle Support | Patches & Updates.

    At any time, to see a listing of the latest Enterprise Performance Management (EPM) Patch Sets and Patch Set Updates for the current releases, visit:

    Doc ID 1400559.1 - Available Patch Sets and Patch Set Updates for Oracle Hyperion EPM Products
    Doc ID 1525518.1 - Available Patch Sets and Patch Set Updates for Oracle Crystal Ball, DRM, FCM, HPCM and HSF

    For OBIEE, keep up to date with the latest Patches and Patch Set Updates by visiting:

    Doc ID 1488475.1 - OBIEE 11g: Required and Recommended Patches and Patch Sets

    Read the article

  • Oracle MDM at the MDM Summit in San Francisco

    - by David Butler
    Oracle is sponsoring the Product MDM track at this year’s MDM & Data Governance San Francisco Summit. Sachin Patel, Director of Product Strategy, Product Hub Applications, at Oracle will present the keynote: Product Master Data Management for Today’s Enterprise. Here’s the abstract: Today businesses struggle to boost operational efficiency and meet new product launch deadlines due to poor and cumbersome administrative processes. One of the primary reasons enterprises are unable to achieve cohesion is due to various domain silos and fragmented product data. This adversely affects business performance including, but not limited to, excess inventories, under-leveraged procurement spend, downstream invoicing or order errors and lost sales opportunities. In this session, you will learn the key elements and business processes that are required for you to master an enterprise product record. Additionally you will gain insights into how to improve the accuracy of your data and deliver reliable and consistent product information across your enterprise. This provides a high level of confidence that business managers can achieve their goals. In this session, you will understand how adopting a Master Data Management strategy for product information can help your enterprise change course towards a more profitable, competitive and successful business. Cisco Systems will join Sachin and cover their experiences, lessons learned and best practices. If you are in the Bay Area and interested in mastering your product data for the benefit of multiple applications, business processes and analytical systems, please join us at the Hyatt, Fisherman’s Wharf this Thursday, June 30th.

    Read the article

  • Trouble installing Rabbit VCS for nautilus

    - by Ranhiru Cooray
    I am using Ubuntu 11.10 and following the instructions mentioned here to install RabbitVCS. I added the PPA properly, did a sudo apt-get update and ran:

    sudo apt-get install rabbitvcs-core rabbitvcs-nautilus rabbitvcs-thunar rabbitvcs-gedit rabbitvcs-cli

    There were dependency issues, and I googled a bit and found out that I need to install RabbitVCS only for Nautilus, as that is the default file manager for Ubuntu. So I ran the install commands separately for rabbitvcs-core, rabbitvcs-gedit and rabbitvcs-cli. Now my understanding is that those are installed properly. However, when I run the install command for rabbitvcs-nautilus, I still get a dependency issue.

    ranhiru@ranhiru-HP-HDX16-NoteBook-PC:~$ sudo apt-get install rabbitvcs-nautilus
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     rabbitvcs-nautilus : Depends: nautilus (< 1:3.0~) but 1:3.2.1-0ubuntu2 is to be installed
                          Depends: python-nautilus (< 1.0~) but 1.0-0ubuntu2 is to be installed
    E: Unable to correct problems, you have held broken packages.

    How do I solve this?

    Read the article

  • UPK and the Oracle Unified Method can be used to deploy Oracle-Based Business Solutions

    - by Emily Chorba
    Originally developed to support Oracle's acquisition strategy, the Oracle Unified Method (OUM) defines a common implementation language across all of Oracle's products and technologies. OUM is a flexible, scalable, and evolving body of knowledge that combines existing best practices and field experience with an industry-standard framework that includes the latest thinking around agile implementation and cloud computing.

    Strong, proven methods are essential to ensuring successful enterprise IT projects, both within Oracle and for our customers and partners. OUM provides a collection of repeatable processes that are the basis for agile implementations of Oracle enterprise business solutions. OUM also provides a structure for tracking progress and managing cost and risks. OUM is applicable to any size or type of IT project. While OUM is a plan-based method—including overview material, task and artifact descriptions, and templates—the method is intended to be tailored to support the appropriate level of ceremony (or agility) required for each project. Guidance is provided for identifying the minimum subset of tasks, tailoring the approach, executing iterative and incremental planning, and applying agile techniques, including support for managing projects using Scrum. Supplemental guidance provides specific support for Oracle products, such as UPK.

    OUM is available to Oracle employees, partners, and customers.

    - Internal Use at Oracle: Employees can download OUM from MyDesktop.
    - OUM Partner Program: OUM is available free of charge to Oracle PartnerNetwork (OPN) Diamond, Platinum, and Gold partners as a benefit of membership. These partners may download OUM from the Oracle Unified Method Knowledge Zone on OPN.
    - OUM Customer Program: The OUM Customer Program allows customers to obtain copies of the method for their internal use by contracting with Oracle for a services engagement of two weeks or longer. Customers who have a signed contract with Oracle and meet the engagement qualification criteria, as published on the Customer tab of the OUM website, are permitted to download the current release of OUM for their perpetual use. They may obtain subsequent releases published during a renewable, three-year access period.

    To learn more about OUM, visit the OUM Blog, OUM on LinkedIn, or OUM on Twitter.

    Emily Chorba, Principal Product Manager, Oracle User Productivity Kit

    Read the article

  • Is there a distributed project management software like Redmine?

    - by Tobias Kienzler
    I am quite familiar with and love using git, among other reasons due to its distributed nature. Now I'd like to set up some similarly distributed (FOSS) project management software with features similar to what Redmine offers, such as:

    - Issue & time tracking, milestones
    - Gantt charts, calendar
    - git integration, maybe some automatic linking of commits and issues
    - Wiki (preferably with MathJax support)
    - Forum, news, notifications
    - Multiple projects

    However, I am looking for a solution that does not require a permanently accessible server, i.e. like in git, each user should have their own copy which can be easily synchronized with others. However, it should be possible to not have a copy of every project on every machine.

    Since trac uses multiple instances for multiple projects anyway, I was considering using that, but I neither know how well it adapts to simply putting its database in git (which would be the easiest way to handle the distribution, due to git being used anyway), nor does it include all of Redmine's features.

    So, can you recommend a distributed project management tool? If your suggestion is software that usually runs on a server, please include a description of the distribution method (e.g. whether simply putting the data in a git repository would do the trick), and if it's e.g. trac, please mention the plugins required to include the features mentioned.

    Read the article

  • Problems Using CloudFlare On Blogger

    - by the_archer
    Here's the situation. I got a TLD for my Blogger blog and set it up using the instructions from Blogger, which asks you to add two CNAME records:

    1. For the first CNAME, where it says Name, Label or Host enter "www" and where it says Destination, Target or Points To enter "ghs.google.com".
    2. For the second CNAME, enter "NHRILA4K2RJG" as the Name and "gv-GQMUMYGHAMJWECXFLJXVXABIV23C55JIPNIAVD5IGFSXT653O5GA.domainverify.googlehosted.com." as the Destination.

    I did that on my domain host, and everything was working smoothly. Here are the things that happened:

    - Typing myblog.blogspot.com in the address bar brought me to my new address www.mynewaddress.tld
    - Typing mynewaddress.tld brings me to www.mynewaddress.tld

    Now, I went through the instructions to set up CloudFlare and did everything as required. I saw that CloudFlare is active and working on my TLD www.mynewaddress.tld. However, when I type the blogspot address, i.e. myblog.blogspot.com, it shows a notice that the blog is not hosted on Blogger and that I should click "yes" to get redirected to the new website. However, the blog is still on Blogger.

    I think the problem might be with this particular CNAME record Google asks to create, which I did not find imported into the CloudFlare nameservers: the second CNAME, "NHRILA4K2RJG" pointing to "gv-GQMUMYGHAMJWECXFLJXVXABIV23C55JIPNIAVD5IGFSXT653O5GA.domainverify.googlehosted.com." So I created that CNAME and added it to the CloudFlare panel.

    My question is: is that what will help Google determine that my blog is still hosted on Blogger? If so, should I turn off CloudFlare for that particular CNAME record or turn it on? Any help is very much appreciated :)

    Read the article

  • Industrialized SOA – topic of Business Technology Magazine

    - by JuergenKress
    Although it has become quieter around SOA, the concept is not buried at all. On the contrary, over the years it has reached a new maturity level. Hypes such as Cloud Computing and Big Data have pushed SOA out of the headlines; however, "the new hypes have not replaced service orientation, but built on it." The authors of this edition rank among the SOA pioneers in Germany. They have gathered their collective knowledge for this issue and created a unique picture of the current state of SOA. According to them, SOA has developed evolutionarily towards industrialization, towards a holistic platform - and thus towards a new Industrialized SOA.

    Issue 3.12 of the BT magazine (in German!) is available as an iPad App (http://it-republik.de/business-technology/bt-magazin-ipad-app), via mail (http://it-republik.de/business-technology/bt-magazin-ausgaben/Industrialized-SOA-000516.html) or at the kiosk!

    The magazine is published by: Berthold Maier, Jürgen Kress, Hajo Normann, Danilo Schmiedel, Guido Schmutz, Bernd Trops, Clemens Utschig-Utschig, Torsten Winterberg

    For more information see www.bt-magazin.de

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Mix Forum

    Technorati Tags: Industrial SOA,Industrialized SOA,Berthold Maier,Hajo Normann,Danilo Schmiedel,Guido Schmutz,Bernd Trops,Clemens Utschig-Utschig,Torsten Winterberg,SOA Spezial II,Business Technology Magazin,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • New training on Power Pivot with recorded video courses

    - by Marco Russo (SQLBI)
    Alberto Ferrari and I started delivering training on Power Pivot in 2010, initially in classrooms and then also online. We also recorded videos for Project Botticelli, where you can find content about Microsoft tools and services for Business Intelligence. In recent months, we produced a recorded video course for people who want to learn Power Pivot without attending a scheduled course. We split the entire Power Pivot training into three editions, offering the more introductory modules at a lower price:

    - Beginner: introduces Power Pivot to any user who knows Excel and wants to create reports with more complex and large data structures than a single table.
    - Intermediate: improves skills on Power Pivot for Excel, introducing the DAX language and important features such as CALCULATE and Time Intelligence functions.
    - Advanced: includes in-depth coverage of the DAX language, which is required for writing complex calculations, and other advanced features of both Excel and Power Pivot.

    There are also two bundles that include two or three editions at a lower price. Most important, we have a special 40% launch discount on all published video courses using the coupon SQLBI-FRNDS-14, valid until August 31, 2014. Just follow the link to see a more complete description of the editions available and their discounted prices. Regular prices start at $29, which means that you can start a training course for less than $18 using the special promotion.

    P.S.: we recently launched a new responsive version of the SQLBI web site, and now we also have a page dedicated to all available videos about our sessions at conferences around the world. You can find more than 30 hours of free videos here: http://www.sqlbi.com/tv.

    Read the article

  • Any tips for designing the invoicing/payment system of a SaaS?

    - by Alexandru Trandafir Catalin
    The SaaS is for real estate companies, and they can pay a monthly fee that offers them 1000 publications, but they can also consume additional publications or other services that will appear on their bill as extras. On registration the user can choose one of the 5 available plans; the only difference between them is the quantity of publications the plan allows them to make. They can pass that limit if they wish, and additional payment will be required on the next bill.

    A publication means publishing one property for one day. Publishing one property for a whole month would be 30 publications, and publishing 5 properties for one day would be 5 publications.

    So basically the user can:

    - Make publications (already paid in the monthly fee, extra payment only if it passes the limit)
    - Highlight that publication (extra payment)
    - Publish on other websites or printed catalogues (extra payment)

    Doubts:

    - How to handle modifications in pricing plans? Let's say quantities change, or you want to offer some free stuff.
    - How to handle unpaid invoices? I mean, freeze the service until the payment has been done and then resume it.
    - When to make the invoices? The idea is to make one invoice for the monthly fee and a second invoice for the extra services that were consumed.
    - What payment methods to use? The method chosen for now is by bank account, with mobile phone validation via SMS. If the user doesn't pay, we call that phone and ask for payment.

    Any examples on billing online services will be welcome! Thanks!
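    For illustration only, here is a minimal relational sketch of the billing model described above; every table and column name is hypothetical, and a real system would also need payment, tax and currency details on top of this.

    -- One invoice per customer per billing period; the monthly fee and any
    -- extras (additional publications, highlights, catalogues) are lines.
    CREATE TABLE plans (
        plan_id       INT PRIMARY KEY,
        monthly_fee   DECIMAL(10,2) NOT NULL,
        included_pubs INT NOT NULL            -- e.g. 1000 publications
    );

    CREATE TABLE invoices (
        invoice_id    INT PRIMARY KEY,
        customer_id   INT NOT NULL,
        period_start  DATE NOT NULL,
        period_end    DATE NOT NULL,
        status        VARCHAR(20) NOT NULL    -- 'open', 'paid', 'overdue'
    );

    CREATE TABLE invoice_lines (
        invoice_id    INT NOT NULL REFERENCES invoices(invoice_id),
        line_no       INT NOT NULL,
        description   VARCHAR(200) NOT NULL,  -- 'monthly fee', 'extra publication', 'highlight', ...
        quantity      INT NOT NULL,
        unit_price    DECIMAL(10,2) NOT NULL,
        PRIMARY KEY (invoice_id, line_no)
    );

    With a structure like this, the "freeze the service" rule for unpaid invoices becomes a check on invoices.status before accepting new publications, and plan changes only affect rows in plans rather than historical invoice lines.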

    Read the article

  • SOA & E2.0 Partner Community Forum – registration is open!

    - by Jürgen Kress
    March 15th and 16th 2011, Utrecht, The Netherlands

    - Do you want to learn about how to sell the value of Fusion Middleware by combining SOA and E2.0 Solutions?
    - Do you want to meet with Oracle SOA and E2.0 Product Management?
    - Do you want to exchange your knowledge and learn from successful SOA, BPM, WebCenter and UCM implementations?
    - Do you want to understand Oracle's Fusion Applications Strategy?
    - Do you want to network within the Oracle SOA Partner Community and the Oracle E2.0 Partner Community?

    Then please register for the Oracle SOA and E2.0 Partner Community Forum that will be held in Utrecht, The Netherlands, on March 15th and 16th. Registration is free of charge. During this forum you can learn from success stories of partners, join different breakout sessions, gain information from other SOA and E2.0 partners and listen to a vibrant panel discussion. In addition to the SOA and E2.0 Partner Community Forum, you can participate in technical hands-on workshops on March 17th and 18th. The goal of these workshops is to prepare you for customer implementations. Please register by clicking here.

    ORACLE SOA and E2.0 PARTNER COMMUNITY FORUM
    Dates: Tuesday 15 March 2011, 11.00 - 18.00 hrs & Wednesday 16 March 2011, 09.00 - 15.45 hrs
    Place: Capgemini, Utrecht, Netherlands

    For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required)

    Blog Twitter LinkedIn Mix Forum Wiki Website

    Technorati Tags: SOA Partner Community Forum,SOA,E2.0,David Shaffer,Oracle,SOA Suite,OPN,Jürgen Kress

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here. In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted: “- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days. - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view. - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please your IDM suite is very good and, you simply aren't promoting it well enough” So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM Architect with several major projects under his belt. Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience? Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist so, I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the system's side like Sun resellers, Solaris stuff and even worked with Sun's Engineering in the making of an Hierarchical Storage Product (Sun CIS if you know it) and the application's side, mostly in LDAP and IDM. Over the years I've been doing support, service delivery and pre-sales / architecture design of IDM solutions in most big customers in Portugal, to name a few projects: - The first European deployment of Sun Access Manager (SAPO – Portugal Telecom) - The identity repository of 5/5 of the Biggest Portuguese banks - The Portuguese government federation of services project DP: OK, in your blog response, you mentioned 3 topics: 1. Using Oracle's standards based architecture; (you) were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles etc. JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions but, then reality kicks in. As an ISP I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps, everything was there. As a good part of the systems and apps were running on UNIX, then a connector became needed in order to have UNIX boxes to authenticate against AD. And, that strategy worked but, each new machine required the component to be installed, monitoring had to be made for that component and each new app had to be independently certified. 
    A self care user portal was an ongoing project, AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved so, on the go live, things weren't properly tested and a general outage followed. Several hours and 1 roll back later, everything was back working. But the ISP still had to change all of its applications to work with the new access methods and reset the effort spent on the self service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again.

    Simply by putting up an Oracle LDAP server in the middle and replicating the user info from the AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them, so integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self care portal, combined with a user workflow, so all the clearances had to be given before the account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer and even True64 systems were, for the first time, integrated with only 5 minutes of work by a junior system admin. True, all the Windows clients and MS apps still went to the AD for their authentication needs, so from the start everybody knew that they weren't 100% free of migration pains, but now they had a single point of problems to look at.

    If you're looking for numbers:
    - 500K directory entries (users)
    - 2-300 systems

    After the initial setup, I personally integrated about 20 systems / apps against LDAP in 1 day while being watched by the different IT teams. The internal IT staff did the rest.

    DP: 2. Using Federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example?

    JC: The market is dynamic. The company that's being bought today will be sold again tomorrow. Companies that spread into different markets may see the regulator forcing a sale of part of a company due to monopoly reasons, and companies that are in multiple countries have to comply with different legislations. Our job, as IT architects, while addressing the customers' and employees' authentication services, is quite hard and quite contrary. On one hand, we need to give all of our employees access to the relevant systems, apps and resources, and we already have marketing talking with us trying to find out who's a customer of the bought company but not of ours, to address. On the other hand, we have to do that and keep in mind we may have to break up all that effort, and that different countries' legislation may become a problem with a full integration plan.

    That's a job for user Federation. You don't want to be the one who's telling your President that he will sell that business unit without its customer database (making the deal worth a lot less) or that the buyer will take with him a copy of your entire customer database. Federation enables you to start controlling permissions for users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins?
Do you want, because of that, to have a dedicated system for their expenses reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a Federation trust circle and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view, you obviously control if a user has access to any particular application, either that user is in your local database or stored in a directory on the other side of the world. DP: 3. Cut response times of audit and security teams to 1/10. Is this a real number? Can you give an example? JC: No, I don't have any backing for this number. One of the companies I did system Administration for has a SOX compliance policy in place (I remind you that I live in Portugal so, this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample and we spend about 15 man/days gathering all the required info they ask. I did some work with Sun's Identity auditor and, from what I've been seeing, Oracle's product is even better and, I've seen that most of the information they ask would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from Identity Auditor team would do a much better job than me explaining this time savings. Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest – so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

    Read the article

  • Microphone - static background noise suppression

    - by user1873947
    My soundcard is a Realtek ALC 892. On Windows 7 I use the official Realtek drivers; on Linux I use PulseAudio (on Ubuntu 13.10). On both Windows and Linux, when I enable microphone boost +30 dB (required because my microphone is quiet), I get very annoying and loud background noise (I also confirmed the background noise with Audacity on both systems). However, the Windows Realtek drivers have a noise suppression option which works: after enabling it, Audacity shows no background noise and my ears also confirm that there is no background noise.

    My question is: how can I enable background noise suppression in ALSA/PulseAudio? Is there any module I can install, or maybe a setting that can be enabled in a config file? I can't find a solution for it, and this is the only thing that prevents me from switching to Linux completely - I talk using the microphone a lot, and on Windows the Realtek software removes the background noise completely while PulseAudio doesn't remove it, which means the recorded voice on Linux is very bad. I know I could buy a better soundcard and microphone, but as I said, the Windows Realtek drivers remove the noise at the software level in real time (i.e. no noise when talking on TeamSpeak3/Steam/whatever VoIP programme), so I hope that there is such an option on Linux as well. Thanks in advance!

    This is also crossposted on Unix StackExchange.

    Read the article

  • Ubuntu 12.04, xbmc, opengl, intel motherboard

    - by Sean Hagen
    I've got an HTPC that I built myself, with an Asus P5G41T-M motherboard. It's got an on-board HDMI port, and I've been using that with no problems. I started out with Mythbuntu (an older version), and recently updated to 12.04.1 LTS without any issues. I've been thinking about trying out XBMC for a while, and I decided to give it a go. Unfortunately, I seem to be running into quite a few issues. I got XBMC installed from the repos without any issues, but when I try to run it from a console, a box pops up with the following:

    XBMC needs hardware accelerated OpenGL rendering. Install an appropriate graphics driver.
    Please consult XBMC Wiki for supported hardware
    http://wiki.xbmc.org/?title=Supported_hardware

    In the console, it prints out the following:

    X Error of failed request: BadRequest (invalid request code or no such operation)
    Major opcode of failed request: 136 (GLX)
    Minor opcode of failed request: 19 (X_GLXQueryServerString)
    Serial number of failed request: 12
    Current serial number in output stream: 12

    When I run vainfo, I get this:

    libva: VA-API version 0.32.0
    libva: va_getDriverName() returns 0
    libva: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
    libva: va_openDriver() returns 0
    vainfo: VA-API version: 0.32 (libva 1.0.15)
    vainfo: Driver version: Intel i965 driver - 1.0.15
    vainfo: Supported profile and entrypoints
    VAProfileMPEG2Simple : VAEntrypointVLD
    VAProfileMPEG2Main : VAEntrypointVLD

    The file /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so exists:

    # ls -l /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
    -rw-r--r-- 1 root root 628728 Mar 29 2012 /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so

    And in /var/log/Xorg.0.log the following error pops up:

    GLX error: Can not get required symbols.

    I'm not really sure where to go from here. I've tried searching all over for how to fix this problem. I've done "apt-get --reinstall xserver-xorg" (as well as a few other video driver packages) a few times, and no change. Any help in getting this issue sorted out would be awesome.

    Read the article

  • Announcing Oracle-Demantra 12.2.1 Release

    - by user702295
    We are excited to announce Oracle Demantra 12.2.1 is now available for new and existing customers. All customers who are not incorporating Demantra with other VCP products are welcome to upgrade without any restrictions. Customers who are using Demantra in conjunction with VCP products will need to upgrade VCP to 12.2.1, which requires application and participation in the Oracle E-Business Suite early adopter program.

    Demantra 12.2.1 includes a wide array of new features driven by customer requirements and needs. Key features include:

    - Streamlined import and export from Microsoft Excel
    - Support for Gregorian Month data aggregation in a weekly system
    - Multilanguage support for eleven languages
    - Promotion Calendar Optimization
    - Enhanced integration with Advanced Planning Command Center (VCP 12.2.1 required)

    Demantra 12.2.1 will work with JD Edwards EnterpriseOne 9.1 using the AIA 11.4 for the Value Chain Planning Base Integration Pack. Demantra 12.2.1 will only work with VCP 12.2.1. Demantra 12.2.1 and VCP 12.2.1 will work with EBS 12.1.3 or EBS 12.2.1.

    Read the article

  • Opitz Consulting wins the Oracle SOA Partner Community Award

    - by Jürgen Kress
    Thanks for the nice post!

    An important award for the SOA community: the Oracle EMEA SOA Community Award. At Oracle OpenWorld, Oracle presented the "Oracle EMEA SOA Community Award" for "Outstanding Contribution" to OPITZ CONSULTING in 2010, for the third year in a row.

    Award as the first SOA Specialized Partner in Europe
    In 2010 the SOA specialists at OPITZ CONSULTING won the coveted SOA Partner Community Award. With it, the Oracle SOA team around Jürgen Kress honored the achievement of the first Oracle SOA Specialization in Germany, the community work, the delivery of SOA trainings (also for other Oracle partners) and the overall growth of the OPITZ CONSULTING SOA practice.

    Combination of EA, BPM and SOA recognized
    In 2009 an OPITZ CONSULTING Director Strategy & Innovation was honored: Dirk Stähler (picture left) received the award for his commitment to building up and advancing BPM and SOA topics. A particular reason for giving it to him was his work on effectively combining Enterprise Architecture, Business Process Management and SOA.

    Award for SOA community work and publications
    OPITZ CONSULTING Director Strategy & Innovation and Oracle ACE Director Torsten Winterberg (picture right) brought this award to Germany for OPITZ CONSULTING in 2008. He was recognized for his extraordinary commitment to establishing service-oriented architectures, including building up a SOA Special Interest Group, roundtable initiatives and extensive publications on SOA.

    For more information on the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required)

    Blog Twitter LinkedIn Mix Forum Wiki Website

    Technorati Tags: SOA Community,Opitz,Opitz Consulting,Torsten Winterberg,oracle,opn,Jürgen Kress

    Read the article

  • Responsive website VS mobile website

    - by Saif Bechan
    I am creating a new blog. Nowadays, especially for a blog, it's important that the website is accessible from all devices, so I have to make a choice on what to do. I have seen 2 options.

    Option 1 is to go with a normal fixed website, for example 960px wide (grid960), and have a separate mobile version for mobile users. This takes some more time, but then there are 2 good versions of the website.

    Option 2 I haven't seen a lot yet: creating an adaptive website, also called a responsive website. I am now looking into the LESS framework, where the website automatically switches to the required width. The only downside is that when the normal browser is resized, everything resizes. Another problem I found is that pinch-to-zoom on devices does not work.

    Now the question is, which one would you prefer for a blog: one that constantly changes layout when you move your device, or one where you have the choice to view mobile and normal? If there are any other options, please let me know.

    Read the article

  • Install modern browser on Maverick?

    - by feklee
    I tried installing Chrome from the official repository, but I get:

    $ sudo apt-get install google-chrome-stable
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     google-chrome-stable : Depends: gconf-service but it is not installable
                            Depends: libgconf-2-4 (>= 2.31.1) but it is not installable
                            Depends: libgtk2.0-0 (>= 2.24.0) but 2.22.0-0ubuntu1 is to be installed
                            Depends: libnspr4 (>= 1.8.0.10) but it is not installable
                            Depends: libnss3 (>= 3.14.3) but it is not installable
                            Depends: libstdc++6 (>= 4.6) but 4.5.1-7ubuntu2 is to be installed
                            Depends: libx11-6 (>= 2:1.4.99.1) but 2:1.3.3-3ubuntu1 is to be installed
    E: Broken packages

    Note: This is neither my system, nor do I want to do a full system upgrade. Any modern browser will do. Flash plugin is also needed, if not included in the browser.

    Read the article

  • Making a Statement: How to retrieve the T-SQL statement that caused an event

    - by extended_events
    If you've done any troubleshooting of T-SQL, you know that sooner or later, probably sooner, you're going to want to take a look at the actual statements you're dealing with. In extended events we offer an action (see the BOL topic that covers Extended Events Objects for a description of actions) named sql_text that seems like it is just the ticket. Well… not always – sounds like a good reason for a blog post.

    When is a statement not THE statement?
    The sql_text action returns the same information that is returned from DBCC INPUTBUFFER, which may or may not be what you want. For example, if you execute a stored procedure, the sql_text action will return something along the lines of "EXEC sp_notwhatiwanted", assuming that is the statement you sent from the client. Often times folks would like something more specific, like the actual statements that are being run from within the stored procedure or batch.

    Enter the stack
    Extended events offers another action, this one with the descriptive name of tsql_stack, that includes the sql_handle and offset information about the statements being run when an event occurs. With the sql_handle and offset values you can retrieve the specific statement you seek using the DMV dm_exec_sql_statement. The BOL topic for dm_exec_sql_statement provides an example for how to extract this information, so I'll cover the gymnastics required to get the sql_handle and offset values out of the tsql_stack data collected by the action. I'm the first to admit that this isn't pretty, but this is what we have in SQL Server 2008 and 2008 R2. We will be making it easier to get statement level information in the next major release of SQL Server.

    The sample code
    For this example I have a stored procedure that includes multiple statements and I have a need to differentiate between those two statements in my tracing. I'm going to track two events: module_end tracks the completion of the stored procedure execution and sp_statement_completed tracks the execution of each statement within a stored procedure. I'm adding the tsql_stack events (since that's the topic of this post) and the sql_text action for comparison sake. (If you have questions about creating event sessions, check out Pedro's post Introduction to Extended Events.)

    USE AdventureWorks2008
    GO

    -- Test SP
    CREATE PROCEDURE sp_multiple_statements
    AS
    SELECT 'This is the first statement'
    SELECT 'this is the second statement'
    GO

    -- Create a session to look at the sp
    CREATE EVENT SESSION track_sprocs ON SERVER
    ADD EVENT sqlserver.module_end (ACTION (sqlserver.tsql_stack, sqlserver.sql_text)),
    ADD EVENT sqlserver.sp_statement_completed (ACTION (sqlserver.tsql_stack, sqlserver.sql_text))
    ADD TARGET package0.ring_buffer
    WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
    GO

    -- Start the session
    ALTER EVENT SESSION track_sprocs ON SERVER
    STATE = START
    GO

    -- Run the test procedure
    EXEC sp_multiple_statements
    GO

    -- Stop collection of events but maintain ring buffer
    ALTER EVENT SESSION track_sprocs ON SERVER
    DROP EVENT sqlserver.module_end,
    DROP EVENT sqlserver.sp_statement_completed
    GO

    Aside: Altering the session to drop the events is a neat little trick that allows me to stop collection of events while keeping in-memory targets such as the ring buffer available for use. If you stop the session the in-memory target data is lost.

    Now that we've collected some events related to running the stored procedure, we need to do some processing of the data.
    I'm going to do this in multiple steps using temporary tables so you can see what's going on; kind of like having to "show your work" on a math test. The first step is to just cast the target data into XML so I can work with it. After that you can pull out the interesting columns; for our purposes I'm going to limit the output to just the event name, object name, stack and sql text. You can see that I've done a second CAST, this time of the tsql_stack column, so that I can further process this data.

    -- Store the XML data to a temp table
    SELECT CAST(t.target_data AS XML) xml_data
    INTO #xml_event_data
    FROM sys.dm_xe_sessions s INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
    WHERE s.name = 'track_sprocs'

    SELECT * FROM #xml_event_data

    -- Parse the column data out of the XML block
    SELECT
        event_xml.value('(./@name)', 'varchar(100)') as [event_name],
        event_xml.value('(./data[@name="object_name"]/value)[1]', 'varchar(255)') as [object_name],
        CAST(event_xml.value('(./action[@name="tsql_stack"]/value)[1]','varchar(MAX)') as XML) as [stack_xml],
        event_xml.value('(./action[@name="sql_text"]/value)[1]', 'varchar(max)') as [sql_text]
    INTO #event_data
    FROM #xml_event_data
        CROSS APPLY xml_data.nodes('//event') n (event_xml)

    SELECT * FROM #event_data

    event_name | object_name | stack_xml | sql_text
    sp_statement_completed | NULL | <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="4" offsetStart="94" offsetEnd="172" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" /> | EXEC sp_multiple_statements
    sp_statement_completed | NULL | <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="6" offsetStart="174" offsetEnd="-1" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" /> | EXEC sp_multiple_statements
    module_end | sp_multiple_statements | <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="0" offsetStart="0" offsetEnd="0" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" /> | EXEC sp_multiple_statements

    After parsing the columns it's easier to see what is recorded. You can see that I got back two sp_statement_completed events, which makes sense given the test procedure I'm running, and I got back a single module_end for the entire statement. As described, the sql_text isn't telling me what I really want to know for the first two events, so a little extra effort is required.
    -- Parse the tsql stack information into columns
    SELECT
        event_name,
        object_name,
        frame_xml.value('(./@level)', 'int') as [frame_level],
        frame_xml.value('(./@handle)', 'varchar(MAX)') as [sql_handle],
        frame_xml.value('(./@offsetStart)', 'int') as [offset_start],
        frame_xml.value('(./@offsetEnd)', 'int') as [offset_end]
    INTO #stack_data
    FROM #event_data
        CROSS APPLY stack_xml.nodes('//frame') n (frame_xml)

    SELECT * from #stack_data

    event_name | object_name | frame_level | sql_handle | offset_start | offset_end
    sp_statement_completed | NULL | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 94 | 172
    sp_statement_completed | NULL | 2 | 0x01000500CF3F0331B05EC084000000000000000000000000 | 0 | -1
    sp_statement_completed | NULL | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 174 | -1
    sp_statement_completed | NULL | 2 | 0x01000500CF3F0331B05EC084000000000000000000000000 | 0 | -1
    module_end | sp_multiple_statements | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 0 | 0
    module_end | sp_multiple_statements | 2 | 0x01000500CF3F0331B05EC084000000000000000000000000 | 0 | -1

    Parsing out the stack information doubles the fun and I get two rows for each event. If you examine the stack from the previous table, you can see that each stack has two frames and my query is parsing each event into frames, so this is expected. There is nothing magic about the two frames, that's just how many I get for this example, it could be fewer or more depending on your statements. The key point here is that I now have a sql_handle and the offset values for those handles, so I can use dm_exec_sql_statement to get the actual statement. Just a reminder, this DMV can only return what is in the cache – if you have old data it's possible your statements have been ejected from the cache. "Old" is a relative term when talking about caches and can be impacted by server load and how often your statement is actually used. As with most things in life, your mileage may vary.

    SELECT
        qs.*,
        SUBSTRING(st.text, (qs.offset_start/2)+1,
            ((CASE qs.offset_end
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.offset_end
             END - qs.offset_start)/2) + 1) AS statement_text
    FROM #stack_data AS qs
    CROSS APPLY sys.dm_exec_sql_text(CONVERT(varbinary(max),sql_handle,1)) AS st

    event_name | object_name | frame_level | sql_handle | offset_start | offset_end | statement_text
    sp_statement_completed | NULL | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 94 | 172 | SELECT 'This is the first statement'
    sp_statement_completed | NULL | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 174 | -1 | SELECT 'this is the second statement'
    module_end | sp_multiple_statements | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 0 | 0 | C

    Now that looks more like what we were after, the statement_text field is showing the actual statement being run when the sp_statement_completed event occurs. You'll notice that it's back down to one row per event, what happened to frame 2? The short answer is, "I don't know." In SQL Server 2008 nothing is returned from dm_exec_sql_statement for the second frame and I believe this to be a bug; this behavior has changed in the next major release and I see the actual statement run from the client in frame 2. (In other words I see the same statement that is returned by the sql_text action or DBCC INPUTBUFFER.) There is also something odd going on with frame 1 returned from the module_end event; you can see that the offset values are both 0 and only the first letter of the statement is returned.
    It seems like the offset_end should actually be -1 in this case and I'm not sure why it's not returning this correctly. This behavior is being investigated and will hopefully be corrected in the next major version. You can work around this final oddity by ignoring the offsets and just returning the entire cached statement.

    SELECT
        event_name,
        sql_handle,
        ts.text
    FROM #stack_data
        CROSS APPLY sys.dm_exec_sql_text(CONVERT(varbinary(max),sql_handle,1)) as ts

    event_name | sql_handle | text
    sp_statement_completed | 0x0300070025999F11776BAF006F9D00000100000000000000 | CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'
    sp_statement_completed | 0x0300070025999F11776BAF006F9D00000100000000000000 | CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'
    module_end | 0x0300070025999F11776BAF006F9D00000100000000000000 | CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'

    Obviously this gives more than you want for the sp_statement_completed events, but it's the right information for module_end. I leave it to you to determine when this information is needed and use the workaround when appropriate.

    Aside: You might think it's odd that I'm showing apparent bugs with my samples, but you're going to see this behavior if you use this method, so you need to know about it. I'm all about transparency.

    Happy Eventing
    - Mike

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell Blog This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server. Overview The goal of this blog is to demonstrate how we can automate through PowerShell connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We would configure TCP port that we would open (and close) though Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This will demonstrate how to take the advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect SQL Server in the same cloud service and in different cloud services.  Scenario 1: VMs connected through the same Cloud Service 2 Virtual machines configured in the same cloud service. Both VMs running different SQL Server instances on them. Both VMs configured with remote PowerShell turned on to be able to run PS and other commands directly into them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured in both VMs by default to be able to remote connect to them and check the connections to SQL instances for demo purposes only; but not actually required. Step 1 – Provision VMs and Configure Ports   Provision VM1; named DemoVM1 as follows (see examples screenshots below if using the portal):   Provision VM2 (DemoVM2) with PowerShell Remoting enabled and connected to DemoVM1 above (see examples screenshots below if using the portal): After provisioning of the 2 VMs above, here is the default port configurations for example: Step2 – Verify / Confirm the TCP port used by the database Engine By the default, the port will be configured to be 1433 – this can be changed to a different port number if desired.   1. RDP to each of the VMs created below – this will also ensure the VMs complete SysPrep(ing) and complete configuration 2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP - > IP Addresses   3. Confirm the port number used by SQL Server Engine; in this case 1433 4. Update from Windows Authentication to Mixed mode   5.       Restart SQL Server service for the change to take effect 6.       Repeat steps 3., 4., and 5. For the second VM: DemoVM2 Step 3 – Remote Powershell to DemoVM1 Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck) Your will then be prompted to enter the password. Step 4 – Open 1433 port in the Windows firewall netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow Output: netsh advfirewall firewall show rule name=DemoVM1Port Rule Name:                            DemoVM1Port ---------------------------------------------------------------------- Enabled:                              Yes Direction:                            In Profiles:                             Domain,Private,Public Grouping:                             LocalIP:                              Any RemoteIP:                             Any Protocol:                             TCP LocalPort:                            1433 RemotePort:                           Any Edge traversal:                       No Action:                               Allow Ok. 
    Step 5 – Now connect from DemoVM2 to the DB instance in DemoVM1.

    Step 6 – Close port 1433 in the Windows Firewall

        netsh advfirewall firewall delete rule name=DemoVM1Port

    Output:

        Deleted 1 rule(s).
        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port

        No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM2 to the DB instance in DemoVM1

    Because port 1433 was closed in the Windows Firewall on DemoVM1 in step 6, we can no longer connect from DemoVM2 to DemoVM1.

    Scenario 2: VMs provisioned in different cloud services

    Two virtual machines are configured in different cloud services, each running a different SQL Server instance. Both VMs have remote PowerShell enabled so that PowerShell and other commands can be run against them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or from on-premises machine(s). Note: RDP (Remote Desktop Protocol) is left enabled on both VMs by default so we can connect to them and check the connections to the SQL instances for demo purposes only; it is not actually needed.

    Step 1 – Provision new VM3

    Provision VM3 (DemoVM3). (The original post includes portal screenshots and the default port configuration after provisioning.)

    Step 2 – Add a public port to VM1 so that its DB instance can be reached from VM3

    Since VM3 and VM1 are not in the same cloud service, the full DNS address, including the public port, must be specified when connecting between the machines. We add public port 57000, linked to private port 1433, which will be used later to connect to the DB instance.

    Step 3 – Remote PowerShell to DemoVM1

        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows Firewall

        netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow

    Output:

        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port

        Rule Name:      DemoVM1Port
        ----------------------------------------------------------------------
        Enabled:        Yes
        Direction:      In
        Profiles:       Domain,Private,Public
        Grouping:
        LocalIP:        Any
        RemoteIP:       Any
        Protocol:       TCP
        LocalPort:      1433
        RemotePort:     Any
        Edge traversal: No
        Action:         Allow
        Ok.

    Step 5 – Now connect from DemoVM3 to the DB instance in DemoVM1

    RDP into VM3, launch SQL Server Management Studio, and connect to VM1's DB instance. You must specify the full server name using the DNS address and the public port number configured above.

    Step 6 – Close port 1433 in the Windows Firewall

        netsh advfirewall firewall delete rule name=DemoVM1Port

    Output:

        Deleted 1 rule(s).
        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port

        No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM3 to the DB instance in DemoVM1

    Because port 1433 was closed in the Windows Firewall on VM1 in step 6, we can no longer connect from VM3 to VM1.
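    As a quick aside, the same cross-cloud-service connection can be verified from a script on DemoVM3 instead of from Management Studio. The sketch below is only an illustration; the DNS name and public port 57000 come from this walkthrough, while the login and password are placeholders you would substitute.

        # Assumed values: DemoVM1's cloud service DNS name plus the public endpoint mapped to private port 1433.
        $serverName = "condemo.cloudapp.net,57000"          # "<dns name>,<public port>" client format
        $connString = "Server=tcp:$serverName;Database=master;User ID=<sql login>;Password=<password>;Encrypt=True;TrustServerCertificate=True"

        $conn = New-Object System.Data.SqlClient.SqlConnection $connString
        try {
            $conn.Open()
            $cmd = $conn.CreateCommand()
            $cmd.CommandText = "SELECT @@SERVERNAME"        # confirms which instance answered
            Write-Output ("Connected to: " + $cmd.ExecuteScalar())
        }
        finally {
            $conn.Close()
        }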
    Conclusion

    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL Server management tasks. In this blog we have demonstrated how to start a remote PowerShell session and how to re-configure the Virtual Machine firewall to allow (or disallow) SQL Server connections.

    References: SQL Server in Windows Azure Virtual Machines

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Continuous Integration using Docker

    - by Leon Mergen
    One of the main advantages of Docker is the isolated environment it brings, and I want to leverage that advantage in my continuous integration workflow. A "normal" CI workflow goes something like this:

    1. Poll repository for changes
    2. Pull from repository
    3. Install dependencies
    4. Run tests

    In a Dockerized workflow, it would be something like this:

    1. Poll repository for changes
    2. Pull from repository
    3. Build docker image
    4. Run docker image as container
    5. Run tests
    6. Kill docker container

    My problem is with the "run tests" step: since Docker is an isolated environment, intuitively I would like to treat it as one; this means the preferred method of communication is sockets. However, this only works well in certain situations (a webapp, for example). When testing different kinds of services (for example, a background service that only communicates with a database), a different approach is required. What is the best way to approach this problem? Is it a problem with my application's design, and should I design it in a more TDD, service-oriented way that always listens on some socket? Or should I just give up on isolation and do something like this:

    1. Poll repository for changes
    2. Pull from repository
    3. Build docker image
    4. Run docker image as container
    5. Open SSH session into container
    6. Run tests
    7. Kill docker container

    SSH'ing into the container seems like an ugly solution to me, since it requires deep knowledge of the contents of the container and thus breaks the isolation. I would love to hear SO's different approaches to this problem.
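    For the webapp case, the socket-based variant of the workflow described above might look roughly like the sketch below; the image tag, container name, port and health-check URL are placeholders for illustration only, not part of any real project.

        # Illustrative placeholders only: image tag, container name, published port and health-check URL.
        $imageTag  = "myapp:ci"
        $container = "myapp-ci"

        docker build -t $imageTag .                              # build the image from the checked-out repo
        docker run -d --name $container -p 8080:8080 $imageTag   # run it as a disposable container

        try {
            Start-Sleep -Seconds 5                               # crude wait for the app inside to come up
            # run the tests from outside, talking to the container over its published socket
            Invoke-WebRequest -Uri "http://localhost:8080/health" -UseBasicParsing | Out-Null
            # ...invoke the real test suite here, pointed at localhost:8080...
        }
        finally {
            docker rm -f $container                              # kill and remove the container afterwards
        }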

    Read the article
