Search Results

Search found 19460 results on 779 pages for 'local administrator'.

Page 175 of 779

  • Mavericks system Ruby and gem broken

    - by T1000
    When I tried to run ruby -v or gem -v (or any other command), I get:

        dyld: lazy symbol binding failed: Symbol not found: _ruby_run
          Referenced from: /usr/local/bin/ruby
          Expected in: /usr/lib/libruby.dylib
        dyld: Symbol not found: _ruby_run
          Referenced from: /usr/local/bin/ruby
          Expected in: /usr/lib/libruby.dylib

    This is after I ran rvm system to temporarily switch to the system default Ruby. RVM is working fine, but I have a special need to install a gem into the system Ruby and I can't because of this problem. Does anyone know why? It seems to be some kind of linking problem, but I don't know how to solve it. I ran which ruby and at this point it resolves to "/usr/local/bin/ruby". I checked the Ruby in "/usr/lib/" and it points to my system Ruby: "../../System/Library/Frameworks/Ruby.framework/Versions/Current/usr/lib/ruby". Any help would be appreciated.
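    A few commands that help separate the stray /usr/local/bin interpreter from the OS X system one (a diagnostic sketch only, not a fix; it assumes Apple's Ruby lives in its standard /usr/bin location):

        $ which -a ruby                     # list every ruby on the PATH; the broken one is /usr/local/bin/ruby
        $ otool -L /usr/local/bin/ruby      # show which dylibs that binary is actually linked against
        $ /usr/bin/ruby -v                  # Apple's system interpreter can be called directly
        $ /usr/bin/gem -v                   # ...and so can its gem command

    If /usr/bin/ruby and /usr/bin/gem work, the problem is likely confined to whatever installed the extra binary in /usr/local/bin.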

    Read the article

  • OFM 11g: OAM SSO for Forms and ADF Faces

    - by olaf.heimburger
    In my blog entry OFM 11g: Implementing OAM SSO with Forms we set the foundation for providing a complete Single Sign-On solution based on Oracle Access Manager (OAM). This foundation should now be used to combine Forms 11g and ADF Faces 11g applications with a transparent login.

    The Beginning

    Before we start, let's reconsider the requirements for the ultimate goal:

    - Access to the Forms 11g application must be authenticated by OAM (protected).
    - Access to the ADF Faces 11g application must be authenticated by OAM (protected).
    - Switching from one application to the other should not result in re-authentication (aka single sign-on).
    - User identity should be available to the application without any extra work in the application code.

    All of these are common requirements for a single sign-on solution. The challenge here is that Forms relies on Oracle AS SSO (OSSO, or "the old SSO") while ADF Faces is quite open and can be protected by either Oracle AS SSO or Oracle Access Manager SSO (OAM SSO, or "the modern SSO"). Both application types can also use their own login mechanisms.

    The Forms 11g Application

    To demonstrate the SSO functionality, we use the standard Forms test (/forms/frmservlet?form=test.fmx). Although this shows nothing specific to the Forms application, it is good enough to demonstrate that it is protected.

    The ADF Faces 11g Application

    With ADF 11g you can develop quite a number of useful Faces-based applications. Among many features, it comes with the ADF Security feature, which lets you protect your pages, regions, and even task flows from unauthenticated use in a declarative way. To demonstrate that functionality, a sample application with different access levels plus a login dialog is used. This application comes with a public page that has protected content (a button). Once you are authenticated for the application, the protected content and some personalisation (the user's name) are shown.

    Protecting Forms 11g

    As already explained in OFM 11g: Implementing OAM SSO with Forms, the easiest way to protect a Forms application is to configure it as an OSSO partner application, set up mod_osso, test it, migrate OSSO to OAM SSO with the Upgrade Agent, reconfigure mod_osso, and you are done. Sort of. By default OAM is configured to run in co-exist mode. This means that a user who is already logged into an OAM SSO application still has to re-authenticate to the Forms application. To avoid this, you must disable the co-exist mode, for example by using WLST and issuing disableCoexistMode on the OAM server.

    Protecting ADF Faces 11g

    To protect an ADF Faces 11g application we have to consider two scenarios:

    - Use an HTTPD server in front of WLS
    - Use WLS without an HTTPD server

    Both scenarios have their pros and cons; we won't go into the details and will just describe how to configure each.

    Scenario 1: HTTPD Server with WLS

    In this scenario we set up the environment in a few steps:

    - Configure a WebGate at OAM. This configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you.
    - Install the OAM WebGate into the HTTPD server. The type of WebGate you need to install depends on your HTTPD server. With Oracle HTTP Server 11g you can use the latest OAM 11g WebGate. With other HTTPD servers you must resort to OAM 10g WebGates. An OAM 11g WebGate can use the pre-created configuration files supplied during the WebGate configuration at OAM. An OAM 10g WebGate asks for the specific configuration and verifies it during installation.
    - Configure the WLS plugin to forward the requests to WLS. Again, depending on your HTTPD server you have different plugins to forward requests to WLS. With OHS 11g you can use the pre-installed mod_wl_ohs plugin. Its configuration is quite simple and straightforward.
    - Configure an OAM SSPI Provider as an IdentityAsserter in WLS to retrieve the user identifier. This configuration is quite important, as it retrieves the user identifier for the next step. If you have a SOA Suite installation within your OFM_HOME, the necessary software is already installed and you only need to set up your Security Realm within WLS. You can do this by pointing your browser to the WLS Console, logging in as administrator, selecting the Security Realm (usually myrealm), and selecting Providers. We add the OAMIdentityAsserter as the first SSPI Provider. It is important that the Control Flag is set to SUFFICIENT. Every other setting can be left as is; no changes are necessary here.
    - Configure an Identity Provider (OID) to get the real user identity. In OFM 11g: Implementing OAM SSO with Forms we configured an OID as Identity Store. To get the user identity we need to configure the same OID as an SSPI Provider for WLS. This will retrieve the real user information from OID and create the JAAS Subject and Principals to be used by any application within WLS. Again, you can do this by pointing your browser to the WLS Console, logging in as administrator, selecting the Security Realm (usually myrealm), and selecting Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After saving this setup, we need to configure this provider by setting the Provider Specific details to access OID.

    Scenario 2: WLS only

    This scenario is a bit easier but requires more work in the WLS setup:

    - Configure a WebGate at OAM. This configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you.
    - Configure the OAM SSPI Provider as an IdentityAuthenticator to authenticate and set the user identifier. When using the OAM SSPI Provider as OAMAuthenticator, we create it with the Control Flag set to SUFFICIENT. After saving it, the Provider Specific settings must be configured to allow the OAM SSPI Provider to connect to the OAM server.
    - Configure an Identity Provider (OID) to get the real user identity. Again, you can do this by pointing your browser to the WLS Console, logging in as administrator, selecting the Security Realm (usually myrealm), and selecting Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After saving this setup, we need to configure this provider by setting the Provider Specific details to access OID.

    Configure the ADF 11g Application for OAM

    Actually, there are no changes to be made within the ADF application. We only need to set the value CLIENT_CERT in the <auth-method> tag inside the <login-config> tag of the web.xml file (a sketch follows below).

    Testing

    To test the configuration, simply point your browser to either of the two application URLs. OAM should kick in and redirect you to the OAM login page. After you have entered the correct credentials, access to the URL is granted and you will see the application. Enjoy!
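    For reference, the web.xml change mentioned under "Configure the ADF 11g Application for OAM" looks roughly like this (a minimal sketch; <auth-method> inside <login-config> is the standard servlet element for this setting):

        <login-config>
          <!-- let the container delegate authentication to the configured OAM identity asserter -->
          <auth-method>CLIENT_CERT</auth-method>
        </login-config>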

    Read the article

  • Oracle Enterprise Manager users present today at Oracle Users Forum

    - by Anand Akela
    Oracle Users Forum starts in a few minutes at Moscone West, Levels 2 & 3. There are more than a hundred Oracle user sessions during the day, and many Oracle Enterprise Manager users are presenting today as well. In addition, we will have a Twitter chat today from 11:30 AM to 12:30 PM with IOUG leaders, Enterprise Manager SIG contributors and many speakers. You can participate in the chat using the hash tag #em12c on Twitter.com or by going to tweetchat.com/room/em12c (needs a Twitter credential to participate). Feel free to join IOUG and Enterprise Manager team members at the User Group Pavilion on the 2nd floor, Moscone West; RSVP at http://tweetvite.com/event/IOUG. Don't miss the Oracle OpenWorld welcome keynote by Larry Ellison this evening at 5 PM. Here is the complete list of Oracle Enterprise Manager sessions during the Oracle Users Forum:

        Time | Session Title | Speakers | Location
        8:00AM - 8:45AM | UGF4569 - Oracle RAC Migration with Oracle Automatic Storage Management and Oracle Enterprise Manager 12c | VINOD Emmanuel - Database Engineering, Dell, Inc.; Wendy Chen - Sr. Systems Engineer, Dell, Inc. | Moscone West - 2011
        8:00AM - 8:45AM | UGF10389 - Monitoring Storage Systems for Oracle Enterprise Manager 12c | Anand Ranganathan - Product Manager, NetApp | Moscone West - 2016
        9:00AM - 10:00AM | UGF2571 - Make Oracle Enterprise Manager Sing and Dance with the Command-Line Interface | Ray Smith - Senior Database Administrator, Portland General Electric | Moscone West - 2011
        10:30AM - 11:30AM | UGF2850 - Optimal Support: Oracle Enterprise Manager 12c Cloud Control, My Oracle Support, and More | April Sims - DBA, Southern Utah University | Moscone West - 2011
        12:30PM - 2:00PM | UGF5131 - Migrating from Oracle Enterprise Manager 10g Grid Control to 12c Cloud Control | Leighton Nelson - Database Administrator, Mercy | Moscone West - 2011
        2:15PM - 3:15PM | UGF6511 - Database Performance Tuning: Get the Best out of Oracle Enterprise Manager 12c Cloud Control | Mike Ault - Oracle Guru, TEXAS MEMORY SYSTEMS INC; Tariq Farooq - CEO/Founder, BrainSurface | Moscone West - 2011
        3:30PM - 4:30PM | UGF4556 - Will It Blend? Verifying Capacity in Server and Database Consolidations | Jeremiah Wilton - Database Technology, Blue Gecko / DatAvail | Moscone West - 2018
        3:30PM - 4:30PM | UGF10400 - Oracle Enterprise Manager 12c: Monitoring, Metric Extensions, and Configuration Best Practices | Kellyn Pot'Vin - Sr. Technical Consultant, Enkitec | Moscone West - 2011

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • How to undo an hg merge? (I think.)

    - by Grumdrig
    I'm new to collaborating with Mercurial. My situation: another programmer changed rev 1 of a file to replace 4-space indents with 2-space indents (i.e. changed every line). Call that rev 2, pushed to the remote repo. On top of rev 1 I've committed substantive code changes in my local workspace. Call that rev 3. I've hg pull-ed and hg merge-d without a clear idea of what was going on. The conflicts are myriad and not really substantive. So I really wish I'd changed my local repo to 2-space indents before merging; then the merge would be trivial (I'm supposing). But I can't seem to back up. I think I need hg update -r 3, but it says abort: outstanding uncommitted merges. How can I undo the merge, change the spacing in my local repo, and re-merge?
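    For what it's worth, a clean update is the usual way to abandon an uncommitted merge (a sketch only; note that --clean throws away any uncommitted changes in the working copy):

        $ hg update --clean 3    # back to your own head (rev 3), the pending merge state is discarded
        # ...then reformat the indentation to 2 spaces, commit, and pull/merge again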

    Read the article

  • Why do I get this error when I try to push my SQLite3 to Postgresql (via Taps) on Cedar Stack?

    - by rhodee
    I've done quite a bit of research on Heroku Dev Center and I am now looking to the community for help. Here is my problem: I cannot push my db to Heroku's Cedar stack. I am trying to migrate a SQLite database to PostgreSQL via the Taps gem. When I am ready to deploy I run:

        bundle install --without production
        heroku run db:push

    I get the following result:

        Running db:seed attached to terminal... up, run.17
        sh: db:seed: not found

    And when I run the migration (heroku run rake db:migrate) I get the following:

        Running rake db:migrate attached to terminal... up, run.18
        rake aborted!
        No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
        /usr/local/lib/ruby/1.9.1/rake.rb:2367:in `raw_load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:2007:in `block in load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:2058:in `standard_exception_handling'
        /usr/local/lib/ruby/1.9.1/rake.rb:2006:in `load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:1991:in `run'
        /usr/local/bin/rake:31:in `<main>'

    Every time I push to Heroku (git push heroku master) it fails because my Gemfile is attempting to install the sqlite3 gem, even though it's inside the development and test groups in my Gemfile. My database.yml production environment still points to the sqlite adapter even after I have run the following command successfully:

        heroku config:add BUNDLE_WITHOUT="test development" --app app_name_on_heroku

    Out of ideas. Please help. If it's useful I can post the contents of my Gemfile, heroku ps and logs. Cheers

    UPDATE: After following @John's direction I now receive the following terminal message:

        Sending schema
        Schema:        100% |==========================================| Time: 00:00:07
        Sending indexes
        schema_migrat: 100% |==========================================| Time: 00:00:00
        Sending data
        4 tables, 6 records
        schema_migrat:   0% |                                          | ETA:  --:--:--
        Saving session to push_201111070749.dat.. !!!
Caught Server Exception HTTP CODE: 500 Taps Server Error: LoadError: no such file to load -- sequel/adapters/ And the following warnings: ["/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:in require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:inblock in tsk_require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:72:in block in check_requiring_thread'", "<internal:prelude>:10:insynchronize'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:69:in check_requiring_thread'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:intsk_require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:25:in adapter_class'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:54:inconnect'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:119:in connect'", "/app/lib/taps/db_session.rb:14:inconn'", "/app/lib/taps/server.rb:91:in block in <class:Server>'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:in block in route'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:ininstance_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:in route_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:500:inblock (2 levels) in route!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:in catch'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:inblock in route!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:in each'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:inroute!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:601:in dispatch!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:inblock in call!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in instance_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:inblock in invoke'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in catch'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:ininvoke'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:in call!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:399:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/auth/basic.rb:25:in call'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:inblock in call'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:1005:in synchronize'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:incall'", "/home/heroku_rack/lib/static_assets.rb:9:in call'", "/home/heroku_rack/lib/last_access.rb:15:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:47:in block in call'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:ineach'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:in call'", "/home/heroku_rack/lib/date_header.rb:14:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/builder.rb:77:in call'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:76:inblock in pre_process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in catch'", 
"/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:inpre_process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:57:in process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:42:inreceive_data'", "/app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in run_machine'", "/app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:inrun'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in start'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/server.rb:156:instart'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/controllers/controller.rb:80:in start'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:177:inrun_command'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:143:in run!'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/bin/thin:6:in'", "/usr/ruby1.9.2/bin/thin:19:in load'", "/usr/ruby1.9.2/bin/thin:19:in'"]

    Read the article

  • The SSL Bindings Issue–Web Pro Week 6 of 52

    - by OWScott
    We have a chicken-and-egg issue with HTTPS bindings.  This video, week 6 of a 52-week series for the web administrator, covers why HTTPS bindings don't support host headers the same way HTTP bindings do.  In this video I show the issue and use Wireshark to see it in action. If you haven't seen the other weeks, you can find past and future videos on the Web Pro Series landing page. The SSL Bindings Issue

    Read the article

  • m2eclipse workspace resolution

    - by Bartosz Radaczynski
    Hi all, I am using m2eclipse for managing Maven projects in Eclipse. It seems that in the previous release I was using (0.9.8) the workspace resolution did not work at all, and right now it still does not work quite as I would expect. Namely, when the "resolve dependencies from workspace" setting for a project is not checked, the project turns red and cannot be built. The message says: artifact xxx x.y-SNAPSHOT cannot be found in the local repository (or something to that effect). The trouble is that m2eclipse is putting information about the workspace project into my local repo. Is there a way to change this behaviour? P.S. The workaround for this is to close the xxx project; m2eclipse then resolves the dependency to whatever version I had previously in the local repository (i.e. the non-snapshot version).
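    One other common workaround when workspace resolution is switched off is to install the snapshot into the local repository yourself (a sketch; "xxx" stands for the dependency project named above):

        $ cd xxx
        $ mvn clean install    # puts xxx x.y-SNAPSHOT into ~/.m2/repository so the other project can resolve it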

    Read the article

  • Oracle Database to SQL Server Comparisons

    One of the initial obstacles a database administrator encounters is learning where the features of a familiar system reside on a less familiar one. Steve Callan approaches this feature comparison by taking SQL Server and mapping its features back into Oracle.

    Read the article

  • getting firewatir to run on mac osx: jssh problems

    - by z3cko
    I am trying to get FireWatir to run on Mac OS X Leopard. I have Firefox 3.6rc2 installed, but running the simplest script does not work:

        require 'rubygems'
        require 'firewatir'
        ff = FireWatir::Firefox.new
        ff.goto("http://mail.yahoo.com")

    I am getting the following error:

        /usr/local/lib/ruby/gems/1.8/gems/firewatir-1.6.5/lib/firewatir/firefox.rb:237:in `set_defaults': Unable to connect to machine : 127.0.0.1 on port 9997. Make sure that JSSh is properly installed and Firefox is running with '-jssh' option (Watir::Exception::UnableToStartJSShException)
            from /usr/local/lib/ruby/gems/1.8/gems/firewatir-1.6.5/lib/firewatir/firefox.rb:131:in `initialize'
            from ./watir-test.rb:12:in `new'
            from ./watir-test.rb:12

    Even when I start Firefox with the -jssh option (/Applications/Firefox.app/Contents/MacOS/firefox-bin -jssh), I get an error, although a different one:

        /usr/local/lib/ruby/gems/1.8/gems/firewatir-1.6.5/lib/firewatir/firefox.rb:125:in `initialize': Firefox is running without -jssh (RuntimeError)

    Is there any tutorial or hint to get FireWatir actually running on Mac OS X?
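    A quick way to check whether the JSSh extension is actually installed and listening (a diagnostic sketch; port 9997 is taken from the error message above):

        $ /Applications/Firefox.app/Contents/MacOS/firefox-bin -jssh &
        $ telnet 127.0.0.1 9997    # a JSSh banner means the extension is installed; "connection refused" suggests it is not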

    Read the article

  • How to gracefully exit SLIME and EMACS

    - by Gregory Gelfond
    Hi all, I have a question regarding how to gracefully exit SLIME when I quit Emacs. Here is the relevant portion of my config file:

        ;; SLIME configuration
        (setq inferior-lisp-program "/usr/local/bin/sbcl")
        (add-to-list 'load-path "~/Scripts/slime/")
        (require 'slime)
        (slime-setup)

        ;; configure SLIME to gracefully quit when emacs
        ;; terminates
        (defun slime-smart-quit ()
          (interactive)
          (when (slime-connected-p)
            (if (equal (slime-machine-instance) "Gregory-Gelfonds-MacBook-Pro.local")
                (slime-quit-lisp)
              (slime-disconnect)))
          (slime-kill-all-buffers))
        (add-hook 'kill-emacs-hook 'slime-smart-quit)

    To my knowledge this should automatically kill SLIME and its associated processes whenever I exit Emacs. However, every time I exit, I still get the prompt:

        Proc          Status   Buffer           Command
        ----          ------   ------           -------
        SLIME Lisp    open     *cl-connection*  (network stream connection to 127.0.0.1)
        inferior-lisp run      *inferior-lisp*  /usr/local/bin/sbcl

        Active processes exist; kill them and exit anyway? (yes or no)

    Can someone shed some insight as to what I'm missing from my config? Thanks in advance.

    Read the article

  • Unable to load libsctp.so for non root user

    - by sankoz
    I have a Linux application that uses the libsctp.so library. When I run it as root, it runs fine. But when I run it as an ordinary user, it gives the following error:

        error while loading shared libraries: libsctp.so.1: cannot open shared object file: No such file or directory

    But when I do ldd as an ordinary user, it is able to see the library:

        [sanjeev@devtest6 src]$ ldd myapp
        ...
        ...
        libsctp.so.1 => /usr/local/lib/libsctp.so.1 (0x00d17000)
        [sanjeev@devtest6 src]$ ls -lL /usr/local/lib/libsctp.so.1
        -rwxrwxrwx 1 root root 27430 2009-06-29 11:26 /usr/local/lib/libsctp.so.1
        [sanjeev@devtest6 src]$

    What could be wrong? How is ldd able to find libsctp.so, yet when actually running the app it is not able to find the same library?
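    A few standard checks for "ldd sees it but the loader does not" situations (a diagnostic sketch, not a diagnosis; the paths come from the ldd output above):

        $ echo $LD_LIBRARY_PATH                  # is /usr/local/lib only coming from the environment?
        $ grep -r local /etc/ld.so.conf /etc/ld.so.conf.d/ 2>/dev/null
        $ sudo ldconfig                          # refresh the loader cache if /usr/local/lib is configured there
        $ ldconfig -p | grep libsctp             # confirm the library is actually in the cache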

    Read the article

  • Using LogParser - part 3

    - by fatherjack
    This is the third part in a series of articles about using LogParser, written specifically from a DBA point of view, though any system administrator can put LogParser to many uses that make life easier. In Part 1 we downloaded and installed the software and ran a very basic query. In Part 2 we ran some queries and filtered rows in or out according to our requirements. In this part we will look at how to collect data from more than one location and from different sources...(read more)

    Read the article

  • How to make QT support HTML 5 database?

    - by Mickey Shine
    I am using Qt 4.7.1 and embedded a webview in my app. But I got the following error when trying to visit http://webkit.org/demos/sticky-notes/ to test the HTML 5 database feature:

        Failed to open the database on disk. This is probably because the version was bad or there is not enough space left in this domain's quota

    I compiled my static Qt library with the following command:

        configure --prefix=/usr/local/qt-static-release-db --accessibility --multimedia --audio-backend --svg --webkit --javascript-jit --script --scripttools --declarative --release -nomake examples -nomake demos --static --openssl -I /usr/local/ssl/include -L /usr/local/ssl/lib -confirm-license -sql-qsqlite -sql-qmysql -sql-qodbc

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premises physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server.

    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:

    - Perform an inventory
    - View the Windows Azure VM Readiness results and report
    - Collect performance data for determining VM sizing
    - View the Windows Azure Capacity results and report

    Perform an inventory

    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard.
    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.
    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. The discovery methods are:
       - Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
       - Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0-based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.
       - System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
       - Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly if many IP addresses aren't being used within the range.
       - Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
       - Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.
    4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it needs to be a local administrator. I have entered my domain account, which is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect to and inventory computers in your environment, you have to configure your machines for inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.
    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next.
    6. On the Enter Computers Manually page, click Create to bring up a dialog to enter one or more computer names.
    7. On the Summary page, confirm your settings and then click Finish. After clicking Finish the inventory process will start.

    Windows Azure Readiness results and report

    After the inventory has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the Operating System and SQL Server assessment and provides a recommendation on how to resolve the issue if a component is not supported.

    Collect performance data

    Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.

    Windows Azure Capacity results and report

    After the performance metrics are collected, the Azure VM Capacity tile will display the number of virtual machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the suggested virtual machine sizes.

    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Useful References:

    - Windows Azure Homepage
    - How to guides for Windows Azure Virtual Machines
    - Provisioning a SQL Server Virtual Machine on Windows Azure
    - Windows Azure Pricing

    Peter Saddow, Senior Program Manager – MAP Toolkit Team

    Read the article

  • dlopen() with dependencies between libraries

    - by peastman
    My program uses plugins that are loaded dynamically with dlopen(). The locations of these plugins can be arbitrary, so they aren't necessarily in the library path. In some cases, one plugin needs to depend on another plugin. So if A and B are dynamic libraries, I'll first load A, then load B, which uses symbols defined in A. My reading of the dlopen() documentation implies that if I specify RTLD_GLOBAL this should all work. But it doesn't. When I call dlopen() on the second library, it fails with an error saying it couldn't find the first one (which had already been loaded with dlopen()):

        Error loading library /usr/local/openmm/lib/plugins/libOpenMMRPMDOpenCL.dylib: dlopen(/usr/local/openmm/lib/plugins/libOpenMMRPMDOpenCL.dylib, 9): Library not loaded: libOpenMMOpenCL.dylib
          Referenced from: /usr/local/openmm/lib/plugins/libOpenMMRPMDOpenCL.dylib
          Reason: image not found

    How can I make this work?
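    One thing worth noting: on Mac OS X the "Library not loaded: libOpenMMOpenCL.dylib" message comes from dyld resolving the second library's recorded install name, which is a separate step from the RTLD_GLOBAL symbol lookup. A diagnostic sketch (the /usr/local/openmm/lib location of the first library is an assumption):

        $ otool -L /usr/local/openmm/lib/plugins/libOpenMMRPMDOpenCL.dylib    # shows the install name it records for its dependency
        $ export DYLD_LIBRARY_PATH=/usr/local/openmm/lib:$DYLD_LIBRARY_PATH   # one workaround: let dyld find the dependency by name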

    Read the article

  • Fleximage gem on Ruby 1.9

    - by jaycode
    Running a Rails application with Fleximage on Ruby 1.8.7 works fine, but on Ruby 1.9 it returns an error:

        /usr/local/lib/ruby/gems/1.9.1/gems/fleximage-1.0.4/lib/fleximage/model.rb:340: [BUG] Segmentation fault
        ruby 1.9.1p376 (2009-12-07 revision 26041) [i386-darwin10.0.0]

        -- control frame ----------
        c:0060 p:---- s:0295 b:0295 l:000294 d:000294 CFUNC  :read
        c:0059 p:0060 s:0291 b:0291 l:000290 d:000290 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/fleximage-1.0.4/lib/fleximage/model.rb:340
        c:0058 p:0084 s:0285 b:0285 l:000275 d:000284 BLOCK  /usr/local/lib/ruby/gems/1.9.1/gems/activerecord-2.3.5/lib/active_record/base.rb:2746
        c:0057 p:---- s:0281 b:0281 l:000280 d:000280 FINISH
        c:0056 p:---- s:0279 b:0279 l:000278 d:000278 CFUNC  :each

    followed by a further very long error trace. Fleximage is installed as a gem; keeping it as a plugin returns a different set of errors, so I won't go into that. How do I fix this?

    Read the article

  • *** glibc detected *** perl: munmap_chunk(): invalid pointer

    - by sid_com
    At the end of a script's output (parsing an XHTML site with XML::LibXML::Reader) I get this:

        *** glibc detected *** perl: munmap_chunk(): invalid pointer: 0x0000000000b362e0 ***
        ======= Backtrace: =========
        /lib64/libc.so.6[0x7fb84952fc76]
        /usr/lib64/libxml2.so.2[0x7fb848b75e17]
        /usr/lib64/libxml2.so.2(xmlHashFree+0xa6)[0x7fb848b691b6]
        ...
        ...
        ======= Memory map: ========
        00400000-0053d000 r-xp 00000000 08:01 182002    /usr/local/bin/perl
        0073c000-0073d000 r--p 0013c000 08:01 182002    /usr/local/bin/perl
        0073d000-00741000 rw-p 0013d000 08:01 182002    /usr/local/bin/perl
        00741000-00c60000 rw-p 00000000 00:00 0         [heap]
        7fb8482cd000-7fb8482e3000 r-xp 00000000 08:01 2404    /lib64/libgcc_s.so.1
        ...
        ...

    Is this due to a bug?
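    A common way to localize this kind of glibc abort is to run the script under valgrind (a sketch; the script name is a placeholder):

        $ valgrind --num-callers=30 perl myscript.pl    # reports the invalid free/munmap and where the pointer came from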

    Read the article

  • Using MobileMe idisk as a git repository

    - by Ben Guest
    I am trying to use git and MobileMe as a version control system for a personal project I am working on across several computers. So far I have done the following. Created an empty bare repository on my local computer:

        $ mkdir myproject.git
        $ cd myproject.git
        $ git init --bare
        $ git update-server-info

    I then copied the myproject.git directory to the MobileMe iDisk and synced my computer with MobileMe. I then switched to the directory where my project lives on my local machine, set the remote origin, and tried to push the local repository to MobileMe:

        $ cd myproject
        $ git remote add origin https://<username>@idisk.me.com/<username>/myproject.git/
        $ git push --all

    I am then asked for my password twice. The first time it is the MobileMe password; any other password gets an error. After entering the second password, and believe me I've tried everything, the terminal just hangs. So what am I doing wrong? (Besides trying to use MobileMe as a git repository.) Thanks, Ben.
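    One alternative worth trying is to push to the iDisk's local mount point instead of the WebDAV URL (a sketch; the /Volumes path is an assumption, so check Finder for the actual mount point of your iDisk):

        $ git remote set-url origin /Volumes/<username>/myproject.git
        $ git push origin --all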

    Read the article

  • Linq to SQL problem

    - by Ronnie Overby
    I have a local collection of recordIds (integers). I need to retrieve records that have every one of their child records' ids in that local collection. Here is my query:

        public List<int> OwnerIds { get; private set; }
        ...
        filteredPatches = from p in filteredPatches
                          where OwnerIds.All(o => p.PatchesOwners.Select(x => x.OwnerId).Contains(o))
                          select p;

    I am getting this error:

        Local sequence cannot be used in Linq to SQL implementation of query operators except the Contains() operator.

    I get that .All() isn't supported by Linq to SQL, but is there a way to do what I am trying to do?

    Read the article

  • Understanding IIS Bindings

    - by OWScott
    Internet Information Services (IIS) uses four decision points for site bindings: the protocol, port, IP address and host header.  This video lesson walks through the bindings and shows how each one is used. This is part 5 of a 52-week series on various topics for the web administrator. Other weeks include:

    - Week 1 – Ping and Tracert
    - Week 2 – Understanding DNS zone records
    - Week 3 – Nslookup – the Ultimate DNS Troubleshooting Tool
    - Week 4 – Three Tricks for Capturing Command Line Output

    Understanding IIS Bindings

    Read the article

  • Problem using yaml-cpp on OS X

    - by Thomas
    So I'm having trouble compiling my application, which uses yaml-cpp. I'm including "yaml.h" in my source files (just like the examples in the yaml-cpp wiki), but when I try compiling the application I get the following error:

        g++ -c -o entityresourcemanager.o entityresourcemanager.cpp
        entityresourcemanager.cpp:2:18: error: yaml.h: No such file or directory
        make: *** [entityresourcemanager.o] Error 1

    My makefile looks like this:

        CC = g++
        CFLAGS = -Wall
        APPNAME = game
        UNAME = uname

        OBJECTS := $(patsubst %.cpp,%.o,$(wildcard *.cpp))

        mac: $(OBJECTS)
            $(CC) `pkg-config --cflags --libs sdl` `pkg-config --cflags --libs yaml-cpp` $(CFLAGS) -o $(APPNAME) $(OBJECTS)

    pkg-config --cflags --libs yaml-cpp returns:

        -I/usr/local/include/yaml-cpp -L/usr/local/lib -lyaml-cpp

    and yaml.h is indeed located in /usr/local/include/yaml-cpp. Any idea what I could do? Thanks
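    The failing line in the output appears to be make's implicit compile rule, which never sees the pkg-config include path; only the link step in the mac target does. A sketch of the compile command that would find yaml.h:

        $ g++ -c `pkg-config --cflags yaml-cpp` -o entityresourcemanager.o entityresourcemanager.cpp

    In the makefile itself, adding something like CXXFLAGS += `pkg-config --cflags yaml-cpp` should let the implicit .cpp rule pick up the header path (the implicit rule uses CXXFLAGS/CPPFLAGS rather than the CFLAGS variable defined here).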

    Read the article

  • A Perspective on Database Performance Tuning

    Fundamentally, database performance tuning is done for two basic reasons: to reduce response time and to reduce resource usage, either or both of which can apply in any given situation. Julian Stuhler looks at database performance tuning, and why it remains one of the most important topics for any DBA, developer or systems administrator.

    Read the article

  • window.open("c:\test.txt") from Silverlight

    - by queen3
    I'm trying to open a local file from Silverlight. I tried Window.Navigate("c:\test.pdf", "_blank") and invoking JavaScript like this: window.open("c:\test.pdf", "_blank"). Both give "Access is denied". However, it works in plain HTML when I do <input type="button" value="test" onclick="window.open('c:\test.pdf', '_blank')" />. Is this a Silverlight security restriction? Can I open a local file in a browser from a Silverlight application? The reason behind this is that users store local paths and want to open those files from the app.

    Read the article
