Search Results

Search found 15651 results on 627 pages for 'setup'.


  • Newbie: installing and upgrading a Python module

    - by iamgopal
    I downloaded and installed a Python library via setup.py (python2.5 setup.py install). The source has since changed and a newer version of the library is available. I originally cloned it with Mercurial and installed from the clone; I have now updated the repository. How do I use the newer version? Do I overwrite the installation by simply running setup.py install again?
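    A minimal sketch of the usual flow, assuming a plain distutils package cloned with Mercurial (the clone path below is hypothetical); re-running the install simply copies the new files over the old ones:

        # upgrade_lib.py - pull the latest revision and re-install (illustrative sketch)
        import subprocess

        REPO = "/home/me/src/somelib"  # hypothetical path to the Mercurial clone

        # Pull remote changesets and update the working copy.
        subprocess.check_call(["hg", "pull", "-u"], cwd=REPO)

        # Re-running install overwrites the files previously copied into
        # site-packages; note that distutils does not remove files the new
        # version no longer ships, so stale modules can linger.
        subprocess.check_call(["python2.5", "setup.py", "install"], cwd=REPO)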

    Read the article

  • JUnit 4 test suite and individual test classes

    - by Hypnus
    I have a JUnit 4 test suite with @BeforeClass and @AfterClass methods that perform setup/teardown for the test classes it contains. I also need to run the test classes by themselves, and for that each test class needs its own setup/teardown (@BeforeClass and @AfterClass or something like that). The catch: when I run the suite, I do not want the per-class setup/teardown to execute before and after each test class; I only want the suite's setup/teardown to execute (once). Is that possible? Thanks in advance.
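    One commonly suggested approach is a guard flag: the suite performs the shared setup once and sets a static flag, and each class's own class-level setup only runs when the flag is unset. Sketched below in Python's unittest purely for illustration (the flag and setup functions are invented names); the same shape maps directly onto JUnit's @BeforeClass/@AfterClass with a static boolean.

        # suite_guard.py - run-once suite fixture with per-class fallback (sketch)
        import unittest

        SUITE_MANAGED = False  # the suite flips this so classes skip their own setup

        def shared_setup():
            print("expensive setup")

        def shared_teardown():
            print("expensive teardown")

        class TestA(unittest.TestCase):
            @classmethod
            def setUpClass(cls):
                if not SUITE_MANAGED:   # standalone run: do the setup ourselves
                    shared_setup()

            @classmethod
            def tearDownClass(cls):
                if not SUITE_MANAGED:
                    shared_teardown()

            def test_example(self):
                self.assertEqual(1 + 1, 2)

        def run_as_suite():
            # Suite run: setup/teardown happen exactly once, around all classes.
            global SUITE_MANAGED
            shared_setup()
            SUITE_MANAGED = True
            try:
                unittest.main(exit=False)
            finally:
                SUITE_MANAGED = False
                shared_teardown()

        if __name__ == "__main__":
            run_as_suite()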

    Read the article

  • Is it possible to build an exe on Vista and deploy it on XP using py2exe?

    - by dfens
    I have created a program using Python on Windows Vista, but I want to deploy it on Windows XP. Is it necessary to make a new build on Windows XP, or is there a way to make one build that works on both systems? EDIT (and EDIT 2: even a very simple program does not work): my setup.py is

        from distutils.core import setup
        import py2exe

        setup(console=['orderer.py'])

    Using a dependency explorer I checked that the dependencies are:

        msvcr90.dll
        kernel32.dll + ntdll.dll
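    For what it's worth, a py2exe build of pure-Python code is not inherently tied to the Windows version it was built on; the usual XP failure is the Visual C++ 9.0 runtime (msvcr90.dll), which must be present on the target machine, for example via the VC++ 2008 redistributable. A hedged variant of the setup script: dll_excludes is a real py2exe option, but the exact DLL list below is an assumption you may need to tune.

        # setup.py - py2exe build sketch aimed at XP compatibility
        from distutils.core import setup
        import py2exe  # registers the "py2exe" distutils command

        setup(
            console=["orderer.py"],
            options={
                "py2exe": {
                    # Leave runtime/system DLLs out of the bundle so the
                    # target machine's own copies are used (assumed list).
                    "dll_excludes": ["MSVCP90.dll", "w9xpopen.exe"],
                }
            },
        )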

    Read the article

  • Arduino: Putting servos in my class causes them to rotate all the way to one side

    - by user2526712
    I am trying to create a new class that controls two servos. My code compiles just fine; however, when I run it, the servos just turn all the way to one direction. This seems to happen when I instantiate the class (the constructor attaches the servos in the class to pins). In my class's header file I have [UPDATED]:

        #ifndef ServoController_h
        #define ServoController_h

        #include "Arduino.h"
        #include <Servo.h>

        class ServoController {
          public:
            ServoController(int rotateServoPin, int elevateServoPin);
            void rotate(int degrees);
            void elevate(int degrees);
          private:
            Servo rotateServo;
            Servo elevateServo;
            int elevationAngle;
            int azimuthAngle;
        };

        #endif

    The code so far for my class:

        #include "Arduino.h"
        #include "ServoController.h"

        ServoController::ServoController(int rotateServoPin, int elevateServoPin) {
          azimuthAngle = 0;
          elevationAngle = 0;
          elevateServo.attach(elevateServoPin);
          rotateServo.attach(rotateServoPin);
        }

        void ServoController::rotate(int degrees) {
          //TO DO
          rotateServo.write(degrees);
        }

        void ServoController::elevate(int degrees) {
          //TO DO
          elevateServo.write(degrees);
        }

    And finally, my Arduino sketch so far is just:

        #include <ServoController.h>
        #include <Servo.h>

        ServoController sc(2, 3);

        void setup() { }

        void loop() { }

    I'm pretty sure the circuit I am using is fine, since if I do not use my class and just use the Servo library directly in my Arduino file, the servos move correctly. Any ideas why this might happen?

    [UPDATE] I actually got this working. In my constructor, I removed the lines that attach the servos to pins. Instead, I added another method to my class which does the attachment:

        ServoController::ServoController(int rotateServoPin, int elevateServoPin) {
          azimuthAngle = 0;
          elevationAngle = 0;
          // elevateServo.attach(elevateServoPin);
          // rotateServo.attach(rotateServoPin);
        }

        void ServoController::attachPins(int rotateServoPin, int elevateServoPin) {
          azimuthAngle = 0;
          elevationAngle = 0;
          elevateServo.attach(elevateServoPin);
          rotateServo.attach(rotateServoPin);
        }

    I then call this in my sketch's setup() function:

        void setup() {
          sc.attachPins(2, 3);
        }

    It seems that attaching my servos outside of the setup() function causes the problem.

    [UPDATE July 27, 9:13 PM] I verified something with another test: I created a new sketch where I attached a servo before setup():

        #include <Servo.h>

        Servo servo0;
        servo0.attach(2);

        void setup() { }

        void loop() {  // this function runs repeatedly after setup() finishes
          servo0.write(90);
          delay(2000);
          servo0.write(135);
          delay(2000);
          servo0.write(45);
          delay(2000);
        }

    When I try to compile, Arduino throws an error: "testservotest:4: error: expected constructor, destructor, or type conversion before '.' token". So there was an error, but it was not thrown when the attach method was called from a class. Thanks very much.

    Read the article

  • Windows service is not working

    - by prateeksaluja20
    I made a Windows service in Visual Studio 2008 in C#. Inside the service I wrote only a single line of code:

        try
        {
            System.Diagnostics.Process.Start(@"E:\Users\Sk\Desktop\category.txt");
        }
        catch { }

    Then I added the project installer, set the serviceProcessInstaller1 Account property to LocalSystem, and set the serviceInstaller1 StartType property to Automatic. I built the project and the build was successful. After that I added another project, a setup project: I added the primary project output and added the custom action "Primary output from DemoWindowsService (Active)", then built the setup. The setup built successfully, so I installed it, went to Services, and started the service. The service started properly, but it does not perform the task. I have checked that the path is correct, and I also tried System.Diagnostics.Process.Start(@"E:\Windows\system32\notepad.exe"), but the result is the same. I have tried a lot but have not found the answer, so please help me solve this problem.

    Read the article

  • How to search for a file or directory on a Linux (Ubuntu) machine

    - by Jury A
    I created an EC2 instance (Ubuntu, 64-bit) and attached a volume from a publicly available snapshot to the instance. I successfully mounted the volume. I am supposed to be able to run a script from this attached volume using the following steps, as explained in the tutorial:

        Log in to your virtual machine.
        mkdir /space
        mount /dev/sdf1 /space
        cd /space
        ./setup-script

    The problem is that when I try ./setup-script I get the following message:

        -bash: ./setup-script: No such file or directory

    What is the problem, and how can I search for setup-script across the whole machine? I'm not very familiar with Linux. Please help. For more details about the issue, look at my previous post: Error when mounting drive.
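    As an aside, the classic way to search the whole machine is find / -name setup-script 2>/dev/null. The same walk sketched in Python, in case the shell tool is unfamiliar:

        # find_file.py - crude recursive search, roughly `find / -name setup-script`
        import os

        def find(name, root="/"):
            # onerror swallows permission errors; scanning "/" needs root and time
            for dirpath, dirnames, filenames in os.walk(root, onerror=lambda e: None):
                if name in filenames or name in dirnames:
                    print(os.path.join(dirpath, name))

        if __name__ == "__main__":
            find("setup-script")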

    Read the article

  • How can I remove Ruby, Rails & MySQL to start the installation process again?

    - by ben
    I've been trying to get set up with Ruby on Rails today, but I think I've followed some bad instructions along the way, and nothing seems to work. I've now borrowed the book "Agile Web Development with Rails, Third Edition" from a friend and want to follow the setup instructions in it. First, do I need to remove what I set up previously? If so, how do I do it? I can't seem to find instructions anywhere. I'm running OS X 10.6.1, so I know it came with some of this already set up, but I've been installing customized stuff over the top, which I think I'll have to remove. Thanks for reading!

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction

    This post shows how to use the Oracle Solaris 11 Automated Install (AI) server to automate the installation of Solaris 11 Zones. I will demonstrate how to set up the Automated Install server to provide a hands-off installation process for a Global Zone and two Non-Global Zones located on the same system.

    Architecture layout:

    Figure 1. Architecture layout

    Prerequisite

    Set up the Automated Install (AI) server using the instructions in "How to Set Up Automated Installation Services for Oracle Solaris 11". The first step in this setup is creating two Solaris 11 Zones configuration files.

    Step 1: Create the Solaris 11 Zones configuration files

    The Solaris Zones configuration files should be in the format produced by the zonecfg export command:

        # zonecfg -z zone1 export > /var/tmp/zone1
        # cat /var/tmp/zone1
        create -b
        set brand=solaris
        set zonepath=/rpool/zones/zone1
        set autoboot=true
        set ip-type=exclusive
        add anet
        set linkname=net0
        set lower-link=auto
        set configure-allowed-address=true
        set link-protection=mac-nospoof
        set mac-address=random
        end

    Create a backup copy of this file under a different name, for example zone2:

        # cp /var/tmp/zone1 /var/tmp/zone2

    Modify the second configuration file with the zone2 configuration information; you should change the zonepath, for example:

        set zonepath=/rpool/zones/zone2

    Step 2: Copy and share the Zones configuration files

    Create the NFS directory for the Zones configuration files:

        # mkdir /export/zone_config

    Share the directory for the Zones configuration files:

        # share -o ro /export/zone_config

    Copy the Zones configuration files into the NFS-shared directory:

        # cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config

    Verify that the NFS share has been created using the following command:

        # share
        export_zone_config      /export/zone_config     nfs     sec=sys,ro

    Step 3: Add the Global Zone as a client to the install service

    Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service:

        # installadm create-client -e "0:14:4f:2:a:19" -n s11x86service

    You can verify the client creation using the following command:

        # installadm list -c
        Service Name  Client Address     Arch   Image Path
        ------------  --------------     ----   ----------
        s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service

    We can see the client's install service name (s11x86service), MAC address (00:14:4F:02:0A:19), and architecture (i386).

    Step 4: Global Zone manifest setup

    First, get a list of the installation services and the manifests associated with them:

        # installadm list -m
        Service Name   Manifest        Status
        ------------   --------        ------
        default-i386   orig_default   Default
        s11x86service  orig_default   Default

    Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command:

        # installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml

    Create a backup copy of this file under a different name, for example orig_default2.xml, and edit the copy:

        # cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml

    Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone, and the source attribute to specify the location of the config file for the zone. The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones, zone1 and zone2; replace server_ip with the IP address of the NFS server.

        <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
        <auto_install>
          <ai_instance>
            <target>
              <logical>
                <zpool name="rpool" is_root="true">
                  <filesystem name="export" mountpoint="/export"/>
                  <filesystem name="export/home"/>
                  <be name="solaris"/>
                </zpool>
              </logical>
            </target>
            <software type="IPS">
              <source>
                <publisher name="solaris">
                  <origin name="http://pkg.oracle.com/solaris/release"/>
                </publisher>
              </source>
              <software_data action="install">
                <name>pkg:/entire@latest</name>
                <name>pkg:/group/system/solaris-large-server</name>
              </software_data>
            </software>
            <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>
            <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>
          </ai_instance>
        </auto_install>

    The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service:

        # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest

    You can verify the manifest creation using the following command:

        # installadm list -n s11x86service -m
        Service/Manifest Name  Status   Criteria
        ---------------------  ------   --------
        s11x86service
           orig_default        Default  None
           gzmanifest          Inactive None

    We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service.

    Step 5: Non-Global Zone manifest setup

    The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used. It is available at /usr/share/auto_install/manifest/zone_default.xml, and it is what we use in this example:

        # cat /usr/share/auto_install/manifest/zone_default.xml
        <?xml version="1.0" encoding="UTF-8"?>
        <!--
         Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
        -->
        <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
        <auto_install>
          <ai_instance name="zone_default">
            <target>
              <logical>
                <zpool name="rpool">
                  <!--
                    Subsequent <filesystem> entries instruct an installer
                    to create the following ZFS datasets:

                        <root_pool>/export         (mounted on /export)
                        <root_pool>/export/home    (mounted on /export/home)

                    Those datasets are part of the standard environment
                    and should always be created.

                    In rare cases, if there is a need to deploy a zone
                    without these datasets, either comment out or remove
                    the <filesystem> entries. In such a scenario, it must
                    also be assured that in the case of non-interactive
                    post-install configuration, creation of the initial
                    user account is disabled in the related system
                    configuration profile. Otherwise the installed zone
                    would fail to boot.
                  -->
                  <filesystem name="export" mountpoint="/export"/>
                  <filesystem name="export/home"/>
                  <be name="solaris">
                    <options>
                      <option name="compression" value="on"/>
                    </options>
                  </be>
                </zpool>
              </logical>
            </target>
            <software type="IPS">
              <destination>
                <image>
                  <!-- Specify locales to install -->
                  <facet set="false">facet.locale.*</facet>
                  <facet set="true">facet.locale.de</facet>
                  <facet set="true">facet.locale.de_DE</facet>
                  <facet set="true">facet.locale.en</facet>
                  <facet set="true">facet.locale.en_US</facet>
                  <facet set="true">facet.locale.es</facet>
                  <facet set="true">facet.locale.es_ES</facet>
                  <facet set="true">facet.locale.fr</facet>
                  <facet set="true">facet.locale.fr_FR</facet>
                  <facet set="true">facet.locale.it</facet>
                  <facet set="true">facet.locale.it_IT</facet>
                  <facet set="true">facet.locale.ja</facet>
                  <facet set="true">facet.locale.ja_*</facet>
                  <facet set="true">facet.locale.ko</facet>
                  <facet set="true">facet.locale.ko_*</facet>
                  <facet set="true">facet.locale.pt</facet>
                  <facet set="true">facet.locale.pt_BR</facet>
                  <facet set="true">facet.locale.zh</facet>
                  <facet set="true">facet.locale.zh_CN</facet>
                  <facet set="true">facet.locale.zh_TW</facet>
                </image>
              </destination>
              <software_data action="install">
                <name>pkg:/group/system/solaris-small-server</name>
              </software_data>
            </software>
          </ai_instance>
        </auto_install>

    (Optional) We can customize the default AI manifest for Zones. Create a backup copy of the file under a different name, for example zone_default2.xml, and edit the copy:

        # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml

    Edit the copy (/var/tmp/zone_default2.xml). The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest:

        # installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2"

    Note: Do not use the following elements or attributes in a non-global zone AI manifest:

        the auto_reboot attribute of the ai_instance element
        the http_proxy attribute of the ai_instance element
        the disk child element of the target element
        the noswap attribute of the logical element
        the nodump attribute of the logical element
        the configuration element

    Step 6: Global Zone profile setup

    We are going to create a global zone configuration profile which includes the host information, for example host name, IP address, and name services:

        # sysconfig create-profile -o /var/tmp/gz_profile.xml

    You need to provide the host information, for example:

        default router
        root password
        DNS information

    The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration.

    Figure 2. Profile creation menu

    You can validate the profile using the following command:

        # installadm validate -n s11x86service -P /var/tmp/gz_profile.xml
        Validating static profile gz_profile.xml...  Passed

    Next, associate the profile with the install service. In our case, use the following syntax:

        # installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile

    You can verify the profile creation using the following command:

        # installadm list -n s11x86service -p
        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           gz_profile         None

    We can see that gz_profile has been created and associated with the s11x86service install service.

    Step 7: Set up the Solaris Zones configuration profiles

    This step is similar to the Global Zone profile creation in Step 6:

        # sysconfig create-profile -o /var/tmp/zone1_profile.xml
        # sysconfig create-profile -o /var/tmp/zone2_profile.xml

    You can validate the profiles using the following commands:

        # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml
        Validating static profile zone1_profile.xml...  Passed
        # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml
        Validating static profile zone2_profile.xml...  Passed

    Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile:

        # installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1

    The following example does the same for zone2:

        # installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2

    You can verify the profile creation using the following command:

        # installadm list -n s11x86service -p
        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           zone1_profile      zonename = zone1
           zone2_profile      zonename = zone2
           gz_profile         None

    We can see that we have three profiles in the s11x86service install service:

        Global Zone   gz_profile
        zone1         zone1_profile
        zone2         zone2_profile

    Step 8: Global Zone setup

    Associate the global zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (global zone), where gzmanifest is the name of the manifest, gz_profile is the name of the configuration profile, mac="0:14:4f:2:a:19" is the client (global zone) MAC address, and s11x86service is the install service name:

        # installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

    You can verify the manifest and profile association using the following command:

        # installadm list -n s11x86service -p -m
        Service/Manifest Name  Status   Criteria
        ---------------------  ------   --------
        s11x86service
           gzmanifest                   mac  = 00:14:4F:02:0A:19
           orig_default        Default  None

        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           gz_profile         mac      = 00:14:4F:02:0A:19
           zone2_profile      zonename = zone2
           zone1_profile      zonename = zone1

    Step 9: Provision the host with the Non-Global Zones

    The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system):

    Figure 3. Network boot

    Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled "Automated Install", and then press Enter. The reason we need to do this is to prevent a system from being automatically re-installed if it were to be booted from the network accidentally.

    Figure 4. GRUB menu

    What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files sufficient to run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed. You can list the status of all the Solaris Zones using the following command:

        # zoneadm list -civ

    Once the Zones are in the running state you can log in to a Zone using the following command:

        # zlogin zone1

    Troubleshooting automated installations

    If an installation to a client system fails, you can find the client log at /system/volatile/install_log. Note: Zones are not installed if any of the following errors occurs:

        a zone config file is not syntactically correct
        a collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed
        required datasets are not configured in the global zone

    For more troubleshooting information see "Installing Oracle Solaris 11 Systems".

    Conclusion

    This paper demonstrated the benefits of using the Automated Install server to simplify Non-Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.

    Read the article

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I ssh to the server directly, I can connect, since my id_rsa.pub key is in the server's authorized keys. But when I deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I don't understand why I can ssh to the server but not deploy to the same server.

        $ cap deploy:setup
        "no seed data"
        triggering start callbacks for `deploy:setup'
        * 13:42:18 == Currently executing `multistage:ensure'
        *** Defaulting to `development'
        * 13:42:18 == Currently executing `development'
        * 13:42:18 == Currently executing `deploy:setup'
        triggering before callbacks for `deploy:setup'
        * 13:42:18 == Currently executing `db:configure_mongoid'
        * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config"
        servers: ["dev1.noob.com", "176.9.24.217"]
        Password:

    Cap script:

        # gem install capistrano capistrano-ext capistrano_colors
        begin; require 'capistrano_colors'; rescue LoadError; end
        require "bundler/capistrano"

        # RVM bootstrap
        # $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require 'rvm/capistrano'
        set :rvm_ruby_string, 'ruby-1.9.2-p290'
        set :rvm_type, :user # or :user

        # Application setup
        default_run_options[:pty] = true   # allow pseudo-terminals
        ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from the git repository)
        ssh_options[:port] = 22

        set :ip, "dev1.noob.com"
        set :application, "flyingbird"
        set :repository, "repo-path"
        set :scm, :git
        set :branch, fetch(:branch, "master")
        set :deploy_via, :remote_cache
        set :rails_env, "production"
        set :use_sudo, false
        set :scm_username, "user"
        set :user, "user1"
        set(:database_username) { application }
        set(:production_database) { application + "_production" }
        set(:staging_database) { application + "_staging" }
        set(:development_database) { application + "_development" }

        role :web, ip                    # Your HTTP server, Apache/etc
        role :app, ip                    # This may be the same as your `Web` server
        role :db, ip, :primary => true   # This is where Rails migrations will run

        # Use multi-staging
        require "capistrano/ext/multistage"
        set :stages, ["development", "staging", "production"]
        set :default_stage, rails_env

        before "deploy:setup", "db:configure_mongoid"

        # Uncomment if you use any of these databases
        after "deploy:update_code", "db:symlink_mongoid"
        after "deploy:update_code", "uploads:configure_shared"
        after "uploads:configure_shared", "uploads:symlink"
        after 'deploy:update_code', 'bundler:symlink_bundled_gems'
        after 'deploy:update_code', 'bundler:install'
        after "deploy:update_code", "rvm:trust_rvmrc"

        # Use this to update crontab if you use the 'whenever' gem
        # after "deploy:symlink", "deploy:update_crontab"

        if ARGV.include?("seed_data")
          after "deploy", "db:seed"
        else
          p "no seed data"
        end

        # Custom tasks to handle resque and redis restart
        before "deploy", "deploy:stop_workers"
        after "deploy", "deploy:restart_redis"
        after "deploy", "deploy:start_workers"
        after "deploy", "deploy:cleanup"

        # Create symlink for public uploads
        namespace :uploads do
          task :symlink do
            run <<-CMD
              rm -rf #{release_path}/public/uploads &&
              mkdir -p #{release_path}/public &&
              ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads
            CMD
          end

          task :configure_shared do
            run "mkdir -p #{shared_path}/public"
            run "mkdir -p #{shared_path}/public/uploads"
          end
        end

        namespace :rvm do
          desc 'Trust rvmrc file'
          task :trust_rvmrc do
            run "rvm rvmrc trust #{current_release}"
          end
        end

        namespace :db do
          desc "Create mongoid.yml in shared path"
          task :configure_mongoid do
            db_config = <<-EOF
        defaults: &defaults
          host: localhost

        production:
          <<: *defaults
          database: #{production_database}

        staging:
          <<: *defaults
          database: #{staging_database}
        EOF
            run "mkdir -p #{shared_path}/config"
            put db_config, "#{shared_path}/config/mongoid.yml"
          end

          desc "Make symlink for mongoid.yml"
          task :symlink_mongoid do
            run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml"
          end

          desc "Fill the database with seed data"
          task :seed do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed"
          end
        end

        namespace :bundler do
          desc "Symlink bundled gems on each release"
          task :symlink_bundled_gems, :roles => :app do
            run "mkdir -p #{shared_path}/bundled_gems"
            run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle"
          end

          desc "Install bundled gems"
          task :install, :roles => :app do
            run "cd #{release_path} && bundle install --deployment"
          end
        end

        namespace :deploy do
          task :start, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Restart the app"
          task :restart, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Stop the workers"
          task :stop_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers"
          end

          desc "Restart Redis server"
          task :restart_redis do
            "/etc/init.d/redis-server restart"
          end

          desc "Start the workers"
          task :start_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers"
          end
        end

    Read the article

  • Windows 2008 R2 SMB / CIFS Logging to diagnose Brother MFC Network Scanning

    - by Steven Potter
    I am attempting to set up network scanning on a Brother MFC-9970CDW printer. According to the Brother documentation, the printer can connect to any CIFS network share. I applied all of the appropriate settings on the printer, but I get a "sending error" when I try to scan a document. When I look at the logs of the 2008 R2 server that I am attempting to connect to, I can see in the security log where the printer successfully authenticates, but nothing else is logged. I assume that immediately after the authentication, the printer makes a CIFS request and some sort of error occurs, but I can't seem to find any way to log this information to find out what is going on. Is it possible to get Windows 2008 to log SMB/CIFS traffic?

    Follow-up: I installed Microsoft Netmon and captured the packets associated with the transaction:

        510 3:04:28 PM 7/9/2012 34.4277743 System 192.168.1.134 192.168.1.10 SMB SMB:C; Negotiate, Dialect = NT LM 0.12 {SMBOverTCP:30, TCP:29, IPv4:22}
        511 3:04:28 PM 7/9/2012 34.4281246 System 192.168.1.10 192.168.1.134 SMB SMB:R; Negotiate, Dialect is NT LM 0.12 (#0), SpnegoToken (1.3.6.1.5.5.2) {SMBOverTCP:30, TCP:29, IPv4:22}
        519 3:04:29 PM 7/9/2012 34.8986214 System 192.168.1.134 192.168.1.10 SMB SMB:C; Session Setup Andx, NTLM NEGOTIATE MESSAGE {SMBOverTCP:30, TCP:29, IPv4:22}
        520 3:04:29 PM 7/9/2012 34.8989310 System 192.168.1.10 192.168.1.134 SMB SMB:R; Session Setup Andx, NTLM CHALLENGE MESSAGE - NT Status: System - Error, Code = (22) STATUS_MORE_PROCESSING_REQUIRED {SMBOverTCP:30, TCP:29, IPv4:22}
        522 3:04:29 PM 7/9/2012 34.9022870 System 192.168.1.134 192.168.1.10 SMB SMB:C; Session Setup Andx, NTLM AUTHENTICATE MESSAGE, Version:v2, Domain: CORP, User: PRINTSUPOFF, Workstation: BRN001BA9AD1FE6 {SMBOverTCP:30, TCP:29, IPv4:22}
        523 3:04:29 PM 7/9/2012 34.9032421 System 192.168.1.10 192.168.1.134 SMB SMB:R; Session Setup Andx {SMBOverTCP:30, TCP:29, IPv4:22}
        525 3:04:29 PM 7/9/2012 34.9051855 System 192.168.1.134 192.168.1.10 SMB SMB:C; Tree Connect Andx, Path = \\192.168.1.10\IPC$, Service = ????? {SMBOverTCP:30, TCP:29, IPv4:22}
        526 3:04:29 PM 7/9/2012 34.9053083 System 192.168.1.10 192.168.1.134 SMB SMB:R; Tree Connect Andx, Service = IPC {SMBOverTCP:30, TCP:29, IPv4:22}
        528 3:04:29 PM 7/9/2012 34.9073573 System 192.168.1.134 192.168.1.10 DFSC DFSC:Get DFS Referral Request, FileName: \\192.168.1.10\NSCFILES, MaxReferralLevel: 3 {SMB:33, SMBOverTCP:30, TCP:29, IPv4:22}
        529 3:04:29 PM 7/9/2012 34.9152042 System 192.168.1.10 192.168.1.134 SMB SMB:R; Transact2, Get Dfs Referral - NT Status: System - Error, Code = (549) STATUS_NOT_FOUND {SMB:33, SMBOverTCP:30, TCP:29, IPv4:22}
        531 3:04:29 PM 7/9/2012 34.9169738 System 192.168.1.134 192.168.1.10 SMB SMB:C; Tree Disconnect {SMBOverTCP:30, TCP:29, IPv4:22}
        532 3:04:29 PM 7/9/2012 34.9170688 System 192.168.1.10 192.168.1.134 SMB SMB:R; Tree Disconnect {SMBOverTCP:30, TCP:29, IPv4:22}

    As you can see, the DFS referral fails and the transaction is shut down. I can't see any reason for the DFS referral to fail. The only reference I can find online is: https://bugzilla.samba.org/show_bug.cgi?id=8003. Anyone have any ideas for a solution?

    Read the article

  • Setting up VPN client: L2TP with IPsec

    - by zachar
    I have to connect to a VPN server. It works on Windows, but not in Ubuntu 10.04. The number of options is confusing for me. This is the input that I have:

        IP address of the VPN
        pre-shared key to authenticate
        information that MS-CHAPv2 is used
        login and password for the VPN

    I tried to achieve this with Network Manager and with L2TP IPsec VPN Manager 1.0.9, but failed. Here is some logged information from L2TP IPsec VPN Manager 1.0.9:

        Nov 09 15:21:46.854 ipsec_setup: Stopping Openswan IPsec...
        Nov 09 15:21:48.088 Stopping xl2tpd: xl2tpd.
        Nov 09 15:21:48.132 ipsec_setup: Starting Openswan IPsec U2.6.23/K2.6.32-49-generic...
        Nov 09 15:21:48.308 ipsec__plutorun: Starting Pluto subsystem...
        Nov 09 15:21:48.318 ipsec__plutorun: adjusting ipsec.d to /etc/ipsec.d
        Nov 09 15:21:48.338 ipsec__plutorun: 002 added connection description "my_vpn_name"
        Nov 09 15:21:48.348 ipsec__plutorun: 003 NAT-Traversal: Trying new style NAT-T
        Nov 09 15:21:48.348 ipsec__plutorun: 003 NAT-Traversal: ESPINUDP(1) setup failed for new style NAT-T family IPv4 (errno=19)
        Nov 09 15:21:48.349 ipsec__plutorun: 003 NAT-Traversal: Trying old style NAT-T
        Nov 09 15:21:48.994 104 "my_vpn_name" #1: STATE_MAIN_I1: initiate
        Nov 09 15:21:48.994 003 "my_vpn_name" #1: received Vendor ID payload [RFC 3947] method set to=109
        Nov 09 15:21:48.994 003 "my_vpn_name" #1: received Vendor ID payload [Dead Peer Detection]
        Nov 09 15:21:48.994 106 "my_vpn_name" #1: STATE_MAIN_I2: sent MI2, expecting MR2
        Nov 09 15:21:48.994 003 "my_vpn_name" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal): i am NATed
        Nov 09 15:21:48.994 108 "my_vpn_name" #1: STATE_MAIN_I3: sent MI3, expecting MR3
        Nov 09 15:21:48.994 004 "my_vpn_name" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=oakley_3des_cbc_192 prf=oakley_sha group=modp1024}
        Nov 09 15:21:48.995 117 "my_vpn_name" #2: STATE_QUICK_I1: initiate
        Nov 09 15:21:48.995 004 "my_vpn_name" #2: STATE_QUICK_I2: sent QI2, IPsec SA established transport mode {ESP=>0x0c96795d <0x483e1a42 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}
        Nov 09 15:21:49.996 [ERROR 210] Failed to open l2tp control file 'c my_vpn_name'

    and from syslog:

        Nov 9 15:21:46 o99 L2tpIPsecVpnControlDaemon: Opening client connection
        Nov 9 15:21:46 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec setup stop
        Nov 9 15:21:46 o99 ipsec_setup: Stopping Openswan IPsec...
        Nov 9 15:21:48 o99 kernel: [ 4350.245171] NET: Unregistered protocol family 15
        Nov 9 15:21:48 o99 ipsec_setup: ...Openswan IPsec stopped
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec setup stop finished with exit code 0
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command invoke-rc.d xl2tpd stop
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command invoke-rc.d xl2tpd stop finished with exit code 0
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Opening client connection
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Closing client connection
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec setup start
        Nov 9 15:21:48 o99 kernel: [ 4350.312483] NET: Registered protocol family 15
        Nov 9 15:21:48 o99 ipsec_setup: Starting Openswan IPsec U2.6.23/K2.6.32-49-generic...
        Nov 9 15:21:48 o99 ipsec_setup: Using NETKEY(XFRM) stack
        Nov 9 15:21:48 o99 kernel: [ 4350.410774] Initializing XFRM netlink socket
        Nov 9 15:21:48 o99 kernel: [ 4350.413601] padlock: VIA PadLock not detected.
        Nov 9 15:21:48 o99 kernel: [ 4350.427311] padlock: VIA PadLock Hash Engine not detected.
        Nov 9 15:21:48 o99 kernel: [ 4350.441533] padlock: VIA PadLock not detected.
        Nov 9 15:21:48 o99 ipsec_setup: ...Openswan IPsec started
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec setup start finished with exit code 0
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command invoke-rc.d xl2tpd start
        Nov 9 15:21:48 o99 ipsec__plutorun: adjusting ipsec.d to /etc/ipsec.d
        Nov 9 15:21:48 o99 pluto: adjusting ipsec.d to /etc/ipsec.d
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command invoke-rc.d xl2tpd start finished with exit code 0
        Nov 9 15:21:48 o99 ipsec__plutorun: 002 added connection description "my_vpn_name"
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec auto --ready
        Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: Trying new style NAT-T
        Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: ESPINUDP(1) setup failed for new style NAT-T family IPv4 (errno=19)
        Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: Trying old style NAT-T
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec auto --ready finished with exit code 0
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec auto --up my_vpn_name
        Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec auto --up my_vpn_name finished with exit code 0
        Nov 9 15:21:49 o99 L2tpIPsecVpnControlDaemon: Closing client connection

    Can anyone tell me something more about this? Where is the mistake?

    Read the article

  • How do I install Catia V5 in Wine?

    - by user73160
    I am trying to install Catia V5R19 (or V5R18) using Wine on Ubuntu 12.04 with all updates. What I did:

        installed Wine
        in a terminal: winetricks vcrun2005 vcrun2008
        copied msdrm.dll into ~/.wine/drive_c/windows/system32
        mounted the Catia ISO and launched setup.exe

    I get the error "Wrong parameters". The setup launches, and after validating the installation I get a frozen screen (see screenshot). Any advice? Solution? Thanks!

    Read the article

  • I have Oracle SQL Developer Installed, Now What?

    - by thatjeffsmith
    If you're here because you downloaded a copy of Oracle SQL Developer and now you need help connecting to a database, then you're in the right place. I'll show you what you need to get up and going so you can finish your homework, teach yourself Oracle Database, or get ready for that job interview. You'll need about 30 minutes to set everything up... and about 5 years to become proficient with Oracle.

    Oracle Database comes with SQL Developer, but SQL Developer doesn't include a database

    If you install Oracle Database, it includes a copy of SQL Developer. If you're running that copy of SQL Developer, please take a second to upgrade now, as it is WAY out of date. But I'm here to talk to the folks that have downloaded SQL Developer and want to know what to do next. You've got it running. You see this 'Connection' dialog, and... where am I connecting to, and who as?

    You NEED a database

    Installing SQL Developer does not give you a database. So you're going to need to install Oracle and create a database, or connect to a database that is already up and running somewhere. Basically you need to know the following: where is this database, what's it called, and what port is the listener running on?

    The default Connection properties in SQL Developer

    These default settings CAN work, but ONLY if you have installed Oracle Database Express Edition (XE). "localhost" is a network alias for 127.0.0.1, an IP address that maps to the 'local' machine, i.e. the machine you are reading this blog post on. The listener is a service that runs on the server and handles connections for the databases on that machine. You can run a database without a listener and you can run a listener without a database, but you can't connect to a database on a different server unless both that database and a listener are up and running. Each listener 'listens' on one or more ports; you need to know the port number for each connection. The default port is 1521, but 1522 is also pretty common.

    I know all of this sounds very complicated

    Oracle is a very sophisticated piece of software. It's not analogous to downloading a mobile phone app and using it 10 seconds later. It's not like installing Office/Access either: it requires services, environment setup, kernel tweaks, etc. However, normally an administrator will set up and install Oracle, create the database, and configure the listener for everyone else to use. They'll often also set up the connection details for everyone via a 'TNSNAMES.ORA' file. This file contains a list of database connection details for folks to browse, kind of like an Oracle database phonebook. If someone has given you a TNSNAMES.ORA file, or set up your machine to have access to a TNSNAMES file, then you can just switch to the 'TNS' connection type and use the dropdown to select the database you want to connect to. Then you don't have to worry about the server names, database names, and port numbers.

    ORCL: that sounds promising!

    ORCL is the default SID when creating a new database with the Database Configuration Assistant (DBCA).

    It's just me, and I need help!

    No administrator, no database, no nothing. What do you do? You have a few options:

        Buy a copy of Oracle and download, install, and create a database
        Download and install XE (FREE!)
        Download, import, and run our Developer Days hands-on lab (FREE!)

    If you're a student (or anyone else) with little to no experience with Oracle, then I recommend the third option.

    Oracle Technology Network Developer Day: Hands-on Database Application Development Lab

    The OTN lab runs on a VirtualBox image which contains:

        an 11gR2 Enterprise Edition copy of Oracle
        a database and listener running for you to connect to
        lots of demo data for you to play with
        SQL Developer installed and ready to connect
        some browser-based labs you can step through to learn Oracle

    You download the image, you download and install VirtualBox (also FREE!), then you IMPORT the image you previously downloaded. You then 'Start' the image. It will boot a copy of Oracle Enterprise Linux (OEL), start your database, and all that jazz. You can then start up and run SQL Developer inside the image OR you can connect to the database running on the image using the copy of SQL Developer you installed on your host machine.

    Set up port forwarding to make it easy to connect from your host

    When you start the image, it will be assigned an IP address. Depending on what network adapter you select in the image preferences, you may get something that can get out to the internet from your image, something your host machine can see and connect to, or something that kind of just lives out there in a vacuum. You want to avoid the 'vacuum' option, unless you're OK with running SQL Developer inside the Linux image. Open the VirtualBox image properties and go to the networking options. We're going to set up port forwarding. This will tell your machine that anything that happens on port 1521 (the default Oracle listener port) should just go to the image's port 1521. So I can connect to 'localhost' and it will magically get transferred to the image that is running.

    Figure: Oracle VirtualBox port forwarding (port 1521, listener, database)

    Now you just need a username and password

    The default passwords on this image are all 'oracle', so you can connect as SYS, HR, or whatever; just use 'oracle' as the password. The Linux passwords are all 'oracle' too, so you can log in as 'root' or as 'oracle' in the Linux desktop.

    Connect!

    Figure: Connect as HR to your Oracle database running on the OTN Developer Days VirtualBox image

    If you're connecting to someone else's database, you need to ask the person that manages that environment to create an account for you. Don't try to 'guess' or 'figure out' what the username and password is. Introduce yourself, explain your situation, and ask kindly for access. This is your first test: can you connect? I know it's hard to get started with Oracle. There are however many things we offer to make this easier. You'll need to do a bit of RTM first though. Once you know what's required, you will be much more likely to succeed. Of course, if you need help, you know where to find me.
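    If you'd like to sanity-check those connection details outside SQL Developer, here is a minimal sketch using the cx_Oracle driver (the driver choice is my assumption; the host, port, SID, and HR/oracle credentials are the image defaults described above):

        # connect_test.py - verify host/port/service details with cx_Oracle
        import cx_Oracle

        # EZConnect-style DSN: host:port/service, matching the defaults above
        conn = cx_Oracle.connect("hr", "oracle", "localhost:1521/orcl")
        cur = conn.cursor()
        cur.execute("select count(*) from employees")  # HR demo schema table
        print("employees:", cur.fetchone()[0])
        conn.close()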

    Read the article

  • OFM 11g: OAM SSO for Forms and ADF Faces

    - by olaf.heimburger
    In my blog entry "OFM 11g: Implementing OAM SSO with Forms" we set the foundation for providing a complete single sign-on solution based on Oracle Access Manager (OAM). This foundation will now be used to combine Forms 11g and ADF Faces 11g applications with a transparent login.

    The beginning

    Before we start, let's reconsider the requirements for achieving the ultimate goal:

        Access to the Forms 11g application must be authenticated by OAM (protected).
        Access to the ADF Faces 11g application must be authenticated by OAM (protected).
        Switching from one application to the other should not result in re-authentication (aka single sign-on).
        The user identity should be available to the application without any extra work in the application code.

    All of these are common requirements for a single sign-on solution. The challenge here is that Forms relies on Oracle AS SSO (OSSO, or "the old SSO") while ADF Faces is quite open and can be protected by Oracle AS SSO or Oracle Access Manager SSO (OAM SSO, or "the modern SSO"). Both application types can use their own login mechanism.

    The Forms 11g application

    To demonstrate the SSO functionality, we use the standard Forms test (/forms/frmservlet?form=test.fmx). Although this shows nothing specific in the Forms application, it is good enough to demonstrate that it is protected.

    The ADF Faces 11g application

    With ADF 11g you can develop quite a number of useful Faces-based applications. Among many features, it comes with ADF Security, which provides functionality to protect your pages, regions, and even task flows from unauthenticated use in a declarative way. To demonstrate that functionality, a sample application with different access levels plus a login dialog is used. This application comes with a public page that has protected content (a button). Once you are authenticated for the application, the protected content and some personalization (the user's name) are shown.

    Protecting Forms 11g

    As already explained in "OFM 11g: Implementing OAM SSO with Forms", the easiest way to protect a Forms application is to configure it as an OSSO partner application, set up mod_osso, test it, migrate OSSO to OAM SSO with the Upgrade Agent, reconfigure mod_osso, and you are done. Sort of. By default, OAM is configured to run in co-exist mode. This means that a user has to re-authenticate to the Forms application when already logged into an OAM SSO application. To avoid this, you must disable the co-exist mode, for example by using WLST and issuing disableCoexistMode on the OAM server.

    Protecting ADF Faces 11g

    To protect an ADF Faces 11g application we have to consider two scenarios:

        use an HTTPD server in front of WLS
        use WLS without an HTTPD server

    Both scenarios have their pros and cons; we won't go into details here and will just describe how to configure each.

    Scenario 1: HTTPD server with WLS

    In this scenario we set up the environment in several steps:

    Configure a WebGate at OAM. This configuration can be done through the OAM console or by a script. Either way, the WebGate configuration files will be created for you.

    Install the OAM WebGate into an HTTPD server. The type of WebGate you need to install depends on your HTTPD server. With Oracle HTTP Server 11g you can use the latest OAM 11g WebGate. With other HTTPD servers you must resort to OAM 10g WebGates. An OAM 11g WebGate can use the pre-created configuration files supplied during the WebGate configuration at OAM. An OAM 10g WebGate asks for the specific configuration and verifies it during installation.

    Configure the WLS plugin to forward the requests to WLS. Again, depending on your HTTPD server you have different plugins for forwarding requests to WLS. With OHS 11g you can use the pre-installed mod_wl_ohs plugin. Its configuration is quite simple and straightforward.

    Configure an OAM SSPI provider as an IdentityAsserter in WLS to retrieve the user identifier. This configuration is quite important, as it retrieves the user identifier for the next step. If you have a SOA Suite installation within your OFM_HOME, the necessary software is already installed and you only need to set up your security realm within WLS. You can do this by pointing your browser at the WLS console, logging in as administrator, selecting the security realm (usually myrealm), and selecting Providers. We add the OAMIdentityAsserter as the first SSPI provider. It is important that the Control Flag is set to SUFFICIENT. Every other setting can be left as is; no changes are necessary here.

    Configure an OID identity provider to get the real user identity. In "OFM 11g: Implementing OAM SSO with Forms" we configured an OID as the identity store. To get the user identity we need to configure the same OID as an SSPI provider for WLS. This retrieves the real user information from OID and creates the JAAS Subject and Principals to be used by any application within WLS. Again, in the WLS console select the security realm (usually myrealm) and its Providers, and add the OIDAuthenticator as the second SSPI provider. It is important that the Control Flag is set to OPTIONAL. After saving this setup, we need to configure the provider by setting the provider-specific details for accessing OID.

    Scenario 2: WLS only

    This scenario is a bit easier but requires more work in the WLS setup:

    Configure a WebGate at OAM. As above, this can be done through the OAM console or by a script, and the WebGate configuration files will be created for you.

    Configure the OAM SSPI provider as an IdentityAuthenticator to authenticate and set the user identifier. When using the OAM SSPI provider as OAMAuthenticator, we create it with the Control Flag set to SUFFICIENT. After saving it, the provider-specific settings must be configured to allow the OAM SSPI provider to connect to the OAM server.

    Configure an OID identity provider to get the real user identity. Again, in the WLS console select the security realm and its Providers, and add the OIDAuthenticator as the second SSPI provider with the Control Flag set to OPTIONAL. After saving this setup, configure the provider-specific details for accessing OID.

    Configure the ADF 11g application for OAM

    Actually, there are no changes to be made within the ADF application. We only need to set the value CLIENT_CERT in the <auth-method> tag in the <login-config> tag of the web.xml file.

    Testing

    To test the configuration, simply point your browser to one of the two application URLs. OAM should kick in and redirect you to the OAM login page. After you have entered the correct credentials, access to the URLs is granted and you will see the application. Enjoy!

    Read the article

  • Part 1: What are EBS Customizations?

    - by volker.eckardt(at)oracle.com
    Everything that is not shipped as Oracle standard may be called a customization. We often differentiate between setup and customization, although setup can also be required when working with customizations. This highlights one of the first challenges: someone needs to track the setup brought over with customizations, and it needs to be synchronized with the (standard) setup done manually. This is not only a tracking issue but also a documentation issue; I will cover it in one of the following blogs in more detail.

    But back to the topic itself. Mainly our code pieces (Java, PL/SQL, SQL, shell scripts), custom objects (tables, views, packages, etc.) and application objects (concurrent programs, lookups, forms, reports, OAF pages, etc.) are treated as customizations. In general we define two types: customization by extension and customization by modification. For sure we like to minimize standard code modifications, but sometimes it is just not possible to provide a certain functionality without them.

    Keep in mind that the EBS provides a number of alternatives to modification, just to mention some:

        Files in the file system: add your custom top before the standard top in the path.
        BI Publisher report: add a custom layout and disable the standard layout; yours will automatically be taken.
        Form/OAF change: use personalization or substitution.

    Using such techniques you are on the safe side regarding standard patches, but for sure a retest is always required!

    Many customizations grow over time: initially it was just one file, but before long we have 5, 10, or 15 files in our customization pack. The more files you have, the more important the installation order becomes.

    Last but not least, personalizations are also treated as customizations, although you may not use any deployment pack to transfer them (but you can). For OAF personalizations you can use iSetup; I have also enabled iSetup to allow Forms personalizations to be transported.

    Interfaces and conversion objects are also quite often categorized as customizations, and I support this decision. Your development standards relate to all these kinds of custom code, whether we are exchanging data with users (via a form or report) or with other systems (via an inbound or outbound interface).

    To cover all these types of customizations, two acronyms have been defined: RICE and CEMLI.

        RICE = Reports, Interfaces, Conversions, and Extensions
        CEMLI = Customization, Extension, Modification, Localization, Integration

    The word CEMLI was introduced by Oracle On Demand and is used within Oracle projects quite often, but RICE is also well known as an acronym. It doesn't matter which acronym you are using; the main task here is to classify and categorize your customizations so that everyone understands when you talk about RICE-211, CEMLI XXFI_BAST, or XXOM_RPT_030.

    Side note: such references are not automatically object prefixes, but they are often used as such. I plan to address this point in another blog as well.

    Thank you!
    Volker

    Read the article

  • InfoPath Cannot Start Microsoft Visual Studio Tools for Applications

    - by ybbest
    When I try to access the developer tools under the Developer tab in InfoPath Designer 2010, I get the error "InfoPath cannot start Microsoft Visual Studio Tools for Applications" (see the screenshot below). I got this error because I did not install VSTA when I installed Office 2010. To install VSTA, launch Office 2010 setup from your Office 2010 installation media, choose the "Add or Remove Features" radio button in the installer, set the Visual Studio Tools for Applications option to "Run from My Computer", and continue through the setup wizard (see the screenshot below). Once this is done, you are ready to start VSTA for your InfoPath form.

    Read the article

  • Installing Gnome Classic on Ubuntu Server 12.04.1 64bit

    - by varunyellina
    I've installed Ubuntu Server Edition and set up OpenSSH, Samba, and LAMP on my home desktop, just to work on the LAN. I also want to set up a GUI on it for daily use. I've already performed the following:

        sudo apt-get install gnome-session-fallback
        sudo apt-get install lightdm-gtk-greeter
        sudo apt-get install xinit

    I don't want to install Unity or the GNOME 3 Shell on my system. Also, I haven't found instructions for installing GNOME Classic on a server edition (although it shouldn't make a difference). How do I get it to work?

    Read the article

  • How do I set up an NVIDIA graphics adapter to put out 1080p? It seems to be using interlaced mode

    - by keepitsimpleengineer
    After upgrading to 12.04, my mythbuntu client/server seems to be running in 1080i; the clue comes from Xorg.0.log:

        [ 1176.117] (II) NVIDIA(0): Setting mode "1920x1080_60i"
        [ 1231.340] (II) NVIDIA(0): Setting mode "DFP-1:1920x1080_60@1920x1080+0+0"

    This whole thing started from video tearing when watching MythTV recordings; it didn't happen in 10.10. Should I use "TVStandard" "HD1080p" in the Screen section, since this is a dedicated HTPC? It only connects to an HDTV (1080p) via HDMI. Here is the current xorg.conf file:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 270.29 (buildd@allspice) Fri Feb 25 14:42:07 UTC 2011

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            # commented out by update-manager, HAL is now used and auto-detects devices
            # Keyboard settings are now read from /etc/default/console-setup
            # InputDevice "Keyboard0" "CoreKeyboard"
            # commented out by update-manager, HAL is now used and auto-detects devices
            # Keyboard settings are now read from /etc/default/console-setup
            # InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "Files"
            FontPath "unix/:7100"
        EndSection

        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        #Section "InputDevice"
        #    # generated from default
        #    Identifier "Mouse0"
        #    Driver "mouse"
        #    Option "Protocol" "auto"
        #    Option "Device" "/dev/psaux"
        #    Option "Emulate3Buttons" "no"
        #    Option "ZAxisMapping" "4 5"
        #EndSection

        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        #Section "InputDevice"
        #    # generated from default
        #    Identifier "Keyboard0"
        #    Driver "kbd"
        #EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "SAMSUNG"
            HorizSync 26.0 - 81.0
            VertRefresh 24.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GT 240"
            Option "TripleBuffer" "1"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "DFP: nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    After a little digging, the question changes slightly, to wit... Per Chapter 19 of the nvidia README: "If the EDID for the display device reported a preferred mode timing, and that mode timing is considered a valid mode, then that mode is used as the "nvidia-auto-select" mode." The EDID for my HDMI-connected LCD monitor says to use the first detailed timing as preferred:

        Prefer first detailed timing : Yes

    Also:

        (--) NVIDIA(0): EDID maximum pixel clock : 230.0 MHz

    The list (from startx -- -verbose 6):

        (--) NVIDIA(0): Detailed Timings:
        (--) NVIDIA(0):   1920 x 1080 @ 60 Hz
        (--) NVIDIA(0):   Pixel Clock      : 148.50 MHz
        (--) NVIDIA(0):   HRes, HSyncStart : 1920, 2008
        (--) NVIDIA(0):   HSyncEnd, HTotal : 2052, 2200
        (--) NVIDIA(0):   VRes, VSyncStart : 1080, 1084
        (--) NVIDIA(0):   VSyncEnd, VTotal : 1089, 1125
        (--) NVIDIA(0):   H/V Polarity     : +/+

    This is the actual mode selected (from Xorg.0.log):

        (--) NVIDIA(0):   1920 x 1080 @ 60 Hz
        (--) NVIDIA(0):   Pixel Clock      : 74.18 MHz
        (--) NVIDIA(0):   HRes, HSyncStart : 1920, 2008
        (--) NVIDIA(0):   HSyncEnd, HTotal : 2052, 2200
        (--) NVIDIA(0):   VRes, VSyncStart : 1080, 1084
        (--) NVIDIA(0):   VSyncEnd, VTotal : 1094, 1124
        (--) NVIDIA(0):   H/V Polarity     : +/+
        (--) NVIDIA(0):   Extra            : Interlaced
        (--) NVIDIA(0):   CEA Format       : 5

    So my HTPC is down-converting to 1080i and then the monitor is up-converting to 1080p. How can I fix this, please?
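    One avenue worth sketching here (an assumption on my part, not a confirmed answer): the nvidia driver accepts an explicit mode name in the metamodes option, so replacing nvidia-auto-select should pin the driver to the 60 Hz progressive timing instead of letting EDID mode selection fall back to the interlaced CEA mode. A minimal sketch of the changed Screen section:

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            # Name the progressive mode explicitly instead of nvidia-auto-select;
            # "1920x1080_60" requests 1080p at 60 Hz, not the _60i interlaced timing
            Option "metamodes" "DFP-1: 1920x1080_60 +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection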

    Read the article

  • Do you write common pre-conditions for a large number of unit test cases?

    - by Vinoth Kumar
    I have heard/read that writing common pre-conditions for a large number of test cases is a bad thing, since the shared dependency may cause a large number of test cases to fail if something changes. What are your thoughts on this? If it is so, then what exactly is the purpose of the setUp() method in JUnit that runs before each test case? If the same code inside setUp() runs before each test case, why can't it run only once before running all the test cases together?
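    For reference, JUnit 4 already offers both behaviours side by side; a minimal sketch, where ExpensiveResource and Widget are hypothetical names of mine, purely for illustration:

        import org.junit.Before;
        import org.junit.BeforeClass;
        import org.junit.Test;
        import static org.junit.Assert.assertTrue;

        public class WidgetTest {
            private static ExpensiveResource shared; // hypothetical shared resource
            private Widget widget;                   // hypothetical class under test

            // Runs exactly once before all tests: expensive, read-only setup
            @BeforeClass
            public static void setUpOnce() {
                shared = ExpensiveResource.open();
            }

            // Runs before every test: each test gets a fresh, isolated fixture
            @Before
            public void setUp() {
                widget = new Widget(shared);
            }

            @Test
            public void widgetStartsEmpty() {
                assertTrue(widget.isEmpty());
            }
        }

    The per-test setUp() exists precisely for isolation: because it runs before each test, no test can see state leaked by another, which is the coupling the "common pre-conditions" warning is about; the run-once case is what @BeforeClass covers.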

    Read the article

  • Combining Shared Secret and Username Token – Azure Service Bus

    - by Michael Stephenson
    As discussed in the introduction article, this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus so that you can protect your endpoint in the cloud with a shared secret but also flow through a username token, allowing your listening WCF service to identify who sent the message. This could be either an application or a user, depending on how you want to use your token.

    Prerequisites

    Before going into the walkthrough I want to explain a few assumptions about the scenario we are implementing, but to keep the article shorter I am not going to walk through all of the steps of setting some of this up. In the solution we have a simple console application which represents the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go into significant detail around the IIS setup because it should not matter in relation to this article; however, if you want to understand more about how to configure WCF and IIS for such a scenario, please refer to the following paper, which goes into a lot of detail: http://tinyurl.com/8s5nwrz

    The Service Component

    To begin with, let's look at the service component and how it can be configured to listen to the service bus using a shared secret while also accepting a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.UN.Services. It has a single service, which is the Visual Studio template for a WCF service when you add a new WCF Service Application, so we have a service called Service1 with its Echo method. Nothing special so far! The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration I have created my service and a local endpoint, which I simply used for a little diagnostics and to check it was working; more importantly, there is the Windows Azure endpoint, which uses the ws2007HttpRelayBinding (note that this should also work just the same if you're using netTcpRelayBinding). The key points to note in the picture above are the service behavior called MyServiceBehaviour and the service bus endpoint's behavior called MyEndpointBehaviour. We will go into these in more detail later.

    The Relay Binding

    The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit: the transport security relates to the interaction between the service and the Azure Service Bus it listens to, while the message credential is where we use our username token, as specified in the message/clientCredentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus, and messages will not be accepted from any sender who has not been authenticated by ACS.

    The Endpoint Behaviour

    In the picture below you can see the endpoint behavior, which is configured to use the shared secret client credential for accessing the service bus; for diagnostic purposes I have also included the service registry element. If you are familiar with the Windows Azure Service Bus relay feature, the above should be very familiar to you; it is a very common setup for this section. There is nothing specific to the username token implementation here.

    The Service Behaviour

    Now we come to the part with most of the username token bits in it. When configuring the service behavior I have included the serviceCredentials element, set it up to use userNameAuthentication, and created my own custom username token validator. This setup means that WCF will hand off to my class for validating the username token details. I have also added the serviceSecurityAudit element to give me a simple access-auditing capability.

    My UsernamePassword Validator

    The picture below shows the details of the username password validator class I have implemented. WCF hands off to this class when validating the token, giving me a convenient way to check the token credentials against an on-premise store. All of the validation features available in a non-service-bus WCF implementation are available here, such as validating the username and password against Active Directory or ASP.NET membership features, or, as in my case above, something much simpler.

    The Client

    Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a username token over to the service component so it can implement additional security checks on-premise. I have a console application, and in the Program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. In my WCF client configuration below I have set up my details for the Azure Service Bus URL and am using the ws2007HttpRelayBinding. Next is my configuration for the relay binding. Below, I have configured security to use TransportWithMessageCredential, so we will flow the username token with the message, and the RelayAccessToken relayClientAuthenticationType, which means the component will validate against ACS before being allowed to access the relay endpoint to send a message. After the binding we need to configure the endpoint behavior, as in the picture below; this is the normal configuration for using a shared secret to access a Service Bus endpoint. Finally, below is the code of the client in the console application which calls the service bus. We have created our proxy and then made a normal call to a WCF service, but this time we have also set the ClientCredentials to use the appropriate username and password, which will be flown through the service bus to our service, which will validate them.

    Conclusion

    As you can see from the above walkthrough, it is not too difficult to configure a service to use both a shared secret and a username token at the same time. This gives you the power and protection offered by the Access Control Service in the cloud but also the ability to flow additional tokens to the on-premise component so that additional security features can be implemented.

    Sample

    The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.UN.zip
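    Since the configuration screenshots from the original article are not reproduced here, the following is a hedged sketch of what the service behaviour section typically looks like for a custom username/password validator; the validator type and assembly names are placeholders of mine, not taken from the article:

        <behaviors>
          <serviceBehaviors>
            <behavior name="MyServiceBehaviour">
              <serviceCredentials>
                <!-- Hand username-token validation off to a custom validator class -->
                <userNameAuthentication
                    userNamePasswordValidationMode="Custom"
                    customUserNamePasswordValidatorType="Acme.Azure.ServiceBus.Poc.UN.Services.MyUserNameValidator, Acme.Azure.ServiceBus.Poc.UN.Services" />
              </serviceCredentials>
              <!-- Simple auditing of authentication events, as described above -->
              <serviceSecurityAudit auditLogLocation="Application"
                  messageAuthenticationAuditLevel="SuccessOrFailure" />
            </behavior>
          </serviceBehaviors>
        </behaviors>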

    Read the article

  • SQL Server Manageability Series: how to change the default path of .cache files of a data collector? #sql #mdw #dba

    - by ssqa.net
    How do you change the default path of the .cache files of a data collector after the Management Data Warehouse (MDW) has been set up? This was the question asked by one of the DBAs at a client's site. I immediately asked whether any folder had been specified while setting up the MDW, and the obvious answer was no, as everything had been left at the defaults. This means all the .CACHE files are stored under the %TEMP% directory, which may cause an out-of-disk-space problem on the server where the MDW is set up to collect. Going back...(read more)
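    The full answer sits behind the (read more) link; as a sketch of the usual approach (worth verifying against your SQL Server version, since I am recalling the procedure names rather than quoting the article), msdb exposes stored procedures for exactly this setting:

        -- Stop the collector before changing its configuration
        EXEC msdb.dbo.sp_syscollector_disable_collector;

        -- Point the data collector's .CACHE files at a drive with room to spare
        EXEC msdb.dbo.sp_syscollector_set_cache_directory
            @cache_directory = N'D:\MDWCache';

        -- Restart collection
        EXEC msdb.dbo.sp_syscollector_enable_collector;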

    Read the article

  • Google Analytics Filters not removing traffic from other domain

    - by Nic Hubbard
    We have a frustrating problem: someone copied our site code, including our Google Analytics code, so we are getting stats logged from their site. I have set up 4 filters, each trying to disallow any traffic from this other website, but their traffic is still being shown, including in the Real Time section. Do filters even work to exclude traffic? Here is how I have it set up (screenshots omitted): neither of these seems to help at all.
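    One mitigation worth sketching alongside the filters (hedged: "mysite.com" is a placeholder for the real domain, and this assumes the classic ga.js API of the question's era): guard the tracking call so copies of the page on other hosts never report at all:

        // Only emit the pageview when served from our own host, so stolen
        // copies of the page on other domains stop polluting the property
        if (/(^|\.)mysite\.com$/.test(window.location.hostname)) {
            _gaq.push(['_trackPageview']);
        }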

    Read the article

  • how to set power plan / power profile for battery

    - by user86274
    How can I set up different power-profile settings? For example, I want low screen brightness when I am on battery power and full brightness when I am not. From "power settings" I can set the brightness, but it persists no matter what power source I use. Is there any software I can use to prolong my battery life when I use Ubuntu? I can use my laptop for at least one more hour when I am using Windows...

    Read the article

  • Need help with a complex 3d scene (using Ogre and bullet)

    - by Matthias
    In my setup there is a box with a hole on one side and a freely movable "stick" (or bar, or tube). This stick can be inserted through the hole into the box, and the hole is exactly as wide as the stick's diameter. In reality, if you held the end of the stick in your hand and moved the hand left/right or up/down, the other end of the stick, inside the box, would move in the opposite direction of your hand movement, because the stick is affixed at the pivot point where it enters the box through the hole. (I hope you understand what I mean so far.) Now I need to simulate such a setup in a 3D program. I have already successfully developed an Ogre3D framework for this application, including Bullet. But what I don't know is how to implement in my program what I have described above. The application must include two more features. First, the scene camera is attached to the end of the stick that is inserted into the box, so when the user moves the mouse (to control "his" end of the stick outside the box), the camera attached to the stick moves in the opposite direction, as described above. Second, the stick has some length, and the user can push it further into the box or pull it closer again. That means, of course, that the maximum radius on which the end of the stick inside the box can move depends on how far the stick has been pushed in: the further the stick is pushed into the box, the larger the maximum radius of the camera end will be. I understand this may be quite a complex thing, so I don't expect any real source code here. As said, I already have the Ogre and Bullet parts up and running, as well as a camera attached to the stick, and this works fine. What I don't know is how to simulate the setup described above, especially the requirement that the stick is affixed at the position of the hole where it enters the box. Any ideas on how I could approach implementing this?
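    One plausible starting point (my suggestion, not a confirmed answer from the thread): Bullet's point-to-point constraint can pin the stick's rigid body at the hole, so pushing one end swings the other end the opposite way, exactly the lever behaviour described. A minimal sketch, where 'world', 'stickBody', and the function name are mine:

        #include <btBulletDynamicsCommon.h>

        // Pin the point of the stick currently sitting in the hole. The single-body
        // btPoint2PointConstraint anchors that local point to its current world
        // position, giving the fixed pivot the question describes.
        void pinStickAtHole(btDiscreteDynamicsWorld* world,
                            btRigidBody* stickBody,
                            const btVector3& holeInStickLocalSpace)
        {
            btPoint2PointConstraint* pivot =
                new btPoint2PointConstraint(*stickBody, holeInStickLocalSpace);
            world->addConstraint(pivot, /*disableCollisionsBetweenLinkedBodies=*/true);
        }

    For the push/pull feature, updating the pivot with btPoint2PointConstraint::setPivotA as the stick slides (or pairing the pivot with a btSliderConstraint) would let the inner end's reachable radius grow the further the stick is inserted.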

    Read the article

  • build my own CDN service

    - by user5332
    I have two servers, each with its own domain: the first is www.myexample1.com and the second is www.myexample2.com. I would now like to set up www.myexample2.com as a CDN for www.myexample1.com, but I don't know how to set up DNS or Apache so that both servers serve files for www.myexample1.com requests... I don't need to solve databases, sessions, or anything else like that, but I do need to know how to make both servers available as www.myexample1.com.
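    A hedged sketch of one common arrangement (static assets only; the hostnames are the question's own examples, and "static" is a subdomain I am inventing for illustration): give the second server a virtual host answering for an assets name under the first domain, and point that name at it in DNS:

        # Apache on the second server (www.myexample2.com): serve a synced
        # copy of the first site's static files under its own hostname
        <VirtualHost *:80>
            ServerName static.myexample1.com
            DocumentRoot /var/www/myexample1-static
        </VirtualHost>

        # DNS zone for myexample1.com: send the assets hostname to server 2
        # static.myexample1.com.  IN  CNAME  www.myexample2.com.

    Making both servers answer as www.myexample1.com itself would instead mean publishing two A records for that name (round-robin DNS) plus an identical virtual host on each server.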

    Read the article
