Search Results

Search found 1748 results on 70 pages for 'branch prediction'.

Page 45/70

  • (vqmod) Error: Could not load language total/sub_total

    - by Steve
    I have moved my OpenCart website to a new host, and I receive this error: Notice: Error: Could not load language total/sub_total! in .../vqmod/vqcache/vq2-system_library_language.php on line 41. How do I resolve this? I have tried renaming vqmod/xml to vqmod/xml.bad, with no result. I have also tried renaming /vqmod/vqcache to /vqmod/vqcache.bad, with no result. Update: In \system\library\language.php, I commented out an else branch, which resulted in an application exit.
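
    A common first step after moving hosts is to clear vQmod's generated cache so the patched copies of the core files (the vq2-* files named in the error) are rebuilt against the new install. The commands below are only a sketch of that idea, run from the store root; the language-file path shown is where a stock OpenCart layout keeps it.

        # remove the generated cache files; vQmod rebuilds them on the next request
        rm -f vqmod/vqcache/vq2-*.php
        # depending on the vQmod version there may also be cache files to clear
        rm -f vqmod/mods.cache vqmod/checked.cache
        # confirm the language file the error refers to actually exists on the new host
        ls catalog/language/english/total/sub_total.php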

    Read the article

  • Deleting remote branches?

    - by keruilin
    When I run git branch -a, it prints output like this, for example: branch_a remotes/origin/branch_a A few questions: What does branch_a indicate? What does remotes/origin/branch_a indicate? How do I delete remotes/origin/branch_a?
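
    For reference: branch_a is a local branch in your clone, while remotes/origin/branch_a is the remote-tracking ref that mirrors the branch named branch_a on the remote called origin. A typical cleanup, assuming that remote name, looks like this:

        # delete the branch on the remote named "origin"
        git push origin --delete branch_a      # older equivalent: git push origin :branch_a
        # delete the local branch as well, if you no longer need it
        git branch -d branch_a
        # drop remote-tracking refs for branches that no longer exist on the remote
        git remote prune origin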

    Read the article

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to ssh to a server, I'm able to do it as my id_rsa.pub key is added to the authorized keys in the server. Now when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I'm unable to understand what could be the issue if I'm able to ssh and unable to deploy to the same server. $ cap deploy:setup "no seed data" triggering start callbacks for `deploy:setup' * 13:42:18 == Currently executing `multistage:ensure' *** Defaulting to `development' * 13:42:18 == Currently executing `development' * 13:42:18 == Currently executing `deploy:setup' triggering before callbacks for `deploy:setup' * 13:42:18 == Currently executing `db:configure_mongoid' * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config" servers: ["dev1.noob.com", "176.9.24.217"] Password: Cap script: # gem install capistrano capistrano-ext capistrano_colors begin; require 'capistrano_colors'; rescue LoadError; end require "bundler/capistrano" # RVM bootstrap # $:.unshift(File.expand_path('./lib', ENV['rvm_path'])) require 'rvm/capistrano' set :rvm_ruby_string, 'ruby-1.9.2-p290' set :rvm_type, :user # or :user # Application setup default_run_options[:pty] = true # allow pseudo-terminals ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from git repository) ssh_options[:port] = 22 set :ip, "dev1.noob.com" set :application, "flyingbird" set :repository, "repo-path" set :scm, :git set :branch, fetch(:branch, "master") set :deploy_via, :remote_cache set :rails_env, "production" set :use_sudo, false set :scm_username, "user" set :user, "user1" set(:database_username) { application } set(:production_database) { application + "_production" } set(:staging_database) { application + "_staging" } set(:development_database) { application + "_development" } role :web, ip # Your HTTP server, Apache/etc role :app, ip # This may be the same as your `Web` server role :db, ip, :primary => true # This is where Rails migrations will run # Use multi-staging require "capistrano/ext/multistage" set :stages, ["development", "staging", "production"] set :default_stage, rails_env before "deploy:setup", "db:configure_mongoid" # Uncomment if you use any of these databases after "deploy:update_code", "db:symlink_mongoid" after "deploy:update_code", "uploads:configure_shared" after "uploads:configure_shared", "uploads:symlink" after 'deploy:update_code', 'bundler:symlink_bundled_gems' after 'deploy:update_code', 'bundler:install' after "deploy:update_code", "rvm:trust_rvmrc" # Use this to update crontab if you use 'whenever' gem # after "deploy:symlink", "deploy:update_crontab" if ARGV.include?("seed_data") after "deploy", "db:seed" else p "no seed data" end #Custom tasks to handle resque and redis restart before "deploy", "deploy:stop_workers" after "deploy", "deploy:restart_redis" after "deploy", "deploy:start_workers" after "deploy", "deploy:cleanup" 'Create symlink for public uploads' namespace :uploads do task :symlink do run <<-CMD rm -rf #{release_path}/public/uploads && mkdir -p #{release_path}/public && ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads CMD end task :configure_shared do run "mkdir -p #{shared_path}/public" run "mkdir -p #{shared_path}/public/uploads" end end namespace :rvm do desc 'Trust rvmrc file' task :trust_rvmrc do run "rvm rvmrc trust #{current_release}" end end namespace :db do desc "Create mongoid.yml in shared path" task :configure_mongoid do db_config = <<-EOF 
defaults: &defaults host: localhost production: <<: *defaults database: #{production_database} staging: <<: *defaults database: #{staging_database} EOF run "mkdir -p #{shared_path}/config" put db_config, "#{shared_path}/config/mongoid.yml" end desc "Make symlink for mongoid.yml" task :symlink_mongoid do run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml" end desc "Fill the database with seed data" task :seed do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed" end end namespace :bundler do desc "Symlink bundled gems on each release" task :symlink_bundled_gems, :roles => :app do run "mkdir -p #{shared_path}/bundled_gems" run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle" end desc "Install bundled gems " task :install, :roles => :app do run "cd #{release_path} && bundle install --deployment" end end namespace :deploy do task :start, :roles => :app do run "touch #{current_path}/tmp/restart.txt" end desc "Restart the app" task :restart, :roles => :app do run "touch #{current_path}/tmp/restart.txt" end desc "Start the workers" task :stop_workers do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers" end desc "Restart Redis server" task :restart_redis do "/etc/init.d/redis-server restart" end desc "Start the workers" task :start_workers do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers" end end
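
    The password prompt usually means that key-based login does not apply to the exact account and hosts Capistrano is using (the script sets :user to "user1" and the output lists two servers), or that no key is loaded for agent forwarding. A quick sanity check along those lines, using the host and user names from the script above:

        # log in as the same account Capistrano uses (set :user, "user1") on each listed server
        ssh user1@dev1.noob.com
        ssh user1@176.9.24.217
        # if either prompts for a password, install your public key for that account
        ssh-copy-id user1@dev1.noob.com        # or append ~/.ssh/id_rsa.pub to that account's ~/.ssh/authorized_keys by hand
        # forward_agent is enabled in the script, so make sure a key is actually loaded locally
        eval "$(ssh-agent -s)"
        ssh-add ~/.ssh/id_rsa
        ssh-add -l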

    Read the article

  • vim-powerline colors are out of whack in urxvt

    - by komidore64
    I have attached two images showing what my vim-powerline looks like. As you can see, something has happened to the colors and I cannot figure out how to fix it. I'm running Fedora 17 on a clean install with i3 (default config) and urxvt. Here is my bashrc: # .bashrc if [[ "$(uname)" != "Darwin" ]]; then # non mac os x # source global bashrc if [[ -f "/etc/bashrc" ]]; then . /etc/bashrc fi export TERM='xterm-256color' # probably shouldn't do this fi # bash prompt with colors # [ <user>@<hostname> <working directory> {current git branch (if you're in a repo)} ] # ==> PS1="\[\e[1;33m\][ \u\[\e[1;37m\]@\[\e[1;32m\]\h\[\e[1;33m\] \W\$(git branch 2> /dev/null | grep -e '\* ' | sed 's/^..\(.*\)/ {\[\e[1;36m\]\1\[\e[1;33m\]}/') ]\[\e[0m\]\n==> " # execute only in Mac OS X if [[ "$(uname)" == 'Darwin' ]]; then # if OS X has a $HOME/bin folder, then add it to PATH if [[ -d "$HOME/bin" ]]; then export PATH="$PATH:$HOME/bin" fi alias ls='ls -G' # ls with colors fi alias ll='ls -lah' # long listing of all files with human readable file sizes alias tree='tree -C' # turns on coloring for tree command alias mkdir='mkdir -p' # create parent directories as needed alias vim='vim -p' # if more than one file, open files in tabs export EDITOR='vim' # super-secret work stuff if [[ -f "$HOME/.workbashrc" ]]; then . $HOME/.workbashrc fi # Add RVM to PATH for scripting if [[ -d "$HOME/.rvm/bin" ]]; then # if installed PATH=$PATH:$HOME/.rvm/bin fi and my Xdefaults: ! URxvt config ! colors! URxvt.background: #101010 URxvt.foreground: #ededed URxvt.cursorColor: #666666 URxvt.color0: #2E3436 URxvt.color8: #555753 URxvt.color1: #993C3C URxvt.color9: #BF4141 URxvt.color2: #3C993C URxvt.color10: #41BF41 URxvt.color3: #99993C URxvt.color11: #BFBF41 URxvt.color4: #3C6199 URxvt.color12: #4174FB URxvt.color5: #993C99 URxvt.color13: #BF41BF URxvt.color6: #3C9999 URxvt.color14: #41BFBF URxvt.color7: #D3D7CF URxvt.color15: #E3E3E3 ! 
options URxvt*loginShell: true URxvt*font: xft:DejaVu Sans Mono for Powerline:antialias=true:size=12 URxvt*saveLines: 8192 URxvt*scrollstyle: plain URxvt*scrollBar_right: true URxvt*scrollTtyOutput: true URxvt*scrollTtyKeypress: true URxvt*urlLauncher: google-chrome and finally my vimrc set nocompatible set dir=~/.vim/ " set one place for vim swap files " vundler for vim plugins ---- filetype off set rtp+=~/.vim/bundle/vundle call vundle#rc() Bundle 'gmarik/vundle' Bundle 'tpope/vim-surround' Bundle 'greyblake/vim-preview' Bundle 'Lokaltog/vim-powerline' Bundle 'tpope/vim-endwise' Bundle 'kien/ctrlp.vim' " ---------------------------- syntax enable filetype plugin indent on " Powerline ------------------ set noshowmode set laststatus=2 let g:Powerline_symbols = 'fancy' " show fancy symbols (requires patched font) set encoding=utf-8 " ---------------------------- " ctrlp ---------------------- let g:ctrlp_open_multiple_files = 'tj' " open multiple files in additional tabs let g:ctrlp_show_hidden = 1 " include dotfiles and dotdirs in ctrlp indexing let g:ctrlp_prompt_mappings = { \ 'AcceptSelection("e")': ['<c-t>'], \ 'AcceptSelection("t")': ['<cr>', '<2-LeftMouse>'], \ } " remap <cr> to open file in a new tab " ---------------------------- set showcmd set tabpagemax=100 set hlsearch set incsearch set nowrapscan set ignorecase set smartcase set ruler set tabstop=4 set shiftwidth=4 set expandtab set wildmode=list:longest autocmd BufWritePre * :%s/\s\+$//e "remove trailing whitespace " :REV to "revert" file to state of the most recent save command REV earlier 1f " disable netrw -------------- let g:loaded_netrw = 1 let g:loaded_netrwPlugin = 1 " ---------------------------- Any guidance as to fixing the statusline would be fantastic. I've found a github issue outlining almost the exact same problem, but the solution was never posted. Thank you.
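
    One thing worth ruling out is the forced export TERM='xterm-256color' in the .bashrc above (the comment next to it already suspects it). Inside urxvt it is usually better to let the terminal advertise its own 256-colour terminfo entry instead of pretending to be xterm; the sketch below assumes that entry is installed on the system.

        # check what the terminal currently reports and how many colours it supports
        echo $TERM
        tput colors
        # instead of exporting TERM in .bashrc, set it through urxvt itself in ~/.Xdefaults:
        #   URxvt.termName: rxvt-unicode-256color
        # then reload the resources and open a new terminal
        xrdb -merge ~/.Xdefaults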

    Read the article

  • Choice and setup of version control

    - by Peter M
    I am about to set up an new laptop and in the process transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes it is VSS, and yes I know all the pro's and con's of that system, but as a single user system it works well for me). I have very little requirements for a new system and I am free to choose among any of the current mainstream players, but cost constraints will push me towards oss. Some of my requirements are: Runs on a single machine (ie the laptop in question) under windows I am not sharing things with other developers or workers - this is more for my own historical benefits. I want to version source code, documentation and binary files I have a large hierarchy of projects that are unrelated (see below) I have files within the hierarchy that don't need to be controlled (but could be) Some projects use Visual Studio, so some integration there could be nice. There could be some sharing of files between jobs. I generally only need a small about of branching in code files The directory hierarchy that I have at the moment is somewhat like: Root | |--Customer #1 | | | |--Job #1 | | | | | |--Data files received from Customer for Job (not controlled) | | |--Documentation files (controlled) | | |--Project information files (not controlled - but could be) | | |--Software Project Files (controlled) | | |--Scratch dir for job (not controlled) | | | |--Job #2 | | (same structure as above) | |--Customer #2 | |.. | |--Cusmtomer #n |.. Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I kept with a centralized system (ie SVN) I believe that I should keep the same approach and continue with a single repository based from the root dir. Is this a valid approach? However if I move to a distributed tool then I am unsure of how I should handle the situation. My initial guess is that I should not have a repository based on the root of my entire directory structure - but that is a guess so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level? Also what I am not clear on with distributed tools (and perhaps with SVN as well), is if I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I just branch the Software Project Files? Or would all files in that Job be branched? Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ if at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around as well - however I do have a backup strategy that I trust, so this latter feeling is pretty well unfounded. So what advice can you all give me? Thanks in advance!
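
    On the branching question: with Subversion a branch is just a cheap copy of any directory, so a single repository at the root still lets you branch only the Software Project Files folder of one job; with Git or Mercurial a branch always spans the whole repository, which is the main argument for pitching repositories at the level you actually want to branch. A minimal sketch of the per-job approach, using the directory names from the tree above and ignoring the uncontrolled folders rather than excluding them:

        cd "Customer #1/Job #1"
        git init
        # keep the customer data and scratch areas out of version control
        printf '%s\n' 'Data files received from Customer for Job/' 'Scratch dir for job/' > .gitignore
        git add .
        git commit -m "Initial import of Job #1"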

    Read the article

  • Enhanced REST Support in Oracle Service Bus 11gR1

    - by jeff.x.davies
    In a previous entry on REST and Oracle Service Bus (see http://blogs.oracle.com/jeffdavies/2009/06/restful_services_with_oracle_s_1.html) I encoded the REST query string really as part of the relative URL. For example, consider the following URI: http://localhost:7001/SimpleREST/Products/id=1234 Now, technically there is nothing wrong with this approach. However, it is generally more common to encode the search parameters into the query string. Take a look at the following URI that shows this principle http://localhost:7001/SimpleREST/Products?id=1234 At first blush this appears to be a trivial change. However, this approach is more intuitive, especially if you are passing in multiple parameters. For example: http://localhost:7001/SimpleREST/Products?cat=electronics&subcat=television&mfg=sony The above URI is obviously used to retrieve a list of televisions made by Sony. In prior versions of OSB (before 11gR1PS3), parsing the query string of a URI was more difficult than in the current release. In 11gR1PS3 it is now much easier to parse the query strings, which in turn makes developing REST services in OSB even easier. In this blog entry, we will re-implement the REST-ful Products services using query strings for passing parameter information. Lets begin with the implementation of the Products REST service. This service is implemented in the Products.proxy file of the project. Lets begin with the overall structure of the service, as shown in the following screenshot. This is a common pattern for REST services in the Oracle Service Bus. You implement different flows for each of the HTTP verbs that you want your service to support. Lets take a look at how the GET verb is implemented. This is the path that is taken of you were to point your browser to: http://localhost:7001/SimpleREST/Products/id=1234 There is an Assign action in the request pipeline that shows how to extract a query parameter. Here is the expression that is used to extract the id parameter: $inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="id"]/@value The Assign action that stores the value into an OSB variable named id. Using this type of XPath statement you can query for any variables by name, without regard to their order in the parameter list. The Log statement is there simply to provided some debugging info in the OSB server console. The response pipeline contains a Replace action that constructs the response document for our rest service. Most of the response data is static, but the ID field that is returned is set based upon the query-parameter that was passed into the REST proxy. Testing the REST service with a browser is very simple. Just point it to the URL I showed you earlier. However, the browser is really only good for testing simple GET services. The OSB Test Console provides a much more robust environment for testing REST services, no matter which HTTP verb is used. Lets see how to use the Test Console to test this GET service. Open the OSB we console (http://localhost:7001/sbconsole) and log in as the administrator. Click on the Test Console icon (the little "bug") next to the Products proxy service in the SimpleREST project. This will bring up the Test Console browser window. Unlike SOAP services, we don't need to do much work in the request document because all of our request information will be encoded into the URI of the service itself. Belore the Request Document section of the Test Console is the Transport section. 
Expand that section and modify the query-parameters and http-method fields as shown in the next screenshot. By default, the query-parameters field will have the tags already defined. You just need to add a tag for each parameter you want to pass into the service. For out purposes with this particular call, you'd set the quer-parameters field as follows: <tp:parameter name="id" value="1234" /> </tp:query-parameters> Now you are ready to push the Execute button to see the results of the call. That covers the process for parsing query parameters using OSB. However, what if you have an OSB proxy service that needs to consume a REST-ful service? How do you tell OSB to pass the query parameters to the external service? In the sample code you will see a 2nd proxy service called CallREST. It invokes the Products proxy service in exactly the same way it would invoke any REST service. Our CallREST proxy service is defined as a SOAP service. This help to demonstrate OSBs ability to mediate between service consumers and service providers, decreasing the level of coupling between them. If you examine the message flow for the CallREST proxy service, you'll see that it uses an Operational branch to isolate processing logic for each operation that is defined by the SOAP service. We will focus on the getProductDetail branch, that calls the Products REST service using the HTTP GET verb. Expand the getProduct pipeline and the stage node that it contains. There is a single Assign statement that simply extracts the productID from the SOA request and stores it in a local OSB variable. Nothing suprising here. The real work (and the real learning) occurs in the Route node below the pipeline. The first thing to learn is that you need to use a route node when calling REST services, not a Service Callout or a Publish action. That's because only the Routing action has access to the $oubound variable, especially when invoking a business service. The Routing action contains 3 Insert actions. The first Insert action shows how to specify the HTTP verb as a GET. The second insert action simply inserts the XML node into the request. This element does not exist in the request by default, so we need to add it manually. Now that we have the element defined in our outbound request, we can fill it with the parameters that we want to send to the REST service. In the following screenshot you can see how we define the id parameter based on the productID value we extracted earlier from the SOAP request document. That expression will look for the parameter that has the name id and extract its value. That's all there is to it. You now know how to take full advantage of the query parameter parsing capability of the Oracle Service Bus 11gR1PS2. Download the sample source code here: rest2_sbconfig.jar Ubuntu and the OSB Test Console You will get an error when you try to use the Test Console with the Oracle Service Bus, using Ubuntu (or likely a number of other Linux distros also). The error (shown below) will state that the Test Console service is not running. The fix for this problem is quite simple. Open up the WebLogic Server administrator console (usually running at http://localhost:7001/console). In the Domain Structure window on the left side of the console, select the Servers entry under the Environment heading. The select the Admin Server entry in the main window of the console. By default, you should be viewing the Configuration tabe and the General sub tab in the main window. Look for the Listen Address field. 
By default it is blank, which means it is listening on all interfaces. For some reason Ubuntu doesn't like this. So enter a value like localhost or the specific IP address or DNS name for your server (usually its just localhost in development envirionments). Save your changes and restart the server. Your Test Console will now work correctly.
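
    For quick checks outside the Test Console, the same GET flows can also be exercised from the command line against the URIs used in the article:

        # single query parameter
        curl "http://localhost:7001/SimpleREST/Products?id=1234"
        # multiple query parameters (quote the URL so the shell does not interpret the ampersands)
        curl "http://localhost:7001/SimpleREST/Products?cat=electronics&subcat=television&mfg=sony"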

    Read the article

  • I still think Twitter is dead … but

    - by Randy Walker
    Twitter finally hit the mainstream about 8 months ago, but I’ve been saying for a couple of years now, without a real way for the company to earn money, what’s the future fate of Twitter?  On the personal side, where is the real value for the users?  For the most part, Twitter has replaced most people’s IM (instant messaging), at least in the technology circles I run in.  It still has value for users as a communication tool.  But I see it more as a fad.  My prediction is over the next 6 months we’ll start seeing a usage drop (if we haven’t already started to see it). On the business side, how does Twitter make money?  It doesn’t.  If you use the text messaging capabilities, you see a few ads.  But most smart phone and PC users, won’t ever see them.  I still think Twitter has the best chance to make money by forcing the “collectors” to pay money.  You know what I mean by “collector”, those people that collect tons of followers or friends.  If Twitter caps the number of followers and makes you pay to have more, would you?  The normal twitter user doesn’t have that many followers, and this is where my title comes in … BUT The financial value for Twitter is really seen through businesses connecting with their customers.  I’ve seen 3 effective ways this has been accomplished. 1. Giving your customers a coupon or announcing a sale My favorite is @amazonmp3, Being a huge music lover, I get notified when they put music on sale. Various restaurants like @ruthschris_ARK will let their favorite customers know about certain specials @BluefinMemphis I was traveling through Memphis once looking for a sushi restaurant when they had %50 off if we mentioned we saw them on Twitter.  It was their first attempt at trying to encourage customers in the door, and after talking with the management, it was a huge success 2. Giveaways @namecheap Several companies have started huge marketing campaigns, but my favorite is watching companies post trivia questions, and the first person to respond wins a prize. 3. Responding to Customer Complaints I once posted a complaint about American Express (a company that I have slowly come to really dislike) but they actually had someone contact me to try and resolve the issue.  I give them credit for paying attention, but still dislike them for their horrible credit practices.

    Read the article

  • RightNow CX Cloud Service Combined with Oracle Fusion CRM in the Cloud

    - by Richard Lefebvre
    ·        The May 2012 release of Oracle’s RightNow CX Cloud Service, the customer experience suite, is now integrated with Oracle Fusion CRM, helping organizations to achieve sustainable business growth through relevant, cross-channel customer interactions that can increase revenue opportunities and drive organizational efficiencies. Relevant Interactions Build Stronger Customer Relationships ·          Armed with a comprehensive view of all customer interactions across channels, the context and status of these interactions, and an awareness of the customer’s value to the organization, companies can now offer more relevant products and services to customers. ·         Using the combined Oracle RightNow CX Cloud Service and Oracle Fusion CRM solutions, organizations can increase customer retention, drive higher levels of customer advocacy, and increase sales conversion rates with tools designed to: - Provide a complete, cross-channel view of the customer to sales, marketing and service. - Empower sales and service departments to easily collaborate to proactively solve customer issues, using opportunities to provide purchase advice at the right time and with the right solutions. - Allow sales to easily review service history in preparation for sales calls. - Enable agents to understand customer value based upon prior buying habits and existing opportunities. Deeper Insight Enables Targeted, Personalized Opportunities ·          The combination of Oracle RightNow CX Cloud Service and Oracle Fusion CRM allows sales and marketing organizations to simultaneously leverage service interactions from RightNow CX and sales prediction and segmentation capabilities from Fusion Sales. This helps companies to: - Better match products and services to specific customer needs based on customer service history.  - Deliver targeted, personalized interactions intended to help customers derive more value from purchases and to inform future buying decisions. - Identify new opportunities to increase deal size and conversion rates. Supporting Quotes ·         “Every interaction is a relationship opportunity to grow your business. When these interactions are relevant and add value for customers, customers are more likely to trust the relationship and seek purchase advice,” said David Vap, group vice president, Oracle. “This customer trust provides an opportunity to increase customer product adoption and to reduce the cost of customer acquisition, thereby increasing company profitability.” Supporting Resources ·         Oracle Fusion CRM ·         Oracle Fusion Applications ·         Oracle RightNow CX Cloud Service ·         OracleCRM on Facebook ·         OracleCRM on YouTube

    Read the article

  • Tips on ensuring Model Quality

    - by [email protected]
    Given enough data that represents well the domain and models that reflect exactly the decision being optimized, models usually provide good predictions that ensure lift. Nevertheless, sometimes the modeling situation is less than ideal. In this blog entry we explore the problems found in a few such situations and how to avoid them.1 - The Model does not reflect the problem you are trying to solveFor example, you may be trying to solve the problem: "What product should I recommend to this customer" but your model learns on the problem: "Given that a customer has acquired our products, what is the likelihood for each product". In this case the model you built may be too far of a proxy for the problem you are really trying to solve. What you could do in this case is try to build a model based on the result from recommendations of products to customers. If there is not enough data from actual recommendations, you could use a hybrid approach in which you would use the [bad] proxy model until the recommendation model converges.2 - Data is not predictive enoughIf the inputs are not correlated with the output then the models may be unable to provide good predictions. For example, if the input is the phase of the moon and the weather and the output is what car did the customer buy, there may be no correlations found. In this case you should see a low quality model.The solution in this case is to include more relevant inputs.3 - Not enough cases seenIf the data learned does not include enough cases, at least 200 positive examples for each output, then the quality of recommendations may be low. The obvious solution is to include more data records. If this is not possible, then it may be possible to build a model based on the characteristics of the output choices rather than the choices themselves. For example, instead of using products as output, use the product category, price and brand name, and then combine these models.4 - Output leaking into input giving the false impression of good quality modelsIf the input data in the training includes values that have changed or are available only because the output happened, then you will find some strong correlations between the input and the output, but these strong correlations do not reflect the data that you will have available at decision (prediction) time. For example, if you are building a model to predict whether a web site visitor will succeed in registering, and the input includes the variable DaysSinceRegistration, and you learn when this variable has already been set, you will probably see a big correlation between having a Zero (or one) in this variable and the fact that registration was successful.The solution is to remove these variables from the input or make sure they reflect the value as of the time of decision and not after the result is known. 

    Read the article

  • Is Infiniband going to get squeezed by iWARP and external QPI?

    - by andy.grover
    The Inquirer certainly thinks so.However, I'm not so sure it makes sense to compare Infiniband to an as-yet-unannounced optical external QPI. QPI is currently a processor interconnect. CPUs, RAM, and devices connected by it are conceptually part of the same machine -- they run a single OS, for example. They are both "networks" or "fabrics" but they have very different design trade-offs.Another widely-used bus in the system is closer to Infiniband than QPI -- PCI Express. Isn't it more likely that PCIe could take on IB? There are companies already who have solutions that use external PCI Express for cluster interconnect, but these have not gained significant market share. Why would QPI, a technology whose sweet spot is even further from Infiniband's than PCIe, be able to challenge Infiniband? It's hard to speculate without much information, but right now it doesn't seem likely to me.The other prediction made in the article is that Intel's 10GbE iWARP card could squeeze IB on the low end, due to its greater compatibility and lower cost.It's definitely never a good idea to bet against Ethernet when it comes to mass-market, commodity networking. Ethernet will win. 10GbE will win. But, there are now two competing ways to implement the low-latency RDMA Verbs interface on top of Ethernet. iWARP is essentially RDMA over TCP/IP over Ethernet. The new alternative is IBoE (Infiniband over Ethernet, aka RoCEE, aka "Rocky"). This encapsulates the IB packet protocol directly in the Ethernet frame. It loses the layer 3 routability of iWARP, but better maintains software compatibility with existing apps that use IB, and is simpler to implement in both software and hardware. iWARP has a substantial head start, but I believe that IBoE silicon will eventually be cheaper, and more likely to be implemented in commodity Ethernet hardware.I think IBoE is going to take low-end market share from traditional IB, but I think this is a situation IB hardware vendors have no problem accepting. Commoditized IBoE NICs invite greater use of RDMA features, and when higher performance is needed, customers can upgrade to "real" IB, maintaining IB's justification for higher prices. (IB max interconnect speeds have historically been 2-4x higher than Ethernet, and I don't see that changing.)(ObDisclosure: My current employer now sells IB hardware. I previously also worked at Intel. My opinions are my own, duh.)

    Read the article

  • Agilist, Heal Thyself!

    - by Dylan Smith
    I’ve been meaning to blog about a great experience I had earlier in the year at Prairie Dev Con Calgary.  Myself and Steve Rogalsky did a session that we called “Agilist, Heal Thyself!”.  We used a format that was new to me, but that Steve had seen used at another conference.  What we did was start by asking the audience to give us a list of challenges they had had when adopting agile.  We wrote them all down, then had everybody vote on the most interesting ones.  Then we split into two groups, and each group was assigned one of the agile challenges.  We had 20 minutes to discuss the challenge, and suggest solutions or approaches to improve things.  At the end of the 20 minutes, each of the groups gave a brief summary of their discussion and learning's, then we mixed up the groups and repeated with another 2 challenges. The 2 groups I was part of had some really interesting discussions, and suggestions: Unfinished Stories at the end of Sprints The first agile challenge we tackled, was something that every single Scrum team I have worked with has struggled with.  What happens when you get to the end of a Sprint, and there are some stories that are only partially completed.  The team in question was getting very de-moralized as they felt that every Sprint was a failure as they never had a set of fully completed stories. How do you avoid this? and/or what do you do when it happens? There were 2 pieces of advice that were well received: 1. Try to bring stories to completion before starting new ones.  This is advice I give all my Scrum teams.  If you have a 3-week sprint, what happens all too often is you get to the end of week 2, and a lot of stories are almost done; but almost none are completely done.  This is a Bad Thing.  I encourage the teams I work with to only start a new story as a very last resort.  If you finish your task look at the stories in progress and see if there’s anything you can do to help before moving onto a new story.  In the daily standup, put a focus on seeing what stories got completed yesterday, if a few days go by with none getting completed, be sure this fact is visible to the team and do something about it.  Something I’ve been doing recently is introducing WIP (Work In Progress) limits while using Scrum.  My current team has 2-week sprints, and we usually have about a dozen or stories in a sprint.  We instituted a WIP limit of 4 stories.  If 4 stories have been started but not finished then nobody is allowed to start new stories.  This made it obvious very quickly that our QA tasks were our bottleneck (we have 4 devs, but only 1.5 testers).  The WIP limit forced the developers to start to pickup QA tasks before moving onto the next dev tasks, and we ended our sprints with many more stories completely finished than we did before introducing WIP limits. 2. Rather than using time-boxed sprints, why not just do away with them altogether and go to a continuous flow type approach like KanBan.  Limit WIP to keep things under control, but don’t have a fixed time box at the end of which all tasks are supposed to be done.  This eliminates the problem almost entirely.  At some points in the project (releases) you need to be able to burn down all the half finished stories to get a stable release build, but this probably occurs less often than every sprint, and there are alternative approaches to achieve it using branching strategies rather than forcing your team to try to get to Zero WIP every 2-weeks (e.g. 
when you are ready for a release, create a new branch for any new stories, but finish all existing stories in the current branch and release it). Trying to Introduce Agile into a team with previous Bad Agile Experiences One of the agile adoption challenges somebody described, was he was in a leadership role on a team he had recently joined – lets call him Dave.  This team was currently very waterfall in their ALM process, but they were about to start on a new green-field project.  Dave wanted to use this new project as an opportunity to do things the “right way”, using an Agile methodology like Scrum, adopting TDD, automated builds, proper branching strategies, etc.  The problem he was facing is everybody else on the team had previously gone through an “Agile Adoption” that was a horrible failure.  Dave blamed this failure on the consultant brought in previously to lead this agile transition, but regardless of the reason, the team had very negative feelings towards agile, and was very resistant to trying it out again.  Dave possibly had the authority to try to force the team to adopt Agile practices, but we all know that doesn’t work very well.  What was Dave to do? Ultimately, the best advice was to question *why* did Dave want to adopt all these various practices. Rather than trying to convince his team that these were the “right way” to run a dev project, and trying to do a Big Bang approach to introducing change.  He would be better served by identifying problems the team currently faces, have a discussion with the team to get everybody to agree that specific problems existed, then have an open discussion about ways to address those problems.  This way Dave could incrementally introduce agile practices, and he doesn’t even need to identify them as “agile” practices if he doesn’t want to.  For example, when we discussed with Dave, he said probably the teams biggest problem was long periods without feedback from users, then finding out too late that the software is not going to meet their needs.  Rather than Dave jumping right to introducing Scrum and all it entails, it would be easier to get buy-in from team if he framed it as a discussion of existing problems, and brainstorming possible solutions.  And possibly most importantly, don’t try to do massive changes all at once with a team that has not bought-into those changes.  Taking an incremental approach has a greater chance of success. I see something similar in my day job all the time too.  Clients who for one reason or another claim to not be fans of agile (or not ready for agile yet).  But then they go on to ask me to help them get shorter feedback cycles, quicker delivery cycles, iterative development processes, etc.  It’s kind of funny at times, sometimes you just need to phrase the suggestions in terms they are using and avoid the word “agile”. PS – I haven’t blogged all that much over the past couple of years, but in an attempt to motivate myself, a few of us have accepted a blogger challenge.  There’s 6 of us who have all put some money into a pool, and the agreement is that we each need to blog at least once every 2-weeks.  The first 2-week period that we miss we’re eliminated.  Last person standing gets the money.  So expect at least one blog post every couple of weeks for the near future (I hope!).  
And check out the blogs of the other 5 people in this blogger challenge: Steve Rogalsky: http://winnipegagilist.blogspot.ca Aaron Kowall: http://www.geekswithblogs.net/caffeinatedgeek Tyler Doerkson: http://blog.tylerdoerksen.com David Alpert: http://www.spinthemoose.com Dave White: http://www.agileramblings.com (note: site not available yet.  should be shortly or he owes me some money!)

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-29

    - by Bob Rhubart
    ORCLville: OOW 2012 - Crystal BallOracle ACE Director Floyd Teter cooks up some tongue-in-cheek predictions for news and announcements that might come out of Oracle OpenWorld 2012. What's your prediction? Oracle Optimized Solutions at Oracle OpenWorld 2012 | Oracle Hardware Hardware matters, too! The people behind the Oracle Hardware blog have put together a list of Oracle Openworld 2012 sessions focused Oracle Optimized Solutions, "designed, pre-tested, tuned and fully documented architectures for optimal performance and availability." Just plug the session ID numbers into Schedule Builder and you're good to go. AIX Checklist for stable OBIEE deployment | Dick Dunbar "OBIEE is a complicated system with many moving parts and connection points," according to Oracle Business Inteligence escalation engineer Dick Dunbar. "The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators." Demo for OPN: Coherence Management with EM Cloud Control 12c Oracle Partner Network members can check out a new Coherence Management demo that showcases some of the key capabilities of Management Pack for Oracle Coherence and JVM Diagnostics. "The demo flow showcases the key enhancements made in Enterprise Manager 12c release which includes new customizable performance summary, cache data management and configuration management," according to the WebLogic Partner Community EMEA blog. The Pragmatic Architect: To Boldly Go Where No One Has Gone Before | Frank Buschmann "Many architects have technical knowledge that's both impressive and sound, which is indeed an inevitable basis for design success," says Frank Buschmann. "Yet, a lot of software projects fail or suffer due to severe challenges in their architecture. The key to mastery is how architects approach design, what they value, and where they focus their attention and work." As retail dies, whom will be the winners? | Peter Evans-Greenwood "The problem for many retailers is that how consumers shop has changed but the the retailers haven't adapted, " says Peter Evans-Greenwood. "Their sole virtue was to be the last step in a supply chain delivering somebody else's products to the consumer. However, being the last step in the supply chain is no longer a virtue when consumers skip across channels and can reach around the globe, no longer dependant on or limited to what they can find locally." Thought for the Day "Brains require stimulation. If you're locked into a pattern of work, work, and more work, your brain soon habituates - the same way that it lets you stop hearing a clock ticking. So, if you want to be more effective at work, you must, paradoxically, be less single-minded in your devotion to work. Anything you do—anything—that stimulates new segments of your brain will make you a more effective programmer or analyst. I promise, with a money-back guarantee." — Gerald M. Weinberg Source: SoftwareQuotes.com

    Read the article

  • How to properly do weapon cool-down reload timer in multi-player laggy environment?

    - by John Murdoch
    I want to handle weapon cool-down timers in a fair and predictable way on both client on server. Situation: Multiple clients connected to server, which is doing hit detection / physics Clients have different latency for their connections to server ranging from 50ms to 500ms. They want to shoot weapons with fairly long reload/cool-down times (assume exactly 10 seconds) It is important that they get to shoot these weapons close to the cool-down time, as if some clients manage to shoot sooner than others (either because they are "early" or the others are "late") they gain a significant advantage. I need to show time remaining for reload on player's screen Clients can have clocks which are flat-out wrong (bad timezones, etc.) What I'm currently doing to deal with latency: Client collects server side state in a history, tagged with server timestamps Client assesses his time difference with server time: behindServerTimeNs = (behindServerTimeNs + (System.nanoTime() - receivedState.getServerTimeNs())) / 2 Client renders all state received from server 200 ms behind from his current time, adjusted by what he believes his time difference with server time is (whether due to wrong clocks, or lag). If he has server states on both sides of that calculated time, he (mostly LERP) interpolates between them, if not then he (LERP) extrapolates. No other client-side prediction of movement, e.g., to make his vehicle seem more responsive is done so far, but maybe will be added later So how do I properly add weapon reload timers? My first idea would be for the server to send each player the time when his reload will be done with each world state update, the client then adjusts it for the clock difference and thus can estimate when the reload will be finished in client-time (perhaps considering also for latency that the shoot message from client to server will take as well?), and if the user mashes the "shoot" button after (or perhaps even slightly before?) that time, send the shoot event. The server would get the shoot event and consider the time shot was made as the server time when it was received. It would then discard it if it is nowhere near reload time, execute it immediately if it is past reload time, and hold it for a few physics cycles until reload is done in case if it was received a bit early. It does all seem a bit convoluted, and I'm wondering whether it will work (e.g., whether it won't be the case that players with lower ping get better reload rates), and whether there are more elegant solutions to this problem.

    Read the article

  • download and process a file by ftp at set intervals, with error handling, rescheduling and status messages

    - by compound eye
    I want to download a data file from a remote FTP server to my machine at regular intervals. Once the file is downloaded I want to call another script which will process the file. My development machine is Mac OS X; the eventual deployment environment is Linux. What would be the stock-standard way to automate this? I know I can use cron to schedule curl to download and to run a script that will process the downloaded file at regular intervals, and I know I could write a slightly more complex script or an application that would do this and add error handling, rescheduling and sending status emails. But one of my requirements for this project is to write as little custom code as possible; instead I should try to use standard, tried-and-true existing tools, and if I do have to write code, to try to write the most straightforward code possible. The reason for this is that the code will potentially be installed on a large number of machines, all of which will need to be tweaked, customised and maintained by different people long after I am gone from the project, so the intention is to use well-documented, well-supported tools as much as possible. This seems like such a common task that there must be tools and scripts all over the internet, written by people who have carefully considered everything that could possibly go wrong when you need to download and process a file from a remote server at regular intervals, with error handling, rescheduling and status messages. Is that what Expect is for? What would you recommend? (The system will be downloading weather prediction data every six hours, so that the system can prepare in the event of bad weather warnings.)
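
    A minimal "standard tools only" sketch of the cron-plus-script approach described above; every host name, address and path below is invented for illustration, and the mail commands stand in for whatever status notification is wanted:

        # crontab entry: fetch and process every six hours
        # 0 */6 * * *  /usr/local/bin/fetch-forecast.sh

        #!/bin/sh
        # fetch-forecast.sh -- download the data file via FTP, then hand it to the processing script
        url="ftp://ftp.example.com/pub/forecast.dat"
        tmp=$(mktemp) || exit 1
        if curl --fail --silent --show-error --output "$tmp" "$url"; then
            /usr/local/bin/process-forecast "$tmp" \
                || echo "processing failed" | mail -s "forecast: processing failed" ops@example.com
        else
            echo "download failed" | mail -s "forecast: download failed" ops@example.com
        fi
        rm -f "$tmp"

    For simple rescheduling without extra code, curl's --retry option or wget's --tries/--waitretry flags cover the common cases.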

    Read the article

  • Alright, I'm still stuck on this homework problem. C++

    - by Josh
    Okay, the past few days I have been trying to get some input on my programs. Well I decided to scrap them for the most part and try again. So once again, I'm in need of help. For the first program I'm trying to fix, it needs to show the sum of SEVEN numbers. Well, I'm trying to change is so that I don't need the mem[##] = ####. I just want the user to be able to input the numbers and the program run from there and go through my switch loop. And have some kind of display..saying like the sum is?.. Here's my code so far. #include <iostream> #include <iomanip> #include <ios> using namespace std; int main() { const int READ = 10; const int WRITE = 11; const int LOAD = 20; const int STORE = 21; const int ADD = 30; const int SUBTRACT = 31; const int DIVIDE = 32; const int MULTIPLY = 33; const int BRANCH = 40; const int BRANCHNEG = 41; const int BRANCHZERO = 42; const int HALT = 43; int mem[100] = {0}; //Making it 100, since simpletron contains a 100 word mem. int operation; //taking the rest of these variables straight out of the book seeing as how they were italisized. int operand; int accum = 0; // the special register is starting at 0 int counter; for ( counter=0; counter < 100; counter++) mem[counter] = 0; // This is for part a, it will take in positive variables in //a sent-controlled loop and compute + print their sum. Variables from example in text. mem[0] = 1009; mem[1] = 1109; mem[2] = 2010; mem[3] = 2111; mem[4] = 2011; mem[5] = 3100; mem[6] = 2113; mem[7] = 1113; mem[8] = 4300; counter = 0; //Makes the variable counter start at 0. while(true) { operand = mem[ counter ]%100; // Finds the op codes from the limit on the mem (100) operation = mem[ counter ]/100; //using a switch loop to set up the loops for the cases switch ( operation ){ case READ: //reads a variable into a word from loc. Enter in -1 to exit cout <<"\n Input a positive variable: "; cin >> mem[ operand ]; counter++; break; case WRITE: // takes a word from location cout << "\n\nThe content at location " << operand << " is " << mem[operand]; counter++; break; case LOAD:// loads accum = mem[ operand ];counter++; break; case STORE: //stores mem[ operand ] = accum;counter++; break; case ADD: //adds accum += mem[operand];counter++; break; case SUBTRACT: // subtracts accum-= mem[ operand ];counter++; break; case DIVIDE: //divides accum /=(mem[ operand ]);counter++; break; case MULTIPLY: // multiplies accum*= mem [ operand ];counter++; break; case BRANCH: // Branches to location counter = operand; break; case BRANCHNEG: //branches if acc. 
is < 0 if (accum < 0) counter = operand; else counter++; break; case BRANCHZERO: //branches if acc = 0 if (accum == 0) counter = operand; else counter++; break; case HALT: // Program ends break; } } return 0; } part B int main() { const int READ = 10; const int WRITE = 11; const int LOAD = 20; const int STORE = 21; const int ADD = 30; const int SUBTRACT = 31; const int DIVIDE = 32; const int MULTIPLY = 33; const int BRANCH = 40; const int BRANCHNEG = 41; const int BRANCHZERO = 41; const int HALT = 43; int mem[100] = {0}; int operation; int operand; int accum = 0; int pos = 0; int j; mem[22] = 7; // loop 7 times mem[25] = 1; // increment by 1 mem[00] = 4306; mem[01] = 2303; mem[02] = 3402; mem[03] = 6410; mem[04] = 3412; mem[05] = 2111; mem[06] = 2002; mem[07] = 2312; mem[08] = 4210; mem[09] = 2109; mem[10] = 4001; mem[11] = 2015; mem[12] = 3212; mem[13] = 2116; mem[14] = 1101; mem[15] = 1116; mem[16] = 4300; j = 0; while ( true ) { operand = memory[ j ]%100; // Finds the op codes from the limit on the memory (100) operation = memory[ j ]/100; //using a switch loop to set up the loops for the cases switch ( operation ){ case 1: //reads a variable into a word from loc. Enter in -1 to exit cout <<"\n enter #: "; cin >> memory[ operand ]; break; case 2: // takes a word from location cout << "\n\nThe content at location " << operand << "is " << memory[operand]; break; case 3:// loads accum = memory[ operand ]; break; case 4: //stores memory[ operand ] = accum; break; case 5: //adds accum += mem[operand];; break; case 6: // subtracts accum-= memory[ operand ]; break; case 7: //divides accum /=(memory[ operand ]); break; case 8: // multiplies accum*= memory [ operand ]; break; case 9: // Branches to location j = operand; break; case 10: //branches if acc. is < 0 break; case 11: //branches if acc = 0 if (accum == 0) j = operand; break; case 12: // Program ends exit(0); break; } j++; } return 0; }

    Read the article

  • Upgrading PHP on MacOSX without config_vars.mk ?

    - by Ken
    Hi everyone, I want to upgrade the PHP version running on MAMP to 5.3. I've copied the ./configure statement from phpinfo() and downloaded the 5.3 branch source I wish to compile. However, when I try to compile it I get an error about a missing config_vars.mk file from apxs. How can I solve this issue if I do not have the config_vars.mk? Can one be deprecated? Can I copy the one from the stock Apache that comes installed on OS X (Snow Leopard)? What will happen if I remove --with-apxs from the configure line? Thanks in advance for any help; it is greatly appreciated. Ken.
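
    The config_vars.mk complaint generally means that apxs is resolving to an Apache installation whose build support files are not present (the stock OS X Apache, for instance) rather than to MAMP's own Apache. A hedged sketch of the usual workaround follows; the MAMP path below is an assumption, so verify it before using it, and keep whichever --with-apxs/--with-apxs2 flag your existing configure line already uses:

        # check where MAMP keeps its own apxs
        ls /Applications/MAMP/Library/bin/apxs
        # then point the PHP build at it, keeping the rest of the flags copied from phpinfo()
        ./configure --with-apxs2=/Applications/MAMP/Library/bin/apxs ...

    Dropping --with-apxs from the configure line would build PHP without the Apache module (CLI/CGI only), so MAMP's Apache would not pick up the new version.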

    Read the article

  • "GEOM_MIRROR: cancelling unmapped because of ada0"

    - by antiduh
    I recently upgraded my FreeBSD install to FreeBSD 9.1-stable from the 9-stable branch. I have two SATA hard drives (/dev/ada0, /dev/ada1) in a geom mirror, with nothing in between: [/dev/ada0, /dev/ada1] --> /dev/mirror/gm0, which I then partition for root etc. After upgrading from 9.0-stable to 9.1-stable, I found these messages on the console: GEOM_MIRROR: cancelling unmapped because of ada0 GEOM_MIRROR: cancelling unmapped because of ada1 GEOM_MIRROR: Device mirror/gm0 launched (2/2). Everything still seems to work: the mirror seems healthy, the machine works just fine, and performance is fine.
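
    Those first two lines read as informational: gmirror appears to be saying it will not use the unmapped I/O path for this mirror because of the components beneath it, and the third line shows the mirror launching normally with both disks. A quick way to double-check the mirror's health after the upgrade:

        gmirror status
        gmirror list gm0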

    Read the article

  • Installing Xen 4.0.1 from Source on Ubuntu 10.10

    - by markus
    I'm trying to install Xen 4.0.1 from source on Ubuntu 10.10 Server Edition. I started with a clean machine and followed the instructions from https://help.ubuntu.com/community/Xen. So I installed the packages mentioned there with: sudo apt-get install gettext bin86 bcc libc6-dev-i386 iasl texinfo git When making the source with make world I get this error: + git clone -o xen -n git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-pvops.git.tmp Initialized empty Git repository in /home/homer/xen/linux-2.6-pvops.git.tmp/.git/ remote: Counting objects: 1855434, done. remote: Compressing objects: 100% (291939/291939), done. Receiving objects: 100% (1855434/1855434), 368.49 MiB | 11.00 MiB/s, done. remote: Total 1855434 (delta 1553214), reused 1847760 (delta 1546656) Resolving deltas: 100% (1553214/1553214), done. + cd linux-2.6-pvops.git.tmp + git checkout -b xen/stable-2.6.32.x xen/xen/stable-2.6.32.x fatal: git checkout: branch xen/stable-2.6.32.x already exists make[3]: *** [linux-2.6-pvops.git/.valid-src] Error 128 Does anybody have an idea what I can do?
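
    The failing step is the explicit git checkout -b: most likely the clone itself already created a local xen/stable-2.6.32.x branch (git creates a branch for the remote's default HEAD even with -n), so creating it a second time fails. One workaround is to finish the checkout by hand inside the directory left behind and re-run the build; this is a sketch rather than a guaranteed fix, using the directory name from the error output:

        cd linux-2.6-pvops.git.tmp
        git checkout xen/stable-2.6.32.x     # no -b: the branch already exists
        cd ..
        make world

    If the build system still refuses to continue, deleting the partially populated linux-2.6-pvops.git.tmp directory and re-running make world is the other common approach.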

    Read the article

  • Setting up Github post-receive webhook with private Jenkins and private repo

    - by Joseph S.
    I'm trying to set up a private GitHub project to send a post-receive request to a private Jenkins instance to trigger a project build on branch push. Using latest Jenkins with the GitHub plugin. I believe I set up everything correctly on the Jenkins side because when sending a request from a public server with curl like this: curl http://username:password@ipaddress:port/github-webhook/ results in: Stacktrace: net.sf.json.JSONException: null object which is fine because the JSON payload is missing. Sending the wrong username and password in the URI results in: Exception: Failed to login as username I interpret this as a correct Jenkins configuration. Both of these requests also result in entries in the Jenkins log. However, when pasting the exact same URI from above into the Github repository Post-Receive URLs Service Hook and clicking on Test Hook, absolutely nothing seems to happen on my server. Nothing in the Jenkins log and the GitHub Hook Log in the Jenkins project says Polling has not run yet. I have run out of ideas and don't know how to proceed further.

    Read the article

  • Does an alternative to regedit.exe exist?

    - by Peter Mortensen
    Is there something better than regedit.exe for searching and editing the Windows registry? Update 1: Registrar Registry Manager 6.50 from Resplendence Software Projects (free lite edition) meets the two requirements listed below. Direct download URL (2.7 MB). Searches can also be restricted to a particular branch of the registry. However, exporting and sorting on columns in the search window are disabled in the lite version (and there may be other limitations). I haven't yet evaluated the other candidates. In particular, I would like to be able to batch search instead of having to step through with F3, and to have a go-to function that will open a particular registry key, e.g. HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer, instead of having to drill down from the top. Platform: Windows XP.
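
    Alongside the third-party GUIs, the built-in command-line tools cover scripted searches and exports reasonably well on Windows XP; "searchterm" below is a placeholder, and the key is the example from the question:

        reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer" /s
        reg query HKLM\SOFTWARE /s | findstr /i "searchterm"
        reg export "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer" explorer-policies.reg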

    Read the article

  • disparity between `top`'s given CPU % and process CPU usage total

    - by intuited
    I've noticed that there are sometimes (large) differences between the reported total CPU usage and a summation of the per-process CPU utilization given by apps like top and wmtop. As an example: I recently ran a git filter-branch --index-filter on a fairly large repo, with the index-filter command piping git ls-files through a grep filter and into xargs git rm --cached. This took a few minutes to run; while it was going I noticed that both wmtop and top were displaying a high (above 50% on my 2-core machine) total CPU usage, but that neither showed any individual processes which were using a significant amount of CPU time. Are some processes not shown in the process list? What sorts of processes are these, and is there a way to find out how much CPU time they are using?
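
    One explanation that fits a git filter-branch run is that the CPU time is being consumed by many short-lived child processes: the index filter spawns a shell, git ls-files, grep and xargs for every rewritten commit, and each of those exits before top's next sampling interval, so the total climbs while no single PID ever looks busy. Some general-purpose ways to confirm that, offered as a sketch rather than a definitive diagnosis:

        # the user/sys totals reported by time include all child processes
        time git filter-branch --index-filter '...your filter...' HEAD
        # sample faster and show full command lines so short-lived children are more visible
        top -d 1 -c
        # per-process statistics each second (from the sysstat package, if installed)
        pidstat 1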

    Read the article

  • Safari corrupting downloads?

    - by Kaji
    First off, a bit of background: I had to do an erase and install about 2-3 weeks ago, so this is a fresh, up-to-date installation of Snow Leopard we're dealing with. That said, I decided recently to branch out from simply programming PHP in a text editor and explore some of the other technologies I keep hearing about, and picked up Drupal, Joomla, and the Zend Framework from their respective official sites. Latest complete, stable builds for all 3. Drupal and Joomla downloaded without a problem, but when I put them in my /~username/Sites folder, XAMPP pretends they're not there, even if I restart Apache or the laptop itself. Zend's archive won't open at all. Is Safari corrupting the downloads, or are there other issues in play that can be investigated?
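
    It is worth checking whether the archives themselves are intact before blaming the browser, and whether Safari's "Open 'safe' files after downloading" preference has already auto-extracted them, leaving an expanded folder instead of the original archive. A quick look from Terminal, assuming the downloads landed in ~/Downloads and using illustrative file names:

        cd ~/Downloads
        file ZendFramework-*.zip drupal-*.tar.gz
        shasum ZendFramework-*.zip      # compare with a checksum from the download page, if one is published

    It may also be worth confirming that XAMPP is actually serving ~username/Sites; by default it serves its own htdocs directory, which would make files placed in Sites appear to be ignored.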

    Read the article

  • Large svn external

    - by MPelletier
    I have a project which uses a large library residing in its own repository. I'm using Tortoise-SVN; the server is running an enterprise edition of VisualSVN. The project itself has the "standard" structure: trunk tags branches In each branch, tag, and trunk the library is set as an external (svn:externals property). If I get the entire tree, I get the library several times, which is just getting ridiculously repetitive. Is there a recommended structure for this? Or perhaps a way not to get all externals (because the other externals are much smaller and easier to manipulate)?
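
    If the main goal is to avoid fetching the library once per branch and tag, Subversion can skip externals during checkouts and updates; a sketch with a generic repository URL:

        # see where the externals are defined
        svn propget svn:externals -R .
        # check out or update without pulling any externals
        svn checkout --ignore-externals http://server/repo/project/trunk
        svn update --ignore-externals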

    Read the article

  • Git version control with multiple users

    - by ignatius
    Hello, I am a little bit lost with this issue, so let me explain my problem: I want to set up a Git repository that three or four users will contribute to, so they need to download the code and be able to upload their changes to the server or update their branch with the latest modifications. So I set up a Linux machine, installed Git, set up the repository, then added the users in order to enable access through SSH. Now my question is: what's next? The Git documentation is a little bit confusing; e.g., when I try from a dummy user account to clone the repository I get: xxx@xxx-desktop:~/Documentos/git/test$ git clone -v ssh://[email protected]/pub.git Initialized empty Git repository in /home/xxx/Documentos/git/test/pub/.git/ [email protected]'s password: fatal: '/pub.git' does not appear to be a git repository fatal: The remote end hung up unexpectedly Is that a problem of privileges? Do I need any special configuration? I want to avoid using git-daemon or gitosis. Sorry, maybe my question sounds silly; Git is powerful, but I admit it's not so user-friendly. Thanks Br
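
    The "does not appear to be a git repository" message is usually just a path problem: with an ssh:// URL the part after the host must be the full filesystem path of the repository on the server, so /pub.git only works if the repository really lives at the root of the filesystem. A minimal sketch of a shared setup, with paths and the group name invented for illustration:

        # on the server: a bare repository that members of one Unix group can push to
        sudo git init --bare --shared=group /srv/git/pub.git
        sudo chgrp -R developers /srv/git/pub.git
        sudo chmod -R g+ws /srv/git/pub.git

        # on each developer's machine
        git clone ssh://username@server.example.com/srv/git/pub.git
        cd pub
        git push origin master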

    Read the article

  • Authenticating AIX Users Against OID (Oracle Internet Directory)

    - by mwilkes
    We need to authenticate local users on an AIX server against OID using LDAP. We have a branch within OID where we've placed and synchronized Active Directory users. We've also configured external authentication on OID so that it verifies usernames/passwords against AD. Has anyone configured authentication for AIX in this type of environment? We believe we need to populate Unix-specific attributes on the user's directory entry in OID, but are unsure which attributes are needed. Additionally, we are looking to authenticate Oracle database users against OID, but because of external authentication we are unable to populate the ORCLPASSWORD attribute on the user's directory entry in OID (which is the attribute Oracle looks for the password in). Help with either or both is welcome.

    Read the article
