Search Results

Search found 32130 results on 1286 pages for 'local search'.


  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to ssh to a server, I'm able to do it, as my id_rsa.pub key is added to the authorized keys on the server. But when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I don't understand what the issue could be if I'm able to ssh but unable to deploy to the same server.

      $ cap deploy:setup
      "no seed data"
        triggering start callbacks for `deploy:setup'
      * 13:42:18 == Currently executing `multistage:ensure'
      *** Defaulting to `development'
      * 13:42:18 == Currently executing `development'
      * 13:42:18 == Currently executing `deploy:setup'
        triggering before callbacks for `deploy:setup'
      * 13:42:18 == Currently executing `db:configure_mongoid'
      * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config"
        servers: ["dev1.noob.com", "176.9.24.217"]
      Password:

    Cap script:

      # gem install capistrano capistrano-ext capistrano_colors
      begin; require 'capistrano_colors'; rescue LoadError; end
      require "bundler/capistrano"

      # RVM bootstrap
      # $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
      require 'rvm/capistrano'
      set :rvm_ruby_string, 'ruby-1.9.2-p290'
      set :rvm_type, :user # or :user

      # Application setup
      default_run_options[:pty] = true   # allow pseudo-terminals
      ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from git repository)
      ssh_options[:port] = 22

      set :ip, "dev1.noob.com"
      set :application, "flyingbird"
      set :repository, "repo-path"
      set :scm, :git
      set :branch, fetch(:branch, "master")
      set :deploy_via, :remote_cache
      set :rails_env, "production"
      set :use_sudo, false
      set :scm_username, "user"
      set :user, "user1"
      set(:database_username) { application }
      set(:production_database) { application + "_production" }
      set(:staging_database) { application + "_staging" }
      set(:development_database) { application + "_development" }

      role :web, ip                   # Your HTTP server, Apache/etc
      role :app, ip                   # This may be the same as your `Web` server
      role :db,  ip, :primary => true # This is where Rails migrations will run

      # Use multi-staging
      require "capistrano/ext/multistage"
      set :stages, ["development", "staging", "production"]
      set :default_stage, rails_env

      before "deploy:setup", "db:configure_mongoid"

      # Uncomment if you use any of these databases
      after "deploy:update_code", "db:symlink_mongoid"
      after "deploy:update_code", "uploads:configure_shared"
      after "uploads:configure_shared", "uploads:symlink"
      after 'deploy:update_code', 'bundler:symlink_bundled_gems'
      after 'deploy:update_code', 'bundler:install'
      after "deploy:update_code", "rvm:trust_rvmrc"

      # Use this to update crontab if you use 'whenever' gem
      # after "deploy:symlink", "deploy:update_crontab"

      if ARGV.include?("seed_data")
        after "deploy", "db:seed"
      else
        p "no seed data"
      end

      # Custom tasks to handle resque and redis restart
      before "deploy", "deploy:stop_workers"
      after "deploy", "deploy:restart_redis"
      after "deploy", "deploy:start_workers"
      after "deploy", "deploy:cleanup"

      'Create symlink for public uploads'
      namespace :uploads do
        task :symlink do
          run <<-CMD
            rm -rf #{release_path}/public/uploads &&
            mkdir -p #{release_path}/public &&
            ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads
          CMD
        end

        task :configure_shared do
          run "mkdir -p #{shared_path}/public"
          run "mkdir -p #{shared_path}/public/uploads"
        end
      end

      namespace :rvm do
        desc 'Trust rvmrc file'
        task :trust_rvmrc do
          run "rvm rvmrc trust #{current_release}"
        end
      end

      namespace :db do
        desc "Create mongoid.yml in shared path"
        task :configure_mongoid do
          db_config = <<-EOF
      defaults: &defaults
        host: localhost

      production:
        <<: *defaults
        database: #{production_database}

      staging:
        <<: *defaults
        database: #{staging_database}
      EOF
          run "mkdir -p #{shared_path}/config"
          put db_config, "#{shared_path}/config/mongoid.yml"
        end

        desc "Make symlink for mongoid.yml"
        task :symlink_mongoid do
          run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml"
        end

        desc "Fill the database with seed data"
        task :seed do
          run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed"
        end
      end

      namespace :bundler do
        desc "Symlink bundled gems on each release"
        task :symlink_bundled_gems, :roles => :app do
          run "mkdir -p #{shared_path}/bundled_gems"
          run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle"
        end

        desc "Install bundled gems"
        task :install, :roles => :app do
          run "cd #{release_path} && bundle install --deployment"
        end
      end

      namespace :deploy do
        task :start, :roles => :app do
          run "touch #{current_path}/tmp/restart.txt"
        end

        desc "Restart the app"
        task :restart, :roles => :app do
          run "touch #{current_path}/tmp/restart.txt"
        end

        desc "Stop the workers"
        task :stop_workers do
          run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers"
        end

        desc "Restart Redis server"
        task :restart_redis do
          "/etc/init.d/redis-server restart"
        end

        desc "Start the workers"
        task :start_workers do
          run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers"
        end
      end
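
    A quick way to narrow this down is to check whether the exact user Capistrano connects as (user1, per the :user setting above) can authenticate non-interactively to the servers listed in the cap output. Below is a minimal diagnostic sketch in Python, assuming a local OpenSSH client; the script itself is illustrative and not part of the Capistrano setup:

      # check_ssh_auth.py - rough diagnostic sketch (host names and user taken from the question).
      # BatchMode makes ssh fail instead of prompting, so a non-zero exit code means
      # key-based auth did not work for that user/host combination.
      import subprocess

      HOSTS = ["dev1.noob.com", "176.9.24.217"]   # servers shown in the cap output
      USER = "user1"                              # the :user set in the Cap script

      for host in HOSTS:
          cmd = [
              "ssh",
              "-o", "BatchMode=yes",              # never fall back to a password prompt
              "-o", "ConnectTimeout=10",
              f"{USER}@{host}",
              "true",
          ]
          result = subprocess.run(cmd, capture_output=True, text=True)
          status = "key auth OK" if result.returncode == 0 else "key auth FAILED"
          print(f"{USER}@{host}: {status}")
          if result.returncode != 0:
              print(result.stderr.strip())

    If this fails while a plain interactive ssh succeeds, the difference is usually the user name Capistrano logs in as, or a key that is only available in the SSH agent of the interactive session.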

    Read the article

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach?

    We currently have a single live server WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMware ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive.

    We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing.

    But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us, so if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach:

    - We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well.
    - We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share.
    - Sites will no longer be able to access their content if there's ever an Active Directory problem.
    - In general, it just seems a lot more complicated, with more moving parts that could break.

    Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMware environment; this VM would then expose the SMB share to our web servers. On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content.

    This thread is pretty interesting, though some of these people are working at a much larger scale. I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share.

    What do you think? Thanks for your help, Richard
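
    For reference, the existing WEB1 -> WEB2 content sync described above can be approximated with a mirrored robocopy run; the sketch below (Python) is illustrative only - the paths and log location are assumptions, and the msdeploy config sync is left out:

      # sync_content.py - illustrative sketch of a WEB1 -> WEB2 content mirror job.
      # robocopy /MIR mirrors the source tree, deleting files that no longer exist there.
      import subprocess

      SOURCE = r"D:\Content"              # assumed content partition on WEB1
      DEST = r"\\WEB2\D$\Content"         # assumed admin share on WEB2

      result = subprocess.run(
          ["robocopy", SOURCE, DEST, "/MIR", "/R:2", "/W:5", "/NP",
           r"/LOG:C:\sync\robocopy.log"],
          capture_output=True, text=True,
      )
      # robocopy exit codes 0-7 mean success (with or without copies); 8 and above is an error.
      if result.returncode >= 8:
          raise SystemExit(f"robocopy failed with exit code {result.returncode}")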

    Read the article

  • Local DNS server (bind) and the router DHCP

    - by Luca
    I just set up an internal HTTP server for internal use (I set up Redmine) in a small network (30 or so PCs). I set up the HTTP server on a VirtualBox Ubuntu VM, which also runs the DNS server (bind). In the DNS (bind) configuration I added the Redmine server name (redmine.engserver <- 192.168.1.14) and, as forwarders, the outside ISP DNS IP addresses.

    I am using a small wi-fi router (ASUS RT-N66U) as DHCP server (and as gateway). In the DHCP config page I set the DNS server to the Ubuntu server IP (it is fixed: 192.168.1.14). Now when I connect a new PC to the network, the DHCP router issues its new IP and hands out two DNS servers: primary 192.168.1.14 (the Ubuntu machine) and secondary 192.168.1.1 (the router itself).

      ipconfig /all

      Default Gateway . . . . . . . . . : 192.168.1.1
      DHCP Server . . . . . . . . . . . : 192.168.1.1
      DHCPv6 IAID . . . . . . . . . . . : 248539109
      DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-15-AA-3F-D0-67-E5-49-A7-EF
      DNS Servers . . . . . . . . . . . : 192.168.1.14
                                          192.168.1.1
      NetBIOS over Tcpip. . . . . . . . : Enabled

    Before changing the DHCP setting on the router, I would always get only one DNS server: 192.168.1.1 (which probably uses DNS forwarding to external public DNS services). The problem is this: if in my browser I type www.google.com, it works all the time. If in the browser I type http://redmine.engserver/ it works most of the time, but sometimes it ends up on a Yahoo search page or something else. In the DNS cache (ipconfig /displaydns) it shows as (Server not found).

    I looked with Wireshark, and it seems that sometimes the client PC queries the secondary DNS (192.168.1.1) instead of the primary 192.168.1.14. Obviously that one only knows public domains and does not have the redmine.engserver entry. What is wrong in this configuration? Is it even legitimate to have two DNS servers (one internal and one forwarded by the router) which are inconsistent? Is there another way to have a local name service in a small office network? Why is the router DHCP issuing itself as DNS?
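
    One way to confirm what the Wireshark capture suggests is to query each resolver handed out by DHCP directly for the internal name. A small sketch using the dnspython package (assumed installed via pip; the record name and server IPs are the ones from the question):

      # Compare answers from the internal bind server and the router's forwarder.
      import dns.resolver

      NAME = "redmine.engserver"
      SERVERS = ["192.168.1.14", "192.168.1.1"]   # the two DNS servers issued by DHCP

      for server in SERVERS:
          resolver = dns.resolver.Resolver(configure=False)
          resolver.nameservers = [server]
          resolver.lifetime = 3
          try:
              answer = resolver.resolve(NAME, "A")
              print(server, "->", [r.address for r in answer])
          except Exception as exc:                # NXDOMAIN, timeout, etc.
              print(server, "->", type(exc).__name__, exc)

    If 192.168.1.1 returns NXDOMAIN (or a search redirect) while 192.168.1.14 answers correctly, that matches the behaviour described: the Windows resolver does not guarantee it will always ask the first server in the list, so handing out the router as a secondary resolver for an internal-only zone gives intermittent failures.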

    Read the article

  • ASA 5505 stops local internet when connected to VPN

    - by g18c
    Hi, I have a Cisco ASA router running firmware 8.2(5) which hosts an internal LAN on 192.168.30.0/24. I have used the VPN Wizard to set up L2TP access, and I can connect fine from a Windows box and can ping hosts behind the VPN router. However, when connected to the VPN I can no longer ping out to the internet or browse web pages.

    I would like to be able to access the VPN and also browse the internet at the same time - I understand this is called split tunneling (I have ticked the setting in the wizard, but to no effect). If so, how do I do this? Alternatively, if split tunneling is a pain to set up, then making the connected VPN client have internet access from the ASA WAN IP would be OK. Thanks, Chris

      names
      !
      interface Ethernet0/0
       switchport access vlan 2
      !
      interface Ethernet0/1
      !
      interface Vlan1
       nameif inside
       security-level 100
       ip address 192.168.30.1 255.255.255.0
      !
      interface Vlan2
       nameif outside
       security-level 0
       ip address 208.74.158.58 255.255.255.252
      !
      ftp mode passive
      access-list inside_nat0_outbound extended permit ip any 10.10.10.0 255.255.255.128
      access-list inside_nat0_outbound extended permit ip 192.168.30.0 255.255.255.0 192.168.30.192 255.255.255.192
      access-list DefaultRAGroup_splitTunnelAcl standard permit 192.168.30.0 255.255.255.0
      access-list DefaultRAGroup_splitTunnelAcl_1 standard permit 192.168.30.0 255.255.255.0
      pager lines 24
      logging asdm informational
      mtu inside 1500
      mtu outside 1500
      ip local pool LANVPNPOOL 192.168.30.220-192.168.30.249 mask 255.255.255.0
      icmp unreachable rate-limit 1 burst-size 1
      no asdm history enable
      arp timeout 14400
      global (outside) 1 interface
      nat (inside) 0 access-list inside_nat0_outbound
      nat (inside) 1 192.168.30.0 255.255.255.0
      route outside 0.0.0.0 0.0.0.0 208.74.158.57 1
      timeout xlate 3:00:00
      timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
      timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
      timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
      timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
      timeout tcp-proxy-reassembly 0:01:00
      timeout floating-conn 0:00:00
      dynamic-access-policy-record DfltAccessPolicy
      http server enable
      http 192.168.30.0 255.255.255.0 inside
      snmp-server enable traps snmp authentication linkup linkdown coldstart
      crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
      crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
      crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
      crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
      crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
      crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
      crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
      crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
      crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
      crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
      crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
      crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport
      crypto ipsec security-association lifetime seconds 28800
      crypto ipsec security-association lifetime kilobytes 4608000
      crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 TRANS_ESP_3DES_SHA
      crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP
      crypto map outside_map interface outside
      crypto isakmp enable outside
      crypto isakmp policy 10
       authentication pre-share
       encryption 3des
       hash sha
       group 2
       lifetime 86400
      telnet timeout 5
      ssh timeout 5
      console timeout 0
      dhcpd auto_config outside
      !
      threat-detection basic-threat
      threat-detection statistics access-list
      no threat-detection statistics tcp-intercept
      webvpn
      group-policy DefaultRAGroup internal
      group-policy DefaultRAGroup attributes
       dns-server value 192.168.30.3
       vpn-tunnel-protocol l2tp-ipsec
       split-tunnel-policy tunnelspecified
       split-tunnel-network-list value DefaultRAGroup_splitTunnelAcl_1
      username user password Cj7W5X7wERleAewO8ENYtg== nt-encrypted privilege 0
      tunnel-group DefaultRAGroup general-attributes
       address-pool LANVPNPOOL
       default-group-policy DefaultRAGroup
      tunnel-group DefaultRAGroup ipsec-attributes
       pre-shared-key *****
      tunnel-group DefaultRAGroup ppp-attributes
       no authentication chap
       authentication ms-chap-v2
      !
      class-map inspection_default
       match default-inspection-traffic
      !
      !
      policy-map type inspect dns preset_dns_map
       parameters
        message-length maximum client auto
        message-length maximum 512
      policy-map global_policy
       class inspection_default
        inspect dns preset_dns_map
        inspect ftp
        inspect h323 h225
        inspect h323 ras
        inspect rsh
        inspect rtsp
        inspect esmtp
        inspect sqlnet
        inspect skinny
        inspect sunrpc
        inspect xdmcp
        inspect sip
        inspect netbios
        inspect tftp
        inspect ip-options
      !
      service-policy global_policy global
      prompt hostname context
      : end

    Read the article

  • Mac OS X roaming profile from Samba with OpenLDAP backend on Ubuntu 11.10

    - by Sam Hammamy
    I have been battling for a week now to get my Mac (Mountain Lion) to authenticate on my home network's OpenLDAP and Samba. From several sources, like the Ubuntu community docs and other blogs, and after a hell of a lot of trial and error and piecing things together, I have created a samba.ldif that will pass smbldap-populate when combined with apple.ldif, and I have a fully functional OpenLDAP server and a Samba PDC that uses LDAP to authenticate the OS X machine.

    The problem is that when I log in, the home directory is not created or pulled from the server. I get the following in system.log:

      Sep 21 06:09:15 Sams-MacBook-Pro.local SecurityAgent[265]: User info context values set for sam
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got user: sam
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got ruser: (null)
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got service: authorization
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): no authauth availale for user.
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): failed: 7
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Failed to determine Kerberos principal name.
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Done cleanup3
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Kerberos 5 refuses you
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): pam_sm_authenticate: ntlm
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_acct_mgmt(): OpenDirectory - Membership cache TTL set to 1800.
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_record_check_pwpolicy(): retval: 0
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Establishing credentials
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Got user: sam
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Context initialised
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): pam_sm_setcred: ntlm user sam doesn't have auth authority

    All that's great and good, and I authenticate. Then I get:

      CFPreferences: user home directory for user kCFPreferencesCurrentUser at /Network/Servers/172.17.148.186/home/sam is unavailable. User domains will be volatile.
      Failed looking up user domain root; url='file://localhost/Network/Servers/172.17.148.186/home/sam/' path=/Network/Servers/172.17.148.186/home/sam/ err=-43 uid=9000 euid=9000

    If you're wondering where /Network/Servers/IP/home/sam comes from, it's from a couple of blogs that said the OpenLDAP attribute apple-user-homeDirectory should have that value, and the NFSHomeDirectory on the Mac should point to apple-user-homeDirectory. I also set the attribute apple-user-homeurl to <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir>, which I found on this forum. Any help is appreciated, because I'm banging my head against the wall at this point.

    By the way, I intend to create a blog on my VPS just for this, and create an install script in Python that people can download, so no one has to go through what I've had to go through this week :) After some sleep I am going to try to log in from a Windows machine and report back here. Thanks, Sam

    Read the article

  • Site Review: Yahoo.com - Forms Evaluation

    Yahoo uses Ajax to suggest search terms to users when they are entering a search phrase into the search text box. Once the user has entered a search term and then presses the search button, the browser will post the search form to the search results page. I think that Yahoo is making great use of Ajax in this situation because they are helping users find information as well as suggesting alternative search terms for them to try based on what has already been added.
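
    The suggest-as-you-type mechanism described here amounts to the page firing a background request for the prefix typed so far and rendering whatever completions come back. A language-agnostic sketch of the lookup step, in Python, with an invented phrase list purely for illustration:

      # Minimal prefix-suggestion lookup of the kind an Ajax suggest box would call.
      PHRASES = ["local search", "local search engine", "local news", "weather", "web mail"]

      def suggest(prefix: str, limit: int = 5) -> list[str]:
          """Return up to `limit` phrases starting with the typed prefix (case-insensitive)."""
          p = prefix.lower().strip()
          if not p:
              return []
          return [phrase for phrase in PHRASES if phrase.lower().startswith(p)][:limit]

      print(suggest("loc"))   # ['local search', 'local search engine', 'local news']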

    Read the article

  • Disk Search / Sort Algorithm

    - by AlgoMan
    Given a range of numbers, say 1 to 10,000, with input arriving in random order. Constraint: at any point only 1,000 numbers can be loaded into memory. Assumption: the numbers are unique.

    I propose the following efficient "when-required-sort" algorithm. We write the numbers into files which are designated to hold a particular range of numbers. For example, File1 will have 0 - 999, File2 will have 1000 - 1999 and so on, in random order. If a particular number, say 2535, is being searched for, then we know that the number is in File3 (binary search over the ranges to find the file). File3 is then loaded into memory and sorted using, say, quicksort (which is optimized to fall back to insertion sort when the array size is small), and then we search for the number in this sorted array using binary search. When the search is done, we write back the sorted file. So in the long run all the numbers will be sorted. Please comment on this proposal.
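
    A minimal sketch of the proposed scheme in Python, assuming unique integers and fixed-width buckets of 1,000 numbers; the file names and exact bucket boundaries are illustrative (with fixed-width ranges the right file can also be found by simple arithmetic rather than a binary search over the ranges):

      import os

      BUCKET = 1000                     # at most 1000 numbers fit in memory at once

      def bucket_file(n: int) -> str:
          """File1 covers 1-1000, File2 covers 1001-2000, and so on."""
          return f"File{(n - 1) // BUCKET + 1}.txt"

      def write_numbers(numbers):
          """Append each incoming number to the file covering its range (unsorted)."""
          for n in numbers:
              with open(bucket_file(n), "a") as f:
                  f.write(f"{n}\n")

      def search(target: int) -> bool:
          """Load only the relevant bucket, sort it once, binary-search it, write it back."""
          path = bucket_file(target)
          if not os.path.exists(path):
              return False
          with open(path) as f:
              nums = [int(line) for line in f]
          nums.sort()                   # stands in for the quicksort/insertion-sort mix
          lo, hi, found = 0, len(nums) - 1, False
          while lo <= hi:               # plain binary search over the sorted bucket
              mid = (lo + hi) // 2
              if nums[mid] == target:
                  found = True
                  break
              elif nums[mid] < target:
                  lo = mid + 1
              else:
                  hi = mid - 1
          with open(path, "w") as f:    # write the now-sorted bucket back, as proposed
              f.write("".join(f"{n}\n" for n in nums))
          return found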

    Read the article

  • [Android] How to search and Highlight Text within an EditText

    - by marc
    I've searched high and low for something that seems to be a simple task. Forgive me - I am coming to Android from other programming languages and am new to this platform and Java. What I want to do is create a dialog pop-up where a user enters text to search for, and the code takes that text, searches for it within all the text in an EditText control and, if it's found, highlights it.

    I've done this before, for example in VB, and it went something like this pseudocode:

      grab the text from the (EditText) and assign it to a string
      search the length of that string (character by character) for the substring;
        if it's found, return the position (index) of the substring within the string
      if found, start the (EditText).setSelection highlight beginning at the returned position, for the length of the substring

    Does this make sense? I just want to search an EditText and, when the text is found, scroll to it and have it highlighted. Maybe there's something in Android/Java equivalent to what I need here? Any help / pointers would be greatly appreciated.
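
    The pseudocode above is language-neutral: find the index of the search term within the control's text, then select from that index for the term's length (on Android the selection step itself is EditText.setSelection(start, end)). A tiny illustration of the find-and-select arithmetic, sketched in Python rather than Java:

      def find_selection(haystack: str, needle: str):
          """Return the (start, end) range to highlight, or None if not found (case-insensitive)."""
          if not needle:
              return None
          start = haystack.lower().find(needle.lower())
          if start == -1:
              return None
          return start, start + len(needle)

      text = "This is the EditText content we search within."
      print(find_selection(text, "edittext"))   # (12, 20)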

    Read the article

  • Inspecting Lucene.NET index with Luke want to replicate NHibernate.Search view

    - by Tim Peel
    Hi, I am trying to put together an index using terms, which I specify as a comma-separated list. I want to replicate the display in Luke as seen here: http://ayende.com/Blog/archive/2009/05/03/nhibernate-search-again.aspx

    But my index value just shows as a single field with the comma-separated list as its value. For example:

      Tags    term,anotherterm

    When I search my index, it will return results if I search with "term", but will not return anything if I search with "anotherterm". I thought the indexing process would break the comma-separated list apart into separate values, but this does not seem to be the case. Anyone got any ideas? Thanks

    Read the article

  • Comma separated search in MySQL query

    - by Ravi Kotwani
    I have a search mechanism on my site. For that I have written a large conditional query:

      $sql = "select * from users where keyword like '%".$_POST['search']."%' OR name like '%".$_POST['search']."%'";

    Now, suppose I have the following data on the site:

      ID  Name    Keyword
      1   Sanjay  sanjay, surani
      2   Ankit   ankit, shah
      3   Ravi    ravi, kotwani

    I need the result such that when a user writes "sanjay, shah" ($_POST['search'] = 'sanjay, shah'), records 1 and 2 should be displayed. Can I achieve this in a single MySQL query?
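
    One common way to get that behaviour is to split the comma-separated input into individual terms and build a single query with one LIKE clause per term, bound as parameters rather than concatenated (which also avoids the SQL-injection risk in the snippet above). A sketch in Python, assuming a DB-API driver such as mysqlclient or PyMySQL; the table and column names are the ones from the question:

      def build_search_query(raw: str):
          """Turn 'sanjay, shah' into one OR'ed LIKE query plus its parameter list."""
          terms = [t.strip() for t in raw.split(",") if t.strip()]
          if not terms:
              return None, []
          clauses = ["(keyword LIKE %s OR name LIKE %s)"] * len(terms)
          sql = "SELECT * FROM users WHERE " + " OR ".join(clauses)
          params = []
          for term in terms:
              like = f"%{term}%"
              params.extend([like, like])
          return sql, params

      sql, params = build_search_query("sanjay, shah")
      # cursor.execute(sql, params)   # with a MySQLdb / PyMySQL cursor
      print(sql)     # SELECT * FROM users WHERE (keyword LIKE %s OR name LIKE %s) OR (keyword LIKE %s OR name LIKE %s)
      print(params)  # ['%sanjay%', '%sanjay%', '%shah%', '%shah%']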

    Read the article

  • Lucene search and underscores

    - by Matt
    When I use Luke to search my Lucene index using a standard analyzer, I can see the field I am searchng for contains values of the form MY_VALUE. When I search for field:"MY_VALUE" however, the query is parsed as field:"my value" Is there a simple way to escape the underscore (_) character so that it will search for it?

    Read the article

  • How can we implement search functionality in AxAcroPDFLib.AxAcroPDF (PDF) using C#?

    - by V G S Naidu
    Hi, I am using the AxAcroPDFLib.AxAcroPDF library to display files in a WinForms control, using the line AxAcroPDFLib.AxAcroPDF.src = path;. It loads the file fine, and when I press Ctrl+F it shows the search box and finds the searched string correctly. But we need to implement the search functionality programmatically, using .NET code, to automatically search for the string in the PDF file. I didn't find any supported methods to find the string programmatically. Please provide a solution for implementing search functionality in PDF files. Thank you.

    Read the article

  • XenServer 5.5 local storage problem

    - by Jason Nerer
    Hi community, I have the following problem with a Citrix XenServer 5.5. I had to physically move the host, so I shut down all machines via the console:

      xe vm-shutdown force=true vm=my-machine-uuis-s

    After that I shut down the machine itself by issuing:

      halt

    After the reboot today, the local storage repository is unplugged. I was trying to repair it via XenCenter, but I don't trust this one. So I tried:

      [root@xenserver ~]# xe pbd-list
      uuid ( RO)               : ef6e2f3b-5825-393c-23e1-391d105c87ec
      host-uuid ( RO)          : c4bcf09c-2e52-448f-8210-df5d13bd33a9
      sr-uuid ( RO)            : 2fb3be9c-075c-53ed-acb6-42f0c4ad0614
      device-config (MRO)      : device: /dev/disk/by-id/scsi-SATA_WDC_WD5001ABYS-_WD-WCAS83698154,/dev/disk/by-id/scsi-SATA_WDC_WD5001ABYS-_WD-WCAS83694262
      currently-attached ( RO) : false

    To reattach the storage I issued:

      xe pbd-plug uuid=ef6e2f3b-5825-393c-23e1-391d105c87ec

    That one is running now for a while but not talking to me. The local repo has around 1TB. Should I wait, or are there any other options to reattach the local repo? What could have caused this problem? Any ideas? Thx. J

    Read the article

  • Google Ajax search API

    - by jAndy
    Hi folks, I'm wondering: is it possible to receive Google results over their own Ajax API in a way that gives, say, 100 results per page? Without a visible search field, I'd like to get the results in the background to build a progression for some search phrases. My basic question is: what are the restrictions of the Google search API? Kind regards --Andy

    Read the article

  • Multithreaded search with UISearchDisplayController

    - by Kulpreet
    I'm sort of new to any sort of multithreading and simply can't seem to get a simple search method working on a background thread properly. Everything seems to be in order with an NSAutoreleasePool and the UI being updated on the main thread. The app doesn't crash and does perform a search in the background, but the search results yield several of the same items several times, depending on how fast I type. The search works properly without the multithreading (which is commented out), but is very slow because of the large amounts of data I am working with. Here's the code:

      - (void)filterContentForSearchText:(NSString*)searchText
      {
          NSAutoreleasePool *apool = [[NSAutoreleasePool alloc] init];
          isSearching = YES;
          //[self.filteredListContent removeAllObjects]; // First clear the filtered array.

          for (Entry *entry in appDelegate.entries)
          {
              NSComparisonResult result = [entry.item compare:searchText
                                                      options:(NSCaseInsensitiveSearch|NSDiacriticInsensitiveSearch)
                                                        range:NSMakeRange(0, [searchText length])];
              if (result == NSOrderedSame)
              {
                  [self.filteredListContent addObject:entry];
              }
          }

          isSearching = NO;
          [self.searchDisplayController.searchResultsTableView performSelectorOnMainThread:@selector(reloadData)
                                                                                withObject:nil
                                                                             waitUntilDone:NO];
          //[self.searchDisplayController.searchResultsTableView reloadData];
          [apool drain];
      }

      - (BOOL)searchDisplayController:(UISearchDisplayController *)controller shouldReloadTableForSearchString:(NSString *)searchString
      {
          [NSObject cancelPreviousPerformRequestsWithTarget:self
                                                   selector:@selector(filteredListContent:)
                                                     object:searchString];
          [self.filteredListContent removeAllObjects]; // First clear the filtered array.
          [self performSelectorInBackground:@selector(filterContentForSearchText:)
                                 withObject:searchString];
          //[self filterContentForSearchText:searchString];

          // Return YES to cause the search result table view to be reloaded.
          return NO;
      }

    Read the article

  • Algorithm for performing decentralized search in social networks

    - by Jack
    I want to find out all the existing decentralized algorithms that exploit the structural properties of social networks. So far I know the following algorithms:

      1) Best connected search - Adamic et al.
      2) Random walk (does not exploit any structural property, but is still decentralized)
      3) Hamming distance search
      4) Weak/strong tie search

    Any help would be appreciated.
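
    Of the algorithms listed, the random walk is the simplest to make concrete, since each step uses nothing beyond the current node's own neighbour list. A small sketch in Python over an adjacency-list graph; the graph and the start/target nodes are made up for illustration:

      import random

      def random_walk_search(graph, start, target, max_steps=1000):
          """Decentralized search: at each step hop to a random neighbour of the
          current node, using only local knowledge of the network."""
          current, path = start, [start]
          for _ in range(max_steps):
              if current == target:
                  return path
              neighbours = graph.get(current, [])
              if not neighbours:
                  return None                 # dead end
              current = random.choice(neighbours)
              path.append(current)
          return None                         # give up after max_steps

      graph = {
          "a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"],
          "d": ["b", "c", "e"], "e": ["d"],
      }
      print(random_walk_search(graph, "a", "e"))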

    Read the article

  • How can I index content within a Content Editor web part?

    - by Hirvox
    I'm using MOSS 2007 v12.0.0.6529, and the Shared Services crawler is ignoring content inside Content Editor Web Parts. The page itself is a Publishing page, and content within the Page Content field is indexed properly and shows up in search results. How can I ensure that content within Content Editor web parts is also indexed? Or do I have to use other methods, like additional content fields on the page?

    Read the article

  • Outlook Archive Mail Searching

    - by Jeff Beck
    I would like to take my old Outlook archives and push them to a central location that is full-text searchable. I would think something like this is out there somewhere, but I can't seem to find it. If it's something that could be put on a web server, all the better, so I always have access.

    Notes on what I don't want:
    - a plugin to Outlook
    - Desktop Search
    - more steps than uploading a PST file

    Read the article

  • express+jade: provided local variable is undefined in view (node.js + express + jade)

    - by Jake
    Hello. I'm implementing a webapp using node.js and express, using the jade template engine. Templates render fine, and can access helpers and dynamic helpers, but not local variables other than the "body" local variable, which is provided by express and is available and defined in my layout.jade. This is some of the code:

      app.set('view engine', 'jade');

      app.get("/test", function (req, res) {
        res.render('test', {
          locals: {
            name: "jake"
          }
        });
      });

    and this is test.jade:

      p hello
      =name

    When I remove the second line (referencing name), the template renders correctly, showing the word "hello" in the web page. When I include the =name, it throws a ReferenceError:

      500 ReferenceError: Jade:2
      NaN. 'p hello'
      NaN. '=name'
      name is not defined
      NaN. 'p hello'
      NaN. '=name'

    I believe I'm following the jade and express examples exactly with respect to local variables. Am I doing something wrong, or could this be a bug in express or jade?

    Read the article

  • Search for short words with SOLR

    - by Carsten Gehling
    I am using SOLR along with NGramTokenizerFactory to help create search tokens for substrings of words. The NGramTokenizer is configured with a minimum word length of 3. This means that I can search for e.g. "unb" and then match the word "unbelievable". However, I have a problem with short words like "I" and "in". These are not indexed by SOLR (I suspect it is because of the NGramTokenizer) and therefore I cannot search for them. I don't want to reduce the minimum word length to 1 or 2, since this creates a huge search index. But I would like SOLR to include whole words whose length is already below this minimum. How can I do that? /Carsten
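
    The behaviour described falls straight out of the n-gram minimum: a word shorter than the minimum gram size never produces a token, so it can never be matched. A rough Python illustration of that effect, together with the general workaround idea of also keeping the whole word as a token; this is a sketch of the concept, not Solr's actual tokenizer code (in a Solr schema the equivalent is usually a copyField into a second field analysed without n-grams):

      def ngrams(word: str, min_size: int = 3, max_size: int = 3):
          """All substrings of length min_size..max_size, as an n-gram tokenizer would emit."""
          out = []
          for size in range(min_size, max_size + 1):
              for i in range(len(word) - size + 1):
                  out.append(word[i:i + size])
          return out

      print(ngrams("unbelievable"))   # ['unb', 'nbe', 'bel', ...]
      print(ngrams("in"))             # [] -- shorter than min_size, so nothing gets indexed

      def tokens_with_whole_word(word: str, min_size: int = 3):
          """Workaround idea: always emit the original word as well, even when it is
          too short to produce any n-grams."""
          return list(dict.fromkeys([word] + ngrams(word, min_size)))

      print(tokens_with_whole_word("in"))   # ['in']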

    Read the article

  • Binary Search Help

    - by aloh
    Hi, for a project I need to implement a binary search. This binary search allows duplicates, and I have to get all the index values that match my target.

    I've thought about doing it this way if a duplicate happens to be at the mid position. Target = G. Say there is the following sorted array:

      B, D, E, F, G, G, G, G, G, G, Q, R, S, S, Z

    I get the mid, which is 7. Since there are target matches on both sides, and I need all the target matches, I thought a good way to get all of them would be to check mid + 1 to see if it is the same value. If it is, keep moving mid to the right until it isn't. So it would turn out like this:

      B, D, E, F, G, G, G, G, G, G (MID), Q, R, S, S, Z

    Then I would count from 0 to mid to count up the target matches, store their indexes in an array and return it. That is how I was thinking of doing it if the mid is a match the first time and the duplicates sit at the mid and on both sides of it.

    Now, what if it isn't a match the first time? For example:

      B, D, E, F, G, G, J, K, L, O, Q, R, S, S, Z

    Then, as normal, it would grab the mid, then call binary search from first to mid-1:

      B, D, E, F, G, G, J

    Since G is greater than F, call binary search from mid+1 to last:

      G, G, J

    The mid is a match. Since it is a match, search from mid+1 to last with a for loop, count up the number of matches, store the match indexes in an array and return it.

    Is this a good way for the binary search to grab all duplicates? Please let me know if you see problems in my algorithm, and give hints/suggestions if you have any. The only problem I see is that if all the elements matched my target, I would basically be searching the whole array - but then again, if that were the case I would still need to get all the duplicates. Thank you.

    BTW, my instructor said we cannot use Vectors, Hash or anything else. He wants us to stay at the array level and get used to using and manipulating arrays.
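
    A compact sketch of the idea described above, in Python as a language-neutral illustration (plain arrays only): a standard binary search finds one occurrence, then a scan outward from that position collects the indexes of every duplicate - a slight simplification of the count-from-zero variant described, but the same approach:

      def find_all_matches(arr, target):
          """Binary-search for one occurrence, then expand left and right to
          collect the indexes of all duplicates of the target."""
          lo, hi, hit = 0, len(arr) - 1, -1
          while lo <= hi:
              mid = (lo + hi) // 2
              if arr[mid] == target:
                  hit = mid
                  break
              elif arr[mid] < target:
                  lo = mid + 1
              else:
                  hi = mid - 1
          if hit == -1:
              return []                      # target not present
          left, right = hit, hit
          while left > 0 and arr[left - 1] == target:
              left -= 1
          while right < len(arr) - 1 and arr[right + 1] == target:
              right += 1
          return list(range(left, right + 1))

      data = ["B", "D", "E", "F", "G", "G", "G", "G", "G", "G", "Q", "R", "S", "S", "Z"]
      print(find_all_matches(data, "G"))     # [4, 5, 6, 7, 8, 9]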

    Read the article

  • php - correct search expression for bbpress

    - by billn
    I have a very purdy styled search box in my WordPress install, and, looking for continuity, I want the same for my bbPress forum - currently I'm failing. For WP I have:

      <div id="search">
        <form action="<?php bloginfo('home') ?>" method="get">
          <div class="hiddenFields"></div>
          <p>
            <input name="s" id="s" maxlength="100" size="18" />
            <input class="submit" id="submit-button" type="submit" value="submit" />
          </p>
        </form>
      </div>

    For bbPress the form method="get" gets kicked out, and of course I need another expression. bbPress generally seems to use:

      <div class="search"><?php search_form(); ?></div>

    This wrecks everything - I get no styling attributes at all other than position... I can make it look fine (and consistent) by removing the search_form() call - but then it's obviously just a dummy with no search - any ideas to fix the latter? Thanks

      <div id="search">
        <div class="hiddenFields"></div>
        <p>
          <input name="s" id="s" maxlength="100" size="18" />
          <input class="submit" id="submit-button" type="submit" value="submit" />
        </p>
      </div>

    Read the article

  • SharePoint Search Service is not working

    - by George2
    Hello everyone, I am using SharePoint Server 2007 with the Collaboration Portal template on Windows Server 2008. When I use the following function from Central Administration (Application Management -> Search -> Manage Search Service), I get the following error message - any ideas what is wrong?

      The search service is currently offline. Visit the Services on Server page in SharePoint Central Administration to verify whether the service is enabled. This might also be because an indexer move is in progress.

    Thanks in advance, George

    Read the article
