Search Results

Search found 14407 results on 577 pages for 'business rules'.

Page 287/577 | < Previous Page | 283 284 285 286 287 288 289 290 291 292 293 294  | Next Page >

  • Low CPU/Memory/Memory-bandwidth Pathfinding (maybe like in Warcraft 1)

    - by Valmond
    Dijkstra and A* are nice and popular, but what kind of algorithm was used in Warcraft 1 for pathfinding? I remember that the enemy could get trapped in bowl-like caverns, which means there were (most probably) no full-path calculations from "start to end". If I recall correctly, the algorithm could be something like this: A) Move towards the enemy until you succeed or hit a wall. B) If blocked by a wall, follow the wall until you can move towards the enemy without being blocked, then do A) again. But I'd like to know, if someone knows :-) [edit] As explained to Byte56, I'm searching for a low CPU/mem/mem-bandwidth algorithm and wanted to know if Warcraft had some special secrets to deliver (I've never seen that kind of pathfinding elsewhere). I hope that is more in line with the Stack Exchange rules.
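
    For illustration, here is a minimal Python sketch of the two-step heuristic described above (move greedily towards the target; when blocked, slide along the obstacle) - not Warcraft's actual code. The grid, start and goal values are made-up assumptions, and the "follow the wall" step is simplified to trying the two axis-aligned moves.

    def sign(x):
        return (x > 0) - (x < 0)

    def greedy_wall_follow(grid, start, goal, max_steps=200):
        """grid[y][x] == 1 means blocked; returns the list of cells visited."""
        x, y = start
        path = [(x, y)]
        for _ in range(max_steps):
            if (x, y) == goal:
                break
            # Step A: try to move straight towards the goal.
            dx, dy = sign(goal[0] - x), sign(goal[1] - y)
            if grid[y + dy][x + dx] == 0:
                x, y = x + dx, y + dy
            else:
                # Step B: blocked - try the two axis-aligned moves instead,
                # a crude stand-in for "follow the wall until the way is clear".
                for cx, cy in ((x + dx, y), (x, y + dy)):
                    if (cx, cy) != (x, y) and grid[cy][cx] == 0:
                        x, y = cx, cy
                        break
                else:
                    break  # stuck, e.g. trapped in a bowl-shaped cavern
            path.append((x, y))
        return path

    # Tiny example map: the unit has to slide along the inner wall to reach the goal.
    grid = [
        [1, 1, 1, 1, 1, 1, 1],
        [1, 0, 0, 0, 0, 0, 1],
        [1, 0, 1, 1, 1, 0, 1],
        [1, 0, 0, 0, 1, 0, 1],
        [1, 1, 1, 1, 1, 1, 1],
    ]
    print(greedy_wall_follow(grid, start=(1, 1), goal=(5, 3)))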

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find one. I have a virtual subdomain www.static.example.com which is a mirror of www.example.com, meaning I have just one root folder for the subdomain and the domain as well. I want to redirect crawlers to a different robots.txt file - robots_static.txt - when they see .static in the URL, in which I will forbid indexing via a Disallow directive. I want to do this because I have duplicated content in Google search results: the subdomain shows exactly the same content as the main domain. Does anyone know how I could make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

    RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
    RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
    RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks

    EDIT: This is my .htaccess file:

    ##
    # @package Joomla
    # @copyright Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
    # @license GNU General Public License version 2 or later; see LICENSE.txt
    ##

    ##
    # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
    #
    # The line just below this section: 'Options +FollowSymLinks' may cause problems
    # with some server configurations. It is required for use of mod_rewrite, but may already
    # be set by your server administrator in a way that disallows changing it in
    # your .htaccess file. If using it causes your server to error out, comment it out (add # to
    # beginning of line), reload your site in your browser and test your sef url's. If they work,
    # it has been set by your server administrator and you do not need it set here.
    ##

    ## Can be commented out if causes errors, see notes above.
    Options +FollowSymLinks

    ## Mod_rewrite in use.
    RewriteEngine On
    RewriteEngine On

    RewriteCond %{HTTP_HOST} !^www\.
    RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

    RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
    RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
    RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    ## Begin - Rewrite rules to block out some common exploits.
    # If you experience problems on your site block out the operations listed below
    # This attempts to block the most common type of exploit `attempts` to Joomla!
    #
    # Block out any script trying to base64_encode data within the URL.
    RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
    # Block out any script that includes a <script> tag in URL.
    RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
    # Block out any script trying to set a PHP GLOBALS variable via URL.
    RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
    # Block out any script trying to modify a _REQUEST variable via URL.
    RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
    # Return 403 Forbidden header and show the content of the root homepage
    RewriteRule .* index.php [F]
    #
    ## End - Rewrite rules to block out some common exploits.

    ## Begin - Custom redirects
    #
    # If you need to redirect some pages, or set a canonical non-www to
    # www redirect (or vice versa), place that code here. Ensure those
    # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
    #
    ## End - Custom redirects

    ##
    # Uncomment following line if your webserver's URL
    # is not directly related to physical file paths.
    # Update Your Joomla! Directory (just / for root).
    ##
    # RewriteBase /

    RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
    RewriteCond %{THE_REQUEST} !/system/.*
    RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
    RewriteCond %{THE_REQUEST} ^GET

    ## Begin - Joomla! core SEF Section.
    #
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
    #
    # If the requested path and file is not /index.php and the request
    # has not already been internally rewritten to the index.php script
    RewriteCond %{REQUEST_URI} !^/index\.php
    # and the request is for something within the component folder,
    # or for the site root, or for an extensionless URL, or the
    # requested URL ends with one of the listed extensions
    RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
    # and the requested path and file doesn't directly match a physical file
    RewriteCond %{REQUEST_FILENAME} !-f
    # and the requested path and file doesn't directly match a physical folder
    RewriteCond %{REQUEST_FILENAME} !-d
    # internally rewrite the request to the index.php script
    RewriteRule .* index.php [L]
    #
    ## End - Joomla! core SEF Section.

    <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
    Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
    Header set Cache-Control "public"
    </FilesMatch>

    <ifModule mod_headers.c>
    Header set Connection keep-alive
    </ifModule>

    ########## Begin - Remove Etags
    # FileETag none
    #
    ########## End - Remove Etags

    Read the article

  • Looking for a 24 Hour project for multiple languages [closed]

    - by Daan Timmer
    Right, two friends and I came up with this idea of having a 24h programming competition, where we are going to meet in one place and program away for 24 hours straight. Though we need a 'project': something that can be made within 24h. It doesn't have to be a real thing, just a nice learning 'thing'. The rule we set up for ourselves is that the project can be programmed in any language of our own choice. What I know is that one guy is a PHP enthusiast and we've got a C#/.NET person, and I am quite flexible with languages and speak quite a few (PHP/C#.NET/C++ STL/Python/JavaScript/Java). Anything really language-specific is out of the question. Does anyone happen to have a great idea for this?

    Read the article

  • Is the structure used for these web pages a design pattern?

    - by aspdotnetuser
    I want to know if the structure of an ASP.NET website I'm working on uses a design pattern for its web pages. If it is a design pattern, what is it called? The web pages have the following structure: the UserDetails page (UserDetails.aspx) includes the UserDetailsController.ascx user control; UserDetailsController.ascx includes sub user controls like UserAccountDetails.ascx and UserLoginDetails.ascx, etc. Each sub user control contains a small amount of code/logic, and the 'controller' user controls that host these sub user controls (i.e. UserDetailsController.ascx) appear to call the business rules code and pass the data to the sub user controls. Is this a design pattern? What is it called?

    Read the article

  • Launchpad fails to build a package for my PPA

    - by AZorin
    I'm trying to build a package on Launchpad's Debian build system for PPAs but I'm having some issues with a certain package. The package I'm trying to build (zorin-xwinwrap) contains a source C file which I'm trying to get to compile and build on Launchpad's server so that it would install and work on 32 bit (i386) and 64 bit (amd64) systems. Unfortunately I keep on getting an Error code 2 with the debian/rules file and I have no clue how to fix this issue. The following link is the source package of the software I'm trying to add to my PPA: http://goo.gl/GjZvd The following link is the buildlog for the failed package on Launchpad: http://goo.gl/6A2rQ I would greatly appreciate any suggestions if anyone may have any. Thank you for your time.

    Read the article

  • Virtual hosting

    - by H3llGhost
    Hello, I want to use domains like xxx.abc.domain.tld, where xxx is the folder to access. I tried it with rewrite rules, but I can't get it working because I don't know how to get the xxx part of the SERVER_NAME into my RewriteRule. This was my attempt:

    UseCanonicalName Off

    # include the IP address in the logs so they may be split
    LogFormat "%A %h %l %u %t \"%r\" %s %b" vcommon
    CustomLog /var/log/apache2/vaccess.log vcommon

    RewriteEngine On
    # a ServerName derived from a Host: header may be any case at all
    RewriteMap lowercase int:tolower

    ## deal with normal documents first:
    # do the magic
    RewriteCond ${lowercase:%{SERVER_NAME}} ^.+\.abc\.domain\.tld$
    RewriteRule ^(.*)$ /var/www/abc.domain.tld/[xxx-part]/$1 [L]

    Perhaps there is a better solution. In general, I want to create a dynamic login system with mod_auth_mysql, and each xxx has a separate user database. I would prefer the domain/address syntax abc.domain.tld/xxx, but I don't know how to realize it. Thanks for any advice.

    Read the article

  • Content Optimization only?

    - by danie7L T
    There are tons of discussions around tips & tricks to improve search engine "ranking" and SEO. What if the focus of the webmaster/client is 100% set on the quality of the content, with precise keywords in meta tags, clean design, regular article updates, clean URLs and highly filtered external links leading to pages on websites dealing with the same or related subjects; isn't it the job of a good search engine like Google to catch this website and show it on its front page? Or do search engines count on us to help them find us, so webmasters will always have to stay up to date on SEO tools and rule updates, on top of website design, browser customization, progressive enhancement, etc.?

    Read the article

  • Running an Application on a Different Domain

    - by Mark Flory
    Where I am contracting right now has a new development domain.  Because of IT security rules it is fairly isolated from the domain my computer normally logs into (for e-mail and such).  I do use a VM to log directly into the domain, but one of my co-workers found this command to run things on your own box but in the other domain.  Pretty cool. For example, this runs SQL Server Management Studio for SQL Server 2008:

    runas /netonly /user:{domain}\{username} "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\ssms.exe"

    And this runs Visual Studio:

    runas /netonly /user:{domain}\{username} "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe"

    It does not solve the problem I wanted to solve, which would be the ability to assign Users/Groups in Team Explorer.  That instead still uses the groups of the domain I am logged into.

    Read the article

  • Logic behind crawling webpages like Screaming Frog does? [on hold]

    - by sree
    I would like to know what parameters should be considered while developing a crawler like Screaming Frog. I'm looking for information on the do's and don'ts of webpage crawling. What problems might the crawler cause for the webpages, like load time (maybe?), or anything else that affects a webpage during crawling? What rules does the crawler need to follow, etc.? Basically, any info that makes the crawler look good and accurate. Just point me in the right direction to achieve it.. Hope my requirement is clear this time.. :)
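
    For illustration, here is a minimal Python sketch of the kind of "politeness" rules a crawler like this is usually expected to follow: honour robots.txt, identify itself with a User-Agent, and rate-limit requests so it doesn't add noticeable load to the site. The bot name, start URL and delay below are illustrative assumptions, and link extraction is left out for brevity.

    import time
    import urllib.robotparser
    from urllib.parse import urljoin, urlparse
    from urllib.request import Request, urlopen

    USER_AGENT = "MyAuditBot/0.1 (+http://example.com/bot)"  # hypothetical bot name
    CRAWL_DELAY = 2.0  # seconds to wait between requests

    def crawl(start_url, max_pages=10):
        root = "{0.scheme}://{0.netloc}".format(urlparse(start_url))
        robots = urllib.robotparser.RobotFileParser(urljoin(root, "/robots.txt"))
        robots.read()  # fetch and parse the site's robots.txt

        seen, queue = set(), [start_url]
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen or not robots.can_fetch(USER_AGENT, url):
                continue  # skip already-visited or disallowed URLs
            seen.add(url)
            req = Request(url, headers={"User-Agent": USER_AGENT})
            with urlopen(req) as resp:
                print(resp.status, url, resp.headers.get("Content-Type"))
                # (link extraction into `queue` is omitted for brevity)
            time.sleep(CRAWL_DELAY)  # throttle so the crawl doesn't hurt load time
        return seen

    crawl("http://example.com/")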

    Read the article

  • Silicon Valley Code Camp 2012 - Submit Your Talks

    - by arungupta
    Silicon Valley Code Camp follows three rules: given by/for the community, always free, and never occurring during work hours. I've spoken there in 2011, 2010, 2009, 2008, and 2007 and have again submitted a talk this year as well, and will submit more! It's one of the best organically grown code camps, with the attendance constantly growing over the past 6 years. Here is a chart that shows the number of conference attendees that registered and attended, and the sessions delivered, over the past 6 years. If you wonder why there is such a big gap between "registered" and "attended", that's because this event is FREE! Yes, 100% free. If you are in and around Silicon Valley then you have no reason not to participate/speak at SVCC. You have the opportunity to meet all the local JUG leaders and the community "rockstars" :-) Date: Oct 6/7, 2012. Venue: Foothill College, 12345 El Monte Road, Los Altos Hills, CA. Submit today or register!

    Read the article

  • Is age a factor when looking for internships? [closed]

    - by user786362
    Possible Duplicate: Is it ever too old to learn how to become a programmer? I'm 30 years old, going back to school for a second degree in Computer Science. I will be transferring to my local state university this fall and would like to know if my age will be a factor when applying for internships. I have already read a few threads about age and careers: Is it too late to start your career as a programmer at the age of 30? Does it matter that you started developing at 26? While it is reassuring to know that people are getting entry-level programming jobs at 30+, what about internships? Should I even bother with bigger companies like Google, Microsoft, or Apple? I know we have laws against age discrimination, but let's not pretend we live in a perfect world where everyone follows the rules.

    Read the article

  • How can I use smbclient to connect to Windows shares by hostname when a firewall is enabled?

    - by skyblue
    I can't connect to file shares on Windows computers using smbclient -L //hostname when the firewall is enabled. This occurs whether I'm using ufw (which allows outgoing traffic and replies back in with the default configuration) or iptables (where I'm allowing outgoing traffic and replies back in with iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT). However, smbclient -L //ip-address works whether the firewall is enabled or not. I also tested this against a Samba server running on Ubuntu and again smbclient -L //hostname does not work when the firewall is enabled, but smbclient -L //ip-address works whether the firewall is enabled or not. For reference, here are the iptables rules I used during testing:

    *filter
    :INPUT DROP [0:0]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -p icmp -j ACCEPT
    -A INPUT -d 224.0.0.251/32 -p udp -m udp --dport 5353 -j ACCEPT
    COMMIT

    Read the article

  • TechEd 2014 Day 2

    - by John Paul Cook
    Today people asked me about backing up older versions of SQL Server to Azure. Older versions back to SQL Server 2005 can be easily backed up to Azure Storage by installing Microsoft SQL Server Backup to Windows Azure Tool. It installs a service of the same name that applies rules to SQL Server backups. You can tell the tool to backup or encrypt your SQL Server backups. You can have it automatically upload your backups to Azure Storage. Even if you don’t want to upload your backups to Azure, you might...(read more)

    Read the article

  • Web pages with mixed ownership photos

    - by dstonek
    I have a photo website. 15% of the photos belong to approved registered users, who agree to my terms about uploading their images to my web pages. I include a photographer credit in the bottom-right corner. As for identifying the site with Google, every page contains a Google+ button pointing to MY Google+ page; it also contains <link href="https://plus.google.com/nnnnnnnnnn/" rel="publisher" /> I need some advice on how to respect Google's rules about pages containing other photographers' images, so the site is not penalized for content that could be seen as duplicated or stolen. My concern is also whether adding G+ links (to MY photo page) and the Google publisher id would harm my site's rank because of the pages containing third-party photos.

    Read the article

  • Kindle for PC via Wine

    - by Mollikins
    I've been checking out all the free books on Amazon and I really want to get the Kindle PC program running on my computer so I can download, read, and manage my e-books. I realize that Calibre can be used to manage e-books and I do have that downloaded. However, Amazon won't let me download ebooks unless my PC is registered in their little Kindle program and I'd like to just keep things streamlined by playing by the rules for once. I've viewed so many tutorials and advice for setting it up. I have Wine, I have Kindle for PC, I set it to Win98, all of that. However, when I try to open KindlePC from the Wine menu or from the desktop icon, nothing happens. No error messages, nothing. I don't know what might be wrong. Please let me know if you have any experience with this and any suggestions!

    Read the article

  • In the world of .Net, managed code and the web is there still a place for VBA?

    - by MrTelly
    Microsoft has moved away from the COM stack, VB6 is so last century, and .NET rules the (MS) roost. Yet I find myself still banging out reams of VBA code - for a new project automating Excel, seeing as you ask. I've tried doing the same kind of thing using VSTO and it was just too damn buggy/hard/inefficient, with a broken development model. I can't get rid of the feeling that I'm missing something; OTOH I really can't see a better way of solving this problem. What are your thoughts?

    Read the article

  • Is there a way to learn why Google penalized a site?

    - by pawelbrodzinski
    Is there any way to learn for sure why Google penalized a specific site? I'm thinking of the situation where the webmaster/site administrator is aware of Google's rules and is sure they aren't breaking any, but the site is penalized nevertheless. The only information you get from Google is that they processed your reconsideration request, but they say neither what the result is nor what the reason for the penalty is if they keep the site penalized. You can try to get information on the Google Webmasters forum or here, but most of the time these are only speculations. Assuming the site administrator tried to find out what's wrong but failed, is there a source that can tell what the problem is?

    Read the article

  • FxCop / Code Analysis with VS2010 Ultimate

    - by Cuartico
    I've been getting some information about this, but I still can't find a proper answer. I was asked recently in my company: "run an FxCop analysis on that code and tell me the results". OK, I have VS2010 Ultimate which has Code Analysis, but before making any comment, I browsed around on the internet because I want to implement the best choice... So, let's say I'm going to use the same rules on both analyzers: Should I recommend using one above the other? Should I say "hey, that's kinda old, let's use Code Analysis!"? Should I get the same results on different computers? (From what I understand, FxCop gives you some "points", and from what I've read, sometimes it gives you different points on different computers; I don't know about this with Code Analysis.) Thanks, any help would be appreciated

    Read the article

  • Security for LDAP authentication for Collabnet

    - by Robert May
    In a previous post, I wrote about how to get LDAP authentication working in Collabnet. By default, all LDAP users are put into the Users role on the server.  For most purposes, this is just fine, and I don’t have a way to change this.  The documentation gives hints that you can add them to other roles, but for now, I don’t have the need. However, adding permissions to different repositories is a different question. To add them, go to the repositories list, select Access Rules and then you can enter in their username, as it sits in Active Directory to the lists for the repositories or for the predefined groups that you have created.  To my knowledge, you cannot use the Active Directory groups in collabnet, which is a big problem.  Needing to micromanage users really limits the usefulness of the LDAP integration. Technorati Tags: subversion,collabnet

    Read the article

  • Rsyslog problem after Ubuntu upgrade 10.04 to 12.04

    - by Oxymoron
    I was using Ubuntu 10.04 until last week for storing the log information of an external device with rsyslog. After upgrading to Ubuntu 12.04, the logging over TCP doesn't work anymore. (There are just no packets visible - not even with tcpdump - the old Ubuntu machine still sees the packets.) UDP works with the identical configuration on the Ubuntu machine and "use UDP" on the external device. Are there any changes in rsyslog that could explain this? My rsyslog.conf file looks like this (with more comments):

    $ModLoad imuxsock # provides support for local system logging
    $ModLoad imklog   # provides kernel logging support (previously done by rklogd)
    #$ModLoad immark  # provides --MARK-- message capability
    $KLogPath /proc/kmsg

    # provides UDP syslog reception
    $ModLoad imudp
    $UDPServerRun 514

    # provides TCP syslog reception
    $ModLoad imtcp
    $InputTCPServerRun 514

    ###########################
    #### GLOBAL DIRECTIVES ####
    ###########################

    $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

    # Set the default permissions for all log files.
    #
    $FileOwner syslog
    $FileGroup adm
    $FileCreateMode 0640
    $DirCreateMode 0755
    $Umask 0022
    $PrivDropToUser syslog
    $PrivDropToGroup syslog

    if $fromhost-ip startswith '192.168.0.10' then /var/log/caliDevice.log
    & ~

    # local/regular rules, like:
    *.* /var/log/syslog.log

    Read the article

  • Start script when connecting phone through usb

    - by choel
    Trying to run a script when my phone is plugged in via USB, I made a udev rule that looks like this in /etc/udev/rules.d/85-lazydroid.rule:

    ATTRS{idVendor}=="22b8", ATTRS{idProduct}=="428c", RUN+="/home/joel/.lazydroid"

    And the script .lazydroid looks like this:

    #!/bin/bash
    exec adb forward tcp:8080 tcp:8080 &
    exec chromium-browser 127.0.0.1:8080 --new-window &

    The script itself runs fine. The trick is I can't get the script to run upon insertion of the phone. And it's the right ID according to:

    lsusb | grep Motorola
    Bus 002 Device 042: ID 22b8:428c Motorola PCS

    Any ideas?

    Read the article

  • What is the SEO-recommended method for using underscores and dashes in URLs that contain geographic locations?

    - by ElHaix
    In reading through this article: "In Subfolder & File Names, Use Dashes, Not Underscores"

    Good: http://www.domain.com/sub-folder/file-name.htm
    Bad: http://www.domain.com/sub_folder/file_name.htm

    In my URLs, I may have one or two city names, ending with the province/state: Burnaby_New_Westminister-BC/[some search term]. My URL rules are currently defined such that everything after the dash is the prov/state. Some geographic locations already contain dashes: Notre-Dame-de-Grâce (in QC), which I would convert to ~/Notre_Dame_de_Grace-QC/. I thought of placing the prov/state after another "/", however in some cases the province/state name may not exist, thus ~/Notre_Dame_de_Grace/, so the first term after the domain name contains the geo location {city, city_name-state}. I am now revisiting this, and wondering if this rule set should change, and if so, what is the recommended way of implementing it?

    -- UPDATE --

    After reviewing this video, I see that I should be using dashes rather than underscores. However, since I still want to have my geo locations in the first URL section, is there anything wrong with using a double-dash separator, i.e. /city-name--state/ ?
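
    To make the proposed scheme concrete, here is a small Python sketch of the double-dash idea described above: dashes inside the city name, a double dash ("--") separating the city from the province/state, and the province simply omitted when it isn't known. The function name and the accent-stripping details are illustrative assumptions, not a recommendation.

    import re
    import unicodedata

    def geo_slug(city, province=None):
        """Build /city-name--state/ style URL segments (illustrative only)."""
        def slugify(text):
            # Drop accents (e.g. "Grâce" -> "Grace"), lowercase, and collapse
            # runs of non-alphanumeric characters into single dashes.
            ascii_text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
            return re.sub(r"[^a-z0-9]+", "-", ascii_text.lower()).strip("-")

        slug = slugify(city)
        if province:
            slug += "--" + slugify(province)  # the double dash marks the province/state part
        return "/" + slug + "/"

    print(geo_slug("Notre-Dame-de-Grâce", "QC"))      # -> /notre-dame-de-grace--qc/
    print(geo_slug("Burnaby New Westminster", "BC"))  # -> /burnaby-new-westminster--bc/
    print(geo_slug("Notre-Dame-de-Grâce"))            # -> /notre-dame-de-grace/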

    Read the article

  • How to hide keyboard layout shortcut from Unity top panel?

    - by user67715
    I'm using Ubuntu 12.04 together with X Neural Switcher, which is a program for automatic switching of the keyboard layout. The switcher comes with a GUI called gXNeur. The GUI places an applet icon in the Unity top panel (gXNeur had to be whitelisted for the icon to become visible) that helps a lot with configuring and making urgent changes to the rules that the program uses. But after whitelisting the icon I have two keyboard layout indicators in the panel (one native, the other from gXNeur). The native one is what I'd like to hide, since gXNeur is more intuitive. Is there a way to do that? Thanks a lot for your help!

    Read the article

  • Enable 'mod_rewrite' Using .htaccess File On cPanel Shared Hosting Server

    - by zulhfreelancer
    I'm using cPanel to host my website. I need to enable 'mod_rewrite' on this shared hosting cPanel account to run my script. I've Googled for solutions high and low but haven't had any luck yet. The tutorials I found only work well on a VPS, and some of them said that only the hosting provider can change and enable it. But some said it can be done easily by editing the .htaccess file. My question: if I want to edit the .htaccess file, what should I include in it? What 'rules' and 'conditions' should be included?

    Read the article
