Search Results

Search found 1125 results on 45 pages for 'bhoomi nature'.

Page 18 of 45

  • Limiting calls to WCF services from BizTalk

    - by IntegrationOverload
    ** WORK IN PROGRESS ** This is just a placeholder for the full article that is in progress.

    The problem: My BTS solution was receiving thousands of messages at once. After processing by BTS I needed to send them on via one of several WCF services depending on the message content. The problem is that, due to the asynchronous nature of BizTalk, the WCF services were getting hammered and could not cope with the load. Note: it is possible to limit the SOAP calls in the BtsNtSvc.exe.Config file, but that does not have the desired results for Net-TCP WCF services.

    The solution: I created a new MessageType for the messages in question and posted them to the BTS message box. This schema included the URL they were being sent to as a promoted property. I then subscribed to the message type from a new orchestration (one that does just the WCF send), using the URL as a correlation ID. This created a singleton orchestration that was instantiated when the first message hit the message box. It then waits for further messages with the same correlation ID and type and processes them one at a time using a loop shape with a timer (a pretty standard pattern for processing related messages). Image to go here. This limits the number of calls to each individual WCF service to 1, which is a good start, but the service can handle more than that and I didn't want to create a bottleneck. So I then constructed the correlation ID from the URL concatenated with a random number between 1 and 10. This makes 10 possible correlation IDs per URL, and so 10 instances of the singleton orchestration per WCF service. Just what I needed, and the upper bound of the random number is a configuration value in SSO, so I can change the maximum connections without touching the code.
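
    For illustration, here is a minimal Python sketch of the same bucketing idea - a correlation key built from the endpoint URL plus a random bucket number, with one single-threaded consumer per key. The names, the bucket count, and the send function are assumptions; the actual solution lives in BizTalk orchestrations and SSO configuration, not in code like this.

        import random
        import threading
        from queue import Queue

        MAX_BUCKETS = 10   # stand-in for the SSO value: max parallel senders per WCF endpoint

        def send_via_wcf(message: dict) -> None:
            """Placeholder for the real WCF call."""
            print("sending", message)

        def worker(q: Queue) -> None:
            """Singleton consumer for one correlation ID: sends messages one at a time."""
            while True:
                send_via_wcf(q.get())

        queues: dict[str, Queue] = {}

        def enqueue(message: dict) -> None:
            """Build the correlation ID (URL + random bucket) and route it to its worker."""
            cid = f"{message['url']}#{random.randint(1, MAX_BUCKETS)}"
            if cid not in queues:
                queues[cid] = Queue()
                threading.Thread(target=worker, args=(queues[cid],), daemon=True).start()
            queues[cid].put(message)

    At most MAX_BUCKETS messages per URL are ever in flight, which mirrors the cap the random correlation ID gives the singleton orchestrations.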

    Read the article

  • Scripting language for filling out web form

    - by ityler22
    I have a job as an intern at a technology company, and I was given the unfortunate job of performing some data entry into our web management system. The information entered into the web form is stored in a MySQL DB. Upon receiving the data I realized I would have to submit this online form about 1000 different times, each submission consisting of about 10 different text fields / check boxes. (So in other words, it would be completely mind-numbing and a ridiculous waste of time and resources, or so I thought...) Having used databases a good bit prior to this, my immediate reaction was to just write a short MySQL script to bulk import all of the data, especially since it was already presented to me in an Excel spreadsheet ready to go. I thought it may have been some sort of a test since it seemed too obvious. I wrote the script, which consisted of about 10 lines of code, but was then informed I couldn't be trusted with MySQL admin privileges to run said script. So my next thought was to write a script to just enter the information through the web form (which will take ten times longer, but it's what I have to do). Being unfamiliar with scripting of this nature (it seems like I would need something similar to a bot, but the good kind), I was unsure of how to proceed. Is there a preferred language to use to enter the data I have into the web form I do have access to? I'm not particularly looking for this to be done for me by any means, just a nice point in the right direction as far as what scripting language to use and how to pair it with the data that needs to be entered. Thanks for the help / valuable input! EDIT: Is there a way to perform this using Perl without having access to place any files on the server? Would I be able to run some JavaScript loops to pull the data out of .csv or just a .txt format with line delimiters and insert it into the web form?
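
    For what it's worth, one lightweight route (if the form is a plain HTTP POST and you can log in with a session cookie) is an HTTP scripting library rather than a browser bot. A rough Python sketch; the URL, field names, and CSV layout below are invented for illustration, and a real form may also require a CSRF token:

        import csv
        import requests

        FORM_URL = "https://example.com/admin/add_item.php"   # hypothetical endpoint

        with requests.Session() as session:
            # session.post("https://example.com/login", data={...})  # log in first if needed
            with open("data.csv", newline="") as f:
                for row in csv.DictReader(f):
                    payload = {
                        "name": row["name"],
                        "category": row["category"],
                        "active": "on" if row["active"] == "yes" else "",
                    }
                    response = session.post(FORM_URL, data=payload)
                    response.raise_for_status()

    If the form relies heavily on JavaScript, a browser-automation tool would be the usual fallback, but the same read-a-row, submit-a-row loop applies.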

    Read the article

  • What determines which Javascript functions are blocking vs non-blocking?

    - by Sean
    I have been doing web-based JavaScript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that JavaScript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual JavaScript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of JavaScript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my JavaScript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, need to be asynchronous and therefore use callbacks - but who is determining what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!

    Read the article

  • Using Clojure instead of Python for scalability (multi core) reasons, good idea?

    - by Vandell
    After reading http://clojure.org/rationale and other performance comparisons between Clojure and many languages, I started to think that, apart from ease of use, I shouldn't be coding in Python anymore, but in Clojure instead. Actually, I began to feel irresponsible for not learning Clojure, seeing its benefits. Does it make sense? Can't I make really efficient use of all cores using a more imperative language like Python, rather than a Lisp dialect or other functional language? It seems that all the benefits come from using immutable data; can't I do just that in Python and have all the same benefits? I once started to learn some Common Lisp, and read and did almost all the exercises from a book I borrowed from my university library (I found it to be pretty good, despite its low popularity on Amazon). But, after a while, I found myself struggling too much to do some simple things. I think there are some things that are more imperative in their nature, which makes it difficult to model them in a functional way, I guess. The thing is, is Python as powerful as Clojure for building applications that take advantage of this new multi-core future? Note that I don't think that using semaphores, lock mechanisms or other similar concurrency mechanisms is a good alternative to Clojure's 'automatic' parallelization.
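
    As a point of comparison, CPython can already use every core for CPU-bound work by using processes instead of threads, which sidesteps the GIL. A minimal sketch, assuming the work splits into independent chunks with no shared mutable state (the property Clojure's immutable data gives you by default):

        from multiprocessing import Pool

        def crunch(chunk):
            # Pure function of its input: no shared mutable state to lock.
            return sum(x * x for x in chunk)

        if __name__ == "__main__":
            chunks = [range(i, i + 1_000_000) for i in range(0, 8_000_000, 1_000_000)]
            with Pool() as pool:              # one worker process per core by default
                results = pool.map(crunch, chunks)
            print(sum(results))

    The trade-off is that data is copied between processes rather than shared, which is where Clojure's in-process immutable structures and STM have an edge.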

    Read the article

  • two-part dice pool mechanic

    - by bythenumbers
    I'm working on a dice mechanic/resolution system based off of the Ghost/Echo (hereafter shortened to G/E) tabletop RPG. Specifically, since G/E can be a little harsh with dealing out consequences and failure, I was hoping to soften the system and add a little more player control, as well as offer the chance for players to evolve their characters into something unique, right from creation. So, here's the mechanic: players roll 2d12 against the two statistics for their character (each is a number from 2-11, and may need to be rolled above or below depending on the nature of the action attempted; rolling your stat exactly always fails). Depending on the success of that roll, they add dice to the pool rolled for a modified G/E style action. The acting player gets two dice anyhow, and I am debating offering a bonus die for each success, or a single bonus die for succeeding on both of the statistic-compared rolls. Once the size of the dice pool is set, the entire pool is rolled, and the players are allowed to assign rolled dice to a goal and a danger. Assigned results are judged as follows: 1-4 means the attempted goal fails, or the danger comes true; 5-8 is a partial success at the goal, or partially avoiding the danger; 9-12 means the goal is achieved, or the danger avoided. My concerns are twofold: firstly, that the two-stage action is too complicated, with two rolls to judge separately before anything can happen; secondly, that the statistics involved go too far in softening the game. I've run some basic simulations, and the approximate statistics follow:

                  2 dice    (up to) 3 dice    (up to) 4 dice
        failure   ~33%      ~25%              ~20%
        partial   ~33%      ~35%              ~35%
        success   ~33%      ~40%              ~45%

    I'd appreciate any advice that addresses my concerns or offers to refine my simulation (right now the first roll is statistically modeled as sign(1d12-1d12), where 0 is a success).
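
    A quick Monte Carlo sketch of the mechanic in Python, in case it helps refine the simulation. The stat values, the "roll above the stat to succeed" reading, and the bonus-die-per-success option are assumptions, and the goal die is assigned optimistically (highest die to the goal):

        import random

        def resolve(stats=(7, 7), bonus_per_success=1, trials=100_000):
            """Estimate failure / partial / success rates for the goal die."""
            counts = {"failure": 0, "partial": 0, "success": 0}
            for _ in range(trials):
                # Stage 1: 2d12 vs the two statistics; rolling the stat exactly always fails.
                successes = sum(1 for stat in stats if random.randint(1, 12) > stat)
                pool_size = 2 + bonus_per_success * successes
                # Stage 2: roll the pool and assign the highest die to the goal.
                best = max(random.randint(1, 12) for _ in range(pool_size))
                if best <= 4:
                    counts["failure"] += 1
                elif best <= 8:
                    counts["partial"] += 1
                else:
                    counts["success"] += 1
            return {k: round(v / trials, 3) for k, v in counts.items()}

        print(resolve())

    Swapping the assignment policy (e.g. protecting against the danger first) or the first-stage success rule only takes a line or two, which makes it easy to compare against the sign(1d12-1d12) model.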

    Read the article

  • configuration issue with respect to .htaccess file on ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop:

      <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/tshirtshop
        <Directory /var/www/tshirtshop>
          Options Indexes FollowSymLinks
          AllowOverride All
          Order allow,deny
          allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
      </VirtualHost>

    and the following in the .htaccess file at /var/www/tshirtshop/.htaccess:

      <IfModule mod_rewrite.c>
        # Enable mod_rewrite
        RewriteEngine On
        # Specify the folder in which the application resides.
        # Use / if the application is in the root.
        RewriteBase /tshirtshop
        #RewriteBase /
        # Rewrite to correct domain to avoid canonicalization problems
        # RewriteCond %{HTTP_HOST} !^www\.example\.com
        # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
        # Rewrite URLs ending in /index.php or /index.html to /
        RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP
        RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L]
        # Rewrite category pages
        RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L]
        RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L]
        # Rewrite department pages
        RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L]
        RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L]
        # Rewrite subpages of the home page
        RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L]
        # Rewrite product details pages
        RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L]
      </IfModule>

    The site is working on localhost, but it behaves as if no .htaccess rules were specified: if I request http://localhost/tshirtshop/nature-d2 I get a 404 error, but if I request the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. The output of sudo apache2ctl -M is:

      Loaded Modules:
        core_module (static) log_config_module (static) logio_module (static) mpm_prefork_module (static) http_module (static) so_module (static)
        alias_module (shared) auth_basic_module (shared) authn_file_module (shared) authz_default_module (shared) authz_groupfile_module (shared)
        authz_host_module (shared) authz_user_module (shared) autoindex_module (shared) cgi_module (shared) deflate_module (shared) dir_module (shared)
        env_module (shared) mime_module (shared) negotiation_module (shared) php5_module (shared) reqtimeout_module (shared) rewrite_module (shared)
        setenvif_module (shared) status_module (shared)
      Syntax OK

    Can anyone point out the mistake, if any, in the above configuration, or is there anything else I should check?

    Read the article

  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and associated caption on my site appears in several galleries. For instance, a photo of a goldfinch that was taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors - User A may want to see goldfinch photos, whereas User B wants to see photos from New Mexico - but I am concerned about the SEO implications. The typical suggestions to deal with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery - for instance, if they crawled only the galleries organized by species (e.g. goldfinch.php), all the photos on my site would be found exactly once. However, the Google content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?

    Read the article

  • Thoughts on Technical Opinions

    - by Joe Mayo
    Nearly every day, people send email from the C# Station contact form with feedback on the tutorial. The overwhelming majority is positive and “Thank You” notes. Some feedback identifies problems such as typos or grammatical errors, or offers a constructive explanation of an item that was confusing. It’s pretty rare, but I even get emails that are not very nice at all – no big deal, because it comes with the territory and is sometimes humorous. Sometimes I get questions related to the content that are of a more general nature, referring to best practices or approaches. It’s these more general questions that are sometimes interesting, because there’s often no right or wrong answer. There was a time when I was more opinionated about these general scenarios, but not so much anymore. Sure, people who are learning want to know the “right” way to do something, and general guidance is good to help them get started. However, just because a certain practice is the way you or your clique does things doesn’t mean that another approach is wrong. These days, I think that a more open-minded approach when providing technical guidance is more constructive. By the way, to all the people who consistently send kind emails each day: You’re very welcome. :) @JoeMayo

    Read the article

  • Python Web Applications: What is the way and the method to handle Registrations, Login-Logouts and Cookies? [on hold]

    - by Phil
    I am working on a simple Python web application for learning purposes. I have chosen a very minimalistic and simple framework. I have done a significant amount of research but I couldn't find a source clearly explaining what I need, which is as follows. I would like to learn more about:

      1. User registration
      2. User log-ins
      3. User log-outs
      4. User auto-logins

    I have successfully handled items 1 and 3 due to their simple nature. However, I am confused about item 2 (log-ins) and item 4 (auto-logins). When a user enters a username and password, and after hashing with salts and matching it in the DB: What information should I store in the cookies in order to keep the user logged in during the session? Do I keep username+password but encrypt them? Both, or just the password? Do I keep the username and a generated key matching their password? If I want the user to be able to auto-login (when they leave and come back to the web page), what information is then kept in the cookies? I don't want to use modules or libraries that handle these things automatically. I want to learn the basics and why something is the way it is. I would also like to point out that I do not mind reading anything you might offer on the topic that explains the hows and whys, possibly with algorithm diagrams to show the process. Some information: I know about setting headers, cookies, encryption (up to some level, obviously not an expert!), request objects, SQLAlchemy etc. I don't want any data kept in a single web application server's store; I want multiple app servers to be able to handle a user, and whatever needs to be kept on the server to be done with Postgres/MySQL via SQLAlchemy (I think this is called stateless?). Thank you.
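
    One widely used pattern, as a starting point: keep nothing secret in the cookie. Store only a user ID and an expiry plus an HMAC signature computed with a server-side secret - never the password, encrypted or otherwise. A stdlib-only Python sketch of the idea (the secret, field layout, and expiry policy are placeholders, not a production recipe):

        import hashlib
        import hmac
        import time

        SECRET_KEY = b"change-me"   # shared by all app servers, never sent to the client

        def make_token(user_id: str, ttl_seconds: int = 3600) -> str:
            expires = str(int(time.time()) + ttl_seconds)
            payload = f"{user_id}|{expires}"
            sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
            return f"{payload}|{sig}"          # value to place in the session cookie

        def verify_token(token: str) -> str | None:
            try:
                user_id, expires, sig = token.split("|")
            except ValueError:
                return None
            payload = f"{user_id}|{expires}"
            expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
            if hmac.compare_digest(sig, expected) and time.time() < int(expires):
                return user_id                 # signature and expiry check out: still logged in
            return None

    Because verification needs only the shared secret, any app server can check the cookie without a per-server session store, which fits the multiple-app-server requirement. For auto-login ("remember me"), the common variant is a long-lived random token stored hashed in the database and rotated on use.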

    Read the article

  • Storing data offline with javascript

    - by Walker
    My question is about storing data offline: will I potentially need to bring in an outside programmer, or could this be learned within a few weeks? The website I am working on will have an interface where users will log in and go through a series of quizzes in the form of checkboxes, drop-down menus, and others. Each page/quiz area could have 20-100 total checkboxes in a series of 3-5 rows because of the comprehensive nature of the course. This I can do - I know how to code the quiz and return a correct or incorrect answer based on each individual checkbox and present a cumulative score (i.e. you got 57% correct). The issue lies in the fact that I would like to save the users' results and keep them informed of their progress. When they complete all of the quizzes, I would like to have a visual output of their performance in each area. Storing the output from their results offline is where I think I may run into a problem with my lack of coding experience. I would also like to have a sidebar with their progress in each section (10-15) with a green percentage completion bar or a % correct, which would draw from this. I have never had to code something that stores information like this offline - so back to my question - would it be better to learn the language needed or bring in a coder/developer for the back-end stuff?

    Read the article

  • Want Google to index redirect urls

    - by Dave Goten
    I'm having issues with users who think that Google Search is the address bar. Some of the sites that link to my site use user-friendly addresses with 301 redirects to pages that have less friendly URLs. So, for example, if I enter www.foo.com/bar it goes to www.bar.com/page.php?some-parameters-and-utm-codes-etc. Usually this is done by a 301 redirect in order to keep the SEO from foo.com on bar.com and so on, which I believe is standard practice. However, lately there have been more and more people searching for www.foo.com/bar instead of going to www.foo.com/bar directly, and because the page /bar is nothing more than a redirect it has no SEO that I know of. Things I've thought of but haven't been able to test, because Google takes forever to update :) (and I'm lazy like that), include: using Google sitemaps and having them enter their redirects as entries there (I could see this working if they were the top search entry all the time, and it might appear as a sitelink, but I don't know if that'll make the URL itself show up in searches); using canonical tags on my pages pointing to the redirects they set up, which is a nightmare in itself because of the nature of my pages - one week www.foo.com/bar might go to www.bar.com/pageA.php, the next it might go to www.bar.com/pageB.php, and having to remember to take the canonical tag off of pageA so that it doesn't get confused with pageB would be a pain; using 302 redirects -.- So I guess the question here is, does anyone have any experience or knowledge about this? What should I do to make www.foo.com/bar show up when someone 'searches' for this redirect URL?

    Read the article

  • How to switch off? [closed]

    - by Xophmeister
    While I've programmed software for many years, I've only recently started doing so professionally and have noticed a bit of a problematic pattern. I hope this is the best place to pose such a question, as I am interested in others' experiences and solutions... Writing software is, by its nature, a cerebral exercise. When coding for my own sake, I would do so until I was satisfied, even if that meant going all night. Now that I'm coding in exchange for goods and services, on projects that are inherently uninteresting to me, I want to 'switch off' when it's time to go home. Maybe you consider that to be a 'bad attitude', but I just don't feel that whatever I'm working on is worth caring about after-hours. Besides, my employer doesn't exactly have the infrastructure required to make out-of-office changes; I can't just clone a repo, and even remote login is a PITA. Anyway, the problem I'm experiencing is that, while I'm not particularly overworked or stressed, if I'm faced with a problem, my brain will work on a solution. Generally, it won't give up. Hence I can't switch off and, sometimes, the problem or the solution is significant enough that it disrupts my sleep. While, paradoxically, this doesn't seem to affect my coding ability, it can have a profound impact on the rest of my life. I get increasingly low as I get tired. So far, the best solutions I've found are writing little notes on the matter (and, say, e-mailing them back to my work address) and exercise. Neither of these can switch me off entirely and, as the week progresses, exercise especially becomes untenable due to tiredness. TL;DR How can you stop being a coding zombie?

    Read the article

  • Is browser and bot whitelisting a practical approach?

    - by Sn3akyP3t3
    With blacklisting it takes plenty of time to monitor events to uncover undesirable behavior and then take corrective action. I would like to avoid that daily drudgery if possible. I'm thinking whitelisting would be the answer, but I'm unsure if that is a wise approach due to its deny-all, allow-only-a-few nature. My fear is that eventually someone out there will be blocked unintentionally. Even so, whitelisting would also block plenty of undesired traffic to pay-per-use items such as the Google Custom Search API, as well as preserve bandwidth and my sanity. I'm not running Apache, but the idea would be the same, I'm assuming. I would essentially be depending on the User-Agent identifier to determine who is allowed to visit. I've tried to take accessibility into account because some web browsers are more geared toward those with disabilities, although I'm not aware of any specific ones at the moment. The need to not depend on whitelisting alone to keep the site away from harm is fully understood. Other means to protect the site still need to be in place. I intend to have a honeypot, a checkbox CAPTCHA, use of OWASP ESAPI, and blacklisting of previously known bad IP addresses.
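
    As a toy illustration of the whitelist idea (the entries are examples only, and User-Agent strings are trivially spoofed, so this can only ever be one layer among the others mentioned):

        ALLOWED_AGENT_SUBSTRINGS = (
            "Firefox", "Chrome", "Safari",   # mainstream browsers
            "Googlebot", "bingbot",          # crawlers you want to keep
        )

        def is_whitelisted(user_agent: str) -> bool:
            """Deny by default; allow only agents matching the whitelist."""
            return any(token in user_agent for token in ALLOWED_AGENT_SUBSTRINGS)

    On the accessibility point, most assistive technologies (screen readers and the like) drive a regular browser, so their requests usually carry an ordinary browser User-Agent and would pass a browser whitelist.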

    Read the article

  • How to represent a graph with multiple edges allowed between nodes and edges that can selectively disappear

    - by Pops
    I'm trying to figure out what sort of data structure to use for modeling some hypothetical, idealized network usage. In my scenario, a number of users who are hostile to each other are all trying to form networks of computers where all potential connections are known. The computers that one user needs to connect may not be the same as the ones another user needs to connect, though; user 1 might need to connect computers A, B and D while user 2 might need to connect computers B, C and E. Image generated with the help of NCTM Graph Creator. I think the core of this is going to be an undirected cyclic graph, with nodes representing computers and edges representing Ethernet cables. However, due to the nature of the scenario, there are a few uncommon features that rule out adjacency lists and adjacency matrices (at least, without non-trivial modifications):

      - Edges can become restricted-use; that is, if one user acquires a given network connection, no other user may use that connection. In the example, the green user cannot possibly connect to computer A, but the red user has connected B to E despite not having a direct link between them.
      - In some cases, a given pair of nodes will be connected by more than one edge. In the example, there are two independent cables running from D to E, so the green and blue users were both able to connect those machines directly; however, red can no longer make such a connection.
      - If two computers are connected by more than one cable, each user may own no more than one of those cables.

    I'll need to do several operations on this graph, such as:

      - determining whether any particular pair of computers is connected for a given user
      - identifying the optimal path for a given user to connect target computers
      - identifying the highest-latency computer connection for a given user (i.e. longest path without branching)

    My first thought was to simply create a collection of all of the edges, but that's terrible for searching. The best thing I can think to do now is to modify an adjacency list so that each item in the list contains not only the edge length but also its cost and current owner. Is this a sensible approach? Assuming space is not a concern, would it be reasonable to create multiple copies of the graph (one for each user) rather than a single graph?
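
    A sketch of that modified adjacency-list idea in Python, where each cable is its own edge object carrying length and ownership, so parallel edges and restricted-use links fall out naturally (the names are illustrative):

        from collections import defaultdict
        from dataclasses import dataclass

        @dataclass
        class Edge:
            a: str                      # one endpoint (computer)
            b: str                      # the other endpoint
            length: float               # latency / cable length
            owner: str | None = None    # None while the cable is still up for grabs

        class MultiGraph:
            def __init__(self):
                self.adjacency = defaultdict(list)   # node -> incident Edge objects

            def add_edge(self, a, b, length):
                edge = Edge(a, b, length)
                self.adjacency[a].append(edge)
                self.adjacency[b].append(edge)
                return edge

            def usable_neighbors(self, node, user):
                """Edges this user could still traverse: unclaimed, or already owned by them."""
                for edge in self.adjacency[node]:
                    if edge.owner in (None, user):
                        yield (edge.b if edge.a == node else edge.a), edge

    Per-user connectivity and optimal paths then become ordinary BFS/Dijkstra runs restricted to usable_neighbors, and keeping ownership on the shared edge objects avoids maintaining a separate copy of the graph per user (though per-user copies also work if space really is no concern).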

    Read the article

  • Should I use a separate 'admin' user as my "root sudo" or grant sudo to my 'app' user?

    - by AJB
    I'm still wrapping my brain around the Ubuntu 'nullify root' user management philosophy (and Linux in general) and I'm wondering if I should 'replace' my root user with a user called 'admin' (which basically has all the powers of root when using sudo) and create another user called 'app' that will be the primary user for my app. Here's the context: I'll be running a LNMP stack on Ubuntu 12.04 Server LTS. There will be only one app running on the server. The 'app' user needs to have SUPER privileges for MySQL. PHP will need to be able to exec() shell commands. The 'app' user will need to be able to transfer files via SFTP. And I'm thinking this would be the best approach:

      - nullify the 'root' user
      - create a user called 'admin' that will be a full sudoer of root; this will be the new "root" user of NGINX, PHP, and MySQL (and all system software)
      - grant SUPER privileges to 'app' in MySQL
      - grant SFTP privileges to only the 'app' user

    As I'm new to this, and the information I've found in researching it tends to be of a more general nature, I'm wondering if this is a solid approach, or if it's unorthodox in a way that would cause issues down the road. Thanks in advance for any help.

    Read the article

  • Best Practices PHP mvc routing

    - by dukeofweatherby
    I have a custom MVC framework that is in a constant state of evolution. There's a long-standing debate with a co-worker about how the routing should work. Consider the following directory structure:

      /core/Router.php
      /mvc/Controllers/{Public controllers}
      /mvc/Controllers/Private/{Controllers requiring valid user}
      /mvc/Controllers/CMS/{Controllers requiring valid user and specific roles}

    The question is: "Where should the current user's authentication be established: in the Router, when choosing which controller/directory to load, or in each Controller?" My argument is that when authenticating in the Router, an Error Controller is created instead of the requested Controller, informing you of your mishap, and the directory structure clearly indicates the authentication required. His argument is that a router should do routing and only routing: leave it to the Controller to handle it on a case-by-case basis. This is more modular and allows more flexibility should changes need to be made by the router. PHP MVC - Custom Routing Mechanism alluded to this, but the topic was of a different nature. Alternative suggestions would be welcomed as well.
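
    To make the first position concrete, here is a short sketch (in Python rather than PHP, purely for brevity) of a router that derives the required access level from the controller directory and substitutes an error controller on failure; the rule table and names are invented:

        ACCESS_RULES = {
            # Most specific prefix first.
            "Controllers/CMS/":     {"login": True,  "roles": {"editor", "admin"}},
            "Controllers/Private/": {"login": True,  "roles": set()},
            "Controllers/":         {"login": False, "roles": set()},
        }

        def route(path: str, user) -> str:
            """Return the controller to dispatch, or an error controller if access fails."""
            for prefix, rule in ACCESS_RULES.items():
                if path.startswith(prefix):
                    if (rule["login"] or rule["roles"]) and user is None:
                        return "ErrorController"          # not logged in
                    if rule["roles"] and not (rule["roles"] & user.roles):
                        return "ErrorController"          # lacks the required role
                    return path                           # hand off to the requested controller
            return "NotFoundController"

    The counter-argument maps just as cleanly: drop the table from the router and have a base controller per directory perform the same checks in its constructor, case by case.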

    Read the article

  • Advice on reconciling discordant data

    - by Justin
    Let me support my question with a quick scenario. We're writing an app for family meal planning. We'll produce daily plans with a target calorie goal and meals to achieve it for our nuclear family. Our calorie goal will be calculated for each person from their attributes (gender, age, weight, activity level). The weight attribute is the simplest example here. When Dad (the fascist nerd who is inflicting this on his family) first uses the application he throws approximate values into it for Daughter. He thinks she is 5'2" (157 cm) and 125 lbs (56 kg). The next day Mom sits down to generate the menu and looks back over what the bumbling Dad did, quietly fumes that he can never recall anything about the family, and says the value is really 118 lbs! This is the first introduction of the discord. It seems, in this scenario, Mom is probably more correct than Dad, though both are only an approximation of the actual value. The next day the dear Daughter decides to use the program and sees her weight listed. With the vanity only a teenager could muster, she changes the weight to 110 lbs. Later that day the Mom returns home from a doctor's visit the Daughter needed and decides that it would be a good idea to update her Daughter's weight in the program. Hooray, another value, this time 117 lbs. Now how do you reconcile these data points? Measurement error, confidence in parties, bias, and more all confound the data. In some idealized world we'd have a weight authority of some nature providing the one and only truth. How about in our world, though? And the icing on the cake is that this single data point changes over time. How have you guys solved or managed this conflict?
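
    One pragmatic pattern is to stop overwriting the weight and instead keep every report as an observation with a timestamp and a source-confidence weight, deriving the "current" value on demand. A small Python sketch (the confidence numbers are invented):

        from dataclasses import dataclass
        from datetime import datetime

        # Invented trust levels: a doctor's-office reading outranks a parent's guess,
        # which outranks the teenager's self-report.
        SOURCE_CONFIDENCE = {"doctor": 1.0, "mom": 0.7, "dad": 0.5, "self": 0.3}

        @dataclass
        class Observation:
            value: float          # weight in lbs
            source: str
            recorded: datetime

        def current_estimate(observations, half_life_days=30):
            """Confidence-weighted average that decays older observations."""
            now = datetime.now()
            num = den = 0.0
            for obs in observations:
                age_days = (now - obs.recorded).days
                weight = SOURCE_CONFIDENCE.get(obs.source, 0.1) * 0.5 ** (age_days / half_life_days)
                num += obs.value * weight
                den += weight
            return num / den if den else None

    The raw history stays intact, so the answer can be recomputed later if trust in a source changes, and the same scheme extends to the other attributes (height, activity level) that drive the calorie goal.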

    Read the article

  • Looking to create website that can have custom GUI and database per user

    - by riley3131
    I have developed an MS Access database for a company to track data regarding production of a certain commodity. It has many, many tables, forms, reports, etc. These were all done as the user requested, and resemble the user's previously used system, mostly printed worksheets and Excel workbooks. This has created a central location for all information and has allowed the company to compare data in a new way. I am now looking to do this for other companies, but would like to switch it to a web application. Here is my question: what is the best way to create unique solutions for individual companies that can have around 100 users each? I would love to create one site that would serve all parties, but that would ruin the customizable nature of what I am developing. I love the ability to create reports, Excel sheets, PDFs, graphs, etc. with Access, but am tired of relying on my customers' software, servers, etc. I have some experience with WAMP, but I am far better at VBA. I was okay at PHP, and was getting a grasp on JavaScript a few years back. I am also trying to decide whether to go with WAMP or LAMP, if the web is the best choice. Also, should I try to set up one site for all users that goes to company-specific pages after log-in, or individual sites for each company? Should I host, or use a service?

    Read the article

  • Ternary and Artificial Intelligence

    - by user2957844
    I'm not much of a programmer myself; however, I have been thinking about the future of AI. If a fully functional AI is programmed in a binary environment as is used in current computing, would that create a bit of a black-and-white personality? As in just yes/no, on/off, 1/0? I will use the Skynet computer from the Terminator series as a bad analogy: it is brought online and comes to the conclusion that humanity should just be destroyed so the problem is resolved; basically, its only options were to fire the missiles or not. (The films do not really go into what its moves would be after doing such a thing, but that goes into the realms of AI evolution so does not really fit with this question.) It may also have been badly programmed. Now, the human mind has been likened to a ternary system, which allows our "out of the box" thinking along with all the other wonderful things our minds can do. So, would it not be more prudent to create a functional ternary system and program an AI using it, so the resulting personality would be able to benefit from the third "maybe" (so to speak) option? I understand that in binary there are ways to get around the whole yes/no etc. way of things; however, the basic operations are still just 1s and 0s. Again using the above bad Skynet analogy: if it could have had that third "maybe" option as part of its core system, it might have decided not to launch, due to being able to make sense of the intricacies of human nature and the politics of such a move, etc. In effect, my question is: would an AI benefit more from ternary computing as opposed to binary, due to the inclusion of -1, or 2, dependent on the system ("maybe," as I call it)?

    Read the article

  • Content-light website and Google - tell Google it's a listings site (as opposed to a shop, reviews or restaurants)

    - by Doug Firr
    I have a listings-style website. Due to the nature of this (listings), the site is content-light. Each page is typically less than 50 words, but there are many pages. The site in question has had a ton of media coverage and so has some great inbound links from places like Wired, Fast Company, Canada Broadcasting Corporation and many, many other bloggers, media websites and recycling-related niche authors (it's a recycling site). But Google really ignores it. Traffic from search is very, very low - less than 5% of all traffic. I know that using markup you can tell Google whether your site is a restaurant, article, review, shop, local business and a few other categories (https://www.google.com/webmasters/markup-helper/u/0/). Is there a way to tell Google that my site is a listings site? I suspect, but do not know for sure, that part of the problem is that Google simply does not know what my site is. It's a crowdmap where people post curb alerts. The information is useful to people, but it is presented in a short, concise way - a pin on a map, a picture and a short description. Adding anything further is not necessary for the site's intended purpose. First question: how best to tell the search engines what my site is - listings, and not some spammy website? Any recommendations on improving our site's search presence? You can take a look here if interested: http://tinyurl.com/lxg4hn7

    Read the article

  • what will EcmaScript 6 bring to the table for us

    - by user697296
    Our company ported moderate chunks of business logic to JavaScript. We compile the code with a minifier, which further improves performance. Since the language is dynamically typed, it lends itself well to obfuscation, which occurs as a byproduct of minification. We went to great efforts to ensure it positively screams, performance-wise. We can now do what we did before, faster, better, with less code, on more platforms. In summary, we are very satisfied with the current state of the language. I personally love the language, especially for its cross-platform nature. So naturally, I read up a lot about the state of JavaScript compilers, performance and compatibility across as many browsers and platforms as I have time to research. The one theme which has been growing louder and louder these days is the news about ECMAScript 6. So far, what I have been able to gather is that ES6 promises a better development experience: firstly by enabling new ways to do things, secondly by reporting errors early. This sounds great for those who are still waiting for the language to meet their needs before jumping on board. But we have already jumped on board in a big way. Sure, I expect that we will have to do ongoing maintenance and feature revisions on our code through the years, and that we would obviously make use of best practices at the time. But I don't see us refactoring major portions of it to take advantage of language features that are mostly intended to boost developer productivity. I keep wondering: what impact will the language advances ultimately have on our existing, well-written, well-performing code base? Is there something I am missing? Is there something we ought to look out for? Does anyone have tips or guidance on how we should approach the ecmascript.next finalization? Should we care?

    Read the article

  • What do you feel are characteristics of a mature programmer?

    - by blparker
    Hey all. As part of ongoing research that I am conducting, I would like to ask a crowd that I feel is the most qualified to answer a question: what does the community feel are the characteristics of a mature programmer? I'm not asking the question because I'm looking to hire or anything of that nature. A colleague and I repeatedly hear a trend throughout universities and specifically computer science departments. The students generally ask questions of the form: How can I become a mature programmer? How can I become a world-class programmer? What steps should a new programmer take to become more skilled? So, with that, we are conducting research to attempt to identify an optimized path that would allow an introductory programmer to advance to the level of a skilled/mature programmer. Now, I understand that there are many "it depends" out there, depending on what vertical industry one works in, but I feel we can determine many common characteristics irrespective of the industry. Anything you can offer is greatly appreciated. Thanks.

    Read the article

  • Windows Phone 7 v. Windows 8 Metro “Same but Different”

    - by ryanabr
    I have been doing development on both Windows Phone 7 and Windows 8 Metro-style applications over the past month and have really been enjoying both. What is great is that Silverlight is used for both development platforms. What is frustrating is the "Same but Different" nature of the two platforms. Many similar services and ways of doing things are available on both platforms, but the objects, namespaces, and ways of handling certain cases are different. I almost had a heart attack when I thought that XmlDocument had been removed from the new WinRT. I was relieved (but a little annoyed) when I found out that it had shifted from the "System.Xml" namespace to the "Windows.Data.Xml.Dom" namespace. In my opinion this is worse than deprecating and reintroducing it, since there isn't the lead time to know that the change is coming, make changes and adjust. I also think this breaks the compatibility that is advertised between WinRT and the .NET Framework from a programming perspective, as the code base will have to be physically different if compiled for one platform versus the other. Which brings up another issue: the need for separate DLLs for the different platforms that contain the same C# code behind them, which seems like the beginning of a code maintenance headache. Historically, I have kept source files "co-located" with the projects that they are compiled into. After doing some research, I think I will end up keeping "common" files that need to be compiled into DLLs for the different platforms in a separate location in TFS, not directly included in any one Visual Studio project, but added as links in the projects that get compiled into the Windows Phone 7 or Windows 8 application. This will work fine, except for the case where dependencies don't line up for each platform as described above, but it will work well for base classes that do the raw work at the most basic programming level.

    Read the article

  • Is it justified to use project-wide unique function and variable names to help future refactoring?

    - by kahoon
    Refactoring tools (like ReSharper) often can't be sure whether to rename a given identifier when, for example, refactoring a JavaScript function. I guess this is a consequence of JavaScript's dynamic nature. ReSharper solves this problem by offering to rename reasonable lexical matches too. The developer can opt out of renaming certain functions if he finds that the lexical match is purely accidental. This means that the developer has to approve every instance that will be affected by the renaming. For example, let's consider two Backbone classes which are used completely independently from each other in our application:

      var Header = Backbone.View.extend({ close: function() {...} })
      var Dialog = Backbone.View.extend({ close: function() {...} })

    If you try to rename Dialog's close method to, for example, closeAndNotify, then ReSharper will offer to rename all occurrences of Header's close method just because they are the same lexically prior to the renaming. To avoid this problem, please consider this:

      var Header = Backbone.View.extend({ closeHeader: function() {...} })
      var Dialog = Backbone.View.extend({ closeDialog: function() {...} })

    Now you can rename closeDialog unambiguously - given that you have no other classes having a method with the same name. Is it worth it to name my functions this way to help future refactoring?

    Read the article

  • How do you quantify competency in terms of time (years)?

    - by o.k.w
    While looking for a job via agencies some time ago, I kept getting questions from the recruitment agents, or in the application forms, like: how many years of experience do you have in Oracle, ASP.NET, J2EE, etc.? At first I answered faithfully... 5 yrs, 7 yrs, 2 yrs, none, a few months, etc. Then I thought: I could be doing something shallow for 7 years and not be competent at it, simply because I am just doing minor support for a legacy system running SQL2000 which has required 10 days of my time for each of the past 7 years. Eventually I declined to answer such questions. I wonder why they still ask these questions. Anyone who just graduated with a computer science degree can claim 3 to 4 years' experience in anything they 'touched' in the curriculum, which to me can be equivalent to zero or 10 years depending on how you look at it. It might have held true decades ago, when programmer and IT skills were of a very different nature. I might be wrong, but I really doubt 'time' or 'years' are a good gauge of competency or experience anymore. Any opinions/rebuttals are welcome!

    Read the article
