Search Results

Search found 1146 results on 46 pages for 'jikan the useless'.


  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle-tested on this and failed to achieve my goal, which is to deny all access to all directories except the Public directory, and to allow access to all other directories only from specific IP addresses. To get Railo + Apache + Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html

    From the installation script these mods are enabled:

        sudo a2enmod ssl
        sudo a2enmod proxy
        sudo a2enmod proxy_http
        sudo a2enmod rewrite
        sudo a2ensite default-ssl

    Outside of the script I copied sites-available to sites-enabled, then reloaded Apache. I have a directory created for Railo CFML located at /var/www/Railo/. Navigating the browser to http://Server_IP_Address/Railo forces SSL and relocates to https://Server_IP_Address/Railo, which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appear to be working for the sites-enabled VirtualHost.

    The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this:

        Railo
            error
            Public
            NotPublic
            Sandbox

    These are my sites-enabled configurations:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            # Default Deny All to prevent walking backwards in file system
            Alias /Railo/ "/var/www/Railo/"
            <Directory ~ ".*/Railo/(?!Public).*">
                Order Deny,Allow
                Deny from All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc

            RewriteEngine on
            RewriteCond %{SERVER_PORT} !^443$
            RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R]
        </VirtualHost>

    and

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            Alias /Railo/ "/var/www/Railo/"
            <Directory ~ "/var/www/Railo/(?!Public).*">
                Order Deny,Allow
                Deny from All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on

            # A self-signed (snakeoil) certificate can be created by installing
            # the ssl-cert package. See
            # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
            # If both key and certificate are stored in the same file, only the
            # SSLCertificateFile directive is needed.
            SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

            # Server Certificate Chain:
            # Point SSLCertificateChainFile at a file containing the
            # concatenation of PEM encoded CA certificates which form the
            # certificate chain for the server certificate. Alternatively
            # the referenced file can be the same as SSLCertificateFile
            # when the CA certificates are directly appended to the server
            # certificate for convenience.
            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt

            # Certificate Authority (CA):
            # Set the CA certificate verification path where to find CA
            # certificates for client authentication, or alternatively one
            # huge file containing all of them (file must be PEM encoded).
            # Note: Inside SSLCACertificatePath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt

            # Certificate Revocation Lists (CRL):
            # Set the CA revocation path where to find CA CRLs for client
            # authentication, or alternatively one huge file containing all
            # of them (file must be PEM encoded).
            # Note: Inside SSLCARevocationPath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl

            # Client Authentication (Type):
            # Client certificate verification type and depth. Types are
            # none, optional, require and optional_no_ca. Depth is a
            # number which specifies how deeply to verify the certificate
            # issuer chain before deciding the certificate is not valid.
            #SSLVerifyClient require
            #SSLVerifyDepth  10

            # Access Control:
            # With SSLRequire you can do per-directory access control based
            # on arbitrary complex boolean expressions containing server
            # variable checks and other lookup directives. The syntax is a
            # mixture between C and Perl. See the mod_ssl documentation
            # for more details.
            #<Location />
            #SSLRequire (    %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
            #            and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
            #            and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
            #            and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
            #            and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20       ) \
            #           or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
            #</Location>

            # SSL Engine Options:
            # Set various options for the SSL engine.
            # o FakeBasicAuth:
            #   Translate the client X.509 into a Basic Authorisation. This means that
            #   the standard Auth/DBMAuth methods can be used for access control. The
            #   user name is the `one line' version of the client's X.509 certificate.
            #   Note that no password is obtained from the user. Every entry in the user
            #   file needs this password: `xxj31ZMTZzkVA'.
            # o ExportCertData:
            #   This exports two additional environment variables: SSL_CLIENT_CERT and
            #   SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
            #   server (always existing) and the client (only existing when client
            #   authentication is used). This can be used to import the certificates
            #   into CGI scripts.
            # o StdEnvVars:
            #   This exports the standard SSL/TLS related `SSL_*' environment variables.
            #   By default this exportation is switched off for performance reasons,
            #   because the extraction step is an expensive operation and is usually
            #   useless for serving static content. So one usually enables the
            #   exportation for CGI and SSI requests only.
            # o StrictRequire:
            #   This denies access when "SSLRequireSSL" or "SSLRequire" applies, even
            #   under a "Satisfy any" situation, i.e. when it applies access is denied
            #   and no other module can change it.
            # o OptRenegotiate:
            #   This enables optimized SSL connection renegotiation handling when SSL
            #   directives are used in per-directory context.
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            # SSL Protocol Adjustments:
            # The safe and default, but still SSL/TLS standard compliant, shutdown
            # approach is that mod_ssl sends the close notify alert but doesn't wait for
            # the close notify alert from the client. When you need a different shutdown
            # approach you can use one of the following variables:
            # o ssl-unclean-shutdown:
            #   This forces an unclean shutdown when the connection is closed, i.e. no
            #   SSL close notify alert is sent or allowed to be received. This violates
            #   the SSL/TLS standard but is needed for some brain-dead browsers. Use
            #   this when you receive I/O errors because of the standard approach where
            #   mod_ssl sends the close notify alert.
            # o ssl-accurate-shutdown:
            #   This forces an accurate shutdown when the connection is closed, i.e. a
            #   SSL close notify alert is sent and mod_ssl waits for the close notify
            #   alert of the client. This is 100% SSL/TLS standard compliant, but in
            #   practice often causes hanging connections with brain-dead browsers. Use
            #   this only for browsers where you know that their SSL implementation
            #   works correctly.
            # Notice: Most problems of broken clients are also related to the HTTP
            # keep-alive facility, so you usually additionally want to disable
            # keep-alive for those clients, too. Use variable "nokeepalive" for this.
            # Similarly, one has to force some clients to use HTTP/1.0 to work around
            # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
            # "force-response-1.0" for this.
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

            DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html

            # Proxy .cfm and .cfc requests to Railo
            ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1
            ProxyPassReverse / http://127.0.0.1:8888/

            # Deny access to admin except for local clients
            <Location /railo-context/admin/>
                Order deny,allow
                Deny from all
                # Allow from <Omitted>
                # Allow from <Omitted>
                Allow from 127.0.0.1
            </Location>
        </VirtualHost>
        </IfModule>

    The apache2.conf includes the following:

        # Include the virtual host configurations:
        Include sites-enabled/

        <IfModule !mod_jk.c>
            LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
        </IfModule>
        <IfModule mod_jk.c>
            JkMount /*.cfm ajp13
            JkMount /*.cfc ajp13
            JkMount /*.do ajp13
            JkMount /*.jsp ajp13
            JkMount /*.cfchart ajp13
            JkMount /*.cfm/* ajp13
            JkMount /*.cfml/* ajp13
            # Flex Gateway Mappings
            # JkMount /flex2gateway/* ajp13
            # JkMount /flashservices/gateway/* ajp13
            # JkMount /messagebroker/* ajp13
            JkMountCopy all
            JkLogFile /var/log/apache2/mod_jk.log
        </IfModule>

    I believe I understand most of this except the jk_module inclusion, which I've noticed produces an error in the logs that I can't sort out:

        [warn] No JkShmFile defined in httpd.conf. Using default /etc/apache2/logs/jk-runtime-status

    I've checked my regular expression against the paths of the directories with RegexBuddy just to be sure it was correct. The problem doesn't appear to be regex-related, although I may have something incorrect in the Directory directive. The Location directive seems to be working correctly for blocking access to the Railo admin site.
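
    One way to express the intended policy, as a minimal sketch in the same Apache 2.2 Order/Deny style as the posted config (the IP address is a placeholder, and the paths assume the directory layout above). Note that per-directory <Directory> sections only apply to requests Apache maps onto the filesystem, so requests handed off via ProxyPassMatch or JkMount may need matching <Location> blocks instead:

        # Sketch only: deny everything under /var/www/Railo by default.
        <Directory "/var/www/Railo">
            Order Deny,Allow
            Deny from all
        </Directory>

        # Re-open the Public subdirectory to everyone; the more specific
        # <Directory> section overrides the broader one above.
        <Directory "/var/www/Railo/Public">
            Order Allow,Deny
            Allow from all
        </Directory>

        # Open a non-public area to specific addresses only (placeholder IP).
        <Directory "/var/www/Railo/NotPublic">
            Order Deny,Allow
            Deny from all
            Allow from 192.0.2.10
        </Directory>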


  • SVN: Release branch headaches, how to merge in website revisions as and when cleared to go live?

    - by Pete Duncanson
    I need a sanity check here if we can; any ideas on correcting/changing the following are very welcome! We've been getting ourselves in knots of late with our SVN and are trying to correct it by putting a trunk/release system in place. We have a large website that we develop on, and we store it all in SVN. Here's what we had in mind: we have trunk and a release branch. All work gets checked into trunk. When a feature is deemed ready for the next release it is merged into the Release branch. We only have one release branch, and we just tag "Latest" when we do a push to live. We hope to be able to get all the files changed from Latest to Head to give us a zip that we can upload (any ideas on an easy way to do this via scripting?).

    So we set all this up and were very happy with ourselves. Except it's not working, and here's why. We work on lots of different features/fixes/problems at once, and they don't all get nicely checked in feature-complete (but always working, at least). Then sometimes you have to wait for clients to sign off. As a result you end up with revisions which are "ready for live" scattered among ones which are "still being worked on" in trunk. That means that the completed revisions are not getting merged in sequentially but out of order. I thought SVN could handle this, clever little thing it is, but apparently not. Here's an example:

    Pete changes some CSS to make a new button look pretty (revision 1). Dave adds some CSS to the bottom of the same CSS file as Pete's for a new feature (revision 2). Dave's mod gets the nod, so he merges it into Release and commits it with a log message mentioning the revision number and bug-tracking id. Pete adds more buttons to finish his mod, no CSS changes here though (revision 3). Pete then merges his mods (revisions 1 and 3) into the Head of Release (which has Dave's merge in it), but this overwrites Dave's CSS additions, which now disappear completely. This leads to the site being broken and the Release branch being pretty much useless.

    So we tried some other ideas, like reverting Release back to "Latest" and then just merging in revisions 1, 2 and 3 in order. This worked fine until we had revision 4, which was not ready for live, and revision 5, which was. Suddenly we are getting ourselves in knots again with exactly the same problem! Ok, so take three: revert to Latest, merge in revision 5, then do an update back to Head. Tree conflicts galore! So that's a no-no. I cracked in the end and built it all up manually, but it's not something I want to do regularly; ideally I want to script our deployment, but I can't while Release is in such a mess.

    HELP! What the heck are we doing wrong? I can't seem to find any solutions to this problem of wanting different, non-sequential revisions in Release. If it's not possible, that's fine, but how the heck are we meant to get stuff live easily? We can't branch for every single change; the site takes 30 minutes+ to check out and it would take too long. Side note: we are using TortoiseSVN, so can we keep command-line examples to a minimum in any answers? Latest version of TSVN and SVN version 1.6, so we have the funky merge tracking etc.

    EDIT: An excellent blog post which deals with the dev/release cycle (although using Git, it is still relevant); thought everyone would like to read it if they found this question interesting: http://nvie.com/git-model

    EDIT 2: I wrote a blog post on how to show which branch you are working on in your website, which others have asked me about (http://www.offroadcode.com/2010/5/14/which-svn-branch-are-you-working-on.aspx). Hope that helps. In the meantime we are looking at Kiln and hoping to make the switch next month (gulp!)
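
    As a side note not from the question: SVN 1.6's merge tracking can cherry-pick individual revisions, which is closer to the "merge only the cleared revisions" workflow than re-merging ranges. A minimal sketch, where the revision numbers come from the example above and the trunk URL is hypothetical; TortoiseSVN's "Merge a range of revisions" dialog accepts the same comma-separated list:

        # From a clean working copy of the Release branch, merge only the
        # revisions that were cleared to go live (the example's r2 and r5):
        svn merge -c 2,5 ^/trunk .
        svn commit -m "Merge r2 and r5 from trunk into Release"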


  • Is SHA-1 secure for password storage?

    - by Tgr
    Some people throw around remarks like "SHA-1 is broken" a lot, so I'm trying to understand what exactly that means. Let's assume I have a database of SHA-1 password hashes, and an attacker with a state-of-the-art SHA-1-breaking algorithm and a botnet with 100,000 machines gets access to it. (Having control over 100k home computers would mean they can do about 10^15 operations per second.) How much time would they need to:

    find out the password of any one user?
    find out the password of a given user?
    find out the password of all users?
    find a way to log in as one of the users?
    find a way to log in as a specific user?

    How does that change if the passwords are salted? Does the method of salting (prefix, postfix, both, or something more complicated like xor-ing) matter? Here is my current understanding, after some googling. Please correct in the answers if I misunderstood something.

    If there is no salt, a rainbow attack will immediately find all passwords (except extremely long ones). If there is a sufficiently long random salt, the most effective way to find out the passwords is a brute force or dictionary attack. Neither collision nor preimage attacks are any help in finding out the actual password, so cryptographic attacks against SHA-1 are no help here. It doesn't even matter much what algorithm is used; one could even use MD5 or MD4 and the passwords would be just as safe (there is a slight difference because computing a SHA-1 hash is slower).

    To evaluate how safe "just as safe" is, let's assume that a single SHA-1 run takes 1000 operations and passwords contain uppercase, lowercase and digits (that is, 60 characters). That means the attacker can test 10^15 * 60 * 60 * 24 / 1000 ~= 10^17 potential passwords a day. For a brute force attack, that would mean testing all passwords up to 9 characters in 3 hours, up to 10 characters in a week, up to 11 characters in a year. (It takes 60 times as much for every additional character.) A dictionary attack is much, much faster (even an attacker with a single computer could pull it off in hours), but only finds weak passwords.

    To log in as a user, the attacker does not need to find out the exact password; it is enough to find a string that results in the same hash. This is called a first preimage attack. As far as I could find, there are no preimage attacks against SHA-1. (A brute-force attack would take 2^160 operations, which means our theoretical attacker would need 10^30 years to pull it off. Limits of theoretical possibility are around 2^60 operations, at which the attack would take a few years.) There are preimage attacks against reduced versions of SHA-1 with negligible effect (for the reduced SHA-1 which uses 44 steps instead of 80, attack time is down from 2^160 operations to 2^157). There are collision attacks against SHA-1 which are well within theoretical possibility (the best I found brings the time down from 2^80 to 2^52), but those are useless against password hashes, even without salting.

    In short, storing passwords with SHA-1 seems perfectly safe. Did I miss something?
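
    For illustration (not part of the question): the "sufficiently long random salt" scenario usually means a per-user random value stored next to the hash. A minimal Python sketch of prefix salting, with hypothetical function names, using raw SHA-1 only to match the question's setup; a deliberately slow scheme such as PBKDF2 or bcrypt is the usual recommendation:

        import hashlib, hmac, os

        def hash_password(password):
            """Return (salt, digest) for storage; the salt is random and per-user."""
            salt = os.urandom(16)  # a long random salt defeats precomputed rainbow tables
            return salt, hashlib.sha1(salt + password.encode()).digest()

        def verify_password(password, salt, stored_digest):
            candidate = hashlib.sha1(salt + password.encode()).digest()
            return hmac.compare_digest(candidate, stored_digest)  # constant-time compare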


  • Writing a code example

    - by Stefano Borini
    I would like to have your feedback regarding code examples. One of the most frustrating experiences I sometimes have when learning a new technology is finding useless examples. I think an example is the most precious thing that comes with a new library, language, or technology. It must be a starting point, a wise and unadulterated explanation of how to achieve a given result. A perfect example must have the following characteristics:

    Self-contained: it should be small enough to be compiled or executed as a single program, without dependencies or complex makefiles. An example is also a strong functional test of whether you correctly installed the new technology. The more issues that could arise, the more likely it is that something goes wrong, and the more difficult it is to debug and solve the situation.

    Pertinent: it should demonstrate one, and only one, specific feature of your software/library, involving the minimal additional behavior from external libraries.

    Helpful: the code should bring you forward, step by step, using comments or self-documenting code.

    Extensible: the example code should be a small "framework" or blueprint for additional tinkering. A learner can start by adding features to this blueprint.

    Recyclable: it should be possible to extract parts of the example to use in your own code.

    Easy: an example is not the place to show off your code-fu skillz. Keep it easy.

    Helpful acronym: SPHERE.

    Prototypical examples of violations of those rules are the following:

    Violation of self-containedness: an example spanning multiple files without any real need for it. If your example is a Python program, keep everything in a single module file. Don't sub-modularize it. In Java, try to keep everything in a single class, unless you really must partition some entity into a meaningful object you need to pass around (and Java mandates one public class per file, if I remember correctly).

    Violation of pertinency: when showing how many different shapes you can draw, adding radio buttons and complex controls with all the possible choices for point shapes is a bad idea. You de-focus your example code, introducing code for event handling, controls initialization etc., and this is not part of the feature you want to demonstrate; it is unnecessary noise in the understanding of the crucial mechanisms providing the feature.

    Violation of helpfulness: code containing dubious naming, wrong comments, hacks, and functions longer than one page of code.

    Violation of extensibility: badly factored code that has everything in a single function, with potentially swappable entities embedded within the code. Example: if an example reads data from a file and displays it, create a method getData() returning a useful entity, instead of opening the file raw and plotting the stuff. This way, if the user of the library needs to read data from an HTTP server instead, he just has to modify the getData() method and use the example almost as-is. Another violation of extensibility arises if the example code is not under a fully liberal (e.g. MIT or BSD) license.

    Violation of recyclability: when the code layout is so intermingled that it is difficult to easily copy and paste parts of it and recycle them into another program. Again, licensing is also a factor.

    Violation of easiness: yes, you are a functional-programming nerd and want to show how cool you are by doing everything on a single line of map, filter and so on, but that may not be helpful to someone else, who is already under pressure to understand your library and now has to understand your code as well.

    And in general, the final rule: if it takes more than 10 minutes to compile the code, run it, read the source, and understand it fully, the example is not a good one. Please let me know your opinion, either positive or negative, or your experience in this regard.


  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enable many companies to adhere to many of the compliance regulations to which they are bound (e.g. 21 CFR Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well... I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and business analyst to the business itself, has their role to play to knit this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other...

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a Requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS.

        select <fields> from WorkItems
        [where <condition>]
        [order by <fields>]
        [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
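
    For instance, a hypothetical historical query against the standard system fields (the date and filter values here are illustrative, not from the article):

        SELECT [System.Id], [System.Title], [System.State]
        FROM WorkItems
        WHERE [System.WorkItemType] = 'Requirement'
        ORDER BY [System.Id]
        ASOF '2010-06-01'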
    Figure: Saving Queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the file level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method can be to explicitly link the Build output to the Build.

    Builds – Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets, associate those Changesets with the Build, and update the "Integrated In" field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate acceptance criteria in the form of a Test Case. If you agree with the business on a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the business signs off on the Test Case moving from the "Design" to "Ready" states. The second scenario is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test, and the test results, of a unit test designed to prove the existence, and then the absence, of a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every unit test in your system; however, there are many integration and regression tests that may be automated which it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs/Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    Business Analysts manage Requirements and Change Requests
    Programmers manage Tasks and check in against Change Requests and Bugs
    Testers manage Bugs and Test Cases
    Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?


  • How do you stop scripters from slamming your website hundreds of times a second?

    - by davebug
    [update] I've accepted an answer, as lc deserves the bounty due to the well-thought-out answer, but sadly, I believe we're stuck with our original worst-case scenario: CAPTCHA everyone on purchase attempts of the crap. Short explanation: caching / web farms make it impossible for us to actually track hits, and any workaround (sending a non-cached web beacon, writing to a unified table, etc.) slows the site down worse than the bots would. There is likely some pricey bit of hardware from Cisco or the like that can help at a high level, but it's hard to justify the cost if CAPTCHAing everyone is an alternative. I'll attempt to do a fuller explanation in here later, as well as cleaning this up for future searchers (though others are welcome to try, as it's community wiki).

    I've added a bounty to this question and attempted to explain why the current answers don't fit our needs. First, though, thanks to all of you who have thought about this; it's amazing to have this collective intelligence to help work through seemingly impossible problems. I'll be a little more clear than I was before: this is about the bag o' crap sales on woot.com. I'm the president of Woot Workshop, the subsidiary of Woot that does the design, writes the product descriptions, podcasts, blog posts, and moderates the forums. I work in the css/html world and am only barely familiar with the rest of the developer world. I work closely with the developers and have talked through all of the answers here (and many other ideas we've had). Usability of the site is a massive part of my job, and making the site exciting and fun is most of the rest of it. That's where the three goals below derive from. CAPTCHA harms usability, and bots steal the fun and excitement out of our crap sales.

    To set up the scenario a little more: bots are slamming our front page tens of times a second, screen-scraping (and/or scanning our RSS) for the Random Crap sale. The moment they see that, it triggers a second stage of the program that logs in, clicks "I want One", fills out the form, and buys the crap.

    In current (2/6/2009) order of votes:

    lc: On Stack Overflow and other sites that use this method, they're almost always dealing with authenticated (logged-in) users, because the task being attempted requires that. On Woot, anonymous (not-logged-in) users can view our home page. In other words, the slamming bots can be non-authenticated (and essentially non-trackable except by IP address). So we're back to scanning for IPs, which a) is fairly useless in this age of cloud networking and spambot zombies, and b) catches too many innocents given the number of businesses that come from one IP address (not to mention the issues with non-static-IP ISPs and the potential performance hits from trying to track this). Oh, and having people call us would be the worst possible scenario. Can we have them call you?

    BradC: Ned Batchelder's methods look pretty cool, but they're pretty firmly designed to defeat bots built for a network of sites. Our problem is bots built specifically to defeat our site. Some of these methods could likely work for a short time, until the scripters evolved their bots to ignore the honeypot, screen-scrape for nearby label names instead of form ids, and use a JavaScript-capable browser control.

    lc again: "Unless, of course, the hype is part of you


  • c - dereferencing issue

    - by Joe
    Hi, I have simplified an issue that I've been having, trying to isolate the problem, but it is not helping. I have a 2-dimensional char array to represent memory. I want to pass a reference to that simulation of memory to a function. In the function, to test the contents of the memory, I just want to iterate through the memory and print out the contents of each row. The program prints out the first row and then I get a seg fault. My program is as follows:

        #include <stdio.h>
        #include <stdlib.h>
        #include <ctype.h>
        #include <string.h>

        void test_memory(char*** memory_ref)
        {
            int i;
            for(i = 0; i < 3; i++)
            {
                printf("%s\n", *memory_ref[i]);
            }
        }

        int main()
        {
            char** memory;
            int i;
            memory = calloc(sizeof(char*), 20);
            for(i = 0; i < 20; i++)
            {
                memory[i] = calloc(sizeof(char), 33);
            }
            memory[0] = "Mem 0";
            memory[1] = "Mem 1";
            memory[2] = "Mem 2";
            printf("memory[1] = %s\n", memory[1]);
            test_memory(&memory);
            return 0;
        }

    This gives me the output:

        memory[1] = Mem 1
        Mem 0
        Segmentation fault

    If I change the program and create a local version of the memory in the function by dereferencing the memory_ref, then I get the right output. So:

        #include <stdio.h>
        #include <stdlib.h>
        #include <ctype.h>
        #include <string.h>

        void test_memory(char*** memory_ref)
        {
            char** memory = *memory_ref;
            int i;
            for(i = 0; i < 3; i++)
            {
                printf("%s\n", memory[i]);
            }
        }

        int main()
        {
            char** memory;
            int i;
            memory = calloc(sizeof(char*), 20);
            for(i = 0; i < 20; i++)
            {
                memory[i] = calloc(sizeof(char), 33);
            }
            memory[0] = "Mem 0";
            memory[1] = "Mem 1";
            memory[2] = "Mem 2";
            printf("memory[1] = %s\n", memory[1]);
            test_memory(&memory);
            return 0;
        }

    gives me the following output:

        memory[1] = Mem 1
        Mem 0
        Mem 1
        Mem 2

    which is what I want, but making a local version of the memory is useless because I need to be able to change the values of the original memory from the function, which I can only do by dereferencing the pointer to the original 2D char array. I don't understand why I should get a seg fault the second time round, and I'd be grateful for any advice.

    Many thanks
    Joe
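
    For anyone hitting the same crash: the problem in the first version is operator precedence, not the allocation. *memory_ref[i] parses as *(memory_ref[i]), so the loop indexes the char*** itself, and only i == 0 touches valid memory. A minimal sketch of the fix that keeps the pass-by-reference signature:

        #include <stdio.h>

        /* Indexing binds tighter than unary '*', so parentheses are needed to
         * dereference the char*** first and then index the char** it points to.
         * Writes through (*memory_ref)[i] also reach the caller's memory. */
        void test_memory(char*** memory_ref)
        {
            int i;
            for(i = 0; i < 3; i++)
            {
                printf("%s\n", (*memory_ref)[i]);
            }
        }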


  • Filtering List Data with a jQuery-searchFilter Plugin

    - by Rick Strahl
    When dealing with list-based data on HTML forms, filtering that data down based on a search text expression is an extremely useful feature. We're used to search boxes on just about anything these days, and HTML forms should be no different. In this post I'll describe how you can easily filter a list down to just the elements that match text typed into a search box. It's a pretty simple task and it's super easy to do, but I get a surprising number of comments from developers I work with who are surprised how easy it is to hook up this sort of behavior, so I thought it was worth a blog post.

    But Angular does that out of the box, right?

    These days it seems everybody is raving about Angular and the rich SPA features it provides. One of the cool features of Angular is the ability to do drop-dead simple filters where you can specify a filter expression as part of a looping construct and automatically have that filter applied so that only items that match the filter show. I think Angular has single-handedly elevated search filters to first-rate, front-row status because it's so easy. I love using Angular myself, but Angular is not a generic solution to problems like this. For one thing, using Angular requires you to render the list data with Angular; if you have data that is server-rendered or static, then Angular doesn't work. Not all applications are client-side-rendered SPAs, not by a long shot, and nor do all applications need to become SPAs. Long story short, it's pretty easy to achieve text filtering effects using jQuery (or plain JavaScript for that matter) with just a little bit of work. Let's take a look at an example.

    Why Filter?

    Client-side filtering is a very useful tool that can make it drastically easier to sift through data displayed in client-side lists. In my applications I like to display scrollable lists that contain a reasonably large amount of data, rather than the classic paging-style displays which tend to be painful to use. So I often display 50 or so items per 'page' and it's extremely useful to be able to filter this list down. Here's an example in my Time Trakker application where I can quickly glance at various common views of my time entries. I can see Recent Entries, Unbilled Entries, Open Entries etc. and filter those down by individual customers and so forth. Each of these list results tends to be a few pages worth of scrollable content. The following screen shot shows a filtered view of Recent Entries that match the search keyword of CellPage. As you can see in this animated GIF, the filter is applied as you type, displaying only entries that match the text anywhere inside of the text of each of the list items. This is an immediately useful feature for just about any list display and adds significant value.

    A few lines of jQuery

    The good news is that this is trivially simple using jQuery. To get an idea what this looks like, here's the relevant page layout showing only the search box and the list layout:

        <div id="divItemWrapper">
            <div class="time-entry">
                <div class="time-entry-right">
                    May 11, 2014 - 7:20pm<br />
                    <span style='color:steelblue'>0h:40min</span><br />
                    <a id="btnDeleteButton" href="#" class="hoverbutton" data-id="16825">
                        <img src="images/remove.gif" />
                    </a>
                </div>
                <div class="punchedoutimg"></div>
                <b><a href='/TimeTrakkerWeb/punchout/16825'>Project Housekeeping</a></b><br />
                <small><i>Sawgrass</i></small>
            </div>
            ... more items here
        </div>

    So we have a search box txtSearchPage and a bunch of DIV elements with a .time-entry CSS class attached that make up the list of items displayed. Hooking up the search filter with jQuery is merely a matter of a few lines of jQuery code attached to the .keyup() event handler:

        <script type="text/javascript">
            $("#txtSearchPage").keyup(function() {
                var search = $(this).val();
                $(".time-entry").show();
                if (search)
                    $(".time-entry").not(":contains(" + search + ")").hide();
            });
        </script>

    The idea here is pretty simple: you capture the keystroke in the search box and capture the search text. Using that search text you first make all items visible, and then hide all the items that don't match. Since DOM changes are applied after a method finishes execution in JavaScript, the show and hide operations are effectively batched up, and so the view changes only to the final list rather than flashing the whole list and then removing items on a slow machine. You get the desired effect of the list showing only the items in question.

    Case-insensitive Filtering

    But there is one problem with the solution above: the jQuery :contains filter is case-sensitive, so your search text has to match expressions exactly, which is a bit cumbersome when typing. In the screen capture above I actually cheated; I used a custom filter that provides case-insensitive contains behavior. jQuery makes it really easy to create custom query filters, and so I created one called containsNoCase. Here's the implementation of this custom filter:

        $.expr[":"].containsNoCase = function(el, i, m) {
            var search = m[3];
            if (!search) return false;
            return new RegExp(search, "i").test($(el).text());
        };

    This filter can be added anywhere page-level JavaScript runs: in page script or a separately loaded .js file. The filter basically extends jQuery with a : expression. Filters get passed a tokenized array that contains the expression. In this case m[3] contains the search text from inside of the brackets. A filter basically looks at the active element that is passed in and then returns true or false to determine whether the item should be matched. Here I check a regular expression that looks for the search text in the element's text. So the code for the filter now changes to:

        $(".time-entry").not(":containsNoCase(" + search + ")").hide();

    And voila, you now have a case-insensitive search. You can play around with another simpler example using this Plunkr: http://plnkr.co/edit/hDprZ3IlC6uzwFJtgHJh?p=preview

    Wrapping it up in a jQuery Plug-in

    To make this even easier to use, and so that you can more easily remember how to use this search-type filter, we can wrap this logic into a small jQuery plug-in:

        (function($, undefined) {
            $.expr[":"].containsNoCase = function(el, i, m) {
                var search = m[3];
                if (!search) return false;
                return new RegExp(search, "i").test($(el).text());
            };

            $.fn.searchFilter = function(options) {
                var opt = $.extend({
                    // target selector
                    targetSelector: "",
                    // number of characters before search is applied
                    charCount: 1
                }, options);

                return this.each(function() {
                    var $el = $(this);
                    $el.keyup(function() {
                        var search = $(this).val();
                        var $target = $(opt.targetSelector);
                        $target.show();
                        if (search && search.length >= opt.charCount)
                            $target.not(":containsNoCase(" + search + ")").hide();
                    });
                });
            };
        })(jQuery);

    Using this plug-in now becomes a one-liner:

        $("#txtSearchPagePlugin").searchFilter({ targetSelector: ".time-entry", charCount: 2 })

    You attach the .searchFilter() plug-in to the text box you are searching and specify a targetSelector that is to be filtered. Optionally you can specify a character count at which the filter kicks in, since it's typically kind of useless to filter at a single character.

    Summary

    This is a very easy solution to a cool user interface feature your users will thank you for. Search filtering is a simple but highly effective user interface feature, and as you've seen in this post it's very simple to create this behavior with just a few lines of jQuery code. While all the cool kids are doing Angular these days, jQuery is still useful in many applications that don't embrace the 'everything generated in JavaScript' paradigm. I hope this jQuery plug-in, or just the raw jQuery, will be useful to some of you...

    Resources

    Example on Plunker

    © Rick Strahl, West Wind Technologies, 2005-2014
    Posted in jQuery, HTML5, JavaScript
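
    Since the post notes that the same effect is achievable in plain JavaScript, here is a hypothetical vanilla-JS variant of the same filter (the element id and class are reused from the example above; NodeList.forEach assumes a reasonably modern browser):

        // Hide list items whose text does not contain the search string,
        // case-insensitively; an empty search box shows everything again.
        document.getElementById("txtSearchPage").addEventListener("keyup", function () {
            var search = this.value.toLowerCase();
            document.querySelectorAll(".time-entry").forEach(function (el) {
                var match = !search || el.textContent.toLowerCase().indexOf(search) !== -1;
                el.style.display = match ? "" : "none";
            });
        });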


  • CodePlex Daily Summary for Tuesday, June 25, 2013

    CodePlex Daily Summary for Tuesday, June 25, 2013

    Popular Releases

    CrmRestKit.js: CrmRestKit-2.6.1.js: Improvements for "ByQueryAll": instead of deferring until all records are loaded, the progress-handler is invoked for each 50 records. Added "progress-handler" support for "ByQueryAll". See: http://api.jquery.com/deferred.notify/ See unit tests for details ('Retrieve all with progress notification')

    VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: Changes: NEW: Added support for "ImgChili.net" links. FIXED: Auto Updater

    Minesweeper by S. Joshi: Minesweeper 1.0.0.1: Zipped!

    Alfa: Alfa.Common 1.0.4923: Removed useless property ErrorMessage from StringRegularExpressionValidationRule. Repaired ErrorWindow and disabled report button after click action. Implemented: DateTimeRangeValidationRule, DateTimeValidationRule, DateValidationRule, NotEmptyValidationRule, NotNullValidationRule, RequiredValidationRule, StringMaxLengthValidationRule, StringMinLengthValidationRule, TimeValidationRule, DateTimeToStringConverter, DecimalToStringConverter, StringToDateTimeConverter, StringToDecima...

    Document.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support, improved User Interface, minor bug fixes, improvements and speed-ups

    StyleMVVM: 3.0.2: This is a minor feature and bug fix release. Features: ExportWhenDebuggerIsAttachedAttribute, a new attribute that marks an attribute to only be exported when the debugger is attached. InjectedFilterAttributeFilterProvider, a new attribute filter provider for MVC that injects the attributes. Performance improvements: minor speed improvements all over, and Import collections is now 50% faster. Bug fixes: Open generic constraints are now respected when finding exports. Fix for fluent registrat...

    Base64 File Converter: First release: One file can be converted at a time. Bytes-to-Base64 and Base64-to-bytes conversions are available.

    ADExtractor - work with Active-Directory: V1.3.0.2: 1.3.0.2: added optional value-index to properties; added new aggregation "Literal" to pass through values; added new SQL command-line switch --db to override the connection string; added new global command-line switch --quiet to reduce output to a minimum. 1.3.0.1: sub-property resolution now working with aggregation (i.e. All(member.objectGuid)). 1.3.0.0: added correct formatting for guids (GuidFormatter)

    EMICHAG: EMICHAG NETMF 4.2: Alpha release of the 4.2 version.

    WPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras.dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more... Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...

    Windows.Forms.Controls Revisited (SSTA.WinForms): SSTA.WinForms 1.0.0: Latest stable release

    Ascend 3D: Ascend 2.0: Release notes: Implemented bone/armature animation. Refactored class hierarchy and naming. Addressed high CPU usage issue during animation. Updated the Blender exporter and Ascend model format (now XML). Created AscendViewer, a tool for viewing Ascend models.

    Indent Guides for Visual Studio: Indent Guides v13: Important: this release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Changed in v13: added page-width guide lines; added guide highlighting options; fixed guides appearing over collapsed blocks; fixed guides not appearing in newly opened files; fixed some potential crashes; fixed lines going through pragma statements; various updates for VS 2012 and VS 2013; removed VS 2010 support. Changed in v12.1: fixed crash when unable to start...

    Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d: (supports .NET 3.5, 4.0 and 4.5) Includes: Fluent.dll (with .pdb and .xml), Showcase Application, Samples (not for .NET 3.5): Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage), Resizing (ribbon reducing & enlarging principles), Galleries (Gallery in ContextMenu, InRibbonGallery), MVVM (shows how to use this library with the Model-View-ViewModel pattern), KeyTips, ScreenTips, Toolbars, ColorGallery. *Walkthrough (do...

    Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes: MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary; MagickGeometry is no longer IDisposable; renamed dll's so they include the platform name; image profiles can now only be accessed and modified with ImageProfile classes; renamed DrawableBase to Drawable; removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...

    Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2: CodePlex???(????????) ?????????(???1/4) ??????????? ?????????? ???????????(??????????) ??????????????????????? ↑????、?????????????????????(???????) ???、??????????、?????????????????????、????????1.5?????????? Shift+W(????)??????????????????10°、?10°(?????????)???

    Hyper-V Management Pack Extensions 2012: HyperVMPE2012 (v1.0.1.126): Hyper-V Management Pack Extensions 2012 Beta Release

    Outlook 2013 Add-In: Email appointments: This new version includes the following changes: ability to drag emails to the calendar to create appointments; will gather all the recipients from all the emails and create an appointment on the day you drop the emails, with the text and subject of the last selected email (if more than one is selected); increased maximum number of appointments to display to 30. You will have to uninstall the previous version (add/remove programs) if you had installed it before. Before unzipping the file...

    Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.5.2: v1.5.2 is a service release. We've fixed a number of issues with Tasks and IoC. We've made some consistency improvements across platforms and fixed a number of minor bugs. See changes.txt for details. Packages available on NuGet: Caliburn.Micro, the full framework compiled into an assembly; Caliburn.Micro.Start, which includes Caliburn.Micro plus a starting bootstrapper, view model and view; Caliburn.Micro.Container, the Caliburn.Micro inversion of control (IoC) container; source code...

    SQL Compact Query Analyzer: 1.0.1.1511: Beta build of SQL Compact Query Analyzer. Bug fixes: resolved issue where the application crashes when loading a database that contains tables without a primary key. Features: displays database information (database version, filename, size, creation date); displays schema summary (number of tables, columns, primary keys, identity fields, nullable fields); displays the information schema views; displays column information (database type, clr type, max length, allows null, etc.); support...

    New Projects

    algorithm interview: Write some code for programming interview questions
    AmbiTrack: AmbiTrack provides marker-free localization and tracking using an array of cameras, e.g. the PlayStation Eye.
    DSeX DragonSpeak eXtended Editor: Furcadia DragonSpeak Editor
    EasyProject: EasyProject is a final engineering project; this system is made as an easy way to manage the employees of a small business or organization.
    EDIVisualizer SDK: EDIVisualizer SDK is a software development kit (SDK) for creating plugins used by the EDIVisualizer program.
    Flame: Interactive shell for scripting in IronRuby, IronPython, PowerShell and C#, with syntax highlighting and autocomplete, for easily working within a .NET Framework console
    Fusion-io MS SQL Server scripts: Fusion-io's Sumeet Bansal shows how Microsoft SQL Server users can quickly determine whether their database will benefit from Fusion-io ioMemory products.
    hotelprimero: Hotel project
    Image filters: It's a simple image filter :)
    JSLint.NET: JSLint.NET is a wrapper for Douglas Crockford's JSLint, the JavaScript code quality tool. It can validate JavaScript anywhere .NET runs.
    KZ.Express.H: KZ.Express.H
    moleshop: moleshop???asp.net ???????B2C????,????????.net???????????。moleshop??asp.net mvc 4.0???,???.net 4.0??。 moleshop???????,??AL2.0 ????,????????????????,????????。
    PDF Generator Net Free Xml Parser: PDF Generator Net Free Xml Parser?XML?????????????????????。?????????????????????CSV?????????PDF???????????????????????。
    ScreenCatcher: ScreenCatcher is a program for creating screenshots.
    SharePoint Configuration Guidance for 21 CFR Part 11 Compliance – SP 2013: This CodePlex project provides the Paragon Solutions configurations and code to support 21 CFR Part 11 compliance with SharePoint 2013.
    Sky Editor: Save file editor for Pokémon Mystery Dungeon: Explorers of Sky, with limited support for Explorers of Time and Darkness
    TRANCIS: For founding and running online operations for Trancis.
    UGSF GRAND OUEST: UGSF GRAND OUEST
    WP.JS: WP.JS (Windows Phone (dot) Javascript) aims to be a compelling library for developing HTML5-based applications for Windows Phone 8.
    X-Aurora: X-Aurora ?? MVC3

    Read the article

  • Why are there so many man-made edge cases in IT, and is there any hope for simplification / unification?

    - by Hamish Grubijan
    This question is meant to generate discussion, and so it is marked as community wiki. My observation is that the field of information technology grows so rapidly and randomly that, for many, it takes a lot of time to learn the intricacies of tools that will be obsolete in just three short years. If you look at the questions asked on StackOverflow ... at least half of them stem from the fact that some language / tool / API / protocol was poorly designed, is backwards and has gotchas. There are so many things which distract developers from converting English into machine code; instead they spend their time configuring stuff and gluing together things that do not really fit. How many times do you pick up somebody else's project (or someone picks up yours :) ) and realize that the program does not need half of the dialogs that it has, and that the logic can be simplified a great deal? But it had to be made and sold here before a better thing was made and sold elsewhere, and hence all this rush.

    I often wish that things would just slow down. I do not want Microsoft Windows to run on my car's computer, my watch, my table, my toaster oven, and my toilet seat. I'd rather have a Windows that DOES NOT HAVE A WINDOWS REGISTRY; I'd rather have a Windows that allows two different programs to work on the same file at the same time, the way it works on Unix systems. Microsoft is just an example. I am looking forward to the day when I do not have to worry about Windows vs Unix line breaks, when System32 actually means that the directory contains 32-bit binaries and not 64-bit ones, the day when DLL hell and manifest hell are no longer an issue, the day when it takes me a lot less than 3 months on a new job to learn the system. I do not mean learning the entire code base of a product (depending on its size, that can take a long time). I mean remembering which build-assisting scripts are written in Perl (and in which version of it) and which ones are done through .bat files, and when I need to manually make every file in some directory writable before running a script, or else a critical step of a home-grown database maintenance tool will bomb and it will take 2 days to clean that up. Makes me wonder if humans enslaved computers, or if it is the other way around.

    The key is that improving those things will not bring extra revenue, and hence those taking the time to fix crap like that are not "business focused". However, these imperfections irritate me immensely, particularly because my memory is limited - I can hold only a small portion of that useless knowledge of a system in my head at any given point in time. I must not be alone. Did you also happen to notice that a programmer can waste a lot of time on things that should have been a lot more straightforward? Is there hope? Will things get better/simpler in the future, or will there be a lot more IT crap floating around? I suppose I see diversity of tools, protocols, etc. as a bad thing. Thank you for participating.

    Read the article

  • Help to restructure my Doc/View more correctly

    - by Harvey
    Edited by OP. My program is in need of a lot of cleanup and restructuring. In another post I asked about leaving the MFC Doc/View framework and going to the WinProc & Message Loop way (what is that called for short?). Well, at present I am thinking that I should clean up what I have in Doc/View and perhaps later convert to non-MFC, if that even makes sense. My Document class currently has almost nothing useful in it. I think a place to start is the InitInstance() function (posted below). In this part:

    POSITION pos = pDocTemplate->GetFirstDocPosition();
    CLCWDoc *pDoc = (CLCWDoc *)pDocTemplate->GetNextDoc(pos);
    ASSERT_VALID(pDoc);
    POSITION vpos = pDoc->GetFirstViewPosition();
    CChildView *pCV = (CChildView *)pDoc->GetNextView(vpos);

    This seems strange to me. I only have one doc and one view. I feel like I am going about it backwards with GetNextDoc() and GetNextView(). To use a silly analogy: it's like I have a book in my hand, but I have to look in its index to find out what page the title of the book is on. I'm tired of feeling embarrassed about my code. I either need correction or reassurance, or both. :) Also, all the miscellaneous items are in no particular order. I would like to rearrange them into an order that is more standard, structured or straightforward. ALL suggestions welcome!

    BOOL CLCWApp::InitInstance()
    {
        InitCommonControls();
        if(!AfxOleInit())
            return FALSE;

        // Initialize the Toolbar dll. (Toolbar code by Nikolay Denisov.)
        InitGuiLibDLL(); // NOTE: insert GuiLib.dll into the resource chain

        SetRegistryKey(_T("Real Name Removed"));

        // Register document templates
        CSingleDocTemplate* pDocTemplate;
        pDocTemplate = new CSingleDocTemplate(
            IDR_MAINFRAME,
            RUNTIME_CLASS(CLCWDoc),
            RUNTIME_CLASS(CMainFrame),
            RUNTIME_CLASS(CChildView));
        AddDocTemplate(pDocTemplate);

        // Parse command line for standard shell commands, DDE, file open
        CCmdLineInfo cmdInfo;
        ParseCommandLine(cmdInfo);

        // Dispatch commands specified on the command line
        // The window frame appears on the screen in here.
        if (!ProcessShellCommand(cmdInfo))
        {
            AfxMessageBox("Failure processing Command Line");
            return FALSE;
        }

        POSITION pos = pDocTemplate->GetFirstDocPosition();
        CLCWDoc *pDoc = (CLCWDoc *)pDocTemplate->GetNextDoc(pos);
        ASSERT_VALID(pDoc);
        POSITION vpos = pDoc->GetFirstViewPosition();
        CChildView *pCV = (CChildView *)pDoc->GetNextView(vpos);

        if(!cmdInfo.m_Fn1.IsEmpty() && !cmdInfo.m_Fn2.IsEmpty())
        {
            pCV->OpenF1(cmdInfo.m_Fn1);
            pCV->OpenF2(cmdInfo.m_Fn2);
            pCV->DoCompare(); // Sends a paint message when complete
        }

        // enable file manager drag/drop and DDE Execute open
        m_pMainWnd->DragAcceptFiles(TRUE);
        m_pMainWnd->ShowWindow(SW_SHOWNORMAL);
        m_pMainWnd->UpdateWindow(); // paints the window background

        pCV->bDoSize = true; // Prevent a dozen useless size calculations
        return TRUE;
    }

    Thanks
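    A minimal sketch of an alternative, assuming the standard MFC SDI setup shown above: once ProcessShellCommand() has run, the main frame already knows its document and view, so CFrameWnd::GetActiveDocument() and CFrameWnd::GetActiveView() (both stock MFC members) avoid the GetNextDoc()/GetNextView() walk entirely. The casts target the question's own classes:

    // After ProcessShellCommand() has created the single document/view pair,
    // the main frame already tracks them -- no need to iterate the template.
    CFrameWnd* pFrame = static_cast<CFrameWnd*>(m_pMainWnd);
    ASSERT_VALID(pFrame);

    CLCWDoc* pDoc = static_cast<CLCWDoc*>(pFrame->GetActiveDocument());
    ASSERT_VALID(pDoc);

    CChildView* pCV = static_cast<CChildView*>(pFrame->GetActiveView());
    ASSERT_VALID(pCV);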

    Read the article

  • A very strange problem related to Flash and Java

    - by Nitesh Panchal
    Hello, I have created one simple program for showing the user's webcam. It works absolutely fine when I run it without integrating it into my Java web application. But as soon as I copy-paste the same files into NetBeans 6.8 and try to run it, my SWF file is visible but the buttons in it are totally unclickable. I get the prompt to allow or deny access to the webcam, but its buttons cannot be clicked. This makes my application totally useless. I thought it was a minor issue, but now it's getting on my nerves. I have been trying for 3 hours but haven't found any solution. I don't think anybody has faced this issue before, which is why I can't seem to find a thread on it in any forum via Google. I am using NetBeans 6.8, Ubuntu, GlassFish V3 and JSF. Thanks in advance :).

    UPDATE: It works fine on Windows XP. So what can the problem be with Ubuntu? Are there any kinds of permissions or rights that I need to grant?

    2nd UPDATE: I think this is a bug in Flash. I found that this code doesn't work:

    <object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0" id="VideoConferenceAdmin" align="middle">
      <param name="allowScriptAccess" value="sameDomain" />
      <param name="wmode" value="opaque"/>
      <param name="allowFullScreen" value="false" />
      <param NAME="FlashVars" VALUE="rootIp=localhost&amp;userName=admin&amp;roomName=test" />
      <param name="movie" value="Conference/VideoConference.swf" />
      <param name="quality" value="high" />
      <embed src="Conference/VideoConference.swf" wmode="window" FlashVars="rootIp=localhost&amp;userName=test&amp;roomName=test" quality="high" width="600" height="500" name="VideoConference" align="middle" allowScriptAccess="sameDomain" allowFullScreen="false" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" />
    </object>

    And this one works:

    <object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0" id="VideoConferenceAdmin" align="middle">
      <param name="allowScriptAccess" value="sameDomain" />
      <param name="wmode" value="opaque"/>
      <param name="allowFullScreen" value="false" />
      <param NAME="FlashVars" VALUE="rootIp=localhost&amp;userName=admin&amp;roomName=test" />
      <param name="movie" value="Conference/VideoConference.swf" />
      <param name="quality" value="high" />
      <embed src="Conference/VideoConference.swf" wmode="window" FlashVars="rootIp=localhost&amp;userName=test&amp;roomName=test" quality="high" name="VideoConference" align="middle" allowScriptAccess="sameDomain" allowFullScreen="false" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" />
    </object>

    As you can see, just by removing the height and width attributes, the panels of the Flash settings dialog work fine. I know you will find me funny, but I am not - you can check it out on your own if you don't trust me. Now, my question would be: can anybody tell me how I can set the height and width of the SWF? I want it to be 600px wide and 500px high. Even the style attribute doesn't work. Can I set the height and width from within Flash? Because without explicitly setting the height and width, it looks very small. I think this should be very easy for all of you now. Thanks in advance :)
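    For what it's worth, one commonly suggested workaround for sizing problems like this is to leave fixed pixel dimensions off the plugin tags and size a wrapper element instead, letting the plugin fill it. A hedged sketch only, not verified against this exact webcam setup (the percentage attributes may or may not trip the same settings-panel bug):

    <!-- The wrapper div carries the 600x500 size; the plugin fills it. -->
    <div style="width: 600px; height: 500px;">
      <object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" id="VideoConferenceAdmin" width="100%" height="100%">
        <param name="movie" value="Conference/VideoConference.swf" />
        <param name="wmode" value="opaque" />
        <param name="quality" value="high" />
        <embed src="Conference/VideoConference.swf" wmode="window"
               width="100%" height="100%" quality="high"
               type="application/x-shockwave-flash" />
      </object>
    </div>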

    Read the article

  • Need .Net method to compute a Google Pagerank request checksum.

    - by Steve K
    The company I work for is currently developing an SEO tool which needs to include a domain's or URL's PageRank. It is possible to retrieve such data directly from Google by sending a request to the URL called by the Google Toolbar. One of the parameters sent to that URL is a checksum of the domain whose PageRank is being requested. I have found multiple .NET methods for calculating that checksum; however, every one randomly returns corrupt values every so often. I can only handle errors to a certain point before my final data set becomes useless. I know that there are countless tools out there, from browser plugins to desktop applications, that can retrieve PageRank, so it can't be impossible. My question, then, is twofold: 1) Has anyone heard of the problem I am having (specifically in .NET)? If so, how can it be (or has it been) resolved? 2) Is there a better source for retrieving PageRank data? Below are the URL and checksum code I have been using.

    "http://toolbarqueries.google.com/search?client=navclient-auto&ie=UTF-8&oe=UTF-8&features=Rank:&q=info:" & strUrl & "ch=" & strCheckSum

    where:
    strUrl = the URL being queried
    strCheckSum = CheckHash(GetHash(url)) (see code below)

    Any help would be greatly appreciated.

    ''' <summary>
    ''' Returns a hash-string from the site's URL
    ''' </summary>
    ''' <param name="_SiteURL">full URL as indexed by Google</param>
    ''' <returns>HASH for site as a string</returns>
    Private Shared Function GetHash(ByVal _SiteURL As String) As String
        Try
            Dim _Check1 As Long = StrToNum(_SiteURL, 5381, 33)
            Dim _Check2 As Long = StrToNum(_SiteURL, 0, 65599)
            _Check1 >>= 2
            _Check1 = ((_Check1 >> 4) And 67108800) Or (_Check1 And 63)
            _Check1 = ((_Check1 >> 4) And 4193280) Or (_Check1 And 1023)
            _Check1 = ((_Check1 >> 4) And 245760) Or (_Check1 And 16383)
            Dim T1 As Long = ((((_Check1 And 960) << 4) Or (_Check1 And 60)) << 2) Or (_Check2 And 3855)
            Dim T2 As Long = ((((_Check1 And 4294950912) << 4) Or (_Check1 And 15360)) << 10) Or (_Check2 And 252641280)
            Return Convert.ToString(T1 Or T2)
        Catch
            Return "0"
        End Try
    End Function

    ''' <summary>
    ''' Checks the HASH-string returned and adds check numbers as necessary
    ''' </summary>
    ''' <param name="_HashNum">generated HASH-string</param>
    ''' <returns>modified HASH-string</returns>
    Private Shared Function CheckHash(ByVal _HashNum As String) As String
        Try
            Dim _CheckByte As Long = 0
            Dim _Flag As Long = 0
            Dim _tempI As Long = Convert.ToInt64(_HashNum)
            If _tempI < 0 Then
                _tempI = _tempI * (-1)
            End If
            Dim _Hash As String = _tempI.ToString()
            Dim _Length As Integer = _Hash.Length
            For x As Integer = _Length - 1 To 0 Step -1
                Dim _quick As Char = _Hash(x)
                Dim _Re As Long = Convert.ToInt64(_quick.ToString())
                If 1 = (_Flag Mod 2) Then
                    _Re += _Re
                    _Re = CLng(((_Re \ 10) + (_Re Mod 10)))
                End If
                _CheckByte += _Re
                _Flag += 1
            Next
            _CheckByte = _CheckByte Mod 10
            If 0 <> _CheckByte Then
                _CheckByte = 10 - _CheckByte
                If 1 = (_Flag Mod 2) Then
                    If 1 = (_CheckByte Mod 2) Then
                        _CheckByte >>= 1
                    End If
                End If
            End If
            If _Hash.Length = 9 Then
                _CheckByte += 5
            End If
            Return "7" + _CheckByte.ToString() + _Hash
        Catch
            Return "0"
        End Try
    End Function

    ''' <summary>
    ''' Converts the string (site URL) into numbers for the HASH
    ''' </summary>
    ''' <param name="_str">Site URL as passed by GetHash()</param>
    ''' <param name="_Chk">Necessary passed value</param>
    ''' <param name="_Magic">Necessary passed value</param>
    ''' <returns>Long Integer manipulation of string passed</returns>
    Private Shared Function StrToNum(ByVal _str As String, ByVal _Chk As Long, ByVal _Magic As Long) As Long
        Try
            Dim _Int64Unit As Long = Convert.ToInt64(Math.Pow(2, 32))
            Dim _StrLen As Integer = _str.Length
            For x As Integer = 0 To _StrLen - 1
                _Chk *= _Magic
                If _Chk >= _Int64Unit Then
                    _Chk = (_Chk - (_Int64Unit * Convert.ToInt64(_Chk \ _Int64Unit)))
                    _Chk = IIf((_Chk < -2147483648), (_Chk + _Int64Unit), _Chk)
                End If
                _Chk += CLng(Asc(_str(x)))
            Next
        Catch
        End Try
        Return _Chk
    End Function
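    For readers following along, here is a minimal sketch (not from the original poster) of how the query URL might be assembled from the functions above. The example.com target is a hypothetical placeholder, and note that this sketch inserts an "&" before the ch parameter - an assumption on my part, since the URL string quoted above runs q=info: straight into ch=:

    Dim strUrl As String = "http://www.example.com/"        ' hypothetical target URL
    Dim strCheckSum As String = CheckHash(GetHash(strUrl))  ' checksum from the code above
    Dim strQuery As String = _
        "http://toolbarqueries.google.com/search?client=navclient-auto" & _
        "&ie=UTF-8&oe=UTF-8&features=Rank:&q=info:" & strUrl & _
        "&ch=" & strCheckSum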

    Read the article

  • Why is CDATA needed and not working everywhere the same way?

    - by baptx
    In Firefox's and Chrome's consoles, this works (alerts the script content):

    var script = document.createElement("script");
    script.textContent = (
        function test() {
            var a = 1;
        }
    );
    document.getElementsByTagName("head")[0].appendChild(script);
    alert(document.getElementsByTagName("head")[0].lastChild.textContent);

    Using this code as a Greasemonkey script for Firefox works too. Now, if I want to add a "private method" do() to test(), it no longer works, neither in the Firefox/Chrome console nor in a Greasemonkey script:

    var script = document.createElement("script");
    script.textContent = (
        function test() {
            var a = 1;
            var do = function () {
                var b = 2;
            };
        }
    );
    document.getElementsByTagName("head")[0].appendChild(script);
    alert(document.getElementsByTagName("head")[0].lastChild.textContent);

    To make this work in a Greasemonkey script, I have to put all the code in a CDATA tag block:

    var script = document.createElement("script");
    script.textContent = (<![CDATA[
        function test() {
            var a = 1;
            var do = function() {
                var b = 2;
            };
        }
    ]]>);
    document.getElementsByTagName("head")[0].appendChild(script);
    alert(document.getElementsByTagName("head")[0].lastChild.textContent);

    This only works in a Greasemonkey script; it throws an error in the Firefox/Chrome consoles. I don't understand why I should use a CDATA tag; I have no XML rules to respect here, because I'm not using XHTML. To make it work in the Firefox console (or Firebug), I need to put the CDATA inside E4X tags like <> and </>:

    var script = document.createElement("script");
    script.textContent = (<><![CDATA[
        function test() {
            var a = 1;
            var do = function() {
                var b = 2;
            };
        }
    ]]></>);
    document.getElementsByTagName("head")[0].appendChild(script);
    alert(document.getElementsByTagName("head")[0].lastChild.textContent);

    This doesn't work from the Chrome console. I've tried adding .toString() at the end like many people do (]]></>).toString();), but it's useless. I tried to replace <> and </> with a named tag, <foo> </foo>, but that didn't work either. Why doesn't my first code snippet work if I define var do = function(){} inside another function? Why should I use CDATA as a workaround even though I'm not using XHTML? And why should I add <> </> for the Firefox console if it works without them in a Greasemonkey script? Finally, what is the solution for Chrome and other browsers?

    EDIT: My bad - I've never used do-while in JS, and I created this example in a simple text editor, so I didn't notice that "do" is a reserved keyword :p But the problem is still here; I had not instantiated the JavaScript class in my examples. With this new example, CDATA is needed for Greasemonkey, Firefox needs the CDATA between E4X <> </> tags, and Chrome fails:

    var script = document.createElement("script");
    script.textContent = (
        <><![CDATA[
        var aClass = new aClass();
        function aClass() {
            var a = 1;
            var aPrivateMethod = function() {
                var b = 2;
                alert(b);
            };
            this.aPublicMethod = function() {
                var c = 3;
                alert(c);
            };
        }
        aClass.aPublicMethod();
        ]]></>
    );
    document.getElementsByTagName("head")[0].appendChild(script);

    Question: why?
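    One pattern that sidesteps both CDATA and E4X entirely is to keep the function as a real function in the userscript's own scope and serialize it with Function.prototype.toString() when building the script element. A minimal sketch (note the private function is renamed, since do is a reserved word):

    function test() {
        var a = 1;
        var myPrivate = function () {  // "do" is reserved, so use another name
            var b = 2;
        };
    }

    var script = document.createElement("script");
    // toString() yields the function's source; the surrounding parentheses
    // make it an expression, and the trailing () invokes it in the page.
    script.textContent = "(" + test.toString() + ")();";
    document.getElementsByTagName("head")[0].appendChild(script);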

    Read the article

  • window.open() in an iPad on load of a frame does not work

    - by user278859
    I am trying to modify a site that uses "Morten's JavaScript Tree Menu" to display PDFs in a frameset using the Adobe Reader plug-in. On the iPad the frame is useless, so I want to open the PDF in a new tab. Not wanting to mess with the tree menu, I thought I could use JavaScript in the web page being opened in the viewer frame to open a new tab with the PDF. I am using window.open() inside $(document).ready() to open the PDF in the new tab. The problem is that window.open() does not want to work on the iPad. The body of the HTML normally looks like this:

    <body>
      <object data="MypdfFileName.pdf#toolbar=1&amp;navpanes=1&amp;scrollbar=0&amp;page=1&amp;view=FitH" type="application/pdf" width="100%" height="100%">
      </object>
    </body>

    I changed it to only have a div, like this:

    <body>
      <div class="myviewer"></div>
    </body>

    Then I used the following script:

    $(document).ready(function() {
        var isMobile = {
            Android: function() {
                return navigator.userAgent.match(/Android/i) ? true : false;
            },
            BlackBerry: function() {
                return navigator.userAgent.match(/BlackBerry/i) ? true : false;
            },
            iOS: function() {
                return navigator.userAgent.match(/iPhone|iPad|iPod/i) ? true : false;
            },
            Windows: function() {
                return navigator.userAgent.match(/IEMobile/i) ? true : false;
            },
            any: function() {
                return (isMobile.Android() || isMobile.BlackBerry() || isMobile.iOS() || isMobile.Windows());
            }
        };

        if (isMobile.any()) {
            var file = "MypdfFileName.pdf";
            window.open(file);
        } else {
            var markup = "<object data='MypdfFileName.pdf#toolbar=1&amp;navpanes=1&amp;scrollbar=0&amp;page=1&amp;view=FitH' type='application/pdf' width='100%' height='100%'></object>";
            $('.myviewer').append(markup);
        }
    });

    Everything works except for window.open() on the iPad. If I switch things around, window.open() works fine on a computer. In another project I am using window.open() successfully on the iPad from an onclick function. I tried using a timer function. I also tried adding an onclick function to the div and posting a click event. In both cases they worked on a computer but not on the iPad. I am stumped. I know it would make more sense to handle the iPad in the tree menu frame, but that code is so complex that I can't figure out where to put/modify the onclick event. Is there a way to change the object so that it opens in a new tab? Is anyone familiar enough with Morten's Tree Menu code to tell me how to change the onclick event so that it opens the PDF in a new tab instead of opening a page in the frame? Thanks
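    The observed behaviour matches iOS Safari's popup blocking: window.open() is generally honoured only inside a handler for a direct user tap, not in load-time code such as $(document).ready(). A minimal sketch of one common workaround (hypothetical markup, not Morten's tree-menu code): render a tappable link in the frame instead of opening the tab automatically:

    if (isMobile.any()) {
        // iOS usually refuses window.open() outside a tap/click handler,
        // so give the user something to tap instead.
        var link = $("<a>Open the PDF</a>")
            .attr("href", "MypdfFileName.pdf")
            .attr("target", "_blank");  // opens in a new tab on the iPad
        $(".myviewer").append(link);
    } else {
        // desktop path unchanged: embed the Adobe Reader object as before
    }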

    Read the article

  • Make wix installation set upgrade to same folder

    - by Magnus Akselvoll
    How can I make a major upgrade to an installation set (MSI) built with WiX install into the same folder as the original installation? The installation is correctly detected as an upgrade, but the directory-selection screen is still shown, and with the default value (not necessarily the current installation folder). Do I have to do manual work like saving the installation folder in a registry key upon first install and then reading this key upon upgrade? If so, is there any example? Or is there some easier way to achieve this in MSI / WiX? For reference, I've pasted my current WiX file below:

    <?xml version="1.0" encoding="utf-8"?>

    <!-- Package information -->
    <Package Keywords="Installer" Id="e85e6190-1cd4-49f5-8924-9da5fcb8aee8"
             Description="Installs MyCompany Integration Framework 1.0.0"
             Comments="Installs MyCompany Integration Framework 1.0.0"
             InstallerVersion="100" Compressed="yes" />

    <Upgrade Id='9071eacc-9b5a-48e3-bb90-8064d2b2c45d'>
      <UpgradeVersion Property="PATCHFOUND" OnlyDetect="no" Minimum="0.0.1"
                      IncludeMinimum="yes" Maximum="1.0.0" IncludeMaximum="yes"/>
    </Upgrade>

    <!-- Useless but necessary... -->
    <Media Id="1" Cabinet="MyCompany.cab" EmbedCab="yes" />

    <!-- Precondition: .Net 2 must be installed -->
    <Condition Message='This setup requires the .NET Framework 2 or higher.'>
      <![CDATA[MsiNetAssemblySupport >= "2.0.50727"]]>
    </Condition>

    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="MyCompany" Name="MyCompany">
        <Directory Id="INSTALLDIR" Name="Integrat" LongName="MyCompany Integration Framework">
          <Component Id="MyCompanyDllComponent" Guid="4f362043-03a0-472d-a84f-896522ce7d2b" DiskId="1">
            <File Id="MyCompanyIntegrationDll" Name="IbIntegr.dll" src="..\Build\MyCompany.Integration.dll"
                  Vital="yes" LongName="MyCompany.Integration.dll" />
            <File Id="MyCompanyServiceModelDll" Name="IbSerMod.dll" src="..\Build\MyCompany.ServiceModel.dll"
                  Vital="yes" LongName="MyCompany.ServiceModel.dll" />
          </Component>
          <!-- More components -->
        </Directory>
      </Directory>
    </Directory>

    <Feature Id="MyCompanyProductFeature" Title='MyCompany Integration Framework'
             Description='The complete package' Display='expand' Level="1"
             InstallDefault='local' ConfigurableDirectory="INSTALLDIR">
      <ComponentRef Id="MyCompanyDllComponent" />
    </Feature>

    <!-- Task scheduler application. It has to be used as a property -->
    <Property Id="finaltaskexe" Value="MyCompany.Integration.Host.exe" />
    <Property Id="WIXUI_INSTALLDIR" Value="INSTALLDIR" />

    <InstallExecuteSequence>
      <!-- command must be executed: MyCompany.Integration.Host.exe /INITIALCONFIG parameters.xml -->
      <Custom Action='PropertyAssign' After='InstallFinalize'>NOT Installed AND NOT PATCHFOUND</Custom>
      <Custom Action='LaunchFile' After='InstallFinalize'>NOT Installed AND NOT PATCHFOUND</Custom>
      <RemoveExistingProducts Before='CostInitialize' />
    </InstallExecuteSequence>

    <!-- execute command -->
    <CustomAction Id='PropertyAssign' Property='PathProperty' Value='[INSTALLDIR][finaltaskexe]' />
    <CustomAction Id='LaunchFile' Property='PathProperty' ExeCommand='/INITIALCONFIG "[INSTALLDIR]parameters.xml"' Return='asyncNoWait' />

    <!-- User interface information -->
    <UIRef Id="WixUI_InstallDir" />
    <UIRef Id="WixUI_ErrorProgressText" />
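    For the record, the usual answer here is exactly the "remember me" pattern the question suspects: persist INSTALLDIR in the registry and read it back with a RegistrySearch before the directory dialog runs. A minimal sketch only - the registry key path and the component GUID are hypothetical placeholders, and the element names should be double-checked against the WiX version in use (the file above looks like WiX 2-era syntax):

    <!-- Read a previously saved install location (if any) into INSTALLDIR. -->
    <Property Id="INSTALLDIR">
      <RegistrySearch Id="RememberedInstallDir" Root="HKLM"
                      Key="Software\MyCompany\Integration Framework"
                      Name="InstallDir" Type="raw" />
    </Property>

    <!-- Inside the INSTALLDIR directory: save the chosen location on install. -->
    <Component Id="RememberInstallDirComponent"
               Guid="PUT-A-REAL-GUID-HERE" DiskId="1">
      <Registry Root="HKLM" Key="Software\MyCompany\Integration Framework"
                Name="InstallDir" Action="write" Type="string"
                Value="[INSTALLDIR]" />
    </Component>

    The new component also needs a matching <ComponentRef Id="RememberInstallDirComponent" /> inside the feature, so that the value is actually written on first install.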

    Read the article

  • -Java- Swing GUI - Moving around components specifically with layouts

    - by Xemiru Scarlet Sanzenin
    I'm making a little test GUI for something I'm working on. However, problems occur with the positioning of the panels.

    public winInit() {
        super("Chatterbox - Login");
        try {
            UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
        } catch(ClassNotFoundException e) {
        } catch (InstantiationException e) {
        } catch (IllegalAccessException e) {
        } catch (UnsupportedLookAndFeelException e) {
        }
        setSize(300, 135);

        pn1 = new JPanel();
        pn2 = new JPanel();
        pn3 = new JPanel();

        l1 = new JLabel("Username");
        l2 = new JLabel("Password");
        l3 = new JLabel("Random text here");
        l4 = new JLabel("Server Address");
        l5 = new JLabel("No address set.");

        i1 = new JTextField(10);
        p1 = new JPasswordField(10);

        b1 = new JButton("Connect");
        b2 = new JButton("Register");
        b3 = new JButton("Set IP");

        l4.setBounds(10, 12, getDim(l4).width, getDim(l4).height);
        l1.setBounds(10, 35, getDim(l1).width, getDim(l1).height);
        l2.setBounds(10, 60, getDim(l2).width, getDim(l2).height);
        l3.setBounds(10, 85, getDim(l3).width, getDim(l3).height);
        l5.setBounds(l4.getBounds().width + 14, 12, l5.getPreferredSize().width, l5.getPreferredSize().height);
        l5.setForeground(Color.gray);

        i1.setBounds(getDim(l1).width + 15, 35, getDim(i1).width, getDim(i1).height);
        p1.setBounds(getDim(l1).width + 15, 60, getDim(p1).width, getDim(p1).height);

        b1.setBounds(getDim(l1).width + getDim(i1).width + 23, 34, getDim(b2).width, getDim(b1).height - 5);
        b2.setBounds(getDim(l1).width + getDim(i1).width + 23, 60, getDim(b2).width, getDim(b2).height - 5);
        b3.setBounds(getDim(l1).width + getDim(i1).width + 23, 10, getDim(b2).width, getDim(b3).height - 5);

        b1.addActionListener(clickButton);
        b2.addActionListener(clickButton);
        b3.addActionListener(clickButton);

        pn1.setLayout(new FlowLayout(FlowLayout.RIGHT));
        pn2.setLayout(new FlowLayout(FlowLayout.RIGHT));

        pn1.add(l1);
        pn1.add(i1);
        pn1.add(b1);
        pn2.add(l2);
        pn2.add(p1);
        pn2.add(b2);

        add(pn1);
        add(pn2);
    }

    I am attempting to use FlowLayout to position the panels in the way desired. I'd use BorderLayout while adding, but the vertical spacing ends up too far apart when I just use the directions closest to one another. The output of this code is a window, 300x150, that places whatever's in the two panels in the exact same space. Yes, I realize there's useless code there with setBounds(), but that was just me screwing around with absolute positioning, which wasn't working out for me either.
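    For the label/field/button rows being attempted here, nested panels under layout managers usually beat setBounds(). A minimal sketch, assuming the same fields as the constructor above (only the first two rows shown):

    // Each row is a small FlowLayout panel; the rows stack vertically.
    getContentPane().setLayout(new BoxLayout(getContentPane(), BoxLayout.Y_AXIS));

    JPanel row1 = new JPanel(new FlowLayout(FlowLayout.RIGHT));
    row1.add(l1);  // "Username"
    row1.add(i1);
    row1.add(b1);  // "Connect"

    JPanel row2 = new JPanel(new FlowLayout(FlowLayout.RIGHT));
    row2.add(l2);  // "Password"
    row2.add(p1);
    row2.add(b2);  // "Register"

    add(row1);
    add(row2);
    pack();  // let the layout managers compute the size instead of setSize()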

    Read the article

  • Opinions on Copy Protection / Software Licensing via phoning home?

    - by Jakobud
    I'm developing some software that I'm going to eventually sell. I've been thinking about different copy-protection mechanisms, both custom and 3rd-party. I know that no copy protection is 100% foolproof, but I need to at least try. So I'm looking for some opinions on the approach I'm thinking about:

    One method I'm considering is just having my software connect to a remote server when it starts up, in order to verify the license based off the MAC address of the Ethernet port. I'm not sure if the server would be running a MySQL database that retrieves the license information, or what... Is there a simpler way? Maybe some type of encrypted file that is read? I would make the software still work if it can't connect to the server. I don't want to lock someone out just because they don't have internet access at that moment in time. In case you are wondering, the software I'm developing is extremely internet/network dependent, so it's actually quite unlikely that the user wouldn't have internet access when using it. Actually, it's pretty useless without internet/network access.

    Does anyone know what I should do about computers that have multiple MAC addresses? A lot of motherboards these days have 2 Ethernet ports, and most laptops have an Ethernet, a Wi-Fi and a Bluetooth MAC address. I suppose I could just pick one port and run with it. Not sure if it really matters.

    A smart and tricky user could determine the server that the software is connecting to and perhaps add it to their hosts file so that it always tries to connect to localhost. How likely do you think this is? And do you think it's possible for the software to check if this is being done? I guess parsing the hosts file could always work: look for your server address in there and see if it's being redirected to localhost or something.

    I've considered dongles, but I'm trying to avoid them just because I know they are a pain to work with. Keeping them updated and possibly requiring the customer to run their own license server is a bit too much for me. I've experienced that, and it's a bit of a pain that I wouldn't want to put my customers through. I'm also trying to avoid the extra overhead cost of using 3rd-party dongles.

    Also, I'm leaning toward connecting to a remote server to verify authentication, as opposed to just sending the user some sort of license file, because what happens when the user buys a new computer? I have to send them a replacement license file that will work with their new computer, but they will still be able to use it on their old computer as well. There is no way for me to 'de-authorize' their old computer without asking them to run some program on it or something.

    One important note: in the EULA I would make it very clear to the user that the software connects to a remote server to verify licensing and that no personal information is sent. I know I don't care much for software that does that kind of stuff without me knowing.

    Anyways, I'm just looking for some opinions from people who have maybe gone down this kind of road. It seems like remote-server-dependent software would be one of the most effective copy-protection mechanisms, not just because of the difficulty of circumventing it, but also because it could make it pretty easy to manage the licenses on the developer's end.
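    On the multiple-MAC question: .NET can enumerate every adapter through the standard System.Net.NetworkInformation API, so one option is to hash the whole sorted set rather than pick one port. A minimal sketch only - how stable any of these addresses are in practice (USB adapters, docking stations) remains the open question raised above:

    using System;
    using System.Linq;
    using System.Net.NetworkInformation;

    static class MachineId
    {
        // Collect the MACs of non-virtual-looking adapters, sorted so the
        // result does not depend on enumeration order.
        public static string[] GetMacAddresses()
        {
            return NetworkInterface.GetAllNetworkInterfaces()
                .Where(nic => nic.NetworkInterfaceType != NetworkInterfaceType.Loopback
                           && nic.NetworkInterfaceType != NetworkInterfaceType.Tunnel)
                .Select(nic => nic.GetPhysicalAddress().ToString())
                .Where(mac => mac.Length > 0)
                .OrderBy(mac => mac)
                .ToArray();
        }
    }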

    Read the article

  • I never really understood: what is Application Binary Interface (ABI)?

    - by claws
    I never clearly understood what an ABI is. I'm sorry for such a lengthy question; I just want to clearly understand things. Please don't point me to the wiki article: if I could understand it, I wouldn't be here posting such a lengthy post.

    This is my mindset about different interfaces: a TV remote is an interface between the user and the TV. It is an existing entity, but useless (it doesn't provide any functionality) by itself. All the functionality for each of those buttons on the remote is implemented in the television set.

    Interface: it is an "existing entity" layer between the functionality and the consumer of that functionality. An interface by itself doesn't do anything; it just invokes the functionality lying behind it. Now, depending on who the consumer is, there are different types of interfaces.

    Command Line Interface (CLI): commands are the existing entities, the consumer is the user, and the functionality lies behind.
    - functionality: my software's functionality, which solves the purpose for which we are describing this interface
    - existing entities: commands
    - consumer: user

    Graphical User Interface (GUI): windows, buttons, etc. are the existing entities; again the consumer is the user, and the functionality lies behind.
    - functionality: my software's functionality, which solves the purpose for which we are describing this interface
    - existing entities: windows, buttons, etc.
    - consumer: user

    Application Programming Interface (API): functions - or, to be more correct, interfaces (in interface-based programming) - are the existing entities; the consumer here is another program, not a user, and again the functionality lies behind this layer.
    - functionality: my software's functionality, which solves the purpose for which we are describing this interface
    - existing entities: functions, interfaces (arrays of functions)
    - consumer: another program/application

    Application Binary Interface (ABI): here is where my problem starts.
    - functionality: ???
    - existing entities: ???
    - consumer: ???

    I've written a few pieces of software in different languages and provided different kinds of interfaces (CLI, GUI, and API), but I'm not sure if I ever provided an ABI.

    http://en.wikipedia.org/wiki/Application_binary_interface says: ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system. Other ABIs standardize details such as the C++ name mangling,[2] exception propagation,[3] and calling convention between compilers on the same platform, but do not require cross-platform compatibility.

    Who needs these details? Please don't say the OS. I know assembly programming. I know how linking and loading work. I know exactly what happens inside. Where does C++ name mangling come in? I thought we were talking at the binary level. Where do languages come in?

    Anyway, I've downloaded the [PDF] System V Application Binary Interface Edition 4.1 (1997-03-18) to see what exactly it contains. Well, most of it didn't make any sense. Why does it contain two chapters (the 4th and 5th) that describe the ELF file format? In fact, those are the only two significant chapters of that specification; the rest of the chapters are "processor specific". Anyway, I thought that was a completely different topic. Please don't say that the ELF file format specs are the ABI; they don't qualify as an interface, according to the definition.

    I know, since we are talking at such a low level, it must be very specific. But I'm not sure how it is "Instruction Set Architecture (ISA)" specific. Where can I find MS Windows' ABI? So, these are the major queries that are bugging me.
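    One concrete place where an ABI becomes visible to a programmer is linking C++ against C: the name-mangling scheme is part of the platform's C++ ABI, and extern "C" opts out of it. A small illustrative sketch - the mangled name shown is what a typical Itanium-ABI compiler such as g++ produces, not a universal guarantee:

    // add.cpp
    int add(int a, int b) { return a + b; }
    // With C++ linkage, g++ emits a mangled symbol such as _Z3addii,
    // encoding the parameter types -- a detail fixed by the C++ ABI.

    extern "C" int add_c(int a, int b) { return a + b; }
    // With C linkage the symbol is plain add_c, so a C compiler (or a C++
    // compiler with a different mangling scheme) can link against it.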

    Read the article

  • Using AsyncTask, but experiencing unexpected behaviour

    - by capcom
    Please refer to the following code, which continuously calls a new AsyncTask. The purpose of the AsyncTask is to make an HTTP request and update the string result.

    package room.temperature;

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.concurrent.ExecutionException;
    import org.apache.http.HttpEntity;
    import org.apache.http.HttpResponse;
    import org.apache.http.NameValuePair;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.entity.UrlEncodedFormEntity;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.impl.client.DefaultHttpClient;
    import android.app.Activity;
    import android.os.AsyncTask;
    import android.os.Bundle;
    import android.util.Log;
    import android.widget.TextView;

    public class RoomTemperatureActivity extends Activity {

        String result = null;
        StringBuilder sb = null;
        TextView TemperatureText, DateText;
        ArrayList<NameValuePair> nameValuePairs;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            TemperatureText = (TextView) findViewById(R.id.temperature);
            DateText = (TextView) findViewById(R.id.date);
            nameValuePairs = new ArrayList<NameValuePair>();
            for (int i = 0; i < 10; i++) {
                RefreshValuesTask task = new RefreshValuesTask();
                task.execute("");
            }
        }

        // The definition of our task class
        private class RefreshValuesTask extends AsyncTask<String, Integer, String> {

            @Override
            protected void onPreExecute() {
                super.onPreExecute();
            }

            @Override
            protected String doInBackground(String... params) {
                InputStream is = null;
                try {
                    HttpClient httpclient = new DefaultHttpClient();
                    HttpPost httppost = new HttpPost("http://mywebsite.com/roomtemp/tempscript.php");
                    httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                    HttpResponse response = httpclient.execute(httppost);
                    HttpEntity entity = response.getEntity();
                    is = entity.getContent();
                } catch (Exception e) {
                    Log.e("log_tag", "Error in http connection" + e.toString());
                }
                try {
                    BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8);
                    sb = new StringBuilder();
                    sb.append(reader.readLine());
                    is.close();
                    result = sb.toString();
                } catch (Exception e) {
                    Log.e("log_tag", "Error converting result " + e.toString());
                }
                return result;
            }

            @Override
            protected void onProgressUpdate(Integer... values) {
                super.onProgressUpdate(values);
            }

            @Override
            protected void onPostExecute(String result) {
                super.onPostExecute(result);
                //System.out.println(result);
                setValues(result);
            }
        }

        public void setValues(String resultValue) {
            System.out.println(resultValue);
            String[] values = resultValue.split("&");
            TemperatureText.setText(values[0]);
            DateText.setText(values[1]);
        }
    }

    The problem I am experiencing relates to the AsyncTask in some way, or to the function setValues(), but I am not sure how. Essentially, I want each call to the AsyncTask to run, eventually in an infinite loop, and update the TextView fields as I have attempted in setValues(). I have been trying since yesterday, after asking a question which led to this code (for reference). Oh yes, I did try using the AsyncTask get() method, but that didn't work either, as I found out that it is actually a synchronous call and renders the whole point of AsyncTask useless.
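    If the goal is a continuous refresh rather than ten simultaneous tasks, one common pattern is to schedule the next task from onPostExecute() with a Handler, so only one request is in flight at a time and the UI thread stays responsive. A minimal sketch against the code above; the 60-second interval is an arbitrary choice:

    private final Handler handler = new Handler();
    private static final long REFRESH_MS = 60 * 1000; // hypothetical interval

    private void scheduleRefresh(long delayMs) {
        handler.postDelayed(new Runnable() {
            @Override
            public void run() {
                new RefreshValuesTask().execute("");
            }
        }, delayMs);
    }

    // In onCreate(), replace the for loop with a single first request:
    //     scheduleRefresh(0);
    // In RefreshValuesTask.onPostExecute(), after setValues(result):
    //     scheduleRefresh(REFRESH_MS);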

    Read the article

  • Working with PivotTables in Excel

    - by Mark Virtue
    PivotTables are one of the most powerful features of Microsoft Excel. They allow large amounts of data to be analyzed and summarized in just a few mouse clicks. In this article, we explore PivotTables, understand what they are, and learn how to create and customize them.

    Note: This article is written using Excel 2010 (Beta). The concept of a PivotTable has changed little over the years, but the method of creating one has changed in nearly every iteration of Excel. If you are using a version of Excel that is not 2010, expect different screens from the ones you see in this article.

    A Little History

    In the early days of spreadsheet programs, Lotus 1-2-3 ruled the roost. Its dominance was so complete that people thought it was a waste of time for Microsoft to bother developing their own spreadsheet software (Excel) to compete with Lotus. Flash-forward to 2010, and Excel's dominance of the spreadsheet market is greater than Lotus's ever was, while the number of users still running Lotus 1-2-3 is approaching zero. How did this happen? What caused such a dramatic reversal of fortunes?

    Industry analysts put it down to two factors: Firstly, Lotus decided that this fancy new GUI platform called "Windows" was a passing fad that would never take off. They declined to create a Windows version of Lotus 1-2-3 (for a few years, anyway), predicting that their DOS version of the software was all anyone would ever need. Microsoft, naturally, developed Excel exclusively for Windows. Secondly, Microsoft developed a feature for Excel that Lotus didn't provide in 1-2-3, namely PivotTables. The PivotTables feature, exclusive to Excel, was deemed so staggeringly useful that people were willing to learn an entire new software package (Excel) rather than stick with a program (1-2-3) that didn't have it. This one feature, along with the misjudgment of the success of Windows, was the death-knell for Lotus 1-2-3, and the beginning of the success of Microsoft Excel.

    Understanding PivotTables

    So what is a PivotTable, exactly? Put simply, a PivotTable is a summary of some data, created to allow easy analysis of said data. But unlike a manually created summary, Excel PivotTables are interactive. Once you have created one, you can easily change it if it doesn't offer the exact insights into your data that you were hoping for. In a couple of clicks the summary can be "pivoted" – rotated in such a way that the column headings become row headings, and vice versa. There's a lot more that can be done, too. Rather than try to describe all the features of PivotTables, we'll simply demonstrate them…

    The data that you analyze using a PivotTable can't be just any data – it has to be raw data, previously unprocessed (unsummarized) – typically a list of some sort. An example of this might be the list of sales transactions in a company for the past six months. Examine the data shown below:

    Notice that this is not raw data. In fact, it is already a summary of some sort. In cell B3 we can see $30,000, which apparently is the total of James Cook's sales for the month of January. So where is the raw data? How did we arrive at the figure of $30,000? Where is the original list of sales transactions that this figure was generated from? It's clear that somewhere, someone must have gone to the trouble of collating all of the sales transactions for the past six months into the summary we see above. How long do you suppose this took? An hour? Ten? Probably.
    If we were to track down the original list of sales transactions, it might look something like this:

    You may be surprised to learn that, using the PivotTable feature of Excel, we can create a monthly sales summary similar to the one above in a few seconds, with only a few mouse clicks. We can do this – and a lot more too!

    How to Create a PivotTable

    First, ensure that you have some raw data in a worksheet in Excel. A list of financial transactions is typical, but it can be a list of just about anything: employee contact details, your CD collection, or fuel consumption figures for your company's fleet of cars. So we start Excel… …and we load such a list…

    Once we have the list open in Excel, we're ready to start creating the PivotTable. Click on any one single cell within the list: Then, from the Insert tab, click the PivotTable icon:

    The Create PivotTable box appears, asking you two questions: What data should your new PivotTable be based on, and where should it be created? Because we already clicked on a cell within the list (in the step above), the entire list surrounding that cell is already selected for us ($A$1:$G$88 on the Payments sheet, in this example). Note that we could select a list in any other region of any other worksheet, or even some external data source, such as an Access database table, or even an MS-SQL Server database table. We also need to select whether we want our new PivotTable to be created on a new worksheet, or on an existing one. In this example we will select a new one:

    The new worksheet is created for us, and a blank PivotTable is created on that worksheet: Another box also appears: the PivotTable Field List. This field list will be shown whenever we click on any cell within the PivotTable (above):

    The list of fields in the top part of the box is actually the collection of column headings from the original raw data worksheet. The four blank boxes in the lower part of the screen allow us to choose the way we would like our PivotTable to summarize the raw data. So far, there is nothing in those boxes, so the PivotTable is blank. All we need to do is drag fields down from the list above and drop them in the lower boxes. A PivotTable is then automatically created to match our instructions. If we get it wrong, we only need to drag the fields back to where they came from and/or drag new fields down to replace them.

    The Values box is arguably the most important of the four. The field that is dragged into this box represents the data that needs to be summarized in some way (by summing, averaging, finding the maximum, minimum, etc). It is almost always numerical data. A perfect candidate for this box in our sample data is the "Amount" field/column. Let's drag that field into the Values box:

    Notice that (a) the "Amount" field in the list of fields is now ticked, and "Sum of Amount" has been added to the Values box, indicating that the amount column has been summed.

    If we examine the PivotTable itself, we indeed find the sum of all the "Amount" values from the raw data worksheet:

    We've created our first PivotTable! Handy, but not particularly impressive. It's likely that we need a little more insight into our data than that.

    Referring to our sample data, we need to identify one or more column headings that we could conceivably use to split this total. For example, we may decide that we would like to see a summary of our data where we have a row heading for each of the different salespersons in our company, and a total for each.
    To achieve this, all we need to do is to drag the "Salesperson" field into the Row Labels box:

    Now, finally, things start to get interesting! Our PivotTable starts to take shape…. With a couple of clicks we have created a table that would have taken a long time to do manually.

    So what else can we do? Well, in one sense our PivotTable is complete. We've created a useful summary of our source data. The important stuff is already learned! For the rest of the article, we will examine some ways that more complex PivotTables can be created, and ways that those PivotTables can be customized.

    First, we can create a two-dimensional table. Let's do that by using "Payment Method" as a column heading. Simply drag the "Payment Method" heading to the Column Labels box: Which looks like this: Starting to get very cool!

    Let's make it a three-dimensional table. What could such a table possibly look like? Well, let's see… Drag the "Package" column/heading to the Report Filter box: Notice where it ends up…. This allows us to filter our report based on which "holiday package" was being purchased. For example, we can see the breakdown of salesperson vs payment method for all packages, or, with a couple of clicks, change it to show the same breakdown for the "Sunseekers" package: And so, if you think about it the right way, our PivotTable is now three-dimensional.

    Let's keep customizing… If it turns out, say, that we only want to see cheque and credit card transactions (i.e. no cash transactions), then we can deselect the "Cash" item from the column headings. Click the drop-down arrow next to Column Labels, and untick "Cash": Let's see what that looks like… As you can see, "Cash" is gone.

    Formatting

    This is obviously a very powerful system, but so far the results look very plain and boring. For a start, the numbers that we're summing do not look like dollar amounts – just plain old numbers. Let's rectify that.

    A temptation might be to do what we're used to doing in such circumstances and simply select the whole table (or the whole worksheet) and use the standard number formatting buttons on the toolbar to complete the formatting. The problem with that approach is that if you ever change the structure of the PivotTable in the future (which is 99% likely), then those number formats will be lost. We need a way that will make them (semi-)permanent.

    First, we locate the "Sum of Amount" entry in the Values box, and click on it. A menu appears. We select Value Field Settings… from the menu: The Value Field Settings box appears. Click the Number Format button, and the standard Format Cells box appears: From the Category list, select (say) Accounting, and drop the number of decimal places to 0. Click OK a few times to get back to the PivotTable… As you can see, the numbers have been correctly formatted as dollar amounts.

    While we're on the subject of formatting, let's format the entire PivotTable. There are a few ways to do this. Let's use a simple one… Click the PivotTable Tools/Design tab: Then drop down the arrow in the bottom-right of the PivotTable Styles list to see a vast collection of built-in styles: Choose any one that appeals, and look at the result in your PivotTable:

    Other Options

    We can work with dates as well. Now usually, there are many, many dates in a transaction list such as the one we started with. But Excel provides the option to group data items together by day, week, month, year, etc. Let's see how this is done.
First, let’s remove the “Payment Method” column from the Column Labels box (simply drag it back up to the field list), and replace it with the “Date Booked” column: As you can see, this makes our PivotTable instantly useless, giving us one column for each date that a transaction occurred on – a very wide table! To fix this, right-click on any date and select Group… from the context-menu: The grouping box appears.  We select Months and click OK: Voila!  A much more useful table: (Incidentally, this table is virtually identical to the one shown at the beginning of this article – the original sales summary that was created manually.) Another cool thing to be aware of is that you can have more than one set of row headings (or column headings): …which looks like this…. You can do a similar thing with column headings (or even report filters). Keeping things simple again, let’s see how to plot averaged values, rather than summed values. First, click on “Sum of Amount”, and select Value Field Settings… from the context-menu that appears: In the Summarize value field by list in the Value Field Settings box, select Average: While we’re here, let’s change the Custom Name, from “Average of Amount” to something a little more concise.  Type in something like “Avg”: Click OK, and see what it looks like.  Notice that all the values change from summed totals to averages, and the table title (top-left cell) has changed to “Avg”: If we like, we can even have sums, averages and counts (counts = how many sales there were) all on the same PivotTable! Here are the steps to get something like that in place (starting from a blank PivotTable): Drag “Salesperson” into the Column Labels Drag “Amount” field down into the Values box three times For the first “Amount” field, change its custom name to “Total” and it’s number format to Accounting (0 decimal places) For the second “Amount” field, change its custom name to “Average”, its function to Average and it’s number format to Accounting (0 decimal places) For the third “Amount” field, change its name to “Count” and its function to Count Drag the automatically created field from Column Labels to Row Labels Here’s what we end up with: Total, average and count on the same PivotTable! Conclusion There are many, many more features and options for PivotTables created by Microsoft Excel – far too many to list in an article like this.  To fully cover the potential of PivotTables, a small book (or a large website) would be required.  Brave and/or geeky readers can explore PivotTables further quite easily:  Simply right-click on just about everything, and see what options become available to you.  There are also the two ribbon-tabs: PivotTable Tools/Options and Design.  It doesn’t matter if you make a mistake – it’s easy to delete the PivotTable and start again – a possibility old DOS users of Lotus 1-2-3 never had. We’ve included an Excel that should work with most versions of Excel, so you can download to practice your PivotTable skills. 
    Download Our Practice Excel File

    Read the article

  • CodePlex Daily Summary for Thursday, April 01, 2010

    CodePlex Daily Summary for Thursday, April 01, 2010

    New Projects
    ASP.NET Bing Maps: Extensible and easy to use, this is the ASP.NET Bing Maps Control. Drag & drop and it is ready to go. You can configure map mode, map style, add a PushPin...
    Bricks' Bane: Bricks' Bane is a brick breaker game developed using XNA and published on Xbox Live Indie Games. Source code includes a C# library useful for game d...
    cURL for dotnet: Another dotnet binding for libcurl; see http://curl.haxx.se for more info about cURL/libcurl.
    Custom Functoid que acessa o banco de dados SQL: A functoid for BizTalk Server 2006 using data from SQL Server 2005.
    FEI STU Pharmacy e-shop: An electronic shop for medicines. Create a simple client-server application implementing an electronic medicine shop. Modules: 1. e-shop f...
    Flavours of Wix: Investigating building DSLs to create installers based on WiX.
    Fulcrum: Fulcrum is a code generation framework built on top of the T4 technology in Visual Studio.
    GreviousAngel: New team project.
    Habanero Inferno: Habanero Inferno coming soon.
    Kawo Pounga !: A useless game !!!
    LetsXNA!!: This is a project created by members of the LinkedIn group Lets XNA!! to build an XNA game and have fun in the process. The goal is to build a simple ...
    Linq To Naver, Custom Linq Provider for Naver search-engine OpenAPI: Linq to Naver, written in C#.
    LocoSync: LocoSync is a file synchronization/backup/archiver program which is easily extendable. It is easy to add new synchronization methods using C# code.
    Natural Language Processing: Natural Language Processing.
    Nop Commerce Azure: This project lets you set up your online e-commerce site quickly and simply while benefiting from all the advantages of the platf...
    Nwinsock: Nwinsock is a component for networking: object transfer, packet compression, support for the TCP and UDP protocols, thread based.
    OnTime: OnTime is a simple program from that matching game back in the day, just to bring light to programming techniques. It's developed in C#.
    OpenGL ES 2.0 Compact Framework Wrapper: OpenGL ES 2.0 wrapper for the .NET Compact Framework. Developed on an HTC HD2 device but should run on any Windows Mobile device that has the correct lib...
    ortaknokta: This project is being prepared to allow several people to come together and hold discussions on the topics they want.
    P-Data: P-Data is a tool for obtaining information from data files (data profiling) through SQL queries, automating...
    PowerAuras: Addon for World of Warcraft - displays effects on screen under different conditions.
    PowerShell ToodleDo Module: PowerShell module for interacting with the toodledo.com online to-do list site.
    RSS Reader for Windows Phone 7: This is an RSS Reader application for Windows Phone 7.
    Streamlet Containers: This is my implementation of STL-style containers, including a dynamic array, a doubly-linked list and an r-b-tree. Just for practice. Please feel free...
    Troav: Social encyclopedia built using C# and the Orchard framework.
    Umbraco App_Code/Usercontrol Editor: Package for Umbraco to add App_Code and user-control editing to the Developer section of the Umbraco administration system. Will support GeSHi editi...
    Vczh Reactive Programming Library: The reactive programming library provides a stream or state-machine view for using .NET events.
    WhoIs XML API: The project uses the public WhoIs XML API service (http://www.whoisxmlapi.com/) to obtain detailed details. The project is written in C# and serial...
    WPF FlowDocument Examples for VS2008 and VS2010: WPF text samples (especially FlowDocument) on the various possible effects: sub- and super-script, ruby (a.k.a. furigana), and various others...
    You are here (for Windows Mobile): This sample shows you how to play a *.wav file on your device with Compact Framework 2.0. There is better support for playing music on Compact F...

    New Releases
    ( λunula ): Lunula 0.4.0: Changelog: Implemented a virtual machine. Implemented a compiler for the virtual machine. Added first-class continuations (call/cc). Removed co...
    Alter gear SQL index Management: Setup 1.0.1: Changes: Test connection - successful message. Connection string timeout property added. Setup project added to project source code. Possible issu...
    ASP.NET Bing Maps: ASP.NET Bing Maps 0.1b: Extensible and easy to use, this is the ASP.NET Bing Maps Control. Drag & drop and it is ready to go. You can configure map mode, map ...
    ASP.NET MVC Validation Library: ASP.NET MVC Validation Library 1.3: Changes since 1.2: support for remote validation; support for custom server-side validation; the design of the validation attribute is improved. Note: test...
    BigDays 2010: HelfenHelfen - v1: PLEASE NOTE: This project is published under the Microsoft Public License (Ms-PL). http://bigdays10.codeplex.com/license IT IS A DEMO SOLUTION FOR...
    Caps - Manage your collection!: Caps Console 0.1.4.0 Alpha: This is a preview release (alpha quality). This release contains only a limited number of fixes and new features from the user's point of view. Major focus f...
    CSharpQuery: Version 1.0: This version is stable. Please report any possible bugs. The next release will include a sample project and index management tools. Until then, pl...
    Custom Functoid que acessa o banco de dados SQL: Custom Functoid SQL Server: Visual Studio solution with source code and the functoid's SQL script for BizTalk.
    Dawf: Dual Audio Workflow: Beta 3: Suppose two good audio events overlap in time with a video event of interest (this can only happen if PluralEyes isn't used on everything). Befo...
    Dirac codec user interface: Dirac User Interface (checkin 37132): Same as the 36795 version, but done with the latest source code.
    DotNetNuke® Blog: 04.00.00 RC 3: PLEASE NOTE: You may upgrade an RC 2 install, but please do not upgrade previous beta releases - please start from RC 2 or 03.05.0...
    DotNetNuke® Skinning Extensions: SimpleTitle Skin Object: This is an example skin object that only renders the "page name" if used in a skin and the "module title" if used in a container. No extra spans, c...
    Fulcrum: Fulcrum v0.9: Initial release of Fulcrum.
    HelloTipi Photos Uploader: Version 2010.03.31: Some very small fixes: fixed the Send button; fixed being unable to interact with the application during an upload.
    kdar: KDAR 0.0.18: KDAR - Kernel Debugger Anti Rootkit - dispatch tables' signature bases updated (many drivers), scripts refactored, some bugs fixed.
    Legend: Legend Libraries: The latest release.
    Linq To Naver, Custom Linq Provider for Naver search-engine OpenAPI: Linq to Naver: Linq to Naver.
    Live at Education Meta Web-Service: Live at Education Meta Web Service v. 1.0: We're happy to publish the final version of the Live at Education Meta Web Service (LAEMWS). In this release: a huge list of Windows Live ID enabled servic...
    Live@edu SSO WebPart for MOSS 2007: WebPart 2.0: This release is based on the Live@edu Meta Web Service (laemws - http://laemws.codeplex.com). It is highly recommended to use the laemws version of the webpart,...
    LocoSync: LocoSync v0.1r2010.03.31 installer: This is the first public release. Unzip and run setup. Or, if you have the .NET 3.5 runtime available, download the executable and try...
    Natural Language Processing: test1: Test.
    Nop Commerce Azure: Nop Commerce Azure: NopCommerce full sources with additional Azure projects.
    Nwinsock: NWinsock: Nwinsock version 1.0 is here.
    Open NFe: DANFe v1.9.8: CST correction.
    OpenGL ES 2.0 Compact Framework Wrapper: v0.1 Sources: First rough release. It has a working sample application which renders a triangle with rotation. Don't expect anything great; just a very early ...
    patterns & practices - Windows Azure Guidance: Code Drop 3: Second iteration of a-Expense on Azure. This release builds on the previous one and mainly focuses on replacing SQL Azure with Table Storage. We hav...
    Posh4DNN: Posh4DNN Scripts 2.0: This release greatly increases the speed of installation and incorporates the use of IIS and SQL Server snap-ins for managing those services. Inst...
    Process Enactment Tool Framework: PET 1.1: PET Core: new intermediate model with arbitrary "clean" relations among objects and several updates of the object fields (see DependencyInterfacesA...
    Project Tru Tiên: Elements-test V1-fix (v1): This is Elements-test V1 with the following fixes: fixed the display of the Tiger Kirin mount; fixed the Chinese-language tab (now translated to Vietnamese); fixed the displ...
    Sentinel - Log Viewer: Sentinel 0.8.1 (nLog support): Build of the 0.8.1 code (svn revision 36823), which includes support for both nLog and log4net; this has been in SVN for a while but didn't have a bi...
    sgMotion Animation Library: SgMotion v1.1 (For Sunburn 1.3.1): This release includes both a Windows & Xbox sample. The sample is set to default to forward rendering, but can e...
    sTASKedit: sTASKedit 44538 (Developer Alpha): Nearly all fields are viewable in this release, for task verification and the identification of unknowns.
    Test Project (ignore): asdf asdf asdf asdf asdf asdf asdf sadf sdf asdf a: ;dlf jkasdf ;lkasjdf ...
    Test Project (ignore): cdscs: csdcacac
    Troav: Traov20100331 Source Pre-Alpha: These are some experiments with implementing custom modules with Microsoft's Orchard framework. This is very preliminary and subject to change.
    Weather Report WebControls: WebWeatherReport: Source code of the main files.
    WhoIs XML API: Initial Release: Initial release.
    You are here (for Windows Mobile): CAB file and Source Code: You can find more controls and samples for Windows Mobile developers at: http://www.beemobile4.net

    Most Popular Projects
    hmrEngine, Rawr, WBFS Manager, ASP.NET Ajax Library, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, AJAX Control Toolkit, Windows Presentation Foundation (WPF), ASP.NET, LiveUpload to Facebook

    Most Active Projects
    Rawr, Graffiti CMS, Base Class Libraries, jQuery Library for SharePoint Web Services, BlogEngine.NET, Microsoft Biology Foundation, N2 CMS, LINQ to Twitter, Managed Extensibility Framework, Farseer Physics Engine

    Read the article

  • Why should main() be short?

    - by Stargazer712
    I've been programming for over 9 years, and according to the advice of my first programming teacher, I always keep my main() function extremely short. At first I had no idea why. I just obeyed without understanding, much to the delight of my professors. After gaining experience, I realized that if I designed my code correctly, having a short main() function just sort of happened. Writing modularized code and following the single responsibility principle allowed my code to be designed in "bunches", and main() served as nothing more than a catalyst to get the program running. Fast forward to a few weeks ago: I was looking at Python's source code, and I found the main() function: /* Minimal main program -- everything is loaded from the library */ ... int main(int argc, char **argv) { ... return Py_Main(argc, argv); } Yay Python. Short main() function == Good code. Programming teachers were right. Wanting to look deeper, I took a look at Py_Main. In its entirety, it is defined as follows: /* Main program */ int Py_Main(int argc, char **argv) { int c; int sts; char *command = NULL; char *filename = NULL; char *module = NULL; FILE *fp = stdin; char *p; int unbuffered = 0; int skipfirstline = 0; int stdin_is_interactive = 0; int help = 0; int version = 0; int saw_unbuffered_flag = 0; PyCompilerFlags cf; cf.cf_flags = 0; orig_argc = argc; /* For Py_GetArgcArgv() */ orig_argv = argv; #ifdef RISCOS Py_RISCOSWimpFlag = 0; #endif PySys_ResetWarnOptions(); while ((c = _PyOS_GetOpt(argc, argv, PROGRAM_OPTS)) != EOF) { if (c == 'c') { /* -c is the last option; following arguments that look like options are left for the command to interpret. */ command = (char *)malloc(strlen(_PyOS_optarg) + 2); if (command == NULL) Py_FatalError( "not enough memory to copy -c argument"); strcpy(command, _PyOS_optarg); strcat(command, "\n"); break; } if (c == 'm') { /* -m is the last option; following arguments that look like options are left for the module to interpret. 
*/ module = (char *)malloc(strlen(_PyOS_optarg) + 2); if (module == NULL) Py_FatalError( "not enough memory to copy -m argument"); strcpy(module, _PyOS_optarg); break; } switch (c) { case 'b': Py_BytesWarningFlag++; break; case 'd': Py_DebugFlag++; break; case '3': Py_Py3kWarningFlag++; if (!Py_DivisionWarningFlag) Py_DivisionWarningFlag = 1; break; case 'Q': if (strcmp(_PyOS_optarg, "old") == 0) { Py_DivisionWarningFlag = 0; break; } if (strcmp(_PyOS_optarg, "warn") == 0) { Py_DivisionWarningFlag = 1; break; } if (strcmp(_PyOS_optarg, "warnall") == 0) { Py_DivisionWarningFlag = 2; break; } if (strcmp(_PyOS_optarg, "new") == 0) { /* This only affects __main__ */ cf.cf_flags |= CO_FUTURE_DIVISION; /* And this tells the eval loop to treat BINARY_DIVIDE as BINARY_TRUE_DIVIDE */ _Py_QnewFlag = 1; break; } fprintf(stderr, "-Q option should be `-Qold', " "`-Qwarn', `-Qwarnall', or `-Qnew' only\n"); return usage(2, argv[0]); /* NOTREACHED */ case 'i': Py_InspectFlag++; Py_InteractiveFlag++; break; /* case 'J': reserved for Jython */ case 'O': Py_OptimizeFlag++; break; case 'B': Py_DontWriteBytecodeFlag++; break; case 's': Py_NoUserSiteDirectory++; break; case 'S': Py_NoSiteFlag++; break; case 'E': Py_IgnoreEnvironmentFlag++; break; case 't': Py_TabcheckFlag++; break; case 'u': unbuffered++; saw_unbuffered_flag = 1; break; case 'v': Py_VerboseFlag++; break; #ifdef RISCOS case 'w': Py_RISCOSWimpFlag = 1; break; #endif case 'x': skipfirstline = 1; break; /* case 'X': reserved for implementation-specific arguments */ case 'U': Py_UnicodeFlag++; break; case 'h': case '?': help++; break; case 'V': version++; break; case 'W': PySys_AddWarnOption(_PyOS_optarg); break; /* This space reserved for other options */ default: return usage(2, argv[0]); /*NOTREACHED*/ } } if (help) return usage(0, argv[0]); if (version) { fprintf(stderr, "Python %s\n", PY_VERSION); return 0; } if (Py_Py3kWarningFlag && !Py_TabcheckFlag) /* -3 implies -t (but not -tt) */ Py_TabcheckFlag = 1; if (!Py_InspectFlag && (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0') Py_InspectFlag = 1; if (!saw_unbuffered_flag && (p = Py_GETENV("PYTHONUNBUFFERED")) && *p != '\0') unbuffered = 1; if (!Py_NoUserSiteDirectory && (p = Py_GETENV("PYTHONNOUSERSITE")) && *p != '\0') Py_NoUserSiteDirectory = 1; if ((p = Py_GETENV("PYTHONWARNINGS")) && *p != '\0') { char *buf, *warning; buf = (char *)malloc(strlen(p) + 1); if (buf == NULL) Py_FatalError( "not enough memory to copy PYTHONWARNINGS"); strcpy(buf, p); for (warning = strtok(buf, ","); warning != NULL; warning = strtok(NULL, ",")) PySys_AddWarnOption(warning); free(buf); } if (command == NULL && module == NULL && _PyOS_optind < argc && strcmp(argv[_PyOS_optind], "-") != 0) { #ifdef __VMS filename = decc$translate_vms(argv[_PyOS_optind]); if (filename == (char *)0 || filename == (char *)-1) filename = argv[_PyOS_optind]; #else filename = argv[_PyOS_optind]; #endif } stdin_is_interactive = Py_FdIsInteractive(stdin, (char *)0); if (unbuffered) { #if defined(MS_WINDOWS) || defined(__CYGWIN__) _setmode(fileno(stdin), O_BINARY); _setmode(fileno(stdout), O_BINARY); #endif #ifdef HAVE_SETVBUF setvbuf(stdin, (char *)NULL, _IONBF, BUFSIZ); setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ); setvbuf(stderr, (char *)NULL, _IONBF, BUFSIZ); #else /* !HAVE_SETVBUF */ setbuf(stdin, (char *)NULL); setbuf(stdout, (char *)NULL); setbuf(stderr, (char *)NULL); #endif /* !HAVE_SETVBUF */ } else if (Py_InteractiveFlag) { #ifdef MS_WINDOWS /* Doesn't have to have line-buffered -- use unbuffered */ /* Any set[v]buf(stdin, ...) 
screws up Tkinter :-( */ setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ); #else /* !MS_WINDOWS */ #ifdef HAVE_SETVBUF setvbuf(stdin, (char *)NULL, _IOLBF, BUFSIZ); setvbuf(stdout, (char *)NULL, _IOLBF, BUFSIZ); #endif /* HAVE_SETVBUF */ #endif /* !MS_WINDOWS */ /* Leave stderr alone - it should be unbuffered anyway. */ } #ifdef __VMS else { setvbuf (stdout, (char *)NULL, _IOLBF, BUFSIZ); } #endif /* __VMS */ #ifdef __APPLE__ /* On MacOS X, when the Python interpreter is embedded in an application bundle, it gets executed by a bootstrapping script that does os.execve() with an argv[0] that's different from the actual Python executable. This is needed to keep the Finder happy, or rather, to work around Apple's overly strict requirements of the process name. However, we still need a usable sys.executable, so the actual executable path is passed in an environment variable. See Lib/plat-mac/bundlebuiler.py for details about the bootstrap script. */ if ((p = Py_GETENV("PYTHONEXECUTABLE")) && *p != '\0') Py_SetProgramName(p); else Py_SetProgramName(argv[0]); #else Py_SetProgramName(argv[0]); #endif Py_Initialize(); if (Py_VerboseFlag || (command == NULL && filename == NULL && module == NULL && stdin_is_interactive)) { fprintf(stderr, "Python %s on %s\n", Py_GetVersion(), Py_GetPlatform()); if (!Py_NoSiteFlag) fprintf(stderr, "%s\n", COPYRIGHT); } if (command != NULL) { /* Backup _PyOS_optind and force sys.argv[0] = '-c' */ _PyOS_optind--; argv[_PyOS_optind] = "-c"; } if (module != NULL) { /* Backup _PyOS_optind and force sys.argv[0] = '-c' so that PySys_SetArgv correctly sets sys.path[0] to '' rather than looking for a file called "-m". See tracker issue #8202 for details. */ _PyOS_optind--; argv[_PyOS_optind] = "-c"; } PySys_SetArgv(argc-_PyOS_optind, argv+_PyOS_optind); if ((Py_InspectFlag || (command == NULL && filename == NULL && module == NULL)) && isatty(fileno(stdin))) { PyObject *v; v = PyImport_ImportModule("readline"); if (v == NULL) PyErr_Clear(); else Py_DECREF(v); } if (command) { sts = PyRun_SimpleStringFlags(command, &cf) != 0; free(command); } else if (module) { sts = RunModule(module, 1); free(module); } else { if (filename == NULL && stdin_is_interactive) { Py_InspectFlag = 0; /* do exit on SystemExit */ RunStartupFile(&cf); } /* XXX */ sts = -1; /* keep track of whether we've already run __main__ */ if (filename != NULL) { sts = RunMainFromImporter(filename); } if (sts==-1 && filename!=NULL) { if ((fp = fopen(filename, "r")) == NULL) { fprintf(stderr, "%s: can't open file '%s': [Errno %d] %s\n", argv[0], filename, errno, strerror(errno)); return 2; } else if (skipfirstline) { int ch; /* Push back first newline so line numbers remain the same */ while ((ch = getc(fp)) != EOF) { if (ch == '\n') { (void)ungetc(ch, fp); break; } } } { /* XXX: does this work on Win/Win64? (see posix_fstat) */ struct stat sb; if (fstat(fileno(fp), &sb) == 0 && S_ISDIR(sb.st_mode)) { fprintf(stderr, "%s: '%s' is a directory, cannot continue\n", argv[0], filename); fclose(fp); return 1; } } } if (sts==-1) { /* call pending calls like signal handlers (SIGINT) */ if (Py_MakePendingCalls() == -1) { PyErr_Print(); sts = 1; } else { sts = PyRun_AnyFileExFlags( fp, filename == NULL ? "<stdin>" : filename, filename != NULL, &cf) != 0; } } } /* Check this environment variable at the end, to give programs the * opportunity to set it from Python. 
    */ if (!Py_InspectFlag && (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0') { Py_InspectFlag = 1; } if (Py_InspectFlag && stdin_is_interactive && (filename != NULL || command != NULL || module != NULL)) { Py_InspectFlag = 0; /* XXX */ sts = PyRun_AnyFileFlags(stdin, "<stdin>", &cf) != 0; } Py_Finalize(); #ifdef RISCOS if (Py_RISCOSWimpFlag) fprintf(stderr, "\x0cq\x0c"); /* make frontend quit */ #endif #ifdef __INSURE__ /* Insure++ is a memory analysis tool that aids in discovering * memory leaks and other memory problems. On Python exit, the * interned string dictionary is flagged as being in use at exit * (which it is). Under normal circumstances, this is fine because * the memory will be automatically reclaimed by the system. Under * memory debugging, it's a huge source of useless noise, so we * trade off slower shutdown for less distraction in the memory * reports. -baw */ _Py_ReleaseInternedStrings(); #endif /* __INSURE__ */ return sts; } Good God Almighty... it is big enough to sink the Titanic. It seems as though Python did the "Intro to Programming 101" trick and just moved all of main()'s code to a different function and called it something very similar to "main". Here's my question: Is this code terribly written, or are there other reasons to have a short main function? As it stands right now, I see absolutely no difference between doing this and just moving the code in Py_Main() back into main(). Am I wrong in thinking this?
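
    To make the contrast concrete, here is a minimal sketch of the "thin main()" pattern the question describes, written in C#. All of the names here (AppRunner, ParseHelpFlag, Execute) are hypothetical, invented purely for illustration; this is not Python's actual structure:

    using System;

    public static class Program
    {
        // Main stays a one-line catalyst; all real work lives elsewhere.
        public static int Main(string[] args) => AppRunner.Run(args);
    }

    public static class AppRunner
    {
        // Unlike the monolithic Py_Main, the entry point delegates to
        // small, single-purpose helpers.
        public static int Run(string[] args)
        {
            if (ParseHelpFlag(args))
            {
                Console.WriteLine("usage: app [--help]");
                return 0;
            }
            return Execute();
        }

        private static bool ParseHelpFlag(string[] args) =>
            Array.Exists(args, a => a == "--help" || a == "-h");

        private static int Execute()
        {
            Console.WriteLine("doing the real work...");
            return 0; // process exit code
        }
    }

    The value of the pattern is not the one-line Main itself but that each helper is small and independently testable; whether Py_Main achieves that decomposition is exactly what the question challenges.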

    Read the article

  • C#/.NET Little Wonders: The Useful But Overlooked Sets

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#. Today we will be looking at two set implementations in the System.Collections.Generic namespace: HashSet<T> and SortedSet<T>. Even though most people think of sets as mathematical constructs, they are actually very useful classes that can help make your application more performant if used appropriately.

    A Background From Math

    In mathematical terms, a set is an unordered collection of unique items. In other words, the set {2,3,5} is identical to the set {3,5,2}. In addition, the set {2, 2, 4, 1} would be invalid because it would contain a duplicate item (2). You can also perform set arithmetic on sets, such as:

    - Intersections: The intersection of two sets is the collection of elements common to both. Example: the intersection of {1,2,5} and {2,4,9} is the set {2}.
    - Unions: The union of two sets is the collection of unique items present in either or both sets. Example: the union of {1,2,5} and {2,4,9} is {1,2,4,5,9}.
    - Differences: The difference of two sets is the removal from the first set of all items common to both sets. Example: the difference of {1,2,5} and {2,4,9} is {1,5}.
    - Supersets: One set is a superset of a second set if it contains all elements that are in the second set. Example: the set {1,2,5} is a superset of {1,5}.
    - Subsets: One set is a subset of a second set if all of its elements are contained in the second set. Example: the set {1,5} is a subset of {1,2,5}.

    If We're Not Doing Math, Why Do We Care?

    Now, you may be thinking: why bother with the set classes in C# if you have no need for mathematical set manipulation? The answer is simple: they are extremely efficient ways to determine membership in a collection. For example, let's say you are designing an order system that tracks the price of a particular equity and, once it reaches a certain point, will trigger an order. Since there are tens of thousands of equities on the markets, you don't want to track market data for every ticker - that would waste time and processing power on symbols you don't have orders for. Thus, we want to subscribe to the stock symbol for an equity order only if it is a symbol we are not already subscribed to. Every time a new order comes in, we will check the list of subscriptions to see if the new order's stock symbol is in that list. If it is, great, we already have that market data feed! If not, then and only then should we subscribe to the feed for that symbol. So far so good: we have a collection of symbols, and we want to see if a symbol is present in that collection and, if not, add it. This really is the essence of set processing, but for the sake of comparison, let's say you use a list instead:

    // class that handles our order processing service
    public sealed class OrderProcessor
    {
        // contains a list of all symbols we are currently subscribed to
        private readonly List<string> _subscriptions = new List<string>();

        ...
    }

    Now, whenever you are adding a new order, it would look something like:

    public PlaceOrderResponse PlaceOrder(Order newOrder)
    {
        // do some validation, of course...

        // check to see if already subscribed; if not, add a subscription
        if (!_subscriptions.Contains(newOrder.Symbol))
        {
            // add the symbol to the list
            _subscriptions.Add(newOrder.Symbol);

            // do whatever magic is needed to start a subscription for the symbol
        }

        // place the order logic!
    }

    What's wrong with this?
    In short: performance! Finding an item inside a List<T> is a linear, O(n), operation, which is not a very performant way to determine whether an item exists in a collection. (I used to teach algorithms and data structures in my spare time at a local university, and when you began talking about big-O notation you could immediately see eyes glossing over, as if it were pure, useless theory that would never apply in the real world - but I did and still do believe it is worth understanding well to make the best choices in computer science.) Let's think about what a linear operation means: as the number of items increases, the time the operation takes tends to increase in a linear fashion. Put crudely, if you double the collection size, you might expect the operation to take something on the order of twice as long. Linear operations tend to be bad for performance because they mean that to perform some operation on a collection, you must potentially "visit" every item in it. Consider finding an item in a List<T>: you must potentially check every item in the list before you find it or determine it's not there.

    Now, we could of course sort our list and then perform a binary search on it, but sorting is typically a linearithmic, O(n log n), operation and could involve temporary storage, so performing a sort after each add would probably add more time. As an alternative, we could use a SortedList<TKey, TValue>, which sorts the list on every Add(), but this has a similar level of complexity for moving the items and also requires a key and a value - and in our case the key is the value. This is why sets tend to be the best choice for this type of processing: they don't rely on separate keys and values for ordering - so they save space - and they typically don't care about ordering - so they tend to be extremely performant. The .NET BCL (Base Class Library) has had the HashSet<T> since .NET 3.5, but at that time it did not implement the ISet<T> interface. As of .NET 4.0, HashSet<T> implements ISet<T>, and a new set, SortedSet<T>, was added that gives you a set with ordering.

    HashSet<T> – For Unordered Storage of Sets

    When used right, HashSet<T> is a beautiful collection; you can think of it as a simplified Dictionary<T,T> - that is, a Dictionary where TKey and TValue refer to the same object. This is really an oversimplification, but logically it makes sense. I've actually seen people code a Dictionary<T,T> where they store the same thing in the key and the value, and that's just inefficient because of the extra storage needed to hold both. As its name implies, the HashSet<T> uses a hashing algorithm to find the items in the set, which means it does take up some additional space, but it has lightning-fast lookups! Compare the times below between HashSet<T> and List<T>:

    Operation     HashSet<T>    List<T>
    Add()         O(1)          O(1) at end; O(n) in middle
    Remove()      O(1)          O(n)
    Contains()    O(1)          O(n)

    Now, these times are amortized and represent the typical case. In the very worst case the operations could be linear if they involve resizing the collection - but this is true for both the List and the HashSet, so it's less of an issue when comparing the two. The key thing to note is that in the general case, HashSet is constant time for adds, removes, and contains!
    This means that no matter how large the collection is, it takes roughly the same amount of time to find an item or determine that it's not in the collection. Compare this to the List, where almost any add or remove must potentially rearrange all the elements, and where finding an item in an unsorted list means searching every item. So as you can see, if you want an unordered collection with very fast lookup and manipulation, the HashSet is a great collection. And since HashSet<T> implements ICollection<T> and IEnumerable<T>, it supports nearly all the same basic operations as List<T> and can use the System.Linq extension methods as well.

    All we have to do to switch from a List<T> to a HashSet<T> is change our declaration. Since List and HashSet support many of the same members, chances are we won't need to change much else:

    public sealed class OrderProcessor
    {
        private readonly HashSet<string> _subscriptions = new HashSet<string>();

        // ...

        public PlaceOrderResponse PlaceOrder(Order newOrder)
        {
            // do some validation, of course...

            // check to see if already subscribed; if not, add a subscription
            if (!_subscriptions.Contains(newOrder.Symbol))
            {
                // add the symbol to the set
                _subscriptions.Add(newOrder.Symbol);

                // do whatever magic is needed to start a subscription for the symbol
            }

            // place the order logic!
        }

        // ...
    }

    Notice that we didn't change any code other than the declaration of _subscriptions to be a HashSet<T>. Thus, we pick up the performance improvements in this case with minimal code changes.

    SortedSet<T> – Ordered Storage of Sets

    Just as HashSet<T> is logically similar to Dictionary<T,T>, the SortedSet<T> is logically similar to the SortedDictionary<T,T>. The SortedSet can be used when you want to perform set operations on a collection but also want to maintain that collection in sorted order. This is not necessarily mathematically relevant, but if your collection's needs do include order, this is the set to use. The SortedSet appears to be implemented internally as a binary search tree (possibly a red-black tree). Since binary trees are dynamic, non-contiguous structures (unlike List and SortedList), inserts and deletes do not require shifting elements around; only the links between a few nodes change. There is some overhead in keeping the nodes in order, but it is much smaller than in a contiguous-storage collection like a List<T>. Let's compare the three:

    Operation     HashSet<T>    SortedSet<T>    List<T>
    Add()         O(1)          O(log n)        O(1) at end; O(n) in middle
    Remove()      O(1)          O(log n)        O(n)
    Contains()    O(1)          O(log n)        O(n)

    The MSDN documentation seems to indicate that operations on SortedSet are O(1), but this is inconsistent with its implementation and appears to be a documentation error. There is actually a separate MSDN document (here) on SortedSet that indicates it is, in fact, logarithmic in complexity. Let's put that in layman's terms: logarithmic means you can double the collection size and typically add only a single extra "visit" to an item in the collection. Contrast that with List<T>'s linear operations, where doubling the size of the collection doubles the "visits" to items in the collection. This is very good performance! It's still not as performant as HashSet<T>, which always visits just one item (amortized), but for the addition of sorting this is a good trade.
    Consider the following table - this is just illustrative data of the relative complexities, but it's enough to get the point:

    Collection Size    O(1) Visits    O(log n) Visits    O(n) Visits
    1                  1              1                  1
    10                 1              4                  10
    100                1              7                  100
    1000               1              10                 1000

    Notice that the logarithmic, O(log n), visit count goes up very slowly compared to the linear, O(n), visit count. This is because, since the collection is sorted, it can check the middle item, determine which half of the collection the data is in, and discard the other half (a binary search). So, if you need your set to be sorted, you can use the SortedSet<T> just like the HashSet<T> and gain sorting for a small performance hit - and it's still faster than a List<T>.

    Unique Set Operations

    Now, if you do want to perform more set-like operations, both implementations of ISet<T> support the following, which map back to the mathematical set operations described before (a short illustrative sketch follows at the end of this post):

    - IntersectWith() – Performs the set intersection of two sets. Modifies the current set so that it contains only elements also in the second set.
    - UnionWith() – Performs a set union of two sets. Modifies the current set so it contains all elements present in either the current set or the second set.
    - ExceptWith() – Performs a set difference of two sets. Modifies the current set so that it removes all elements present in the second set.
    - IsSupersetOf() – Checks if the current set is a superset of the second set.
    - IsSubsetOf() – Checks if the current set is a subset of the second set.

    For more information on the set operations themselves, see the MSDN description of ISet<T> (here).

    What Sets Don't Do

    Don't get me wrong, sets are not silver bullets. You don't really want to use a set when you want separate key-to-value lookups; that's what the IDictionary implementations are best for. Sets also don't store temporal add order. That is, if you are always adding items to the end of a list, the list is ordered by when the items were added; sets don't do this naturally (though you could use a SortedSet with an IComparer over a DateTime, that's overkill), but List<T> can. Also, List<T> allows indexing, which is a blazingly fast way to iterate through items in the collection; iterating over all the items in a List<T> is generally much, much faster than iterating over a set.

    Summary

    Sets are an excellent tool for maintaining a lookup table where the item is both the key and the value. In addition, if you need the mathematical set operations, the C# sets support those as well. The HashSet<T> is the set of choice if you want the fastest possible lookups but don't care about order. In contrast, the SortedSet<T> will give you a sorted collection at a slight reduction in performance.

    Technorati Tags: C#,.Net,Little Wonders,BlackRabbitCoder,ISet,HashSet,SortedSet
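
    As promised above, here is a minimal, self-contained sketch of the ISet<T> operations in action. This is an editorial illustration, not code from the original article: the symbols are invented, and it assumes .NET 4.0 or later, where SortedSet<T> and ISet<T> are available.

    using System;
    using System.Collections.Generic;

    public static class SetOperationsDemo
    {
        public static void Main()
        {
            // HashSet<T>: unordered, with amortized O(1) lookups.
            var subscribed = new HashSet<string> { "IBM", "MSFT", "AAPL" };

            // Add() returns false (rather than throwing) if the item already
            // exists, so the Contains()-then-Add() pair can be a single call.
            bool added = subscribed.Add("IBM");
            Console.WriteLine(added); // False - "IBM" was already present

            var incoming = new HashSet<string> { "MSFT", "GOOG" };

            // The mutating ISet<T> operations modify the set in place,
            // so each example copies the original set first.
            var intersection = new HashSet<string>(subscribed);
            intersection.IntersectWith(incoming);   // { MSFT }

            var union = new HashSet<string>(subscribed);
            union.UnionWith(incoming);              // { IBM, MSFT, AAPL, GOOG }

            var difference = new HashSet<string>(subscribed);
            difference.ExceptWith(incoming);        // { IBM, AAPL }

            Console.WriteLine(subscribed.IsSupersetOf(new[] { "IBM", "AAPL" })); // True
            Console.WriteLine(intersection.IsSubsetOf(subscribed));              // True

            // SortedSet<T> supports the same operations but iterates in order.
            var sorted = new SortedSet<string>(union);
            Console.WriteLine(string.Join(", ", sorted)); // AAPL, GOOG, IBM, MSFT
        }
    }

    Note that IntersectWith(), UnionWith(), and ExceptWith() return void and mutate the receiver; if you want non-mutating alternatives, LINQ's Intersect(), Union(), and Except() extension methods produce new sequences instead.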

    Read the article

< Previous Page | 41 42 43 44 45 46  | Next Page >