Search Results

Search found 18079 results on 724 pages for 'compiler options'.


  • "No keyboard for id 0"?

    - by Mellon
    I am new to Android app development and have run into a strange problem with the Menu button. Here is the thing: I have two activities, "ActivityOne" and "ActivityTwo", where "ActivityTwo" is the child activity of "ActivityOne". In both activities I have defined the menu options like this:

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            super.onCreateOptionsMenu(menu);
            MenuItem insertMenuItem = menu.add(0, INSERT_ID, 0, R.string.menu_insert);
            insertMenuItem.setIcon(R.drawable.ic_menu_add);
            MenuItem settingMenuItem = menu.add(0, SETTING_ID, 0, R.string.menu_setting);
            settingMenuItem.setIcon(R.drawable.ic_menu_settings);
            MenuItem aboutMenuItem = menu.add(0, ABOUT_ID, 0, R.string.menu_about);
            aboutMenuItem.setIcon(R.drawable.ic_menu_about);
            logPrinter.println("creating menu options...");
            return true;
        }

        @Override
        public boolean onMenuItemSelected(int featureId, MenuItem item) {
            switch (item.getItemId()) {
                case INSERT_ID:
                    doInsert();
                    return true;
                case SETTING_ID:
                    return true;
                case ABOUT_ID:
                    showAbout();
                    return true;
            }
            return super.onMenuItemSelected(featureId, item);
        }

    In "ActivityOne", when I press the physical Menu button, no menu options pop up from the bottom of the screen, and the LogCat console shows two warning messages: "No keyboard for id 0" and "Using default keyMap:/system/usr/keychars/qwerty.kcm.bin". BUT in "ActivityTwo" the Menu button works fine and shows the menu options I defined. Why does the Menu button not work in "ActivityOne", and what do those warning messages mean?

    Read the article

  • How do I get this linearlayout working on android?

    - by Arkadiy
    I'm trying to create a ScrollView that contains a LinearLayout with some options inside it. I'd like the options to show up at the top and grow downwards, and to have "save", "delete", and "cancel" buttons that show up at the very bottom of the screen (if there is space left between the bottom of all the options and the top of the buttons). Here's a screenshot of my attempt: http: //img375.imageshack.us/img375/3628/10370572.png (StackOverflow was only letting me post a single link, so you'll have to remove the space between "http:" and the rest). As you can see, the buttons show up directly below the options rather than at the bottom of the screen. In fact, the top-level (green) LinearLayout doesn't even extend all the way down, despite having a height of fill_parent and a weight of 1 (and the blue is the ScrollView, so there is room to expand). I've tried every possible combination of heights, weights, gravities, and anything else to get this working, and still can't do it. Every time I pasted my XML layout in here, the top and bottom portions got cut off, so instead I pasted it here: pastey link. Can anyone see what I'm doing wrong?

    Read the article

  • I read 3 pages of a JQuery book and here's my reaction and question

    - by George
    My reaction to jQuery's flexible "selectors" is probably rooted in this experience: I once managed a project where a developer built a web page that gave users very flexible search parameters for a search screen, using dynamic SQL string building based on the user's specified parameters. The resulting queries were usually very complicated and involved joins to many tables. One of the options let the user choose from one of three choices, and depending on the user's choice, the resulting SQL needed to query a different set of database columns. For example, if option "A" was selected, the database columns queried would be prefixed with "A_"; if option "B" was selected, the columns queried would be prefixed with "B_"; and so on. The developer chose to write the complete SQL as if the user had selected, say, option "A", first constructing SQL of this type:

        SQL = "SELECT A_COL1, A_COL2, A_COL3 FROM TABLE ..."

    and then, after constructing one of a million possible variations on the Query From Hell, doing something like this:

        If UserOption = "B" Then
            SQL = SQL.Replace("A_", "B_")  'replace everywhere
        End If

    He insisted that this was the easiest way to code it, and while I understood that, I was concerned about maintaining the code. It worked for a while, but as the search options grew and the database columns evolved, the various "replace small substring with another small substring" operations had unexpected consequences when applied to an evolving database and new search options. My feeling is that code should be written, as much as possible, so that you can add to it without fear of breaking what is already there. A better approach, though a bit more work, would have been to write a function that returns the appropriate target column based on a common column name and the user's selected option. OK, so what does this have to do with jQuery selectors? Are the ultra-flexible jQuery selectors a bit like performing a "replace all" on a SQL string: handy as hell, but potentially a maintenance nightmare?
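
    The safer alternative described above, resolving column names through a lookup instead of string-replacing a finished query, is easy to sketch. The snippet below is illustrative only (Python for brevity, with invented option and column names):

        # Resolve column names up front from the user's option, instead of
        # rewriting a finished query string. Names here are invented.
        COLUMN_SETS = {
            "A": ("A_COL1", "A_COL2", "A_COL3"),
            "B": ("B_COL1", "B_COL2", "B_COL3"),
            "C": ("C_COL1", "C_COL2", "C_COL3"),
        }

        def build_select(user_option):
            try:
                cols = COLUMN_SETS[user_option]   # fails fast on an unknown option
            except KeyError:
                raise ValueError(f"unknown option: {user_option!r}")
            return f"SELECT {', '.join(cols)} FROM TABLE"

        print(build_select("B"))   # SELECT B_COL1, B_COL2, B_COL3 FROM TABLE

    An unknown option now fails loudly at the lookup, rather than silently producing a half-rewritten query.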

    Read the article

  • Dynamic Variable Names in Included Module in Ruby?

    - by viatropos
    I'm hoping to implement something like all of the great plugins out there for Ruby, so that you can do this:

        acts_as_commentable
        has_attached_file :avatar

    But I have one constraint: that helper method can only include a module; it can't define any variables or methods. Here's what the structure looks like; I'm wondering if you know the missing piece in the puzzle:

        # 1 - The workhorse, encapsulating all dynamic variables
        module My::Module
          def self.included(base)
            base.extend ClassMethods
            base.class_eval do
              include InstanceMethods
            end
          end

          module InstanceMethods
            self.instance_eval %Q?
              def #{options[:my_method]}
                "world!"
              end
            ?
          end

          module ClassMethods
          end
        end

        # 2 - all this does is define that helper method
        module HelperModule
          def self.included(base)
            base.extend(ClassMethods)
          end

          module ClassMethods
            def dynamic_method(options = {})
              include My::Module(options)
            end
          end
        end

        # 3 - send it to active_record
        ActiveRecord::Base.send(:include, HelperModule)

        # 4 - what it looks like
        class TestClass < ActiveRecord::Base
          dynamic_method :my_method => "hello"
        end

        puts TestClass.new.hello #=> "world!"

    That %Q? construct I'm not totally sure how to use, but basically I want some way to pass the options hash from that helper method into the workhorse module. Is that possible? That way, the workhorse module could define all sorts of functionality, but I could name the variables whatever I wanted at runtime.
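
    In Ruby this kind of thing is normally done with define_method rather than string eval. The general shape of the trick, a method whose name is chosen by the caller at class-definition time, can be sketched in Python (names are illustrative, not part of the question's code):

        # Sketch: attach a method whose name is decided at class-definition time.
        # The decorator plays the role of the dynamic_method helper.
        def dynamic_method(name):
            def decorate(cls):
                def generated(self):
                    return "world!"
                setattr(cls, name, generated)   # the caller picks the name
                return cls
            return decorate

        @dynamic_method("hello")
        class TestClass:
            pass

        print(TestClass().hello())   # => world!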

    Read the article

  • How reference popup object when opener page changes?

    - by achairapart
    This is driving me crazy! Scenario: the main page opens a pop-up window and later calls a function in it:

        newWin = window.open(someurl, 'newWin', options);
        // ...some code later...
        newWin.remoteFunction(options);

    and that works. Now, with the popup still open, the main page navigates to Page 2. On Page 2, newWin no longer exists, and I need to recreate the popup window object reference in order to call the remote function (newWin.remoteFunction) again. I tried something like:

        newWin = open('', 'newWin', options);
        if (!newWin || newWin.closed || !newWin.remoteFunction) {
            newWin = window.open(someurl, 'newWin', options);
        }

    and it works: I can call newWin.remoteFunction again. BUT, for some reason, Safari gives focus() to the popup window every time the open() method is called, breaking the navigation (I absolutely need the popup working in the background). The only workaround I can think of is to set an interval in the popup with:

        if (window.opener && !window.opener.newWin) window.opener.newWin = self;

    and then set another interval in the opener page with some try/catch, but that is inelegant and very inefficient. So I wonder: is it really so hard to get the popup object reference across different pages in the opener window?

    Read the article

  • php | Multidimensional array sorting

    - by user889349
    I have an array that needs to be sorted (based on id):

        Array
        (
            [0] => Array
                (
                    [qty] => 1
                    [id] => 3
                    [name] => Name1
                    [sku] => Model 1
                    [options] =>
                    [price] => 100.00
                )

            [1] => Array
                (
                    [qty] => 2
                    [id] => 1
                    [name] => Name2
                    [sku] => Model 1
                    [options] => Color: <em>Black (+10$)</em>. Memory: <em>32GB (+99$)</em>.
                    [price] => 209.00
                )
        )

    Is it possible to sort my array to get this output (id-based)?

        Array
        (
            [0] => Array
                (
                    [qty] => 2
                    [id] => 1
                    [name] => Name2
                    [sku] => Model 1
                    [options] => Color: <em>Black (+10$)</em>. Memory: <em>32GB (+99$)</em>.
                    [price] => 209.00
                )

            [1] => Array
                (
                    [qty] => 1
                    [id] => 3
                    [name] => Name1
                    [sku] => Model 1
                    [options] =>
                    [price] => 100.00
                )
        )

    Thanks!
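
    For reference, PHP's usort with a comparator on the id key is the usual answer here. The idea itself is language-agnostic; a minimal Python sketch:

        # Sketch: order line items by their numeric id key.
        items = [
            {"qty": 1, "id": 3, "name": "Name1", "price": "100.00"},
            {"qty": 2, "id": 1, "name": "Name2", "price": "209.00"},
        ]
        items.sort(key=lambda row: row["id"])
        print([row["id"] for row in items])   # => [1, 3]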

    Read the article

  • Javascript object properties access functions in parent constructor?

    - by Bob Spryn
    So I'm using this pretty standard jQuery plugin pattern whereby you can grab an API after applying the jQuery function to a specific instance. This API is essentially a JavaScript object with a bunch of methods and data. I wanted to create some private internal methods for the object, just to manipulate data etc., which don't need to be available as part of the API. So I tried this:

        // API returned with new $.TranslationUI(options, container)
        $.TranslationUI = function (options, container) {
            // private function?
            function monkey() {
                console.log("blah blah blah");
            }
            // extend the default settings with the options object passed
            this.settings = $.extend({}, $.TranslationUI.defaultSettings, options);
            // set a reference for the container dom element
            this.container = container;
            // call the init function
            this.init();
        };

    The problem I'm running into is that init can't call that function monkey, and I don't understand the explanation for why it can't. Is it because init is a prototype method? ($.TranslationUI's prototype is extended with a bunch of methods, including init, elsewhere in the code.)

        $.extend($.TranslationUI, {
            prototype: {
                init: function () {
                    // doesn't work
                    monkey();
                    // editing flag
                    this.editing = false;
                    // init event delegates here for
                    // languagepicker
                    $(this.settings.languageSelector, this.container).bind("click", {self: this}, this.selectLanguage);
                }
            }
        });

    Any explanations would be helpful. I'd also love other thoughts on creating private methods with this model. These particular functions don't HAVE to be on the prototype, and I don't NEED private methods protected from being used externally, but I want to know how I should handle that requirement in the future.
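
    The behaviour follows from plain lexical scoping: monkey exists only in the constructor's closure, so only functions created inside that constructor (or handed a reference) can see it; methods added to the prototype elsewhere cannot. The same rule, sketched in Python for comparison (invented names):

        # A helper local to the constructor is invisible to methods defined
        # outside it, exactly like `monkey` and the prototype's init.
        class TranslationUI:
            def __init__(self):
                def monkey():
                    print("blah blah blah")
                monkey()                # fine: same lexical scope
                self._monkey = monkey   # escape hatch: capture a reference

            def init(self):
                # monkey()              # NameError: not visible in this scope
                self._monkey()          # works via the captured reference

        ui = TranslationUI()   # prints once from __init__...
        ui.init()              # ...and once via the captured reference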

    Read the article

  • problem using 'as_json' in my model and 'render :json' => in my controller (rails)

    - by patrick
    Hi everyone. I am trying to create a unique JSON data structure, and I have run into a problem I can't seem to figure out. In my controller I am doing:

        favorite_ids = Favorites.all.map(&:photo_id)
        data = {
          :albums => PhotoAlbum.all.to_json,
          :photos => Photo.all.to_json(:favorite => lambda {|photo| favorite_ids.include?(photo.id)})
        }
        render :json => data

    and in my model:

        def as_json(options = {})
          {
            :name => self.name,
            :favorite => options[:favorite].is_a?(Proc) ? options[:favorite].call(self) : options[:favorite]
          }
        end

    The problem is that Rails encodes the values of 'photos' and 'albums' (in my data hash) as JSON twice, and this breaks everything. The only way I could get it to work was to call 'as_json' instead of 'to_json':

        data = {
          :albums => PhotoAlbum.all.as_json,
          :photos => Photo.all.as_json(:favorite => lambda {|photo| favorite_ids.include?(photo.id)})
        }

    However, when I do this, my :favorite => lambda option no longer makes it into the model's as_json method. So, I either need a way to tell 'render :json' not to encode the values of the hash, so I can call 'to_json' on the values myself, or I need a way to get the parameters passed into 'as_json' to actually show up there. I hope someone here can help. Thanks!
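
    The double-encoding trap is easy to reproduce in any language: if a value is already a serialized string, serializing the enclosing structure encodes it a second time. A minimal Python illustration of the mechanism (not the Rails fix):

        import json

        albums = [{"name": "Holiday"}]

        # Wrong: the value is already a JSON string, so it gets encoded twice
        # and comes out as one escaped string instead of a nested structure.
        twice = json.dumps({"albums": json.dumps(albums)})
        print(twice)   # {"albums": "[{\"name\": \"Holiday\"}]"}

        # Right: keep plain structures and serialize once, at the very end.
        once = json.dumps({"albums": albums})
        print(once)    # {"albums": [{"name": "Holiday"}]}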

    Read the article

  • Clang warning flags for Objective-C development

    - by Macmade
    As a C and Objective-C programmer, I'm a bit paranoid about compiler warning flags. I usually try to find a complete list of warning flags for the compiler I use, and turn most of them on, unless I have a really good reason not to. I personally think this actually improves coding skills, as well as potential code portability, and prevents some issues, because it forces you to be aware of every little detail and of potential implementation and architecture issues. It's also, in my opinion, a good everyday learning tool, even if you're an experienced programmer. For the subjective part of this question, I'm interested in hearing from other developers (mainly C, Objective-C, and C++) on this topic. Do you actually care about stuff like pedantic warnings, etc.? And if yes or no, why? Now, about Objective-C: I recently switched completely to the LLVM toolchain (with Clang) instead of GCC. On my production code, I usually set these warning flags (explicitly, even if some of them may be covered by -Wall):

        -Wall -Wbad-function-cast -Wcast-align -Wconversion
        -Wdeclaration-after-statement -Wdeprecated-implementations -Wextra
        -Wfloat-equal -Wformat=2 -Wformat-nonliteral -Wfour-char-constants
        -Wimplicit-atomic-properties -Wmissing-braces -Wmissing-declarations
        -Wmissing-field-initializers -Wmissing-format-attribute -Wmissing-noreturn
        -Wmissing-prototypes -Wnested-externs -Wnewline-eof -Wold-style-definition
        -Woverlength-strings -Wparentheses -Wpointer-arith -Wredundant-decls
        -Wreturn-type -Wsequence-point -Wshadow -Wshorten-64-to-32 -Wsign-compare
        -Wsign-conversion -Wstrict-prototypes -Wstrict-selector-match -Wswitch
        -Wswitch-default -Wswitch-enum -Wundeclared-selector -Wuninitialized
        -Wunknown-pragmas -Wunreachable-code -Wunused-function -Wunused-label
        -Wunused-parameter -Wunused-value -Wunused-variable -Wwrite-strings

    I'm interested in what other developers have to say about this. For instance, do you think I missed a particular flag for Clang (Objective-C), and why? Or do you think a particular flag is not useful (or not wanted at all), and why?

    Read the article

  • Errors trying to run MongoDB

    - by SomeKittens
    I'm running Ubuntu Server 12.04 (32-bit) on an old (1998) computer. Everything works fine until I try to start MongoDB:

        somekittens@DLserver01:~$ mongo
        MongoDB shell version: 2.2.2
        connecting to: test
        Sun Dec 16 22:47:50 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
        exception: connect failed

    Googling the error led me to all sorts of "repair" options, none of which fixed anything. I've also removed MongoDB and installed it again (using apt-get; I have not built from source). Mongo's log shows the following error:

        Thu Dec 13 18:36:32 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
        Thu Dec 13 18:36:32
        Thu Dec 13 18:36:32 [initandlisten] MongoDB starting : pid=758 port=27017 dbpath=/var/lib/mongodb 32-bit host=DLserver01
        Thu Dec 13 18:36:32 [initandlisten]
        Thu Dec 13 18:36:32 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        Thu Dec 13 18:36:32 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
        Thu Dec 13 18:36:32 [initandlisten] **       with --journal, the limit is lower
        Thu Dec 13 18:36:32 [initandlisten]
        Thu Dec 13 18:36:32 [initandlisten] db version v2.2.2, pdfile version 4.5
        Thu Dec 13 18:36:32 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
        Thu Dec 13 18:36:32 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
        Thu Dec 13 18:36:32 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", logappend: "true", logpath: "/var/log/mongodb/mongodb.log" }
        Thu Dec 13 18:36:32 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/var/lib/mongodb/journal"
        **************
        Unclean shutdown detected.
        Please visit http://dochub.mongodb.org/core/repair for recovery instructions.
        *************
        Thu Dec 13 18:36:32 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
        Thu Dec 13 18:36:32 dbexit:
        Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close listening sockets...
        Thu Dec 13 18:36:32 [initandlisten] shutdown: going to flush diaglog...
        Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close sockets...
        Thu Dec 13 18:36:32 [initandlisten] shutdown: waiting for fs preallocator...
        Thu Dec 13 18:36:32 [initandlisten] shutdown: closing all files...
        Thu Dec 13 18:36:32 [initandlisten] closeAllFiles() finished
        Thu Dec 13 18:36:32 dbexit: really exiting now

    Running through the recovery instructions led to the following adventure:

        somekittens@DLserver01:/var/log/mongodb$ mongod --repair
        Sun Dec 16 22:42:54
        Sun Dec 16 22:42:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
        Sun Dec 16 22:42:54
        Sun Dec 16 22:42:54 [initandlisten] MongoDB starting : pid=1887 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
        Sun Dec 16 22:42:54 [initandlisten]
        Sun Dec 16 22:42:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        Sun Dec 16 22:42:54 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
        Sun Dec 16 22:42:54 [initandlisten] **       with --journal, the limit is lower
        Sun Dec 16 22:42:54 [initandlisten]
        Sun Dec 16 22:42:54 [initandlisten] db version v2.2.2, pdfile version 4.5
        Sun Dec 16 22:42:54 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
        Sun Dec 16 22:42:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
        Sun Dec 16 22:42:54 [initandlisten] options: { repair: true }
        Sun Dec 16 22:42:54 [initandlisten] exception in initAndListen: 10296
        *********************************************************************
        ERROR: dbpath (/data/db/) does not exist.
        Create this directory or give existing directory in --dbpath.
        See http://dochub.mongodb.org/core/startingandstoppingmongo
        *********************************************************************
        , terminating
        Sun Dec 16 22:42:54 dbexit:
        Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close listening sockets...
        Sun Dec 16 22:42:54 [initandlisten] shutdown: going to flush diaglog...
        Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close sockets...
        Sun Dec 16 22:42:54 [initandlisten] shutdown: waiting for fs preallocator...
        Sun Dec 16 22:42:54 [initandlisten] shutdown: closing all files...
        Sun Dec 16 22:42:54 [initandlisten] closeAllFiles() finished
        Sun Dec 16 22:42:54 dbexit: really exiting now
        somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data
        somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data/db
        somekittens@DLserver01:/var/log/mongodb$ mongod --repair
        Sun Dec 16 22:43:51
        Sun Dec 16 22:43:51 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
        Sun Dec 16 22:43:51
        Sun Dec 16 22:43:51 [initandlisten] MongoDB starting : pid=1909 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
        Sun Dec 16 22:43:51 [initandlisten]
        Sun Dec 16 22:43:51 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        Sun Dec 16 22:43:51 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
        Sun Dec 16 22:43:51 [initandlisten] **       with --journal, the limit is lower
        Sun Dec 16 22:43:51 [initandlisten]
        Sun Dec 16 22:43:51 [initandlisten] db version v2.2.2, pdfile version 4.5
        Sun Dec 16 22:43:51 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
        Sun Dec 16 22:43:51 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
        Sun Dec 16 22:43:51 [initandlisten] options: { repair: true }
        Sun Dec 16 22:43:51 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
        Sun Dec 16 22:43:51 dbexit:
        Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close listening sockets...
        Sun Dec 16 22:43:51 [initandlisten] shutdown: going to flush diaglog...
        Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close sockets...
        Sun Dec 16 22:43:51 [initandlisten] shutdown: waiting for fs preallocator...
        Sun Dec 16 22:43:51 [initandlisten] shutdown: closing all files...
        Sun Dec 16 22:43:51 [initandlisten] closeAllFiles() finished
        Sun Dec 16 22:43:51 [initandlisten] shutdown: removing fs lock...
        Sun Dec 16 22:43:51 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor
        Sun Dec 16 22:43:51 dbexit: really exiting now
        somekittens@DLserver01:/var/log/mongodb$ service mongodb stop
        stop: Unknown instance:
        somekittens@DLserver01:/var/log/mongodb$ sudo mongod --repair
        Sun Dec 16 22:45:04
        Sun Dec 16 22:45:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
        Sun Dec 16 22:45:04
        Sun Dec 16 22:45:04 [initandlisten] MongoDB starting : pid=1921 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
        Sun Dec 16 22:45:04 [initandlisten]
        Sun Dec 16 22:45:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        Sun Dec 16 22:45:04 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
        Sun Dec 16 22:45:04 [initandlisten] **       with --journal, the limit is lower
        Sun Dec 16 22:45:04 [initandlisten]
        Sun Dec 16 22:45:04 [initandlisten] db version v2.2.2, pdfile version 4.5
        Sun Dec 16 22:45:04 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
        Sun Dec 16 22:45:04 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
        Sun Dec 16 22:45:04 [initandlisten] options: { repair: true }
        Sun Dec 16 22:45:04 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/data/db/journal"
        Sun Dec 16 22:45:04 [initandlisten] finished checking dbs
        Sun Dec 16 22:45:04 dbexit:
        Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close listening sockets...
        Sun Dec 16 22:45:04 [initandlisten] shutdown: going to flush diaglog...
        Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close sockets...
        Sun Dec 16 22:45:04 [initandlisten] shutdown: waiting for fs preallocator...
        Sun Dec 16 22:45:04 [initandlisten] shutdown: closing all files...
        Sun Dec 16 22:45:04 [initandlisten] closeAllFiles() finished
        Sun Dec 16 22:45:04 [initandlisten] shutdown: removing fs lock...
        Sun Dec 16 22:45:04 dbexit: really exiting now

    Which didn't change anything. What can I do to resolve this? It's an old computer (640MB RAM, single-core P2). Could that be causing it?

    Read the article

  • Unavailable repository

    - by katrina
    I am new to Ubuntu and keep butting up against errors such as these:

        Package libpng12-dev is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        However the following packages replace it:
          libpng12-0
        E: Unable to locate package subversion
        E: Package 'git-core' has no installation candidate
        E: Package 'build-essential' has no installation candidate
        E: Package 'autoconf' has no installation candidate
        E: Package 'libtool' has no installation candidate
        E: Unable to locate package libxml2-dev
        E: Unable to locate package libgeos-dev
        E: Unable to locate package libpq-dev
        E: Unable to locate package libbz2-dev
        E: Package 'proj' has no installation candidate
        E: Unable to locate package munin-node
        E: Unable to locate package munin
        E: Unable to locate package libprotobuf-c0-dev
        E: Unable to locate package protobuf-c-compiler
        E: Unable to locate package libfreetype6-dev
        E: Package 'libpng12-dev' has no installation candidate
        E: Unable to locate package libtiff4-dev
        E: Unable to locate package libicu-dev
        E: Unable to locate package libboost-all-dev
        E: Unable to locate package libgdal-dev
        E: Unable to locate package libcairo-dev
        E: Unable to locate package libcairomm-1.0-dev
        E: Couldn't find any package by regex 'libcairomm-1.0-dev'
        E: Unable to locate package apache2
        E: Unable to locate package apache2-dev
        E: Unable to locate package libagg-dev

    when I want to do this:

        sudo apt-get install subversion git-core tar unzip wget bzip2 \
          build-essential autoconf libtool libxml2-dev libgeos-dev libpq-dev \
          libbz2-dev proj munin-node munin libprotobuf-c0-dev protobuf-c-compiler \
          libfreetype6-dev libpng12-dev libtiff4-dev libicu-dev libboost-all-dev \
          libgdal-dev libcairo-dev libcairomm-1.0-dev apache2 apache2-dev libagg-dev

    Any help or advice would be greatly appreciated. Or referrals to other questions...

    Read the article

  • How to configure VPN in Windows XP

    - by SAMIR BHOGAYTA
    VPN Overview

    A VPN is a private network created over a public one. It's done with encryption; your data is encapsulated and secure in transit, which creates the 'virtual' tunnel. A VPN is a method of connecting to a private network over a public network like the Internet. An Internet connection in a company is common, and an Internet connection in a home is common too. With both of these, you can create an encrypted tunnel between them and pass traffic safely and securely. If you want to create a VPN connection, you have to use encryption to make sure that others cannot intercept the data in transit while it traverses the Internet. Windows XP provides a certain level of security by using Point-to-Point Tunneling Protocol (PPTP) or Layer Two Tunneling Protocol (L2TP). Both are considered tunneling protocols, simply because they create the virtual tunnel just discussed by applying encryption.

    Configure a VPN with XP

    If you want to configure a VPN connection from a Windows XP client computer, you only need what comes with the operating system itself; it's all built right in. To set up a connection to a VPN, do the following:

    1. On the computer that is running Windows XP, confirm that the connection to the Internet is correctly configured.
       • You can try to browse the Internet.
       • Ping a known host on the Internet, like yahoo.com, something that isn't blocking ICMP.
    2. Click Start, and then click Control Panel.
    3. In Control Panel, double-click Network Connections.
    4. Click Create a new connection in the Network Tasks task pad.
    5. In the Network Connection Wizard, click Next.
    6. Click Connect to the network at my workplace, and then click Next.
    7. Click Virtual Private Network connection, and then click Next.
    8. If you are prompted, select whether you will use a dial-up connection or a dedicated connection to the Internet (Cable, DSL, T1, Satellite, etc.). Click Next.
    9. Type a host name, IP, or any other description you would like to appear in the Network Connections area. You can change this later if you want. Click Next.
    10. Type the host name or the Internet Protocol (IP) address of the computer that you want to connect to, and then click Next.
    11. You may be asked whether you want to use a smart card or not.
    12. You are just about done; the rest of the screens just verify your connection. Click Next.
    13. Select the Add a shortcut to this connection to my desktop check box if you want one; if not, leave it unchecked and click Finish.
    14. You are now done making your connection, but by default it may try to connect. You can try the connection now if you know it's valid; if not, just close it down for now.
    15. In the Network Connections window, right-click the new connection and select Properties. Let's look at how you can customize this connection before it's used.
    16. The first tab you will see is the General tab. This only covers the name of the connection, which you can also rename from the Network Connections dialog box by right-clicking the connection and selecting Rename. You can also configure a "first connect", which means that Windows can connect to the public network (like the Internet) before starting to attempt the VPN connection. This is a perfect example of when you would have configured the dial-up connection: it would be the first thing you have to do. It's simple: you have to be connected to the Internet first before you can encrypt and send data over it. This setting makes sure that happens.
    17. The next tab is the Options tab. It has a lot you can configure. For one, you have the option to connect to a Windows domain: if you select this check box (unchecked by default), your VPN client will request Windows logon domain information while setting up the VPN connection. You also have options here for redialing. Redial attempts are configured here if you are using a dial-up connection to get to the Internet; it is very handy to redial if the line is dropped, as dropped lines are very common.
    18. The next tab is the Security tab. This is where you configure basic security for the VPN client: any advanced IPSec configurations, other security protocols, and requirements for encryption and credentials.
    19. The next tab is the Networking tab. This is where you select which networking items are used by this VPN connection.
    20. The last tab is the Advanced tab. This is where you configure options for a firewall and/or sharing.

    Connecting to Corporate

    Now that your XP VPN client is all set up and ready, the next step is to attempt a connection to the Remote Access or VPN server set up at the corporate office. To use the connection, follow these simple steps. To open the client again, go back to the Network Connections dialog box.

    1. Once you are in the Network Connections dialog box, double-click the connection, or right-click it and select Connect from the menu; this will initiate the connection to the corporate office.
    2. Type your user name and password, and then click Connect. Properties brings you back to what we just discussed in this article: all the global settings for the VPN client you are using.
    3. To disconnect from a VPN connection, right-click the icon for the connection, and then click Disconnect.

    Summary

    In this article we covered the basics of building a VPN connection using Windows XP. This is very handy when you have a VPN device but don't have the client that may come with it. If the VPN server doesn't use highly proprietary protocols, you can use the XP client to connect. In a future article I will get into the nuts and bolts of both IPSec and more detail on how to configure the advanced options in the Security tab of this client.

    Common error codes:

    678: The remote computer did not respond.
    930: The authentication server did not respond to authentication requests in a timely fashion.
    800: Unable to establish the VPN connection.
    623: The system could not find the phone book entry for this connection.
    720: A connection to the remote computer could not be established.

    More on: http://www.windowsecurity.com/articles/Configure-VPN-Connection-Windows-XP.html

    Read the article

  • .NET CoffeeScript Handler

    - by Liam McLennan
    After more time than I care to admit, I have finally released a rudimentary HTTP handler for serving compiled CoffeeScript from Asp.Net applications. It was a long and painful road, but I am glad to finally have a usable strategy for client-side scripting in CoffeeScript.

    Why CoffeeScript? As Douglas Crockford discussed in detail, JavaScript is a mixture of good and bad features. The genius of CoffeeScript is to treat JavaScript in the browser as a virtual machine. By compiling to JavaScript, CoffeeScript gets a clean slate to re-implement syntax, taking the best of JavaScript and Ruby and combining them into a beautiful scripting language. The only limitation is that CoffeeScript cannot do anything that JavaScript cannot do. Here is an example from the CoffeeScript website. First, the CoffeeScript syntax:

        reverse: (string) -> string.split('').reverse().join ''
        alert reverse '.eeffoC yrT'

    and the JavaScript that it compiles to:

        var reverse;
        reverse = function(string) {
          return string.split('').reverse().join('');
        };
        alert(reverse('.eeffoC yrT'));

    Areas For Improvement ;) The current implementation is deeply flawed; however, at this point I'm just glad it works. When the server receives a request for a CoffeeScript file, the following things happen:

    1. The CoffeeScriptHandler is invoked.
    2. If the script has previously been compiled, the compiled version is returned.
    3. Otherwise, it writes a script file containing the CoffeeScript compiler and the requested CoffeeScript.
    4. The process shells out to CScript.exe to execute the script.
    5. The resulting JavaScript is sent back to the browser.

    This outlandish process is necessary because I could not find a way to directly execute the CoffeeScript compiler from .NET. If anyone can help out with that, I would appreciate it.
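
    Stripped of the Asp.Net specifics, the handler above is a compile-and-cache pipeline: return the cached output on a hit, otherwise shell out to a compiler and cache the result. A rough sketch of that flow in Python; the coffee CLI here stands in for the CScript.exe detour and is an assumption, not part of the original handler:

        import subprocess

        _cache = {}   # source path -> compiled JavaScript

        def serve_coffeescript(path):
            # Compile-and-cache: only shell out on a cache miss.
            if path not in _cache:
                result = subprocess.run(
                    ["coffee", "--print", path],   # assumes the coffee CLI is on PATH
                    capture_output=True, text=True, check=True,
                )
                _cache[path] = result.stdout
            return _cache[path]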

    Read the article

  • Why is HTML/Javascript minification beneficial

    - by Channel72
    Why is HTML/JavaScript minification beneficial when the HTTP protocol already supports gzip data compression? I realize that JavaScript/HTML minification can significantly reduce the size of JavaScript/HTML files by removing unnecessary whitespace, and perhaps renaming variables to a few letters each, but doesn't the DEFLATE algorithm that gzip uses do especially well when there are many repeated characters (e.g. lots of whitespace)? I realize that some JavaScript minification tools do more than just reduce size. Google's Closure Compiler, for example, also tries to improve code performance by inlining functions and doing other analyses. But the primary purpose of JavaScript minification is usually to reduce file size. I also realize there are reasons you might want to minify aside from performance, such as code obfuscation. But again, that reason is not usually emphasized as much as performance gain and file size reduction. For example, Closure Compiler is not advertised as an obfuscation tool, but as a code-size reducer and download-speed enhancer. So, how much performance do you really gain from JavaScript/HTML minification when you're already significantly reducing file size with gzip compression?
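
    The premise is easy to test empirically: gzip both the original and a whitespace-stripped copy and compare sizes. A quick Python check (the "minifier" here is deliberately naive, and app.js is any script you have on hand):

        import gzip
        import re

        js = open("app.js", "rb").read()          # any reasonably large script
        stripped = re.sub(rb"\s+", b" ", js)      # crude whitespace-only "minifier"

        print("original gzipped:", len(gzip.compress(js)))
        print("stripped gzipped:", len(gzip.compress(stripped)))

    In practice gzip soaks up most of the whitespace win, but identifier renaming (which a real minifier does and this sketch does not) still shrinks the compressed size further, since shorter names mean fewer literal bytes for the compressor to encode.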

    Read the article

  • What is up with the Joy of Clojure 2nd edition?

    - by kurofune
    Manning just released the second edition of the beloved Joy of Clojure book, and while I share that love, I get the feeling that many of the examples are already outdated. In particular, in the chapter on optimization, the recommended type hinting seems not to be allowed by the compiler. I don't know if this was allowed in older versions of Clojure. For example:

        (defn factorial-f [^long original-x]
          (loop [x original-x, acc 1]
            (if (>= 1 x)
              acc
              (recur (dec x) (*' x acc)))))

    returns:

        clojure.lang.Compiler$CompilerException: java.lang.UnsupportedOperationException:
        Can't type hint a primitive local, compiling:(null:3:1)

    Likewise, the chapter on core.logic seems to be using an old API, and I have to find workarounds for each example to accommodate the recent changes. For example, I had to turn this:

        (logic/defrel orbits orbital body)
        (logic/fact orbits :mercury :sun)
        (logic/fact orbits :venus :sun)
        (logic/fact orbits :earth :sun)
        (logic/fact orbits :mars :sun)
        (logic/fact orbits :jupiter :sun)
        (logic/fact orbits :saturn :sun)
        (logic/fact orbits :uranus :sun)
        (logic/fact orbits :neptune :sun)
        (logic/run* [q]
          (logic/fresh [orbital body]
            (orbits orbital body)
            (logic/== q orbital)))

    into this, leveraging the pldb lib:

        (pldb/db-rel orbits orbital body)
        (def facts
          (pldb/db [orbits :mercury :sun]
                   [orbits :venus :sun]
                   [orbits :earth :sun]
                   [orbits :mars :sun]
                   [orbits :jupiter :sun]
                   [orbits :saturn :sun]
                   [orbits :uranus :sun]
                   [orbits :neptune :sun]))
        (pldb/with-db facts
          (logic/run* [q]
            (logic/fresh [orbital body]
              (orbits orbital body)
              (logic/== q orbital))))

    I am still pulling teeth to get the later examples to work. I am relatively new to programming myself, so I wonder: am I naively overlooking something here, or are these legitimate concerns? I really want to get good at this stuff, like type hinting and core.logic, but I want to make sure I am studying up-to-date materials. Any illuminating facts to help clear up my confusion would be most welcome.

    Read the article

  • Computer Bugs - Etymology and Entomology

    - by PointsToShare
    Whatever bugs you

    My wife and I used to take some of our summer vacation in a cabin on the shore of Lake Atsion in NJ. It is a delightful place in the Wharton forest, with brown yet fresh water, where we would canoe, swim, and enjoy true rest. Alas, in the last few years, yellow flies also discovered the area's pastoral delights and came in hordes to bug us. So much so that we had to give up. As a computer programmer, I abhor bugs. The bugs that bug me (except the pesky yellow flies) are program bugs, a specific variety of computer bug. You can find an excellent take on the etymology of the word "bug" in this delightful monograph: http://www.jamesshuggins.com/h/tek1/first_computer_bug.htm In my youth, I worked on Burroughs computers. Unlike their IBM brethren, the Burroughs machines used a 96-column card. The cards were much smaller than the 80-column IBM cards. We wrote our programs on coding sheets, and then a key-punch operator transcribed them into punched cards. These were fed into a card reader and compiled. The compiler would notify us of compiler errors, or bugs, but it was not always easy to get the meaning of the message. My friend Mark Wildt, also a Burroughs veteran, gave me an old punched card from one of his programs. Obviously a bug!! Here It Is!! That's All Folks!

    Read the article

  • What kinds of demos are good to make for a software engineer job

    - by user23012
    I have created my CV site and have been sending out my demos for a while now, but most of my demos are either from my course or games-related, since my course was a games programming course. I was wondering what kinds of demos are good for showing off my skills in programming in general. These are what I already have:

        Pennies: just a simple game, the first coursework I did.
        Compiler: coursework for a compiler-writing module.
        Pongout: a basic Pong game in 68k assembly, using colour detection.
        Snake: Snake in 68k, same approach as the Pong game.
        Game Cube Maze: GameCube work.
        BeatMyBot: basic AI.
        Basic platformer game: a 2D game with different types of collision.
        Turing Lambda Simulation: my dissertation. A Turing machine simulated in Miranda, with alpha and beta reduction and SKI calculus simulated on the Turing machine.

    What I am asking is what kinds of demos are good to add or have. I have been looking and have hit a tough spot: I can't think of anything to make other than games. So, for a general graduate software engineer, what types would be good examples?

    EDIT: In response to the comments below: as for languages, my main one is C++, followed by Java, Erlang, and a bit of Haskell.

    Read the article

  • Discuss: PLs are characterised by which (iso)morphisms are implemented

    - by Yttrill
    I am interested to hear discussion of the proposition summarised in the title. As we know, programming language constructions admit a vast number of isomorphisms. In some languages, at some places in the translation process, some of these isomorphisms are implemented, whilst others require code to be written to implement them. For example, in my language Felix, the isomorphism between a type T and a tuple of one element of type T is implemented, meaning the two types are indistinguishable (identical). Similarly, a tuple of N values of the same type is not merely isomorphic to an array, it is an array: the isomorphism is implemented by the compiler. Many other isomorphisms are not implemented, for example the one expressed by the following client code:

        match v with | ((?x,?y),?z) => x,(y,z)   // Felix
        match v with | ((x,y),z) -> x,(y,z)      (* Ocaml *)

    As another example, a type constructor C of int in Felix may be used directly as a function, whilst in Ocaml you must write a wrapper:

        let c x = C x

    Another isomorphism Felix implements is the elimination of unit values, including those in tuples. Felix can do this because (most) polymorphic values are monomorphised, which can be done because it is a whole-program analyser; Ocaml, for example, cannot do this easily because it supports separate compilation. For the same reason, Felix performs type-class dispatch at compile time whilst Haskell passes around dictionaries. There are some quite surprising issues here. For example, an array is just a tuple, and tuples can be indexed at run time using a match, returning a value of a corresponding sum type. Indeed, to be correct, the index used is in fact a case of a unit sum with N summands, rather than an integer. Yet, in a real implementation, if the tuple is an array, the index is replaced by an integer with a range check, and the result type is replaced by the common argument type of all the constructors: two isomorphisms are involved here, but they're implemented partly in the compiler translation and partly at run time.
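
    The nested-tuple example is the associativity isomorphism (A × B) × C ≅ A × (B × C); where a compiler does not implement it, the programmer writes the witnesses by hand, which is exactly what the match expressions above spell out. The same two witnesses, sketched in Python:

        # The two maps witnessing ((A, B), C) ~ (A, (B, C)); writing them
        # yourself is the "code required" when the compiler doesn't do it.
        def assoc_right(v):
            (x, y), z = v
            return (x, (y, z))

        def assoc_left(v):
            x, (y, z) = v
            return ((x, y), z)

        v = ((1, 2), 3)
        assert assoc_left(assoc_right(v)) == v   # the maps are mutually inverse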

    Read the article

  • Why unhandled exceptions are useful

    - by Simon Cooper
    It's the bane of most programmers' lives: an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you've got a log file or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application's unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try to replicate it yourself, or do some psychic debugging to figure out what's wrong.

    However, it's good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption you made that is actually invalid.

    Coding assumptions

    Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take the following simple method, which copies a collection to an array and includes an item if it isn't in the collection already, using a supplied IEqualityComparer:

        public static T[] ToArrayWithItem(
            ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
        {
            // check if the object is in collection already
            // using the supplied comparer
            foreach (var item in coll)
            {
                if (comparer.Equals(item, obj))
                {
                    // it's in the collection already
                    // simply copy the collection to an array
                    // and return it
                    T[] array = new T[coll.Count];
                    coll.CopyTo(array, 0);
                    return array;
                }
            }

            // not in the collection
            // copy coll to an array, and add obj to it
            // then return it
            T[] array = new T[coll.Count+1];
            coll.CopyTo(array, 0);
            array[array.Length-1] = obj;
            return array;
        }

    What are all the assumptions made by this fairly simple bit of code?

    • coll is never null
    • comparer is never null
    • coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array
    • The enumerator for coll returns all the items in the collection, in the order defined for the collection
    • comparer.Equals returns true if the items are equal (for whatever definition of 'equal' the comparer uses), false otherwise
    • comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang for any possible input and any possible values of T
    • coll will have less than 4 billion items in it (this is a built-in limit of the CLR)
    • array won't be more than 2GB, both on 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR)
    • There are no threads that will modify coll while this method is running

    and, more esoterically:

    • The C# compiler will compile this code to IL according to the C# specification
    • The CLR and JIT compiler will produce machine code to execute the IL on the user's computer
    • The computer will execute the machine code correctly

    That's a lot of assumptions. Now, it could be that all these assumptions are valid for the situations in which this method is called. But if this does crash out with an exception, or crash later on, then that shows one of the assumptions has been invalidated somehow. An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious. The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you've gained about your own code, along with the modified assumptions.

    When code is running with an assumption or invariant it depended on broken, the result is 'undefined behaviour'. Anything can happen, up to and including formatting the entire disk or making the user's computer sentient and starting to do a good impression of Skynet. You might think that those can't happen, but at Halting-problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it's important to fail fast and stop the program as soon as an invariant is broken, to minimise the damage that is done.

    What does this mean in practice?

    To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that's some more assumptions you've made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken.

    Now, some assumptions you can take for granted unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR, or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well.

    For example, for members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, you don't necessarily need to check it explicitly in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what's wrong if this ever becomes an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds.

    On my coding soapbox...

    A few pet peeves of mine around assumptions. Firstly, catch-all try blocks:

        try
        {
            ...
        }
        catch { }

    A catch-all hides exceptions generated by broken assumptions and lets the program carry on in an unknown state. Later, an exception is likely to be generated due to further broken assumptions arising from the unknown state, causing difficulties when debugging, as the catch-all has hidden the original problem. It's much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That's a pretty big ask!

    Secondly, using as when you should be casting. Doing this:

        (obj as IFoo).Method();

    or this:

        IFoo foo = obj as IFoo;
        ...
        foo.Method();

    when you should be doing this:

        ((IFoo)obj).Method();

    or this:

        IFoo foo = (IFoo)obj;
        ...
        foo.Method();

    There's an assumption here that obj will always implement IFoo. If it doesn't, then by using as instead of a cast you've turned an obvious InvalidCastException at the point of the cast, which would probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail fast if not; then it's far easier to figure out what's wrong.

    Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don't leave it up to whoever's debugging your code after you to figure it out.

    Conclusion

    It's better to crash out and fail fast when an assumption is broken. If you don't, there are likely to be further crashes along the way that hide the original problem. Or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren't good per se, but they give you some very useful information about your code that you didn't know before. And that can only be a good thing.
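
    The catch-all point is language-agnostic. Here is the same failure mode in a few lines of Python, with parse standing in for any code that can break an assumption (both names are invented for the sketch):

        def parse(path):
            # Stand-in for a real parser; the broken assumption lives here.
            raise FileNotFoundError(path)

        def load_config(path):
            try:
                return parse(path)
            except Exception:        # catch-all: the broken assumption is now hidden
                return None

        config = load_config("app.conf")
        print(config.timeout)        # AttributeError on None, far from the real cause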

    Read the article

  • Are Intel compilers really better than Microsoft ones?

    - by Rocket Surgeon
    Years ago, I was surprised when I discovered that Intel sells Visual Studio compatible compilers. I tried it, in particular for C/C++, along with the fantastic diagnostic tools. But the code was simply not computationally intensive enough to notice the difference. The only impression was: did Intel really do it for me just now? Wow, amazing tools with nanosecond resolution, unbelievable. But the trial ended and the team never seriously considered a purchase. From your experience, if license cost does not matter, which vendor is the winner? It is not a broad or vague question, or an attempt to spark a holy war. This sort of question is about two very visible tools. Nobody likes it when tools have any mysteries or surprises. And choices between best and best are always a pain. I also understand the "grass is greener" argument. I want to hear all the "what if" stories. What if Intel just locally optimizes it for the chip stepping of the month, and not every hardware target will actually work as well as when compiled with Microsoft's compiler? What if AMD hardware is the target and everything will slow down for no reason? Or, on the other hand, what if Intel's hardware has so many unnoticeable opportunities that Microsoft compiler writers are too slow to adopt them and never implement them in the compiler? What if both are exactly the same, actually a single codebase just wrapped into two different boxes and licensed to both vendors by some third-party shop? And so on. But someone knows some answers.

    Read the article

  • Automatic Generalization

    - by Nick Harrison
    I have been interested in functional programming since college. I played around a little with LISP back then, but I have not had an opportunity since. Now that F# ships standard with VS 2010, I figured now is my chance. So I was reading up on it a little over the weekend when I came across a very interesting topic: F# includes a concept called "automatic generalization". As I understand it, the compiler will look at your method and analyze how you are using its parameters, and automatically switch to a generic parameter where your usage allows it. Wow! I am looking forward to playing with this. I have long been an advocate of using the most generic types possible, especially when developing library classes: use the highest-level base class that you can get away with; use an interface instead of a specific implementation. I don't advocate passing object around, but you get the idea. Tools like ReSharper, FxCop, and most static code analysis tools provide guidance to help you identify when a more generalized type is possible, but this is the first time I have heard of the compiler taking matters into its own hands. I like the sound of this. We'll see if it is a good idea or not. What are your thoughts? Am I missing the mark on what automatic generalization does in F#? How would this work in C#? Do you see any problems with this?
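
    For comparison, the generalization F# performs silently is what you declare by hand with type variables in most other languages; roughly, in Python's typing vocabulary (the function name is just an example):

        from typing import TypeVar

        T = TypeVar("T")

        def prepend(item: T, items: list[T]) -> list[T]:
            # F# infers the generic signature of an equivalent `let` binding
            # ('a -> 'a list -> 'a list) on its own; here the type variable
            # has to be declared explicitly by the programmer.
            return [item, *items]

        print(prepend(1, [2, 3]))   # [1, 2, 3]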

    Read the article

  • Compiling GCC or Clang for thumb drive on OSX

    - by user105524
    I have a MacBook that I don't have admin rights to, on which I would like to be able to use either GCC or Clang. Since I lack admin rights, I can't install binutils or a compiler to the /usr directory. My plan is to install both of these (using an old MacBook that I do have admin rights for) to a flash drive and then run the compiler off of there. How would one go about building GCC or Clang so that it can run just off of a thumb drive? I've tried both but haven't had any success. I've tried defining as many of the directories as possible through configure, but haven't been able to successfully build. My current configure script for gcc-4.8.1 is (where USB20FD is the thumb drive):

        ../gcc-4.8.1/configure --prefix=/Volumes/USB20FD/usr \
            --with-local-prefix=/Volumes/USB20FD/usr/local \
            --with-native-system-header-dir=/Volumes/USB20FD/usr/include \
            --with-as=/Volumes/USB20FD/usr/bin/as \
            --enable-languages=c,c++,fortran \
            --with-ld=/Volumes/USB20FD/usr/bin/ld \
            --with-build-time-tools=/Volumes/USB20FD/usr/bin \
            AR=/Volumes/USB20FD/usr/bin/ar \
            AS=/Volumes/USB20FD/usr/bin/as \
            RANLIB=/Volumes/USB20FD/usr/bin/ranlib \
            LD=/Volumes/USB20FD/usr/bin/ld \
            NM=/Volumes/USB20FD/usr/bin/nm \
            LIPO=/Volumes/USB20FD/usr/bin/lipo \
            AR_FOR_TARGET=/Volumes/USB20FD/usr/bin/ar \
            AS_FOR_TARGET=/Volumes/USB20FD/usr/bin/as \
            RANLIB_FOR_TARGET=/Volumes/USB20FD/usr/bin/ranlib \
            LD_FOR_TARGET=/Volumes/USB20FD/usr/bin/ld \
            NM_FOR_TARGET=/Volumes/USB20FD/usr/bin/nm \
            LIPO_FOR_TARGET=/Volumes/USB20FD/usr/bin/lipo \
            CFLAGS=" -nodefaultlibs -nostdlib -B/Volumes/USB20FD/bin -isystem/Volumes/USB20FD/usr/include -static-libgcc -v -L/Volumes/USB20FD/usr/lib " \
            LDFLAGS=" -Z -lc -nodefaultlibs -nostdlib -L/Volumes/USB20FD/usr/lib -lgcc -syslibroot /Volumes/USB20FD/usr/lib/crt1.10.6.o "

    Any obvious ideas about which of these options need to be set to install the appropriate files on the thumb drive during installation? What other magic occurs during an Xcode installation that isn't occurring here? Thanks for any suggestions.

    Read the article

  • Are Intel compilers really better than the Microsoft ones?

    - by Rocket Surgeon
    Years ago, I was surprised when I discovered that Intel sells Visual Studio compatible compilers. I tried it in particular for C/C++, as well as the fantastic diagnostic tools. But the code was simply not that computationally intensive to notice the difference. The only impression was: did Intel really do it for me just now, wow, amazing tools with nanoseconds resolution, unbelievable. But the trial ended and the team never seriously considered a purchase. From your experience, if license cost does not matter, which vendor is the winner? It is not a broad or vague question or an attempt to spark a holy war. This sort of question is about two very visible tools. Nobody likes when tools have any mysteries or surprises. And choices between best and best are always the pain. I also understand the grass is always greener argument. I want to hear all "what ifs" stories. What if Intel just locally optimizes it for the chip stepping of the month, and not every hardware target will actually work as well as when Microsoft-compiled? What if AMD hardware is the target and everything will slow down for no reason? Or on the other hand, what if Intel's hardware has so many unnoticeable opportunities that Microsoft compiler writers are too slow to adopt and never implement them in the compiler? What if both are exactly the same, actually a single codebase just wrapped into two different boxes and licensed to both vendors by some third-party shop? And so on. But someone knows some answers.

    Read the article

  • Programming languages with a Lisp-like syntax extension mechanism

    - by Giorgio
    I have only a limited knowledge of Lisp (I'm trying to learn a bit in my free time), but as far as I understand, Lisp macros allow one to introduce new language constructs and syntax by describing them in Lisp itself. This means that a new construct can be added as a library, without changing the Lisp compiler / interpreter. This approach is very different from that of other programming languages. E.g., if I wanted to extend Pascal with a new kind of loop or some particular idiom, I would have to extend the syntax and semantics of the language and then implement that new feature in the compiler. Are there other programming languages outside the Lisp family (i.e. apart from Common Lisp, Scheme, Clojure (?), Racket (?), etc.) that offer a similar possibility to extend the language within the language itself?

    EDIT: Please avoid extended discussion and be specific in your answers. Instead of a long list of programming languages that can be extended in some way or another, I would like to understand, from a conceptual point of view, what is specific to Lisp macros as an extension mechanism, and which non-Lisp programming languages offer some concept that is close to them.

    Read the article

  • Making money from a custom built interpreter?

    - by annoying_squid
    I have been making considerable progress lately on building an interpreter. I am building it from NASM assembly code (for the core engine) and C (cl.exe, the Microsoft compiler, for the parser). I really don't have a lot of time, but I have a lot of good ideas on how to build this so it appeals to a certain niche market. I'd love to finish it, but I need to face reality here: unless I can make a good monetary return on my investment, there is not a lot of time for me to invest. So I ask the following questions to anyone out there, especially those who have experience monetizing their programs:

    1) How easy is it for a programmer to make good money from one design? (I know this is vague, but it will be interesting to hear from those who have experience or know of others' experiences.)
    2) What are the biggest obstacles to making money from a programming design?
    3) For the parser, I am using the Microsoft compiler (no IDE) that I got from Visual Studio Express, so will this be an issue? Will I have to pay royalties or a license fee?
    4) As far as I know, NASM is a 2-clause BSD licensed application, so this should allow me to use NASM for commercial development, unless I am missing something?

    It's good to know these things before launching into the meat and potatoes of the project.

    Read the article
