Search Results

Search found 21472 results on 859 pages for 'language features'.

  • Installing a new SQL Server instance fails

    - by Rubio
    My setup previously installed SQL Server Express 2005. Now I've switched to SQL Server Express 2008 and updated the command-line parameters to those documented for the latter. If the computer already has SQL Server Express 2008 installed, my installer should create a new instance. The command-line parameters are as follows: /ACTION=Install /FEATURES=SQLEngine /QS /INSTANCENAME=ABCD /SECURITYMODE=SQL /SAPWD=CunningPassword The requested instance name does not exist on the target machine, yet setup ends with error -2068643838. The logs show the following error: "No features were installed during the setup execution. The requested features may already be installed." If I remove the /QS parameter and try to install interactively, I get as far as the Feature Selection page. The UI shows three options: Instance Features, Shared Features and Redistributable Features. Whatever I select, clicking Next results in the same error ("There are validation errors on this page"). Any ideas, anyone?

  • PHP include path problem: same code works on Ubuntu default Apache and PHP conf, but not on CentOS

    - by Neo
    So the same code works on my Ubuntu server, but when I upload it to my dedicated hosting server running CentOS it seems to add an extra prefix of .:/usr/share/pear:/usr/share/php: I tried setting include_path to different things but it just doesn't work. The file is in a directory called language in the same folder as the file that is including it, and I'm using: include dirname(__FILE__).DIRECTORY_SEPARATOR."language".DIRECTORY_SEPARATOR."storage.inc"; and include dirname(__FILE__)."/language/language.php"; and include "language/language.php"; and a lot of other combinations, but I can't get it to find the file. Fatal error: require_once() [function.require]: Failed opening required '/home/neo/public_html/migration/include/class/core/storage.inc' (include_path='.:/usr/share/pear:/usr/share/php:/home/neo/public_html/migration') in /home/neo/public_html/migration/include/class/core/class_lang.inc on line 153
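
    A minimal PHP sketch of side-stepping include_path by building an absolute path from the current file's directory (assuming storage.inc really lives in a language/ subfolder next to the including file; __DIR__ needs PHP 5.3+, otherwise keep dirname(__FILE__)):

        <?php
        // Resolve the include relative to this file, not to include_path.
        $path = __DIR__ . DIRECTORY_SEPARATOR . 'language' . DIRECTORY_SEPARATOR . 'storage.inc';

        if (!is_readable($path)) {
            // Surfaces the real problem (missing file, permissions, or a
            // case-sensitive filesystem on the CentOS box) instead of a
            // generic include_path failure.
            die("Cannot read: $path");
        }
        require_once $path;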

  • What are Web runtime environments and programming languages?

    - by Bradly Spicer
    I've been looking into the details behind these two different categories: Web runtime environments and Web application programming languages. I believe I have the correct information and have phrased it correctly, but I am unsure. I have been searching for a while but only find snippets of information or what I see as useless information (I could be wrong). Here are my descriptions so far: Web runtime environments - A run-time environment implements part of the core behaviour of any computer language and allows it to be modified via an API or embedded domain-specific language. A web runtime environment is similar, except it uses web-based languages such as JavaScript, which utilises the core behaviour of a computer language. Another example of a run-time environment for a web language is JsLibs, which is a standalone JavaScript development runtime environment for using JavaScript as a general all-round scripting language. JavaScript is often used to create responsive interfaces which improve the user experience and provide dynamic functionality without having to wait for the server to react and redirect to another page. Web application programming languages - A web application programming language is something that mimics a traditional desktop application within a web page. For example, using PHP you can create forms and tables which use a database, similar to Microsoft Excel. Some of the other languages for web application programming are Ajax, Perl and Ruby. Here are some of the resources used: http://en.wikipedia.org/wiki/Web_application_development http://code.google.com/p/jslibs/ I would like some confirmation that the descriptions I have created are correct, as I am still slightly unsure as to whether I have hit the nail on the head.

  • Don't Miss What Procurement Experts Are Talking About. Join the Webcasts starting next week!

    - by LuciaC
    The Procurement team have three Advisor Webcasts scheduled in December with information about new features, tips and tricks, and troubleshooting guidance.

    New Features and Enhancements Incorporated in the Procurement Rollup Patch 14254641:R12.PRC_PF.B - December 4, 2012 at 14:00 London / 16:00 Egypt / 06:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern. This session is recommended for technical and functional users who need to know about the new features and enhancements incorporated in the Procurement Rollup Patch. Topics will include: GCPA Enable All Sites, E-Mail PO - .LANGUAGE, Read Only BWC, Validate Document GBPA, OSP Items, GL Date Defaulting, Cancel Refactoring, Action History Cleanup. Click here to register for this event.

    Approval Management Engine (AME) New Features, Setup and Use for Purchase Orders - December 6, 2012 at 14:00 London / 16:00 Egypt / 06:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern. This is recommended for Functional Users and Application Technical Users who work in the Procurement module, including Purchasing and iProcurement, and would like to know more about how to set up and use the Approval Management Engine (AME) for purchase orders. Topics will include: scope and limitations of AME functionality for purchase orders; setup and use of AME for purchase orders; PO Review and PO E-Sign new features; demonstration of example scenarios using the new features. Click here to register for this event.

    How to Solve Approval Errors in Procurement - December 18, 2012 at 4:00 pm Egypt / 2:00 pm London / 6:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern. This session is recommended for technical and functional users who need to know how to diagnose and troubleshoot common approval errors. Topics will include: basic mandatory setups for approvals of PO documents; differences between the Purchase Order approval and Requisition approval processes; troubleshooting of approval errors; basic setup of AME which can be used in the Requisition approval process. Click here to register for this event.

    You can see a listing of all scheduled and archived webcasts from Doc ID 740966.1. Select the product you are interested in (such as E-Business Suite Procurement) and this will take you to the webcast listing for the product.

  • New Release of Oracle Berkeley DB

    - by Eric Jensen
    We are pleased to announce that a new release of Oracle Berkeley DB, version 11.2.5.2.28, is available today. Our latest release includes yet more value-added features for SQLite users, as well as several performance enhancements and new customer-requested features in the key-value pair API. We continue to provide technology leadership, features and performance for SQLite applications. This release introduces additional features that are not available in native SQLite, and adds functionality allowing customers to create richer, more scalable, more concurrent applications using the Berkeley DB SQL API.

    This release is compelling to Oracle's customers and partners because it:
    - delivers a complete, embeddable SQL92 database as a library under 1MB in size
    - is drop-in API compatible with SQLite version 3
    - offers no-oversight, zero-touch database administration
    - builds on the industrial-quality, battle-tested Berkeley DB B-TREE for concurrent transactional data storage

    New features include:
    - MVCC support for even higher concurrency
    - direct SQL support for HA/replication
    - transactionally protected sequence number generation functions
    - lower memory requirements, shared memory regions and faster/smaller memory use on startup
    - easier B-TREE page size configuration with the new "db_tuner" utility

    New key-value API features include:
    - HEAP access method for constrained disk-space applications (key-value API)
    - faster QUEUE access method operations for highly concurrent applications -- up to 2-3X faster! (key-value API)
    - new X/Open compliant XA resource manager, easily integrated with Oracle Tuxedo (key-value API)
    - additional HA/replication management and communication options (key-value API)
    - and a lot more!

    BDB is hands-down the best edge, mobile, and embedded database available to developers. Downloads are available today on the Berkeley DB download page. Product Documentation

  • Unit Tests as a learning tool - a good idea?

    - by Ekkehard.Horner
    I'm interested in ways and means of learning (a) programming language(s) efficiently. I believe that using Unit Test concepts and infrastructure early in that process is a good thing, even better than starting with "Hello world".

    Why: To write a decent program, even for a toy/restricted problem in a new language, you'll have to master many heterogeneous concepts (control flow & variables & IO ...), and you are tempted to glance over details just to get your program 'to work'. Putting (your understanding of) the facts about the new language into assertions with good descriptions (= success messages) enforces thinking things through, clearness and precision. Grouping topics and adding assertions to such groups is much easier than incorporating features from the second chapter of your "Learning X" book into your chapter 1 program.

    Why not: 'Real' Unit Tests are meant to output "1234 tests ok; 1 failure: saveWorld() chokes on negative input"; 'didactic' Unit Tests should output relevant facts about the new language, like:

        perl6 10-string.t
        # ### p5chop ...
        ok 13 - p5chop( "cbä" ) returns "ä"
        ok 14 - after that, victim is changed to "cb"
        # ### (p6) chop ...
        ok 27 - (p6) chop( "cbä" ) returns chopped copy: "cb"
        ok 18 - after that, victim is unchanged: "cbä"
        # ### chomp ...

    So (mis?)using Unit Tests may be counterproductive - practicing actions while learning that you wouldn't use professionally.

    How: Writing 'didactic' Unit Tests in languages with lightweight testing systems (Perl 5/6) is easy; (mis?)using more elaborate systems (JUnit, CppUnit) may not be worth the effort or may not be suitable for a person just starting with a new language.

    So: Is using Unit Tests as a learning tool a bad idea? Can the Unit Test tool(s) of your favourite language(s) be used didactically? Should implementation details (eventually) be discussed here or over at stackoverflow.com?
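
    For readers who haven't seen such 'didactic' tests, a minimal sketch of the idea in Perl 5 with Test::More (the facts asserted here are illustrative picks, not taken from the question):

        use strict;
        use warnings;
        use Test::More tests => 4;

        # Each assertion records a fact about the language; the description
        # doubles as the 'relevant fact' printed on success.
        my $victim  = 'cba';
        my $removed = chop $victim;
        is($removed,    'a',    'chop() returns the character it removed');
        is($victim,     'cb',   '... and shortens its argument in place');
        is('2' + 2,     4,      'strings are coerced to numbers by +');
        is('ab' . 'cd', 'abcd', 'the dot operator concatenates strings');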

  • Ubuntu 12.04 LTS 32-bit does not detect 4 GB of RAM

    - by David
    I have recently installed 4 GB of RAM in an existing 12.04 32-bit Ubuntu machine. It's not being recognised; only 3.2 GB is showing. See:

        administrator@Root2:~$ free
                     total       used       free     shared    buffers     cached
        Mem:       3355256    1251112    2104144          0      48664     391972
        -/+ buffers/cache:     810476    2544780

    The system is PAE capable. See:

        administrator@Root2:~$ grep --color=always -i PAE /proc/cpuinfo
        flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts
        flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts

    The system is fully patched, and I tried to run a manual PAE upgrade. See:

        administrator@Root2:~$ sudo apt-get install linux-generic-pae linux-headers-generic-pae
        [sudo] password for administrator:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        linux-generic-pae is already the newest version.
        linux-headers-generic-pae is already the newest version.
        The following packages were automatically installed and are no longer required:
          language-pack-zh-hans language-pack-kde-en language-pack-kde-zh-hans language-pack-kde-en-base kde-l10n-engb kde-l10n-zhcn language-pack-zh-hans-base firefox-locale-zh-hans language-pack-kde-zh-hans-base
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    I am not sure what else to try to get the full physical memory recognised, other than loading 64-bit. Any thoughts? Thanks!

    Output of uname -r:

        administrator@Root2:~$ uname -r
        3.2.0-24-generic-pae

  • Visual Studio 2010 Pro Power Tools Screencast

    - by Steve Michelotti
    Microsoft just released the Visual Studio 2010 Pro Power Tools extension and it is awesome. A summary of all the features can be found here and it is available in the Visual Studio Gallery here. There are a bunch of great features but, in my opinion, the best one is the replacement for the Add Reference dialog. This gives sub-string search capabilities as well as the ability to add multiple references without having to continually re-open the dialog. For this feature alone, you should install the Pro Power Tools right now. There are a few blog posts that do a good job describing all the features, but what I wanted to do here was post a quick screencast (7 minutes) that shows the features I think are really cool. I show most (but not all) of the features, focusing on the ones I think are the best. The features I cover are:
    - Installation with the Extension Manager
    - Add Reference dialog replacement
    - Tab Well, including pinned tabs, pinned tabs in a second row, fixed close button, colorized tabs, dirty indicator
    - Highlight current line
    - Triple-click for full-line selection
    - Ctrl + Click for Go To Definition
    - Colorized Parameter Help
    Enjoy! (Right-click and Zoom to view in full screen)

  • What will ECMAScript 6 bring to the table for us?

    - by user697296
    Our company ported moderate chunks of business logic to JavaScript. We compile the code with a minifier, which further improves performance. Since the language is dynamically typed, it lends itself well to obfuscation, which occurs as a byproduct of minification. We went to great efforts to ensure it positively screams, performance-wise. We can now do what we did before, faster, better, with less code, on more platforms. In summary, we are very satisfied with the current state of the language. I personally love the language especially for its cross-platform nature. So naturally, I read up a lot about the state of JavaScript compilers, performance and compatibility across as many browsers and platforms as I have time to research. The one theme which has been growing louder and louder these days, is the news about ECMAScript 6. So far, what I have been able to gather is that ES6 promises a better development experience; firstly by enabling new ways to do things, secondly by reporting errors early. This sounds great for those who are still waiting for the language to meet their needs before jumping on board. But we have already jumped on board in a big way. Sure, I expect that we will have to do ongoing maintenance and feature revisions on our code through the years, and that we would obviously make use of best practices at the time. But I don't see us refactoring major portions of it to take advantage of language features that are mostly intended to boost developer productivity. I keep wondering, what impact will the language advances ultimately have on our existing, well-written, well-performing code base? Is there something I am missing? Is there something we ought to look out for? Does anyone have tips or guidance on how we should approach the ecmascript.next finalization? Should we care?
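
    For context, a small JavaScript sketch of the kind of additions that have been discussed for ECMAScript 6 / "Harmony" (illustrative only; the final feature set was still being decided at the time):

        // Block-scoped bindings instead of function-scoped var
        let counter = 0;
        const LIMIT = 10;

        // Arrow functions with shorter syntax and lexical `this`
        const doubled = [1, 2, 3].map(n => n * 2);

        // Classes as syntactic sugar over the existing prototype model
        class Report {
          constructor(rows) { this.rows = rows; }
          total() { return this.rows.reduce((sum, r) => sum + r.amount, 0); }
        }

        // Default parameters and template strings
        function greet(name = 'world') {
          return `hello, ${name}`;
        }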

  • How can a WiX custom action DLL be made to use the debug runtime via a merge module?

    - by Benj
    I'm trying to create a debug build with a corresponding debug installer for our product. I'm new to Wix so please forgive any naivety contained herein. The debug Dlls in my project are dependent on both the VS2008 and the VS2008SP1 debug runtimes. I've created a merge module feature in wix to bundle those runtimes with my installer. <Include xmlns="http://schemas.microsoft.com/wix/2006/wi"> <!-- Include our 'variables' file --> <!--<?include variables.wxi ?>--> <!--<Fragment>--> <DirectoryRef Id="TARGETDIR"> <!-- Always install the 32 bit ATL/CRT libraries, but only install the 64 bit ones on a 64 bit build --> <Merge Id="AtlFiles_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_ATL_x86.msm" DiskId="1" Language="1033"/> <Merge Id="AtlPolicy_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_ATL_x86.msm" DiskId="1" Language="1033"/> <Merge Id="CrtFiles_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugCRT_x86.msm" DiskId="1" Language="1033"/> <Merge Id="CrtPolicy_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugCRT_x86.msm" DiskId="1" Language="1033"/> <Merge Id="MfcFiles_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugMFC_x86.msm" DiskId="1" Language="1033"/> <Merge Id="MfcPolicy_x86" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugMFC_x86.msm" DiskId="1" Language="1033"/> <!-- If this is a 64 bit build, install the relevant modules --> <?if $(env.Platform) = "x64" ?> <Merge Id="AtlFiles_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_ATL_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="AtlPolicy_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_ATL_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="CrtFiles_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugCRT_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="CrtPolicy_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugCRT_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="MfcFiles_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\Microsoft_VC90_DebugMFC_x86_x64.msm" DiskId="1" Language="1033"/> <Merge Id="MfcPolicy_x64" SourceFile="$(env.CommonProgramFiles)\Merge Modules\policy_9_0_Microsoft_VC90_DebugMFC_x86_x64.msm" DiskId="1" Language="1033"/> <?endif?> </DirectoryRef> <Feature Id="MS2008_SP1_DbgRuntime" Title="VC2008 Debug Runtimes" AllowAdvertise="no" Display="hidden" Level="1"> <!-- 32 bit libraries --> <MergeRef Id="AtlFiles_x86"/> <MergeRef Id="AtlPolicy_x86"/> <MergeRef Id="CrtFiles_x86"/> <MergeRef Id="CrtPolicy_x86"/> <MergeRef Id="MfcFiles_x86"/> <MergeRef Id="MfcPolicy_x86"/> <!-- 64 bit libraries --> <?if $(env.Platform) = "x64" ?> <MergeRef Id="AtlFiles_x64"/> <MergeRef Id="AtlPolicy_x64"/> <MergeRef Id="CrtFiles_x64"/> <MergeRef Id="CrtPolicy_x64"/> <MergeRef Id="MfcFiles_x64"/> <MergeRef Id="MfcPolicy_x64"/> <?endif?> </Feature> <!--</Fragment>--> </Include> If I'm doing a debug build of the installer, I include that feature like so: <!-- The 'Feature' that contains the debug CRT/ATL libraries --> <?if $(var.Configuration) = "Debug"?> <?include ..\includes\MS2008_SP1_DbgRuntime.wxi?> <?endif?> The only problem is that my installer also includes a custom action which is also dependent on the debug runtime: <!-- Private key installer --> <Binary Id="InstallPrivateKey" 
SourceFile="..\InstallPrivateKey\win32\$(var.Configuration)\InstallPrivateKey.dll"></Binary> <CustomAction Id='InstallKey' BinaryKey='InstallPrivateKey' DllEntry='InstallPrivateKey'/> So how can I package the debug run time in such a way that the custom action also has access to it?

  • The Inkremental Architect's Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspxThe driver of software development are increments, small increments, tiny increments. With an increment being a slice of the overall requirement scope thin enough to implement and get feedback from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1] To make such high frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what´s supposed to get implemented until tomorrow evening at latest is one side of the medal. The other is where to put the logic in all of the code base. To implement an increment, only logic statements are needed. Functionality like Quality are just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That´s all is needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It´s just about the right expressions and conditional executions paths plus some memory allocation. Automatic function inlining of compilers which makes it clear how unimportant functions are for delivering value to users at runtime. But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory. That said, functions are important. They are even the pivotal element of software development. We can´t code without them - whether you write a function yourself or not. Because there´s always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:puts "Hello, world!" In C# more is necessary:class Program { public static void Main () { System.Console.Write("Hello, world!"); } } C# makes the Entry Point function explicit, not so Ruby. But still it´s there. So you can think of logic always running in some function. Which brings me back to increments: In order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note a user story on the Scrum or Kanban board. But developers need an idea of what that sticky note means in term of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on. All´s well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation. That´s why most code katas define exactly how the API of a solution should look like. It´s a function, maybe two or three, not more. A requirement like “Write a function f which takes this as parameters and produces such and such output by doing x” makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider. 
Even a single function not only must deliver on Functionality, but also on Quality and Evolvability. Nevertheless, once it´s clear which function to put logic in, you have a tangible starting point. So, yes, what I´m suggesting is to find a single function to put all the logic in that´s necessary to deliver on a the requirements of an increment. Or to put it the other way around: Slice requirements in a way that each increment´s logic can be located under the roof of a single function. Entry points Of course, the logic of a software will always be spread across many, many functions. But there´s always an Entry Point. That´s the most important function for each increment, because that´s the root to put integration or even acceptance tests on. A batch program like the above hello-world application only has a single Entry Point. All logic is reached from there, regardless how deep it´s nested in classes. But a program with a user interface like this has at least two Entry Points: One is the main function called upon startup. The other is the button click event handler for “Show my score”. But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected; because then some logic could check if the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is close; because then the choices made should be persisted. You see, an Entry Point to me is a function which gets triggered by the user of a software. With batch programs that´s the main function. With GUI programs on the desktop that´s event handlers. With web programs that´s handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: Slice them in a way so that each increment is related to only one Entry Point function.[2] Entry Points are the “outer functions” of a program. That´s where the environment triggers behavior. That´s where hardware meets software. Entry points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3] Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter. Collections of batch processors I´d thus say, we haven´t moved forward since the early days of software development. We´re still writing batch programs. Forget about “event-driven programming” with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program, today it´s hundreds we bundle up into applications. Each batch processor is represented by an Entry Point as its root that works on a number of resources from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together many batch processors - large and small - form applications the user perceives as a single whole: Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-) Features Each batch processor entered through an Entry Point delivers value to the user. It´s an increment. Sometimes its logic is trivial, sometimes it´s very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility. 
At the same time it´s a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That´s where user stories meet code. In this example the user story translates to the Entry Point triggered by clicking the login button on a dialog like this: The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog. This is all very simple, but you see, there is not just one thing happening, but several. Get input (email address, password) Load user for email address If user not found report error Check password Hash password Compare hash to hash stored in user Show next dialog Viewed from 10,000 feet it´s all done by the Entry Point function. And of course that´s technically possible. It´s just a bunch of logic and calling a couple of API functions. However, I suggest to take these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features. Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g. First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4. Then hard code a single user and check the email address. Features 2 and 2.1. Then check password without hashing it (or use a very simple hash like the length of the password). Features 3. and 3.2 Replace hard coded user with a persistent user directoy, but a very simple one, e.g. a CSV file. Refinement of feature 2. Calculate the real hash for the password. Feature 3.1. Switch to the final user directory technology. Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you´re in doubt whether you can implement the whole entry point function until tomorrow night, then just go for a couple of features or even just one. That´s also why I think, you should strive for wrapping feature logic into a function of its own. It´s a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it´s clear where their logic is located. And finally, of course, it lets you re-use features in different context (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, to let the product owner check them for acceptance individually. Increments consist of features, entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which map to a hierarchy of functions - with entry points at the top.   I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility to move forward in increments is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible. 
Software production is moving forward through requirements one increment at a time, and one function at a time. In closing Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible, they need to be connected. The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that´s functions. Yes, functions, not classes nor components nor micro services. We´re talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That´s the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners functions are the appropriate glue. Functions which represent increments. Can there always be such a small increment be found to deliver until tomorrow evening? I boldly say yes. Yes, it´s always possible. But maybe you´ve to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a software. Product owners need to become comfortable using test beds for certain features. If it´s hard to slice requirements thin enough for Spinning the reason is too little knowledge of something. Maybe you don´t yet understand the problem domain well enough? Maybe you don´t yet feel comfortable with some tool or technology? Then it´s time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman officially become a researcher. Research an check back with the product owner every day - until your understanding has grown to a level where you are able to define the next Spinning increment. ? Sometimes even thin requirement slices will cover several Entry Points, like “Add validation of email addresses to all relevant dialogs.” Validation then will it put into a dozen functons. Still, though, it´s important to determine which Entry Points exactly get affected. That´s much easier, if strive for keeping the number of Entry Points per increment to 1. ? If you like call Entry Point functions event handlers, because that´s what they are. They all handle events of some kind, whether that´s palpable in your code or note. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also - for then event “program started”. ?
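
    To make the entry point / feature function split above concrete, here is a minimal C# sketch (the Login scenario is taken from the article, but all names - LoginProcessor, IUserDirectory, the SHA-256 hash - are illustrative choices, not the author's code):

        public enum LoginResult { Success, UnknownUser, WrongPassword }

        public class User { public string Email; public string PasswordHash; }

        public interface IUserDirectory { User FindByEmail(string email); }

        public class LoginProcessor
        {
            private readonly IUserDirectory users;
            public LoginProcessor(IUserDirectory users) { this.users = users; }

            // Entry Point: triggered by the environment, e.g. the login button's
            // click handler. It only orchestrates; each aspect of the requirement
            // ("Feature") lives in a function of its own.
            public LoginResult Login(string email, string password)
            {
                var user = LoadUser(email);                       // Feature 2
                if (user == null) return LoginResult.UnknownUser; // Feature 2.1
                if (!CheckPassword(user, password))               // Feature 3
                    return LoginResult.WrongPassword;
                return LoginResult.Success;                       // Feature 4
            }

            private User LoadUser(string email)
            {
                return users.FindByEmail(email);
            }

            private bool CheckPassword(User user, string password)
            {
                return Hash(password) == user.PasswordHash;       // Features 3.1/3.2
            }

            private string Hash(string password)
            {
                using (var sha = System.Security.Cryptography.SHA256.Create())
                    return System.Convert.ToBase64String(
                        sha.ComputeHash(System.Text.Encoding.UTF8.GetBytes(password)));
            }
        }

    Each private method is then a candidate for an independent increment that a product owner can check on its own.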

  • Entity Framework vs. NHibernate for Performance, Learning Curve and Overall Features

    - by hadi
    I know this has been asked several times and I have read all the posts as well, but they are all very old. And considering there have been advancements in versions and releases, I am hoping there might be fresh views. We are building a new application on ASP.NET MVC and need to finalize on an ORM tool. We have never used an ORM before and have pretty much boiled it down to two - NHibernate and Entity Framework. I really need some advice from someone who has used both these tools and can recommend one based on experience. There are three points that I am focusing on to finalize: performance, learning curve, and overall capability. Your advice will be highly appreciated. Best Regards,
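
    Not an answer either way, but to give a feel for the two APIs, here is roughly the same query written against both (a sketch that assumes a mapped Customer entity, an EF 4.x DbContext called ShopContext, and an already-built NHibernate ISessionFactory):

        // Entity Framework (DbContext API; needs using System.Linq)
        using (var db = new ShopContext())
        {
            var locals = db.Customers
                           .Where(c => c.City == "Rome")
                           .ToList();
        }

        // NHibernate (Query<T>() comes from the NHibernate.Linq namespace)
        using (var session = sessionFactory.OpenSession())
        {
            var locals = session.Query<Customer>()
                                .Where(c => c.City == "Rome")
                                .ToList();
        }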

  • Regular Expressions Reference Tables Updated

    - by Jan Goyvaerts
    The regular expressions reference on the Regular-Expressions.info website was completely overhauled with the big update of that site last month. In the past, the reference section consisted of two parts. One part was a summary of the regex features commonly found in Perl-style regex flavors with short descriptions and examples. This part of the reference ignored differences between regex flavors and omitted most features that don’t have wide support. The other part was a regular expression flavor comparison that listed many more regex features along with YES/no indicators for many regex flavors, but without any explanations of the features. When reworking the site, I wanted to make the reference section more detailed, with descriptions and examples of all the syntax supported by the flavors discussed on the site. Doing that resulted in a reference that lists many features that are only supported by a few regex flavors. For such a reference to be usable, it needs to indicate which flavors support each feature. My original design for the new reference table used two rows for each feature. The first row had 4 columns with a label, syntax, description, and example, similar to the old reference tables. The second row had 20 columns indicating which versions of which flavors support these features. While the double-row design allowed all the information to fit within the table without requiring horizontal scrolling, it made it more difficult to quickly scan the tables for the feature you’re looking for. To make the new reference tables easier to read, they now have only a single row for each feature. The first 4 columns are the same as before. The remaining two columns show which versions of two regular expression flavors support the feature. You can use the drop-down lists above the table to choose the flavors the table should indicate. The site uses cookies to allow the flavor choices to persist while you navigate the reference. The result of this latest update is that the new regex tables are now just as easy to read as the ten-year-old tables on the old site were, while still covering all the features big and small of all the flavors discussed on the site.

  • Looking for a good semantic parser for the Russian language.

    - by Gregory Gelfond
    Does anyone know of a semantic parser for the Russian language? I've attempted to configure the link-parser available from the link-grammar site, but to no avail. I'm hoping for a system that can run on the Mac and generate either a Prolog- or Lisp-like representation of the parse tree (but XML output is fine as well). Thank you kindly in advance, Gregory Gelfond

  • What is the best scripting language to embed in a C# desktop application?

    - by Ewan Makepeace
    We are writing a complex rich desktop application and need to offer flexibility in reporting formats, so we thought we would just expose our object model to a scripting language. Time was when that meant VBA (which is still an option), but the managed-code derivative VSTA (I think) seems to have withered on the vine. What is now the best choice for an embedded scripting language on Windows .NET?
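
    One commonly cited option (an illustration, not necessarily the best choice) is hosting IronPython through the DLR hosting API, which makes it easy to expose an existing object model to scripts. A rough C# sketch, where the "output" variable convention is purely hypothetical:

        // References: IronPython and Microsoft.Scripting assemblies
        using IronPython.Hosting;
        using Microsoft.Scripting.Hosting;

        public static class ReportScripting
        {
            public static string Run(string script, object reportModel)
            {
                ScriptEngine engine = Python.CreateEngine();
                ScriptScope scope = engine.CreateScope();

                // Expose the application's object model to the script.
                scope.SetVariable("report", reportModel);

                // The script formats the report however it likes and hands
                // back a string in a variable named "output" (sketch convention).
                engine.Execute(script, scope);
                return scope.GetVariable<string>("output");
            }
        }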

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of exiting code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, that they understand that z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. 
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offers specific advice, and not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that let's you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows: % ld -Dargs -z guidance=foo,nodefs debug: debug: Solaris Linkers: 5.11-1.2275 debug: debug: arg[1] option=-D: option-argument: args debug: arg[2] option=-z: option-argument: guidance=foo,nodefs debug: warning: unrecognized -z guidance item: foo The -z fatal-warning option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms: -z fatal-warnings | nofatal-warnings --fatal-warnings | --no-fatal-warnings The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows: -z guidance[=item1,item2,...] Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows. Specify Required Dependencies Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs. Do Not Specify Non-Required Dependencies Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused. Lazy Loading Dependencies should be identified for lazy loading. 
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload. Direct Bindings Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect. Pure Text Segment Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext. Mapfile Syntax All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile. Library Search Path Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects. Example The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: Does not specify all it's dependencies Specifies dependencies it does not use Does not use direct bindings Uses a version 1 mapfile Contains relocations to the readonly allocable text (not PIC) This scenario is sadly very common — many shared objects have one or more of these issues. % cat hello.c #include <stdio.h> #include <unistd.h> void hello(void) { printf("hello user %d\n", getpid()); } % cat mapfile.v1 # This version 1 mapfile will trigger a guidance message % cc hello.c -o hello.so -G -M mapfile.v1 -lelf As you can see, the operation completes without error, resulting in a usable object. 
However, turning on guidance reveals a number of things that could be better: % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance ld: guidance: version 2 mapfile syntax recommended: mapfile.v1 ld: guidance: -z lazyload option recommended before first dependency ld: guidance: -B direct or -z direct option recommended before first dependency Undefined first referenced symbol in file getpid hello.o (symbol belongs to implicit dependency /lib/libc.so.1) printf hello.o (symbol belongs to implicit dependency /lib/libc.so.1) ld: warning: symbol referencing errors ld: guidance: -z defs option recommended for shared objects ld: guidance: removal of unused dependency recommended: libelf.so.1 warning: Text relocation remains referenced against symbol offset in file .rodata1 (section) 0xa hello.o getpid 0x4 hello.o printf 0xf hello.o ld: guidance: position independent (PIC) code recommended for shared objects ld: guidance: see ld(1) -z guidance for more information Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things: % cat mapfile.v2 # This version 2 mapfile will not trigger a guidance message $mapfile_version 2 % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance ld: guidance: -B direct or -z direct option recommended before first dependency ld: guidance: see ld(1) -z guidance for more information It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect Conclusions The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will replay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

  • Intelligent web features, algorithms (people you may follow, similar to you ...)

    - by hilal
    I have 3 main questions about the algorithms in the intelligent web (web 2.0). Here is the book I'm reading - http://www.amazon.com/Algorithms-Intelligent-Web-Haralambos-Marmanis/dp/1933988665 - and I want to learn the algorithms in more depth.

    1. People you may follow (Twitter): How can one determine the nearest result to my requests? Data mining? Which algorithms?

    2. "How you're connected" feature (LinkedIn): The algorithm simply seems to work like this: it draws the path between two nodes, say between me and another person C: Me - A, B - A's connections - C. It is not brute force or any other fancy graph algorithm :)

    3. Similar to you (Twitter, Facebook): This algorithm is similar to 1. Does it simply take the max(count) of friends in common (Facebook) or the max(count) of followers in common (Twitter), or do they implement some other algorithm? I think the second part is true, because running the loop dict{count, person} for person in contacts: dict.add(count(common(person))) return dict(max) is a silly thing to do on every page refresh.

    4. Did you mean (Google): I know that they may implement it with a phonetic algorithm (http://en.wikipedia.org/wiki/Phonetic_algorithm), simply Soundex (http://en.wikipedia.org/wiki/Soundex), and here is the Google VP of Engineering and CIO Douglas Merrill's talk: http://www.youtube.com/watch?v=syKY8CrHkck#t=22m03s

    What about the first 3 questions? Any ideas are welcome! Thanks
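
    As an illustration of the simplest 'friends in common' heuristic mentioned in question 3, a small Python sketch (toy data; real systems combine many more signals and precompute the results rather than running this on every page refresh):

        from collections import Counter

        def people_you_may_know(me, friends_of):
            """Rank non-friends by how many friends we have in common.

            friends_of: dict mapping a person to the set of their friends.
            """
            mine = friends_of[me]
            scores = Counter()
            for friend in mine:
                for candidate in friends_of.get(friend, set()):
                    if candidate != me and candidate not in mine:
                        scores[candidate] += 1  # one more mutual friend
            return scores.most_common()

        friends_of = {
            "me":    {"ann", "bob"},
            "ann":   {"me", "bob", "carol"},
            "bob":   {"me", "ann", "carol", "dave"},
            "carol": {"ann", "bob"},
            "dave":  {"bob"},
        }
        print(people_you_may_know("me", friends_of))  # [('carol', 2), ('dave', 1)]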

  • Why no more macro languages?

    - by Muhammad Alkarouri
    In this answer to a previous question of mine about scripting languages suitability as shells, DigitalRoss identifies the difference between the macro languages and the "parsed typed" languages in terms of string treatment as the main reason that scripting languages are not suitable for shell purposes. Macro languages include nroff and m4 for example. What are the design decisions (or compromises) needed to create a macro programming language? And why are most of the mainstream languages parsed rather than macro? This very similar question (and the accepted answer) covers fairly well why the parsed typed languages, take C for example, suffer from the use of macros. I believe my question here covers different grounds: Macro languages or those working on a textual level are not wholly failures. Arguably, they include bash, Tcl and other shell languages. And they work in a specific niche such as shells as explained in my links above. Even m4 had a fairly long time of success, and some of the web template languages can be regarded as macro languages. It is quite possible that macros and parsed typing do not go well together and that is why macros "break" common languages. In the answer to the linked question, a macro like #define TWO 1+1 would have been covered by the common rules of the language rather than conflicting with those of the host language. And issues like "macros are not typed" and "code doesn't compile" are not relevant in the context of a language designed as untyped and interpreted with little concern for efficiency. The question about the design decisions needed to create a macro language pertain to a hobby project which I am currently working on on designing a new shell. Taking the previous question in context would clarify the difference between adding macros to a parsed language and my objective. I hope the clarification shows that the question linked doesn't cover this question, which is two parts: If I want to create a macro language (for a shell or a web template, for example), what limitations and compromises (and guidelines, if exist) need to be done? (Probably answerable by a link or reference) Why have no macro languages succeed in becoming mainstream except in particular niches? What makes typed languages successful in large programming, while "stringly-typed" languages succeed in shells and one-liner like environments?
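
    For readers who haven't seen the classic pitfall referenced above (#define TWO 1+1), a tiny C example: textual expansion ignores the host language's expression grammar, which is exactly the kind of rule clash the question is about.

        #include <stdio.h>

        #define TWO 1+1          /* plain textual substitution     */
        #define SAFE_TWO (1+1)   /* parentheses restore the intent */

        int main(void)
        {
            printf("%d\n", TWO * 3);      /* expands to 1+1*3   -> prints 4 */
            printf("%d\n", SAFE_TWO * 3); /* expands to (1+1)*3 -> prints 6 */
            return 0;
        }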

  • How to write programs that use advanced OpenType features?

    - by Sorush Rabiee
    How could I write a simple program using OpenType tables in order to dynamically render text? Please answer in assembly, C, C++, C#, Java or Python (and a little WPF :-)), or suggest libraries for them. Comments and answers about the text rendering systems of common operating systems, or about designing text engines compatible with the Unicode 5.02 standard, are welcome.
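
    One way to use OpenType features without parsing the tables yourself is a shaping library. A rough C sketch using HarfBuzz (the library choice, font file name and "liga" feature are illustrative assumptions, and a reasonably recent HarfBuzz is assumed for hb_blob_create_from_file):

        #include <stdio.h>
        #include <hb.h>

        int main(void)
        {
            /* Load a font file and wrap it as a HarfBuzz face/font. */
            hb_blob_t *blob = hb_blob_create_from_file("MyFont.otf"); /* hypothetical file */
            hb_face_t *face = hb_face_create(blob, 0);
            hb_font_t *font = hb_font_create(face);

            /* Put some text in a buffer and let HarfBuzz guess script/direction. */
            hb_buffer_t *buf = hb_buffer_create();
            hb_buffer_add_utf8(buf, "office", -1, 0, -1);
            hb_buffer_guess_segment_properties(buf);

            /* Explicitly request an OpenType feature, e.g. standard ligatures. */
            hb_feature_t liga;
            hb_feature_from_string("liga", -1, &liga);
            hb_shape(font, buf, &liga, 1);

            /* The result is a list of glyph ids plus positions, ready to rasterize. */
            unsigned int count;
            hb_glyph_info_t *info = hb_buffer_get_glyph_infos(buf, &count);
            for (unsigned int i = 0; i < count; i++)
                printf("glyph %u\n", info[i].codepoint);

            hb_buffer_destroy(buf);
            hb_font_destroy(font);
            hb_face_destroy(face);
            hb_blob_destroy(blob);
            return 0;
        }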
