Search Results

Search found 4549 results on 182 pages for 'quickly'.


  • Set Custom Reload Times for Individual Webpages in Chrome

    - by Asian Angel
    Do you have a webpage that needs to be reloaded every so often, or perhaps you have multiple webpages that each need their own individual reload time? Now you can have the best of both with the AutoReloader extension for Google Chrome.

    Using AutoReloader
    When you first look at the drop-down window everything will be in a neutral “waiting” state. You can start using the extension immediately by simply entering the desired “time frame” for reloading a webpage. Notice for the “Repeat Option” that “0 = Continuous”… You may want to have a quick look through the “Options” to see if there are any “operational changes” that you would like to make. Once you enter a time, click on the “Set” link to start the timer. Notice that you can view the time remaining on the “Toolbar Button” unless you disabled that feature in the “Options”. Clicking on the “Toolbar Button” will show a larger version of the timer in the drop-down window along with a “Cancel Current Timer” link. Here is the best part of all with AutoReloader…you can set up your own customized list of “Reload Times” and then access them through the drop-down window. Using the two times shown here we were able to set the “Productive Geek Webpage” up for 30-second reloads and the “TinyHacker Webpage” up for 1-minute reloads at the same time. There was no conflict whatsoever in running both “reload times” simultaneously. This is a really terrific feature!

    Conclusion
    Whether you have only one webpage or multiple pages that need periodic reloading (such as tracking a Woot-Off or an eBay auction), the AutoReloader extension is the perfect tool for the job. Running custom reload times simultaneously has never been easier.

    Links
    Download the AutoReloader extension (Google Chrome Extensions)

    Read the article

  • Introducing the New Face of Fusion Applications

    - by mvaughan
    By Misha Vaughan and Kathy Miedema, Oracle Applications User Experience

    At OpenWorld 2012, the Oracle Applications User Experience (UX) team unveiled the new face of Fusion Applications. You may have seen it in sessions presented by Chris Leone, Anthony Lye, Jeremy Ashley, or others, or you may have gotten a look on the demogrounds. This screenshot shows the new Oracle Fusion Applications entry experience.

    Why are we delivering a new face for Fusion Applications? Because, says Ashley, the vice president of the Oracle Applications User Experience team, we want to provide a simple, modern, productive way for users to complete their top quick-entry tasks. The idea is to provide a clear, productive user experience that is backed by the full functionality of Fusion Applications. The first release of the new face of Fusion focuses on three types of users. It provides a fully functional gateway to Fusion Applications for: new and casual users who need quick access to self-service tasks; professional users who need fast access to quick-entry, high-volume tasks; and users who are looking for a way to quickly brand their portal for employees. The new face of Fusion allows users to move easily from navigation to action, Ashley said, and it has been designed for any device -- Mac, PC, iPad, Android, SmartBoard -- in the browser.

    The Oracle Fusion Applications Employee Directory.

    How did we build it? The new face of Fusion essentially is a custom shell, developed by the Apps UX team, and a set of page templates that embodies a simple design aesthetic. It’s repeatable, providing consistency across its pages, and requires little to no training. More specifically, the new face of Fusion has been built on ADF. The Applications UX team created pages in JDeveloper using local task flows bound to existing view objects. Three new components were commissioned from ADF, and existing Fusion components were re-skinned to deliver a simple, modern user experience. It really is that simple – and to prove that point, we’ve been sharing our story around the new face of Fusion on several Oracle channels such as this one.

    Want to know more? Check the VoX blog for our favorite highlights from OpenWorld, which included demos of the new face of Fusion. And take a look at these posts from Ace Directors Debra Lilley and Floyd Teter. Special mention to Floyd for the first screenshot credit. Also a nod to Wilfred vander Deijl for capturing the demo to share as part 1 and part 2. We will also be hitting upcoming user group conferences with our demos, and you can always reach out to one of our Fusion User Experience Advocates for a look.

    Read the article

  • Oracle VM Templates for EBS 12.1.3 for Exalogic Now Available

    - by Elke Phelps (Oracle Development)
    Oracle VM Templates for Oracle E-Business Suite 12.1.3 for the x86 Exalogic Platform (64 bit) are now available on the Oracle Software Delivery Cloud.  The templates contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server. You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system and software install (via the EBS Rapid Install).  The Oracle E-Business Suite Release 12.1.3 (64 bit) template for the Exalogic platform is an Oracle Virtual Server Guest template that contains a complete Oracle E-Business Suite Release 12.1.3 Database Tier and Application Tier installation.  For additional details, please refer to the following My Oracle Support Note: Oracle E-Business Suite Release 12.1.3 Database Tier and Application Tier Template for Oracle Exalogic Platform (Note 1499132.1).

    The Oracle E-Business Suite system is installed on top of Oracle Linux Version 5 Update 6. The templates have been optimized for performance, including OS kernel settings and E-Business Suite configuration settings tuned specifically for the Exalogic platform.  The configuration delivered with this template for a mid-tier running on Exalogic will support hundreds of concurrent users.  Please refer to Section 2: Performance Analysis in My Oracle Support Note 1499132.1 for additional details.

    Additional Information
    The Oracle E-Business Suite VM templates for the Exalogic platform contain the following software versions:
    Operating System: Oracle Linux Version 5 Update 6
    Oracle E-Business Suite 12.1.3 (Database Tier)
    Oracle E-Business Suite 12.1.3 (Application Tier)
    The following considerations were made when the Oracle E-Business Suite VM template for the Exalogic platform was designed:
    Templates use the hardware-virtualized architecture, supporting hardware with virtualization features.
    The Database Tier Template is configured with 16 GB RAM, 4 VCPUs, and 250 GB of disk space for the application installation.
    The Application Tier Template is configured with 16 GB RAM, 4 VCPUs, and 50 GB of disk space for the application installation.

    References
    Oracle E-Business Suite Release 12.1.3 Database Tier and Application Tier Template for Oracle Exalogic Platform (Note 1499132.1)

    Related Articles
    Part 1: E-Business Suite 12.1.1 Templates for Oracle VM Now Available
    Part 2: Using Oracle VM with Oracle E-Business Suite Virtualization Kit
    Part 3: On Clouds and Virtualization in EBS Environments (OpenWorld 2009 Recap)
    Part 4: Deploying E-Business Suite on Amazon Web Services Elastic Compute Cloud
    Part 5: Live Migration of EBS Services Using Oracle VM
    Support Policies for Virtualization Technologies and Oracle E-Business Suite
    Virtualization and the E-Business Suite, Redux
    Virtualization and E-Business Suite

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

    -z stub
    This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

    STUB_OBJECT Mapfile Directive
    In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

    ASSERT Mapfile Directive
    All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub object, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of them. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly, and you want to automatically ensure that this will always be the case.

    The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case and supplying a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

    Stub Objects
    A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object while containing no code or data. Stub objects cannot be used at runtime. However, an application can be linked against a stub object, which supplies the real object name to be used at runtime; the application then uses the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly.

    Stub objects can be used to solve a variety of build problems:

    Speed
    Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
    Complexity/Correctness
    In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

    Dependency Cycles
    It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

    Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded.

    Stub objects are built from a mapfile, which must satisfy the following requirements:

    The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces.

    All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.

    All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.

    Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object.

    To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

    % cat idx5.c
    int _idx5[5] = { 0, 1, 2, 3, 4 };

    #pragma weak idx5 = _idx5

    int
    idx5_func(int index)
    {
            if ((index < 0) || (index > 4))
                    return (-1);
            return (_idx5[index]);
    }

    A mapfile is required to describe the interface provided by this shared object.

    % cat mapfile
    $mapfile_version 2
    STUB_OBJECT;
    SYMBOL_SCOPE {
            _idx5 {
                    ASSERT { TYPE=data; SIZE=4[5] };
            };
            idx5 {
                    ASSERT { BINDING=weak; ALIAS=_idx5 };
            };
            idx5_func;
        local:
            *;
    };

    The following main program is used to print all the index values available from the idx5 shared object.
    % cat main.c
    #include <stdio.h>

    extern int _idx5[5], idx5[5], idx5_func(int);

    int
    main(int argc, char **argv)
    {
            int i;

            for (i = 0; i < 5; i++)
                    (void) printf("[%d] %d %d %d\n",
                        i, _idx5[i], idx5[i], idx5_func(i));
            return (0);
    }

    The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
    % ln -s libidx5.so.1 stublib/libidx5.so
    % elfdump -d stublib/libidx5.so | grep STUB
    [11] FLAGS_1 0x4000000 [ STUB ]

    The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

    % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
    % ./a.out
    ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
    Killed
    % LD_PRELOAD=stublib/libidx5.so.1 ./a.out
    ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
    Killed

    We build the real object using the same command as we used to build the stub, omitting the -z stub option and writing the results to a different file.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

    Once the real object has been built in the lib subdirectory, the program can be run.

    % ./a.out
    [0] 0 0 0
    [1] 1 1 1
    [2] 2 2 2
    [3] 3 3 3
    [4] 4 4 4

    Mapfile Changes
    The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

    Conditional Input
    The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32-bit and 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

    Name        Meaning
    _ET_DYN     shared object
    _ET_EXEC    executable object
    _ET_REL     relocatable object

    STUB_OBJECT Directive
    The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

    STUB_OBJECT;

    A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
    When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents.

    All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.

    All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced.

    These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface.

    SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute
    The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows:

    ASSERT {
            ALIAS = symbol_name;
            BINDING = symbol_binding;
            TYPE = symbol_type;
            SH_ATTR = section_attributes;

            SIZE = size_value;
            SIZE = size_value[count];
    };

    The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created.

    In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object.

    ASSERT accepts the following:

    ALIAS
    Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive with the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol.

    BINDING
    Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK).

    TYPE
    Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
    TYPE is mutually exclusive with ALIAS, and cannot be used in conjunction with it.

    SH_ATTR
    Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

    Section Attribute    Meaning
    BITS                 Section is not of type SHT_NOBITS
    NOBITS               Section is of type SHT_NOBITS

    SH_ATTR is mutually exclusive with ALIAS, and cannot be used in conjunction with it.

    SIZE
    Specifies the expected symbol size. SIZE is mutually exclusive with ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

    SIZE
    The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

    $if _ELF32
            foo { ASSERT { TYPE=data; SIZE=4 } };
            bar { ASSERT { TYPE=data; SIZE=20 } };
    $elif _ELF64
            foo { ASSERT { TYPE=data; SIZE=8 } };
            bar { ASSERT { TYPE=data; SIZE=40 } };
    $else
    $error UNKNOWN ELFCLASS
    $endif

    I found that this situation occurs frequently enough that this approach is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax:

    The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input.

    The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as:

    foo { ASSERT { TYPE=data; SIZE=addrsize } };
    bar { ASSERT { TYPE=data; SIZE=addrsize[5] } };

    Exported Global Data Is Still A Bad Idea
    As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

    Conclusions
    It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then use the same link command you're currently using, with the addition of the -z stub option (a minimal sketch of such a mapfile follows below). Happy Stubbing!
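
    A minimal sketch of such a function-only mapfile, with invented symbol names standing in for a real library's exports:

    $mapfile_version 2
    STUB_OBJECT;
    SYMBOL_SCOPE {
            # Exported functions need no ASSERT attributes;
            # only exported data does. my_func1/my_func2 are
            # hypothetical names for illustration.
            my_func1;
            my_func2;
        local:
            # Scope reduction: everything not listed above is hidden.
            *;
    };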

    Read the article

  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP .NET, Windows Forms, WPF and Silverlight, plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities.

    Unified XAML Product Strategy: Share Code, Get More Controls
    In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line of business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications.

    Track Users' Behaviors
    New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze users' behaviors to ensure the experience you want to deliver.

    NetAdvantage for Windows Forms: New Office® 2010 Ribbon and Application Menu 2010
    Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as the Microsoft® Office 2010 ribbon and enhanced Infragistics.Excel to export the contents of the high performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment.

    Create Faster Web 2.0 Experiences with NetAdvantage for ASP .NET
    Infragistics continues to push the envelope to deliver the fastest ASP .NET WebForms controls available on the market. Our lightning fast ASP .NET grids are now enhanced with XPS/PDF Exporting and Summary Rows. This release also includes support for jQuery Templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls, allowing you to quickly cut down overall page size.

    Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience
    NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end user experiences in Business Intelligence applications. Both suites include the Pivot Grid that delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from the data.

    Mapping Made Easy
    10.3 marks the official release of the WPF Data Visualization xamMap™ control to map anything and everything from geographic to geo-spatial mapping data. Map layers allow you to add successive levels of detail, navigational panes for panning in all directions, color swatch panes that facilitate value scales like choropleth shading, and scale panes allowing users to zoom in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs.
http://www.infragistics.com

    Read the article

  • More details on America's Cup use of Oracle Data Mining

    - by charlie.berger
    BMW Oracle Racing's America's Cup: A Victory for Database Technology
    BMW Oracle Racing's victory in the 33rd America's Cup yacht race in February showcased the crew's extraordinary sailing expertise. But to hear them talk, the real stars weren't actually human. "The story of this race is in the technology," says Ian Burns, design coordinator for BMW Oracle Racing.

    Gathering and Mining Sailing Data
    From the drag-resistant hull to its 23-story wing sail, the BMW Oracle USA trimaran is a technological marvel. But to learn to sail it well, the crew needed to review enormous amounts of reliable data every time they took the boat for a test run. Burns and his team collected performance data from 250 sensors throughout the trimaran at the rate of 10 times per second. An hour of sailing alone generates 90 million data points. BMW Oracle Racing turned to Oracle Data Mining in Oracle Database 11g to extract maximum value from the data. Burns and his team reviewed and shared raw data with crew members daily using a Web application built in Oracle Application Express (Oracle APEX). "Someone would say, 'Wouldn't it be great if we could look at some new combination of numbers?' We could quickly build an Oracle Application Express application and share the information during the same meeting," says Burns.

    Analyzing Wind and Other Environmental Conditions
    Burns then streamed the data to the Oracle Austin Data Center, where a dedicated team tackled deeper analysis. Because the data was collected in an Oracle Database, the Data Center team could dive straight into the analytics problems without having to do any extract, transform, and load processes or data conversion. And the many advanced data mining algorithms in Oracle Data Mining allowed the analytics team to build vital performance analytics. For example, the technology team could remove masking elements such as environmental conditions to give accurate data on the best mast rotation for certain wind conditions. Without the data mining, Burns says the boat wouldn't have run as fast. "The design of the boat was important, but once you've got it designed, the whole race is down to how the guys can use it," he says. "With Oracle database technology we could compare the incremental improvements in our performance from the first day of sailing to the very last day. With data mining we could check data against the things we saw, and we could find things that weren't otherwise easily observable and findable."

    Read the article

  • Be Careful When Referencing SPList.Items

    - by Brian Jackett
    Be very careful how you reference your SPListItem objects through the SharePoint API.  I’ll say it again.  Be very careful how you reference your SPListItem objects through the SharePoint API.  Ok, now that you get the point that this will be a “learn from my mistakes and don’t do unsmart things like I did” post, let’s dig into what it was that I did poorly.

    Scenario
    For the past year I’ve been building custom .Net applications that are hosted through SharePoint.  These applications involve a number of SharePoint lists, external databases, custom web parts, and other SharePoint elements to provide functionality.  About two weeks ago I received a message from one of our end users that a custom application was performing slowly.  Specifically, performance was slow when users were performing actions that interacted with the primary SharePoint list storing data for that app.

    The Problem
    I took a copy of the production site into a dev environment to investigate the code that was executing.  After attaching the debugger and running through the code I quickly found pieces of code referencing SPListItem objects (like below) that were performing very poorly:

    SPListItem myItem = SPContext.Current.Web.Lists["List Name"].Items.GetItemById(value);
    // do updates on SPListItem retrieved

    As it turns out the SPList I was referencing was fairly large, at ~1000 items and weighing in at over 150 MB.  You see, the problem with my above code is that I retrieved the SPListItem by first (unnecessarily) going through the Items member of the list.  As I understand it, when doing so the executing code will attempt to resolve that entity and pull it from the database and into RAM (all 150 MB.)  This causes the equivalent of a 50-car pile-up in terms of performance, with a single update taking more than 15 seconds.

    The Solution
    The solution is actually quite simple and I wish I had realized this during development.  Instead of going through the Items member it is possible to call GetItemById(…) directly on the SPList as in the example below:

    SPListItem myItem = SPContext.Current.Web.Lists["List Name"].GetItemById(value);
    // do updates on SPListItem retrieved

    After making this simple change performance skyrocketed and updates were back to less than a second.

    Conclusion
    When given the option between two solutions, usually the simplest is the best solution.  In my scenario I was adding extra complexity going through the API the long way around to get to the objects I needed and it ended up hurting performance greatly.  Luckily we were able to find and resolve the performance issue in a relatively short amount of time.  Like I said at the beginning of the post, learn from my mistakes and hope it helps you (a related sketch for multi-item queries follows below).

    -Frog Out
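
    The same principle applies when you need more than one item: rather than enumerating SPList.Items, a scoped SPQuery pulls only the rows you ask for. A minimal sketch (the list name and the "Status" field are hypothetical, not from the application above):

    SPList list = SPContext.Current.Web.Lists["List Name"];
    SPQuery query = new SPQuery();
    // The CAML filter is evaluated server-side, so the full list
    // is never materialized into RAM.
    query.Query = "<Where><Eq><FieldRef Name='Status'/><Value Type='Text'>Open</Value></Eq></Where>";
    query.RowLimit = 100; // pull only what you need
    SPListItemCollection items = list.GetItems(query);
    foreach (SPListItem item in items)
    {
        // do updates on each SPListItem retrieved
    }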

    Read the article

  • SQL SERVER – Solution – 2 T-SQL Puzzles – Display Star and Shortest Code to Display 1

    - by pinaldave
    Earlier on this blog we had asked two puzzles. The response from all of you is nothing but Amazing. I have received 350+ responses. Many are valid and many were indeed something I had not thought about. I strongly suggest you read all the puzzles and their answers here - trust me, if you start reading the comments you will not stop till you read every single comment. Seriously, trust me on it. Personally I have learned a lot from it. Let us recap the puzzles here quickly.

    Puzzle 1: Why does the following code, when executed in SSMS, display the result as a * (star)?

    SELECT CAST(634 AS VARCHAR(2))

    Puzzle 2: Write the shortest code that produces the result 1 without using any numbers in the select statement.

    Bonus Q: How many different Operating Systems (OS) does NuoDB support?

    As I mentioned earlier, the participation was nothing but Amazing. I will write about the winners and the best answers in a short time. Meanwhile I will give to-the-point answers to the above puzzles.

    Solution 1: When you convert character or binary expressions (char, nchar, nvarchar, varchar, binary, or varbinary) to an expression of a different data type, data can be truncated, only partially displayed, or an error is returned because the result is too short to display. Conversions to char, varchar, nchar, nvarchar, binary, and varbinary are truncated, except for the conversions shown in the following table. Reference of the text and table from MSDN.

    Solution 2: The shortest code to produce the answer 1:

    SELECT EXP($) or SELECT COS($) or SELECT DAY($)

    When you SELECT $ it gives us the result as 0.00, and the EXP of the same is 1. I believe it is pretty neat (a runnable recap appears at the end of this post). There were plenty of other answers but this was the shortest. Another shorter answer would be PRINT EXP($), but no one proposed that, as the original question explicitly mentioned SELECT.

    Bonus Answer: 5 OS: Windows, MacOS, Linux, Solaris, Joyent SmartOS. Reference.

    Please do read every single comment here. Do leave a comment on which one you think is the best comment out of all the comments. Meanwhile if there is a better solution and I have missed it do let me know, as we still have time to correct it. I will be selecting the winner before the weekend as I am going through each and every one of the 350 comments. I will be selecting the best comments along with the winning comment. If our selection matches – one of you may still win something cool.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB
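
    A runnable recap of both solutions (the comments show the expected output when run in SSMS):

    -- Puzzle 1: 634 needs three characters but VARCHAR(2) holds only two,
    -- so the int-to-character conversion displays * instead of an error.
    SELECT CAST(634 AS VARCHAR(2));  -- returns *

    -- Puzzle 2: $ is the money literal 0.00, and EXP(0) = 1.
    SELECT EXP($);  -- returns 1
    SELECT COS($);  -- returns 1 (COS(0) = 1)
    SELECT DAY($);  -- returns 1 ($ implicitly converts to the base date 1900-01-01)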

    Read the article

  • Oracle Open World starts on Sunday, Sept 30

    - by Mike Dietrich
    Oracle Open World 2012 starts on Sunday this week - and we are really looking forward to seeing you in one of our presentations, especially the Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets on Monday, Oct 1, 12:15pm in Moscone South 307 (just skip the lunch - the boxed food is not healthy at all):

    Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone South - 307
    Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets
    Mike Dietrich - Consulting Member Technical Staff, Oracle
    Georg Winkens - Technical Manager, Amadeus Data Processing
    Carol Tagliaferri - Senior Development Manager, Oracle

    Looking to improve the performance of your database upgrade and learn about other ways to reduce upgrade time? Isn’t everyone? In this session, you will learn directly from Oracle’s Upgrade Development team about what you can do to speed things up. Find out about ways to reduce upgrade downtime such as using a transient logical standby database and/or Oracle GoldenGate, and get other hints and tips. Learn about new features that improve upgrade performance and reduce downtime. Hear Georg Winkens, DB Services technical manager from Amadeus, speak about his upgrade experience, and get real-life performance measurements and advice for a successful upgrade.

    And don't forget: we already start on Sunday, so if you'd like to learn about the SAP database upgrades at Deutsche Messe:

    Sunday, Sep 30, 11:15 AM - 12:00 PM - Moscone West - 2001
    Oracle Database Upgrade to 11g Release 2 with SAP Applications
    Andreas Ellerhoff - DBA, Deutsche Messe AG
    Mike Dietrich - Consulting Member Technical Staff, Oracle
    Jan Klokkers - Sr. Director SAP Development, Oracle

    Deutsche Messe began to use Oracle6 Database at the end of the 1980s and has been using Oracle Database technology together with SAP applications successfully since 2002. At the end of 2010, it took the first steps of an upgrade to Oracle Database 11g Release 2 (11.2), and since mid-2011, all SAP production systems there run successfully with Oracle Database 11g. This presentation explains why Deutsche Messe uses Oracle Database together with SAP applications, discusses the many reasons for the upgrade to Release 11g, and focuses on the top operational aspects from a DBA perspective.

    And unfortunately the Hands-On Lab is sold out already ... We would like to apologize, but we have absolutely ZERO influence on either the number of runs or the number of available seats.

    Tuesday, Oct 2, 10:15 AM - 12:45 PM - Marriott Marquis - Salon 12/13
    Hands-On Lab: Upgrading an Oracle Database Instance, Using Best Practices
    Roy Swonger - Senior Director, Software Development, Oracle
    Carol Tagliaferri - Senior Development Manager, Oracle
    Mike Dietrich - Consulting Member Technical Staff, Oracle
    Cindy Lim - PMTS, Oracle
    Carol Palmer - Principal Product Manager, Oracle

    This hands-on lab gives participants the opportunity to work through a database upgrade from an older release of Oracle Database to the very latest Oracle Database release available. Participants will learn how the improved automation of the upgrade process and the generation of fix-up scripts can quickly help fix database issues prior to upgrading. The lab also uses the new parallel upgrade feature to improve performance of the upgrade, resulting in less downtime. Come get inside information about database upgrades from the Database Upgrade development team.

    See you soon

    Read the article

  • Subscribe to RSS Feeds in Chrome with a Single Click

    - by Asian Angel
    Do you have a Google Reader account and need a quick, simple way to subscribe to new RSS feeds while you browse? Then you will definitely want to have a look at the Chrome Reader extension for Chrome.

    Before
    If you want to add a new feed to your Google Reader account in Chrome, then you have to do it manually. A single feed now and then is not a problem, but if you want to build a serious set of RSS feeds quickly, then not so good.

    Chrome Reader in Action
    Once the extension is installed you are ready to go. Any time that you visit a webpage with an RSS feed available, you will see the familiar orange feed icon appear in your “Address Bar”. To add the feed to your Google Reader account just click on the orange feed icon. Note: You will need to be logged into your Google Reader account in your browser. When you click on the orange feed icon a small drop-down window will appear where you can modify the feed name and/or add it to a “custom folder” if desired. Notice that the orange feed icon has changed to the familiar Google Reader icon, indicating that the feed has been added to the account. Now you are ready to continue browsing…no other actions are required. And now to subscribe to the Microsoft feed at Ars Technica. Once again, a single click and all done. Refreshing our Google Reader page shows both of our new RSS feeds ready to enjoy.

    Conclusion
    The Chrome Reader extension makes it as simple as can be to add new RSS feeds to your Google Reader account while browsing with Chrome.

    Links
    Download the Chrome Reader extension (Google Chrome Extensions)

    Read the article

  • Database design and performance impact

    - by Craige
    I have a database design issue that I'm not quite sure how to approach, nor whether the benefits outweigh the costs. I'm hoping some P.SE members can give some feedback on my suggested design, as well as any similar experiences they may have come across. As it goes, I am building an application that has large reporting demands. Speed is an important issue, as there will be peak usages throughout the year.

    This application/database has a multiple-level, many-to-many relationship, e.g.:

    object a
    object b
    object c
    object d

    object b has relationship to object a
    object c has relationship to objects b, a
    object d has relationship to objects c, b, a

    Theoretically, this could go on for unlimited levels, though logic dictates it could only go so far. My idea here, to speed up reporting, would be to create a syndicate table that acts as a global many-to-many join table. In this table (with the given example), one might see:

    +----------+-----------+---------+
    | child_id | parent_id | type_id |
    +----------+-----------+---------+
    | b        | a         | 1       |
    | c        | b         | 2       |
    | c        | a         | 3       |
    | d        | c         | 4       |
    | d        | b         | 5       |
    | d        | a         | 6       |
    +----------+-----------+---------+

    Where a, b, c and d would translate to their respective IDs in their respective tables. So, for ease of reporting all of a which exist on object d, one could query

    SELECT * FROM `syndicates` ... JOINS TO child and parent tables ... WHERE parent_id=a and type_id=6;

    rather than having a query with a join to each level up the chain.

    The Problem
    This table grows exponentially, and in a given year could easily grow past 20,000 records for one client. Given multiple clients over multiple years, this table will VERY quickly explode to millions of records and beyond. Now, the database will, in time, be partitioned across multiple servers, but I would like (as most would) to keep the number of servers as low as possible while still offering flexibility. Also, writes and updates would be exponentially longer (though possibly not noticeable to the end user) as there would be multiple inserts/updates/scans on this table to keep it in sync.

    Am I going in the right direction here, or am I way off track? What would you do in a similar situation? This solution seems overly complex, but allows the greatest flexibility and fastest read-operations.

    Sidenote 1 - This structure allows me to add new levels to the tree easily.
    Sidenote 2 - The database querying for this database is done through an ORM framework.
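
    For reference, a minimal sketch of how the syndicate table itself might be declared; the column names follow the example above, while the types and secondary index are assumptions:

    -- Hypothetical DDL for the syndicate (closure-style) join table.
    CREATE TABLE syndicates (
        child_id  INT NOT NULL,
        parent_id INT NOT NULL,
        type_id   INT NOT NULL,                    -- encodes the relationship pairing
        PRIMARY KEY (child_id, parent_id),
        KEY idx_parent_type (parent_id, type_id)   -- supports the reporting query
    );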

    Read the article

  • Speed Up the Help Dialog in Windows and Office

    - by Matthew Guay
    When you click help, you don’t want to wait for your computer to bring it to you.  Here’s how you can speed up the help dialog in Windows and Office.

    If you have a slow internet connection, chances are you’ve been frustrated by the Help dialog in Windows and Office trying to download fresh content every time you open it. This can be great if the updated help files contain better content, but sometimes you just want to find what you were looking for without waiting.  Here’s how you can turn off the automatic online help.

    Use Local Help in Windows
    Windows 7 and Vista’s help dialog usually tries to load the latest content from the net, but this can take a long time on slow connections. If you’re seeing the above screen a lot, you may want to switch to offline help.  Click the “Online Help” button at the bottom, and select “Get offline Help”. Now your computer will just load the pre-installed help files.  And don’t worry; if there’s a major update to your help files, Windows will download and install it through Windows Update.

    Stupid Geek Tip: An easy way to open Windows Help is to click on your desktop or Start Menu and press F1 on your keyboard.

    Use Local Help in Office
    This same trick works in Office 2007 and 2010.  We’ve actually had more problems with Office’s help being tardy. Solve this the same way as with Windows help.  Click on the “Connected to Office.com” or “Connected to Office Online” button, depending on your version of Office, and select “Show content only from this computer”. This will automatically change the settings for Help in all of your Office applications.

    While this may not be a major trick, it can be helpful, especially if you have a slow internet connection and want to get things done quickly.

    Read the article

  • Listen to Local FM Radio in Windows 7 Media Center

    - by DigitalGeekery
    If you have a supported tuner card and a connected FM antenna, you can listen to your favorite local over-the-air FM stations in Windows 7 Media Center.

    Before the FM radio option will be available in Windows Media Center, you’ll need to have a TV or radio tuner card installed and configured. If you have a TV tuner card installed, you may already have a radio tuner as well; many TV tuner cards also have built-in FM tuners.

    Open Windows Media Center, scroll to “Music” and over to “Radio,” then click on “FM Radio.” The radio will turn on and you’ll see the current station number listed in the white box. Just below are standard “Seek” and “Tune” buttons, as well as “Preset” options. Tuning works just like a typical FM radio. Click on the (-) or (+) buttons to “Tune” or “Seek” up and down the dial. If you already know the frequency of the station, enter the numbers using the numeric keypad on the remote control or keyboard. To save the current station you’re listening to as a preset, click on the “Save as Preset” button. Type in a custom name for your preset station and click “Save.” Once you set your presets, they will also be available on the main FM Radio screen. The transport controls at the bottom of the screen also allow you to control Volume, Pause, Play, Skip back, and Skip forward. Fast Forward and Rewind, however, are not supported.

    This is a nice option if you’d like to listen to your local FM favorites on your computer, especially if those stations aren’t available online. If you don’t have an FM tuner and want to listen to thousands of online radio streams, check out our article on RadioTime in WMC.

    Read the article

  • Oracle VM networking under the hood and 3 new templates

    - by Chris Kawalek
    We have a few cool things to tell you about:

    First up: have you ever wondered what happens behind the scenes in the network when you Live Migrate your Oracle VM server workload? Or how Oracle VM implements the network infrastructure you configure through your point & click action in the GUI? Really…how do they do this? For an in-depth view of the Oracle VM for x86 networking model, Look ‘Under the Hood’ at Networking in Oracle VM Server for x86 with our best practices engineer in a blog post on OTN Garage.

    Next, making things simple in Oracle VM is what we strive every day to deliver to our user community. With that, we are pleased to bring you updates on three new Oracle application templates:

    E-Business Suite 12.1.3 for Oracle Exalogic
    Oracle VM templates for Oracle E-Business Suite 12.1.3 (x86 64-bit for Oracle Exalogic Elastic Cloud) contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server. You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system and the software install (via the EBS Rapid Install). For further details, please review the announcement.

    JD Edwards EnterpriseOne 9.1 and JD Edwards EnterpriseOne Tools 9.1.2.1 for x86 servers and Oracle Exalogic
    The Oracle VM Templates for JD Edwards EnterpriseOne provide a method to rapidly install JD Edwards EnterpriseOne 9.1 and Tools 9.1.2.1. The complete stack includes Oracle Database 11g R2 and Oracle WebLogic Server 10.3.5 running on Oracle Linux 5. The templates can be installed to Oracle VM Server for x86 release 3.x and to the Oracle Exalogic Elastic Cloud.

    PeopleSoft PeopleTools 8.5.2.10 for Oracle Exalogic
    This virtual deployment package delivers a “quick start” of the PeopleSoft middle-tier template on Oracle Linux for the Oracle Exalogic Elastic Cloud.

    And last, are you wondering why we talk about “fast” and “rapid” when we refer to using Oracle VM templates to virtualize Oracle applications? Read the Evaluator Group Lab Validation report quantifying speeds of deployment up to 10x faster than with VMware vSphere. Or you can also check out our on-demand webcast Quantifying the Value of Application-Driven Virtualization.

    Read the article

  • Is the science of Computer Science dead?

    - by Veaviticus
    Question: Is the science and art of CS dead? By that I mean, the real requirements to think, plan and efficiently solve problems seem to be falling away from CS these days. The field seems to be lowering the entry barrier so more people can 'program' without having to learn how to truly program.

    Background: I'm a recent graduate with a BS in Computer Science. I'm working a starting position at a decent sized company in the IT department. I mostly do .NET and other Microsoft technologies at my job, but before this I've done Java stuff through internships and the like. I personally am a C++ programmer for my own for-fun projects.

    In Depth: Through the work I've been doing, it seems to me that the intense disciplines of a real science don't exist in CS anymore. In the past, programmers had to solve problems efficiently in order for systems to be robust and quick. But now, with the prevailing technologies like .NET, Java and scripting languages, it seems like efficiency and robustness have been traded for ease of development. Most of the colleagues that I work with don't even have degrees in Computer Science. Most graduated with Electrical Engineering degrees, a few with Software Engineering, even some who came from tech schools without a 4-year program. Yet they get by just fine without having the technical background of CS, without having studied theories and algorithms, and without having any regard for making an elegant solution (they just go for the easiest, cheapest solution). The company pushes us to use Microsoft technologies, which take all the real thought out of the matter and replace it with libraries and tools that can auto-build your project for you half the time. I'm not trying to hate on the languages; I understand that they serve a purpose and do it well, but when your employees don't know how a hash table works, use the wrong sorting methods, or run SQL commands that are horribly inefficient (but get the job done in an acceptable time), it feels like more effort is being put into developing technologies that coddle new 'programmers' rather than actually teaching people how to do things right. I am interested in making efficient and, in my opinion, beautiful programs. If there is a better way to do it, I'd rather go back and refactor it than let it slide. But in the corporate world, they push me to complete tasks quickly rather than elegantly. And that really bugs me. Is this what I'm going to be looking forward to the rest of my life? Are there still positions out there for people who love the science and art of CS rather than just the paycheck?

    And on the same note, here's a good read if you haven't seen it before: The Perils of Java Schools.

    Read the article

  • Appropriate response when a client empowered with a CMS destroys content at will

    - by dukeofgaming
    So, I just recently closed a website project that pretty much was The Oatmeal's Design Hell, but with content. The client loved the site at the beginning, but then started getting other people involved and mercilessly bombarding us with their opinions. We delivered a carefully thought-out content strategy (which the client approved) and extremely curated copywriting that took us four months, through at least 5 requirement changes (new content, new objectives for the business, changed offerings, new mindfaps, etc.) that required us to rewrite the content about 3 times. The client never gave timely feedback, even though we kept the process open for him and his people to see (content being developed transparently in Google Docs). Near the end of the project he still wanted to make changes but wanted us to finish already (there are not enough words in the world to even try to make sense of this). So I explained to him the obvious implications of the never-ending requirement changes and advised him to take the time to gather his thoughts with his own team and introduce the new content as a separate content-maintenance project. He happily accepted, but on the day of training/delivery things went very wrong, and we have no idea why. The client didn't even allow the site to be up for a week with the content we developed for him; he quickly replaced us with a Joomla-savvy intern who completely destroyed the content with shallow, unstructured, tasteless, plain wordsmithing (and I'm not even being visceral). Worst insult of all, he revoked our access to his server and the deployed CMS less than 10 minutes after being given his administrator account (we realized the day after that he did it from our own office; the nerve!). Everybody involved in the team is enraged and insulted. I never want to see this happen again. So, to try to make sense of this situation and avoid it in the future with new clients, I have two concrete questions: Is there even an appropriate course of action with a client like this, or is he just not worth the trouble of analyzing (blindly hoping this never repeats)? And, in an exercise in blaming ourselves instead of the client and taking this as a lesson of... something, how should we set expectations with new clients about the working terms, process and final product, so that they are discouraged from mauling the content at will once they get the codes to the nukes (access to the CMS)?

    Read the article

  • SQL SERVER – CSVExpress and Quick Data Load

    - by pinaldave
    One of the newest ETL tools is CSVexpress.com. This is a program that can quickly load any CSV file into an ODBC-compliant database using data integration. For those of you familiar with databases and how they operate, the question that comes to mind might be what use this program will have in your life. I have written an earlier article on this subject: SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress. You might know that RDBMSs have support for loading CSV files into tables – but it is not quite as easy as one click of a button. First of all, most databases have a command-line interface, and you need the file and a configuration script in order to load it up. You also need to know enough to write the script – which for novices can be extremely daunting. On top of all this, if you work with more than one type of RDBMS, you need to know the ins and outs of uploading and writing scripts for more than one program. So you might begin to see how useful CSVexpress.com might be! There are many other tools that enable uploading files to a database. They can be very fancy – some can generate configuration files automatically, others load the data directly. Again, novices will be able to tell you why these aren't the most useful programs in the world. You see, these programs were created with SQL in mind, not for uploading data. If you don't have large amounts of data to upload, getting the configuration right can be a long process, and you will have to check the generated code yourself. Not exactly "easy to use" for novices. That makes CSVexpress.com one of the best new tools available for everyone – but especially for people who don't want to learn a lot of new material all at once. CSVexpress has an easy-to-navigate graphical user interface, and no scripting or coding is required. There are built-in constraints and data validations, and you can configure transforms and reject records right there on the screen. But the best thing of all – it's free! That's right, you can download CSVexpress for free from www.csvexpress.com and start easily uploading and configuring files almost immediately. If you're currently happy with your method of data configuration, keep up the good work. For the rest of us, there's CSVexpress.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
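    To see why a point-and-click loader is attractive, consider what the hand-rolled alternative looks like. The following C# sketch loads a CSV file with plain ADO.NET; the connection string, table and columns are invented for illustration, and a real loader would still need quoting, type conversion and error handling - exactly the plumbing a tool like CSVexpress hides from you.

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class CsvLoader
        {
            static void Main()
            {
                // Hypothetical connection string and schema.
                const string connStr = @"Server=.\SQLEXPRESS;Database=Demo;Integrated Security=true";
                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    foreach (string line in File.ReadLines("customers.csv"))
                    {
                        // Naive split: breaks on quoted commas - one of many details a tool handles for you.
                        string[] fields = line.Split(',');
                        using (var cmd = new SqlCommand(
                            "INSERT INTO Customers (FirstName, LastName) VALUES (@fn, @ln)", conn))
                        {
                            cmd.Parameters.AddWithValue("@fn", fields[0]);
                            cmd.Parameters.AddWithValue("@ln", fields[1]);
                            cmd.ExecuteNonQuery();
                        }
                    }
                }
            }
        }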

    Read the article

  • Visual Studio 2010 RC – Silverlight 4 and WCF RIA Services Development - Updates from MIX Anno

    - by Harish Ranganathan
    MIX is happening and there is a lot of excitement around the various releases, such as the Windows Phone 7 Developer Preview, the IE9 Platform Preview and a few other announcements that have been made. Clearly, the Windows Phone 7 Developer Preview has generated the maximum interest and opened a plethora of opportunities for .NET developers. It also takes mobile development to a new generation and doesn't force developers to learn a different programming language. Along with this, a few other releases are out. The much-anticipated Silverlight 4 RC is out, and its corresponding templates are out there for you to download. When VS 2010 RC was released, it was quite a disappointment that it supported neither SL4 development nor SL4 Business Application Development (a.k.a. WCF RIA Services). There were a few workarounds, though nothing concrete. Earlier I had written about how the WCF RIA Services Preview does work with ASP.NET development using VS 2010 RC. However, with the release of SL4 RC and the corresponding tooling updates, one can develop for both SL4 as well as SL4 + WCF RIA Services using VS 2010 RC. This is important and keeps the continuum going until VS 2010 RTMs. So, the purpose of this post is to quickly give the updates and links to install the relevant tools.

    Silverlight 4 RC Runtime: Windows Runtime or the Mac Runtime
    Silverlight 4 RC Developer Tools for Visual Studio 2010 RC: Silverlight 4 Tools for Visual Studio 2010 (this installs the Runtime as well, automatically)
    Expression Blend 4 Beta: Expression Blend 4 Beta

    If you install the SL4 RC Developer Tools, they also install the WCF RIA Services Preview automatically. You just need to install the WCF RIA Services Toolkit, which can be downloaded from Install the WCF RIA Services Toolkit. Of course, you can also install just the WCF RIA Services for VS 2010 RC separately (without the SL4 Tools) from here. Kindly note, all the above-mentioned links are with respect to the Visual Studio 2010 RC edition. If you are developing with VS 2008, you can just target SL3 (as I write this, there seems to be no official support for developing SL4 with VS 2008), and the related tools can be downloaded from http://www.silverlight.net/getstarted/. Basically you need to download the SL3 Runtime, SDK, Expression Blend 3 and the Silverlight Toolkit. All the links for the downloads are available on the above-mentioned page. Also, a version of WCF RIA Services that is supported in VS 2008 is available for download at WCF RIA Services Beta for VS 2008. I know there are far too many things to keep in mind. So, I put together a flowchart that could help depict it pictorially. Note that this is just my own imagination and doesn't cover all scenarios. For example, if you are developing for neither Web Forms nor Silverlight, you end up nowhere, whereas in an actual scenario you may want to develop Desktop, Services, Console, Game applications and what not. So, keep in mind this is just Web. Cheers !!!

    Read the article

  • Book Review (Book 10) - The Information: A History, a Theory, a Flood

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for March 2012 was: The Information: A History, a Theory, a Flood by James Gleick. I was traveling at the end of last month, so I'm a bit late posting this review here. Why I chose this book: My personal belief about computing is this: all computing technology is simply re-arranging data. We take data in, we manipulate it, and we send it back out. That's computing. I had heard from some folks about this book and its treatment of data. I heard that it dealt with the basics of data - and the semantics of data, information and so on. It also deals with the earliest history of information, which fascinates me. It's similar, I was told, to GEB, which is a favorite book of mine as well, so that was a bonus. Some folks I talked to liked it, some didn't - so I thought I would check it out. What I learned: I liked the book. It was longer than I thought - it took quite a while to read, even though I tend to read quickly. This is the kind of book you take your time with. It does in fact deal with the earliest forms of human interaction and the basics of data. I learned, for instance, that the genesis of the binary communication system lies in the invention of telegraph ("far-writing") codes, and that the earliest forms of communication were expensive. In fact, many ciphers were invented not to hide military secrets, but to compress information - a sort of early "lol-speak" to keep the cost of transmitting data low! I think the comparison with GEB is a bit over-reaching. GEB is far more specific, fanciful and so on. In fact, this book felt more like something from Richard Dawkins, and tended to wander around the subject quite a bit. I imagine the author doing his research and writing each chapter as a book that followed on from the last one. This is what possibly bothered those who tended not to like it, I think. Towards the middle of the book, the author became a bit too fragmented even for me. He began to delve into memes, biology and more - I think he might have been better off breaking that off into another work. The existentialism just seemed jarring. All in all, I liked the book. I recommend it to any technical professional, especially those involved with data technology. And isn't that all of us? :)

    Read the article

  • SQL SERVER – Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function

    - by pinaldave
    Earlier this week I asked a question about how to swap the values of a column without using a CASE statement. Read here: A Puzzle – Swap Value of Column Without Case Statement. There were more than 50 solutions proposed in the comments, many of them creative. I have mentioned my personal favorites (different ones) here: Solution of Puzzle – Swap Value of Column Without Case Statement. However, I received lots of questions regarding one of the solutions, by SIJIN KUMAR V P. He used the function SOUNDEX in his solution. The request was to explain how SOUNDEX and DIFFERENCE work. Well, there is pretty decent documentation for the SOUNDEX function and DIFFERENCE over on MSDN, and if I attempted to explain these functions I would end up writing the same details that are already available there. Instead of writing theory, we will try to learn the functions by using a couple of simple puzzles. Try to solve the puzzles using MSDN and see if you can learn something very quickly. In simple words - SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The first character of the code is the first character of character_expression, and the second through fourth characters of the code are numbers that represent the letters in the expression. Vowels in character_expression are ignored unless they are the first letter of the string. DIFFERENCE returns an integer value: the number of characters in the SOUNDEX values that are the same. The return value ranges from 0 through 4; 0 indicates weak or no similarity, and 4 indicates strong similarity or identical values. Learning Puzzle 1: Run the following four queries and observe the output.

    SELECT SOUNDEX('SQLAuthority') SdxValue
    SELECT SOUNDEX('SLTR') SdxValue
    SELECT SOUNDEX('SaLaTaRa') SdxValue
    SELECT SOUNDEX('SaLaTaRaM') SdxValue

    When you look at the result set, all four values are the same. The reason is that, to SQL Server's SOUNDEX function, all four strings sound alike. Learning Puzzle 2: Now run the following five queries and observe the output.

    SELECT DIFFERENCE (SOUNDEX('SLTR'),SOUNDEX('SQLAuthority'))
    SELECT DIFFERENCE (SOUNDEX('TH'),SOUNDEX('SQLAuthority'))
    SELECT DIFFERENCE ('SQLAuthority',SOUNDEX('SQLAuthority'))
    SELECT DIFFERENCE ('SLTR',SOUNDEX('SQLAuthority'))
    SELECT DIFFERENCE ('SLTR','SQLAuthority')

    When you look at the result set, you will get results in the range from 1 to 4. Here is how to read them: a result of 0 means the values are not at all similar, and the closer the result is to 4, the more similar the values are. Have you ever used the above two functions for a business need or on a production server? If yes, would you please leave a comment with your use cases? I believe it will be beneficial to everyone. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
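    If you are curious how that four-character code is built, here is a simplified C# sketch of the classic Soundex algorithm (SQL Server's SOUNDEX has additional rules, e.g. around H and W, so treat this as an approximation): keep the first letter, map the remaining consonants to digits, skip vowels, and collapse adjacent letters that share a digit - which is why 'SLTR', 'SaLaTaRa' and 'SQLAuthority' all encode to the same value.

        using System;
        using System.Text;

        static class SimpleSoundex
        {
            // Map a letter to its Soundex digit; vowels and h/w/y map to '0' (ignored).
            static char Code(char c)
            {
                switch (char.ToUpperInvariant(c))
                {
                    case 'B': case 'F': case 'P': case 'V': return '1';
                    case 'C': case 'G': case 'J': case 'K':
                    case 'Q': case 'S': case 'X': case 'Z': return '2';
                    case 'D': case 'T': return '3';
                    case 'L': return '4';
                    case 'M': case 'N': return '5';
                    case 'R': return '6';
                    default: return '0';
                }
            }

            public static string Encode(string input)
            {
                var sb = new StringBuilder();
                sb.Append(char.ToUpperInvariant(input[0]));
                char prev = Code(input[0]);
                for (int i = 1; i < input.Length && sb.Length < 4; i++)
                {
                    char d = Code(input[i]);
                    if (d != '0' && d != prev) sb.Append(d); // skip vowels and repeated digits
                    prev = d;
                }
                return sb.ToString().PadRight(4, '0');       // pad short codes with zeros
            }

            static void Main()
            {
                // All four print the same code, mirroring the SQL puzzle above.
                foreach (var s in new[] { "SQLAuthority", "SLTR", "SaLaTaRa", "SaLaTaRaM" })
                    Console.WriteLine(s + " -> " + Encode(s));
            }
        }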

    Read the article

  • Will my self-taught code be fine, or should I take it to the professional level?

    - by G1i1ch
    Lately I've been getting professional work, hanging out with other programmers, and making friends in the industry. The only thing is, I'm 100% self-taught. It's caused my style to deviate sharply from the style of those who are properly trained. It's the techniques and organization of my code that are different. It's a mixture of several things I do. I tend to blend several programming paradigms together, like functional and OO. I lean to the functional side more than OO, but I see the use of OO when something would make more sense as an abstract entity, like a game object. Next, I also go the simple route when doing something. In contrast, it sometimes seems like the code I see from professional programmers is complicated for the sake of it! I use lots of closures. And lastly, I'm not the best commenter. I find it easier just to read through my code than to read the comments, and in most cases I just end up reading the code even when there are comments. Plus I've been told that, because of how simply I write my code, it's very easy to read. I hear professionally trained programmers go on and on about things like unit tests - something I've never used, so I haven't the faintest idea what they are or how they work (see the sketch below). Lots and lots of underscores "_", which aren't really to my taste. Most of the techniques I use come straight from me, or from a few books I've read. I don't know anything about MVC, though I've heard a lot about it with things like backbone.js. I think it's a way to organize an application, but it confuses me because by now I've made my own organizational structures. It's a bit of a pain. I can't use template applications at all when learning something new, like with Ubuntu's Quickly. I have trouble understanding code that I can tell is from someone trained. Complete OO programming really leaves a bad taste in my mouth, yet that seems to be what EVERYONE else is strictly using. It's left me not that confident in the look of my code, and wondering whether I'll cause sparks when joining a company or contributing to open source projects. In fact, I'm rather scared of the fact that people will eventually be checking out my code. Is this just something normal every programmer goes through, or should I really look to change my techniques?
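    For anyone else in the same boat: a unit test is just a small piece of code that calls your code with known inputs and asserts on the outputs, so a runner can re-check the whole codebase after every change. A minimal sketch using the xUnit.net framework (the Add function is an invented example):

        using Xunit;

        public class Calculator
        {
            public static int Add(int a, int b) => a + b;
        }

        public class CalculatorTests
        {
            [Fact]  // marks this method as a test for the runner to execute
            public void Add_ReturnsSumOfTwoNumbers()
            {
                Assert.Equal(5, Calculator.Add(2, 3));  // fails if the result is not 5
            }
        }

    Tests like this are what let trained teams refactor aggressively - the same refactoring the poster says he enjoys - without fear of silently breaking things.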

    Read the article

  • Coding different states in Adventure Games

    - by Cardin
    I'm planning out an adventure game and can't figure out the right way to implement the behaviour of a level depending on the state of story progression. My single-player game features a huge world where the player has to interact with people in a town at various points in the game. However, depending on story progression, different things are presented to the player; e.g., the Guild Leader changes location from the town square to various locations around the city; doors only unlock at certain times of the day after a particular routine has finished; different cut-scene/trigger events happen only after a particular milestone has been reached. I naively thought of using a switch{} statement to decide what the NPC should say or where he can be found, and making quest objectives interactable only after checking a global game_state variable's condition. But I realised I would quickly run into a lot of different game states and switch cases in order to change the behaviour of an object. That switch statement would also be massively hard to debug, and I guess it might also be hard to use in a level editor. So I thought, instead of having a single object with multiple states, maybe I should have multiple instances of the same object, each with a single state. That way, if I use something like a level editor, I can put an instance of the NPC at every location he could ever appear at, and also an instance for each conversation state he has. But that means there will be a lot of inactive, invisible game objects floating around the level, which might be trouble for memory, or simply hard to see in a level editor, I don't know. Or, simply, make an identical but separate level for each game state. This feels like the cleanest and most bug-free way to do things, but it feels like massive manual work to make sure each version of the level really is identical to the others. All my methods feel inefficient, so to recap my question: is there a better or standardised way to implement the behaviour of a level depending on the state of story progression? PS: I don't have a level editor yet - I'm thinking of using something like the JME SDK or making my own.
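    One common middle ground between the giant switch and duplicated levels is to make state-dependent behaviour data-driven: each object carries an ordered list of (condition, data) rules evaluated against global story flags, so later story beats simply override earlier ones and a level editor only has to edit rows of data rather than code. A minimal C# sketch of the idea (all names are invented for illustration):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        enum StoryFlag { GuildJoined, TownAttacked }    // hypothetical story milestones

        class GameState
        {
            public HashSet<StoryFlag> Flags { get; } = new HashSet<StoryFlag>();
        }

        // One rule: a predicate on the game state plus the data that applies when it holds.
        class NpcRule
        {
            public Func<GameState, bool> Condition;
            public string Location;
            public string Dialogue;
        }

        class Npc
        {
            readonly List<NpcRule> rules = new List<NpcRule>();
            public void Add(NpcRule rule) => rules.Add(rule);

            // Last matching rule wins, so later story beats override earlier ones.
            public NpcRule Resolve(GameState state) => rules.LastOrDefault(r => r.Condition(state));
        }

        class Demo
        {
            static void Main()
            {
                var guildLeader = new Npc();
                guildLeader.Add(new NpcRule {
                    Condition = s => true,                                  // default behaviour
                    Location = "TownSquare",
                    Dialogue = "Looking for work, stranger?" });
                guildLeader.Add(new NpcRule {
                    Condition = s => s.Flags.Contains(StoryFlag.GuildJoined),
                    Location = "GuildHall",
                    Dialogue = "Welcome back, guild member." });

                var state = new GameState();
                state.Flags.Add(StoryFlag.GuildJoined);
                var active = guildLeader.Resolve(state);
                Console.WriteLine(active.Location + ": " + active.Dialogue);
            }
        }

    Because the rules are plain data, none of the 'inactive instances' exist as scene objects - only as rows a designer can inspect and edit.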

    Read the article

  • Favorite moments of JavaOne

    - by Tori Wieldt
    There are so many events and sessions to attend at JavaOne, it's unfair to ask people to choose just one thing they liked, but here are some favorite moments: I loved meeting many open source contributors and friends I had not met in person before, and seeing that projects like Hudson are alive and kicking and have a great future ahead of them. -Manfred Moser My "The Problem with Women" session. It had LOADS of interactivity from the audience, who really helped to make that session. I came out of it with a real sense of optimism - we love our jobs, we love what we do, and we should be proud of telling everyone about it to attract different talent into the industry. (Read her blog JavaOne: The Problem With Women - A Technical Approach for details.) -Trish Gee My kudos to Oracle for making the presentation materials quickly available to the public. Some of them were already available during JavaOne. Lots of slide decks are already there, and in some cases you may even find the video recordings too. Go to http://www.oracle.com/javaone and select JavaOne Technical Sessions. -Yakov Fain I loved that not only was James Gosling present at the Community Keynote (which felt more like the keynotes of old times [big space, big screens, fun and tech]) but he was also found wandering the halls of the Hilton the day prior. Bring back James! Add back the toys section in the Community Keynote. Let the t-shirt tossing begin anew. These are "small" things that really fire up the community. -Andres Almiray Seeing James Gosling at JavaOne was a real shot in the arm for Java. He needs to be there every year. -Frank Greco +42 on having James and the T-shirt tossing. -Stephan Janssen The session "Integrate Java with Robots, Home Automation, Musical Instruments, and Kinect." Fabiane Nardon explained connecting Jenkins, via jHome, to a truck horn placed in their sysadmin's bedroom. She dubbed it "extreme feedback." -Tori Wieldt The User Group Forum [on Sunday] was a success! Congratulations to Bruno Souza and John Yeary and everybody who was involved. I believe it really helps to increase community participation! There were lots of interesting talks and great discussions with JUG leaders and members. Thank you, Oracle, for supporting that! -Yara Senger What was your favorite moment? Please comment!

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial

    Microsoft recently released the first CTP of a new development environment called WebMatrix which, along with some of its supporting technologies, is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let's face it: ASP.NET development isn't exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it's not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft's attempt to bring back some of that simplicity with a number of technologies and tools. The key is a friendly, fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow easy publishing of content and databases to the web server. WebMatrix is made up of several components and technologies:

    IIS Developer Express
    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that's been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (i.e., things like PHP or classic ASP), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, providing much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0
    Database access is a key component of most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new - it's been around for a few years, but it's been severely hobbled in the past by terrible tool support and the inability to support more than a single connection (Microsoft's attempt to avoid losing SQL Server licensing). The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don't have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your BIN folder (or load it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully, in the future, downsize) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
    Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine
    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Here's a very simple example of what Razor markup looks like, along with some comment annotations:

        <!DOCTYPE html>
        <html>
            <head>
                <title></title>
            </head>
            <body>
            <h1>Razor Test</h1>

            <!-- simple expressions -->
            @DateTime.Now
            <hr />
            <!-- method expressions -->
            @DateTime.Now.ToString("T")

            <!-- code blocks -->
            @{
                List<string> names = new List<string>();
                names.Add("Rick");
                names.Add("Markus");
                names.Add("Claudio");
                names.Add("Kevin");
            }

            <!-- structured block statements -->
            <ul>
            @foreach(string name in names){
                <li>@name</li>
            }
            </ul>

            <!-- Conditional code -->
            @if(true) {
                <!-- Literal Text embedding in code -->
                <text>
                true
                </text>;
            }
            else
            {
                <!-- Literal Text embedding in code -->
                <text>
                false
                </text>;
            }
            </body>
        </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC. It's not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
    The syntax is easier to read and grok, and much shorter to type, than ASP.NET alligator tags (<% %>), and it's aesthetically easier to understand what's happening in the markup code. Razor also is a better fit for Microsoft's vision of ASP.NET MVC: it's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do with the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script- or meta-driven output generators for many types of applications, from code/screen generators to simple form letters to data-merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize its scripting engines (die T4, die!) on this engine. Razor also better fits the "view-based" approach, where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that doesn't work currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do it, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment
    To tie these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. Providing community-oriented content that can act as a starting point, be it via custom templates or complete standard applications, seems to be a prominent goal. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages from the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
    WebMatrix leverages the Web Platform Installer to pull the pieces down from websites, in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that's ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access
    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

        @{
            var db = Database.OpenFile("FirstApp.sdf");
            string sql = "select * from customers where Id > @0";
        }
        <ul>
        @foreach(var row in db.Query(sql,1)){
            <li>@row.FirstName @row.LastName</li>
        }
        </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters passed as simple values. The result is returned as a collection of rows, each of which is a row object with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter-passing schemes. Note these helpers use string-based queries rather than LINQ or Entity Framework's strongly typed LINQ queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than OR/M frameworks. The only question is why something like this wasn't included in .NET from the beginning, instead of making developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax - simple, static data-object methods that perform simple data tasks in one line of code - is common in scripting languages and is a good match for folks working in PHP, Python, etc. It seems Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models created outside of WebMatrix in separate assemblies, as required.

    The good, the bad, the obnoxious - it's still .NET
    The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
    In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a set of HTML Helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy - there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

    Who needs WebMatrix? Uhm... good question
    Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice, and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little, too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks - and other hardcore PHP developers - back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they're not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers, you are still going to have to write some C#, VB, or other .NET code. If the target is a hobbyist/amateur/non-programmer, the learning curve isn't made any easier by WebMatrix; it's just shifted a bit further along in your development endeavor, to the point where you run out of canned components supplied either by Microsoft or the community. The database helpers are interesting; I've actually heard a lot of developers who've been resisting .NET for a really long time perk up at the prospect of easier data access in .NET, compared with the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these, or pick them up from countless available libraries), but there it is. It also shows that there are plenty of developers out there who are more interested in 'getting stuff done' easily than in following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big, life-changing issues of development rather than the bread-and-butter scenarios that many developers are interested in to get their work accomplished.
    And that, in the end, may be WebMatrix's main raison d'être: to bring some focus back at Microsoft to the fact that simpler, higher-level solutions are needed to appeal to non-high-end developers, alongside the tools for high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it can really be a tool that either a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition to Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing - especially debugging and IntelliSense, but also general editor support. It's not clear whether these gaps exist because the product is still in an early alpha release, or whether it's simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, WebMatrix's technology pieces (which are really independent of the WebMatrix product) are what's interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had, and have complained about for some time, with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn't inconsistent due to the baggage of non-implemented Web Forms features that don't work in MVC. But I expect that Razor will eventually end up in many other applications as a scripting and code-generation engine. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC. For an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything you want. The tools provided are minimal and offer nothing you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little need to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling. It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without taking a step back to see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step - or several steps - back, take another look, and realize just how far we've come.
    WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7

    Read the article

  • Back-sliding into Unmanaged Code

    - by Laila
    It is difficult to write about Microsoft's ambivalence to .NET without mentioning clichés about dog food. In case you've been away a long time, you'll remember that Microsoft surprised everyone with the speed and energy with which it introduced and evangelised the .NET Framework for managed code. There was good reason for this. Once it became obvious to all that it had sleepwalked into third place as a provider of development languages, behind Borland and Sun, it reacted quickly to attract the best talent in the industry to produce a Windows version of the Java runtime, with bounds checking, automatic garbage collection, structured exception handling and common data types. To develop applications for this managed runtime, it produced several excellent languages, and more are being provided. The only thing Microsoft ever got wrong was to give it a stupid name. The logical step for Microsoft would be to base the entire operating system on the .NET Framework, and to re-engineer its own applications. In 2002, Bill Gates, then Microsoft Chairman and Chief Software Architect, said about their plans for .NET: "This is a long-term approach. These things don't happen overnight." Now, eight years later, we're still waiting for signs of the 'long-term approach'. Microsoft's vision of an entirely managed operating system has subsided since the Vista fiasco, but stays alive, dormant, as Midori, still being developed by Microsoft Research. This is an Internet-centric fork of the Singularity operating system, a research project started in 2003 to build a highly dependable operating system in which the kernel, device drivers, and applications are all written in managed code. Midori is predicated on the prevalence of connected systems, with provisions for distributed concurrency where application components exist 'in the cloud', and supports a programming model that can tolerate cancellation, intermittent connectivity and latency. It features an entirely new security model that sandboxes applications for increased security. So has Microsoft converted its existing applications to the .NET Framework? It seems not. What Windows applications can run on Mono? Very few, it seems. We all thought that .NET spelt the end of DLL Hell and the need for COM interop, but it looks as if Bill Gates' idea of 'not overnight' might stretch to a decade or more. The operating system has shown only minimal signs of migrating to .NET. Even where the use of .NET has come to dominate - in server applications with IIS - IIS itself is still entirely developed in unmanaged code. This is an irritation to Microsoft's greatest supporters, who committed themselves fully to the .NET Framework, only to find parts of the ambivalent Microsoft empire quietly backsliding into unmanaged code and the awful C++. It is a strategic mistake that the invigorated Apple didn't make with the Mac OS X architecture. Cheers, Laila

    Read the article

< Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >