Search Results

Search found 4237 results on 170 pages for 'delphi 2009'.

Page 7/170 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Video From PDC 2009

    Found this video this morning of me and Victor Gaudioso talking about UX and Silverlight: http://www.youtube.com/watch?v=hUqd3vJwRg4

    Read the article

  • BizTalk 2009 - Pipeline Component Wizard Error

    - by Stuart Brierley
    When attempting to run the BizTalk Server Pipeline Component Wizard for the first time I encountered an error that prevented the creation of the pipeline component project:

        System.ArgumentException : Value does not fall within the expected range

    I found the solution for this error in a couple of places: first a reference to the issue on the CodePlex project, and then a fuller description on a blog referring to the BizTalk 2006 implementation. To resolve the issue you need to change the downloaded wizard solution code in the file BizTalkPipelineComponentWizard.cs, around line 304.

    From:

        // get a handle to the project. using mySolution will not work.
        pipelineComponentProject = this._Application.Solution.Projects.Item(Path.Combine(Path.GetFileNameWithoutExtension(projectFileName), projectFileName));

    To:

        // get a handle to the project. using mySolution will not work.
        if (this._Application.Solution.Projects.Count == 1)
            pipelineComponentProject = this._Application.Solution.Projects.Item(1);
        else
        {
            // In doubt: which project is the target?
            pipelineComponentProject = this._Application.Solution.Projects.Item(projectFileName);
        }

    Following this you need to uninstall the wizard, rebuild the solution and re-install the wizard.

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation with Input Parameters

    - by Stuart Brierley
    As previously noted in my post on Schema Generation using the Community ODBC Adapter, I ran into a problem when trying to generate a schema to represent a MySQL stored procedure that had input parameters. After a bit of investigation and a few dead ends I managed to figure out a way around this issue - both the problem and the solution are detailed below in case you ever run into this yourself.

    The Problem

    Imagine a stored procedure that is coded as follows in MySQL:

        StuTest(in DStr varchar(80))
        BEGIN
          Declare GRNID int;
          Select grn_id into GRNID from grn_header where distribution_number = DStr;
          Select GRNID;
        END

    This is quite a simple stored procedure but it illustrates the issue with parameters quite nicely. When generating the schema using the Add Generated Items wizard, I tried selecting "Stored Procedure" and then typing the stored procedure name, StuTest, in the Statement Information window. Pressing Generate then gives the following error:

        "Incorrect Number of arguments for Procedure StuTest; expected 1, got 0"

    If you attempt to supply a value for the parameter you end up with a schema that will only ever supply the parameter value that you specify. For example, supplying StuTest('123') will always call the procedure with a parameter value of 123.

    The Solution

    I tried contacting Two Connect about this, but their experience of testing the adapter with MySQL was limited. After looking through the code for the ODBC adapter myself and trying a few things out, I was eventually able to use the ODBC adapter to call a test stored procedure using a two-way send port. In the generate schema wizard, instead of selecting Stored Procedure I had to choose SQL Script, detailing the following script:

        Call StuTest(@InputParameter)

    By default this creates a request schema with an attribute called InputParameter, with a SQL type of NVarChar(1). In most cases this will not be correct for the stored procedure being called. To change the type from the default, select the "Override default query processing" check box when specifying the script in the wizard. This opens the BizTalk ODBC Override window, which lets you change the properties of the parameters and also test the query script. Once I had done this I was able to generate the correct schema, which included an attribute representing the parameter. By deploying the schema assembly I was then able to try the ODBC adapter out on a two-way send port. When supplied with an appropriate message instance (for the generated request schema) this send port successfully returned the expected response.
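    For comparison, the following is a minimal stand-alone sketch of the same parameterised stored procedure call made directly through System.Data.Odbc, outside BizTalk. The DSN name and the use of the StuTest procedure are assumptions based on the example above; it is only meant to illustrate how an input parameter is bound over ODBC (positional '?' placeholders), not how the Community adapter implements this internally.

        using System;
        using System.Data.Odbc;

        class StuTestCaller
        {
            static void Main()
            {
                // Assumed DSN - replace with your own ODBC data source name.
                const string connectionString = "DSN=TestDatabase;";

                using (OdbcConnection connection = new OdbcConnection(connectionString))
                using (OdbcCommand command = new OdbcCommand("{ CALL StuTest(?) }", connection))
                {
                    // ODBC uses positional '?' placeholders; the type and size match the
                    // stored procedure's declared parameter, in DStr varchar(80).
                    command.Parameters.Add("@DStr", OdbcType.VarChar, 80).Value = "123";

                    connection.Open();

                    // The procedure returns a single value (GRNID), so ExecuteScalar is enough here.
                    object grnId = command.ExecuteScalar();
                    Console.WriteLine("GRNID: {0}", grnId);
                }
            }
        }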

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Receive Location

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter and also using the ODBC adapter to generate schemas. But what about creating a receive location?

    An ODBC receive location will periodically poll the configured database using the stored procedure or SQL string defined in your request schema.

    If you need to, begin by adding a new receive port to your BizTalk configuration. Create a new receive location, select the ODBC adapter and click Address. You will now be shown the ODBC Community Adapter Transport Properties window. Select Connection String and you will be shown the Choose Data Source window. If you have already created the Test Database source when generating a schema from ODBC this will be shown (if not, take a look at my previous post to see how this is done).

    You will then need to choose the SQL command that will be run by the receive port. In this case I have deployed the Test Mapping schemas that I created previously and selected the Request schema.

    You should now have populated the appropriate properties for the ODBC Community Adapter. Finally, set the standard receive location properties and your ODBC receive location is ready.

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Send Port

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter, using the ODBC adapter to generate schemas and, later, the creation of a receive location using the ODBC adapter. But what about creating a send port?

    Select to add a new send port, select the ODBC adapter and click Configure. Clicking Connection String will open the Data Source window. Choose one of your system data sources and press OK. This will update the transport properties; select OK.

    All that remains is to set the standard send port properties and your ODBC send port is ready.

    Read the article

  • Azure Table Storage Creation using Nov 2009 CTP

    - by kaleidoscope
    The new SDK introduces a new class, CloudTableClient. This class enables us to create tables and test for the existence of tables. We do not need to use it for querying table storage; it is more of an administrative class for dealing with table storage itself.

    Once we have the account key and the account name from ConfigurationSetting, we can create an instance of the storage credentials and table client classes:

        StorageCredentialsAccountAndKey creds = new StorageCredentialsAccountAndKey(accountName, accountKey);
        CloudTableClient tableStorage = new CloudTableClient(tableBaseUri, creds);
        CustomerContext ctx = new CustomerContext(tableBaseUri, creds);
        // where tableBaseUri is the TableStorageEndpoint obtained from ConfigurationSetting

    Using the table client, we can now create a new table (if it doesn't already exist):

        if (tableStorage.CreateTableIfNotExist("Customers"))
        {
            CustomerRow cust = new CustomerRow("AccountsReceivable", "kevin");
            cust.FirstName = "Kevin";
            cust.LastName = "Hoffman";
            ctx.AddObject("Customers", cust);
            ctx.SaveChanges();
        }

    For a complete article on this topic please follow this link: http://dotnetaddict.dotnetdevelopersjournal.com/azure_nov09_tablestorage.htm

    Tinu, O
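    The snippet above assumes two application-defined classes, CustomerRow and CustomerContext, built on the StorageClient library's WCF Data Services support. A minimal sketch of what they might look like follows; the class and property names mirror the snippet and are assumptions for illustration, not part of the SDK itself.

        using System.Linq;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        // Assumed entity class: PartitionKey and RowKey come from TableServiceEntity.
        public class CustomerRow : TableServiceEntity
        {
            public CustomerRow() { }

            public CustomerRow(string partitionKey, string rowKey)
                : base(partitionKey, rowKey) { }

            public string FirstName { get; set; }
            public string LastName { get; set; }
        }

        // Assumed context class: a thin TableServiceContext exposing the "Customers" table.
        public class CustomerContext : TableServiceContext
        {
            public CustomerContext(string baseAddress, StorageCredentials credentials)
                : base(baseAddress, credentials) { }

            public IQueryable<CustomerRow> Customers
            {
                get { return this.CreateQuery<CustomerRow>("Customers"); }
            }
        }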

    Read the article

  • Oracle Magazine, May/June 2009

    Oracle Magazine May/June features articles on Developer solutions, Oracle and Windows support for midsize businesses, application testing solutions, custom frameworks, ODP.NET transactions, managing literal values with PL/SQL, modernizing Oracle Forms, customizing Oracle Application Express, improving performance in Oracle Database 11g, Tom Kyte answering your questions and much more.

    Read the article

  • Portraits of Excellence: Editors' Choice Awards 2009

    Each year the editors of Oracle Magazine recognize men and women who exemplify leadership, vision, and dedication in working with and managing Oracle technology. This year, we are pleased to present the winners of our eighth annual Editors' Choice Awards, and we are honored to feature them in our pages.

    Read the article

  • Final Keynotes of Pass Summit 2009

    The final day of PASS offered insights into configuration management, how it helps with disaster recovery and consolidation, and a glimpse of the direction Microsoft is heading with SQL Server 10.5.

    Read the article

  • Oracle Magazine, March/April 2009

    Oracle Magazine March/April features articles on next-generation data centers, Oracle and midsize businesses, efficient business processes, improving performance and management of Oracle Database 11g, managing Oracle Application Express, Technologist Tom Kyte, Oracle ADF, PL/SQL best practices, the HP Oracle Database Machine, security with Oracle Configuration Manager and much more.

    Read the article

  • Two new features in November 2009 CTP

    - by kaleidoscope
    Windows Azure Diagnostics Managed Library: The new Diagnostics API enables logging using standard .NET APIs. It provides built-in support for collecting standard logs and diagnostic information, including the Windows Azure logs, IIS 7.0 logs, Failed Request logs, crash dumps, Windows Event logs, performance counters and custom logs. A minimal configuration sketch is shown after this summary.

    Variable-size Virtual Machines (VMs): Developers may now specify the size of the virtual machine to which they wish to deploy a role instance, based on the role's resource requirements. The size of the VM determines the number of CPU cores, the memory capacity and the local file system size allocated to a running instance. For example:

        <WebRole name="WebRole1" vmsize="ExtraLarge">

    Supported values for 'vmsize' are:

    1. Small
    2. Medium
    3. Large
    4. ExtraLarge

    More information on the Diagnostics Managed Library can be found at: http://msdn.microsoft.com/en-us/library/ee758705.aspx

    Girish, A
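    As a rough illustration of the Diagnostics Managed Library mentioned above, here is a minimal sketch of configuring and starting the diagnostic monitor from a role's OnStart method. It assumes a connection string setting named "DiagnosticsConnectionString" in the service configuration; treat it as an outline based on the SDK documentation linked above rather than a complete configuration.

        using System;
        using System.Diagnostics;
        using Microsoft.WindowsAzure.Diagnostics;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // Start from the default configuration and adjust only what we need.
                DiagnosticMonitorConfiguration config =
                    DiagnosticMonitor.GetDefaultInitialConfiguration();

                // Transfer Windows Azure (trace) logs to storage every minute.
                config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
                config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;

                // "DiagnosticsConnectionString" is assumed to be defined in ServiceConfiguration.cscfg.
                DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

                // Standard .NET tracing now flows into the Windows Azure logs.
                Trace.TraceInformation("WebRole starting.");

                return base.OnStart();
            }
        }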

    Read the article

  • BizTalk 2009 - Scoped Record Counting in Maps

    - by StuartBrierley
    Within BizTalk there is a functoid called Record Count that will return the number of instances of a repeated record or repeated element that occur in a message instance. The input to this functoid is the record or element to be counted.

    As an example, take the following source schema, where the Source message has a repeated record called Box and each Box has a repeated element called Item. An instance of this source schema may contain two Box records - one with two Items and one with only one Item.

    Our destination schema has a number of elements and a repeated Box record. The top-level elements contain totals for the number of boxes and the overall number of items. Each Box record contains a single element representing the number of items in that box.

    Using the Record Count functoid it is easy to map the top-level elements, producing the expected totals of 2 boxes and 3 items.

    We now need to map the total number of items per box, but how will we do this? We have already seen that the Record Count functoid returns the total number of instances for the entire message, and unfortunately it does not allow you to specify a scoping parameter. In order to achieve scoped record counting we need to use a combination of functoids: link from the record/element to be counted to a Logical Existence functoid, feed its output into a Value Mapping functoid with the other parameter set to "1", and link that output to a Cumulative Sum functoid. Set the other Cumulative Sum functoid parameter to "1" to limit the scope of the cumulative sum. This gives us the expected results of 2 and 1 Items per Box respectively.

    I ran into this issue with a larger schema on a more complex map, but the eventual solution is still the same. Hopefully this simplified example will act as a good reminder to me and save someone out there a few minutes of head scratching.

    Read the article

  • Oracle Magazine, July/August 2009

    Oracle Magazine July/August features articles on business efficiency with Oracle data warehousing, business intelligence and enterprise performance management; Oracle Enterprise Linux and Oracle Unbreakable Linux support, Oracle OpenWorld preview, open source, Oracle Application Development Framework, best PL/SQL practices, security for Oracle Application Express applications, Microsoft Visual Studio for .NET and Oracle Database, Oracle Data Pump, Tom Kyte answering your questions and much more.

    Read the article

  • Using Computer Management (MMC) with the Solaris CIFS Service (August 25, 2009)

    - by user12612012
    One of our goals for the Solaris CIFS Service is to provide seamless Windows interoperability: not just to deliver ubiquitous, multi-protocol file sharing, which is obviously a major part of this project, but to support Windows services at a fundamental level. It's an ongoing mission and our latest update includes support for Windows remote management.

    Remote management is extremely important to Windows administrators, and one of the mainstay tools is Computer Management. Computer Management is a Windows administration application, actually a collection of Microsoft Management Console (MMC) tools, that can be used to configure, monitor and manage local and remote services and resources. The MMC is an extensible framework of registered components, known as snap-ins, which allows Computer Management to provide comprehensive management features for both the local system and remote systems on the network.

    Supported Computer Management features include:

    Share Management: Support for share management is relatively complete. You can create, delete, list and configure shares. It's not yet possible to change the user limit properties (maximum allowed or number of users), but other properties, including the Share Permissions, can be managed via the MMC.

    Users, Groups and Connections: You can view local SMB users and groups, monitor user connections and see the list of open files. If necessary, you can also disconnect users and/or close files.

    Services: You can view the SMF services running on an OpenSolaris system. This is a read-only view - we don't support service management (the ability to start or stop SMF services) from Computer Management (yet).

    To ensure that only the appropriate users have access to administrative operations, there are some access restrictions on these remote management features. Regular users can list shares. Only members of the Administrators or Power Users groups can manage shares and list connections. Only members of the Administrators group can list and close open files, disconnect users, view SMF services and view the EventLog.

    Here's a screenshot from when I was using Computer Management and Server Manager (another Windows remote management application) on Windows XP to view some open files on an OpenSolaris system, to prepare a slide presentation on MMC support.

    Read the article

  • Oracle Magazine, January/February 2009

    Oracle Magazine January/February features articles on Oracle Exadata, Oracle grid infrastructure, Oracle embedded databases, Oracle WebLogic Server, encrypting tablespaces, managing database resources, Tom Kyte on Dynamic Sampling, easier interactive data entry, coding PL/SQL, tips on Oracle Application Express and much more.

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system, and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model. ZFS has a rich, Windows and NFSv4 compatible, ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs, and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups. We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.

    The NFS and SMB protocol services are also integrated with the identity mapping service, and shares are not restricted to UNIX permissions or Windows permissions. All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols, and our model is native access to any share from either protocol.

    Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise, not because they offer a benefit. Having some shares that only support UNIX permissions, others that only support ACLs and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server. Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.

    With a single, integrated sharing and access control model:

    If you connect from Windows or another SMB/CIFS client:

    - The system creates a credential containing both your Windows identity and your UNIX/NIS identity. The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups.
    - If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs).
    - If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner. Identity mapping is also supported when your access is checked against the ACL.

    If you connect via NFS (typically from a UNIX client):

    - The system creates a credential containing your UNIX/NIS identity (including groups).
    - Files you create will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner. Again, mapping is fully supported during ACL processing.

    The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system. This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.

    Read the article
