Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • Data Source Security Part 3

    - by Steve Felts
    In part one, I introduced the security features and talked about the default behavior. In part two, I defined the two major approaches to security credentials: directly using database credentials and mapping WLS user credentials to database credentials. Now it's time to get down to a couple of the security options (each of which can use database credentials or WLS credentials).

    Set Client Identifier on Connection

    When "Set Client Identifier" is enabled on the data source, a client property is associated with the connection. The underlying SQL user remains unchanged for the life of the connection, but the client value can change. This information can be used for accounting, auditing, or debugging. The client property is based either on the WebLogic user mapped to a database user using the credential map, or on the database user parameter passed directly to the getConnection() method, based on the "use database credentials" setting described earlier.

    To enable this feature, select "Set Client ID On Connection" in the Console. See "Enable Set Client ID On Connection for a JDBC data source" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableCredentialMapping.html in Oracle WebLogic Server Administration Console Help.

    The Set Client Identifier feature is only available for use with the Oracle thin driver and the IBM DB2 driver, based on the following interfaces.

    For pre-Oracle 12c, oracle.jdbc.OracleConnection.setClientIdentifier(client) is used. See http://docs.oracle.com/cd/B28359_01/network.111/b28531/authentication.htm#i1009003 for more information about how to use this for auditing and debugging. You can get the value using getClientIdentifier() from the driver. To get back the value from the database as part of a SQL query, use a statement like the following: "select sys_context('USERENV','CLIENT_IDENTIFIER') from DUAL".

    Starting in Oracle 12c, java.sql.Connection.setClientInfo("OCSID.CLIENTID", client) is used. This is a JDBC standard API, although the property values are proprietary. A problem with setClientIdentifier usage is that there are pieces of the Oracle technology stack that set and depend on this value. If application code also sets this value, it can cause problems. This has been addressed with setClientInfo by making use of this method a privileged operation. A well-managed container can restrict the Java security policy grants to specific namespaces and code bases, and protect the container from out-of-control user code. When running with the Java security manager, permission must be granted in the Java security policy file for permission "oracle.jdbc.OracleSQLPermission" "clientInfo.OCSID.CLIENTID".
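    As a rough illustration (this snippet is not from the original post; the permission class and target name are the ones quoted above, but the codeBase URL is a placeholder you would replace with the location of the code that needs the grant), such an entry in the Java security policy file might look like:

        grant codeBase "file:/path/to/application/code/-" {
            permission oracle.jdbc.OracleSQLPermission "clientInfo.OCSID.CLIENTID";
        };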
    Using the name "OCSID.CLIENTID" allows for upward-compatible use of "select sys_context('USERENV','CLIENT_IDENTIFIER') from DUAL", or you can use the JDBC standard API java.sql.Connection.getClientInfo("OCSID.CLIENTID") to retrieve the value.

    This value in the Oracle USERENV context can be used to drive the Oracle Virtual Private Database (VPD) feature to create security policies to control database access at the row and column level. Essentially, Oracle Virtual Private Database adds a dynamic WHERE clause to a SQL statement that is issued against the table, view, or synonym to which an Oracle Virtual Private Database security policy was applied. See Using Oracle Virtual Private Database to Control Data Access http://docs.oracle.com/cd/B28359_01/network.111/b28531/vpd.htm for more information about VPD. Using this data source feature means that no programming is needed on the WLS side to set this context; it is set and cleared by the WLS data source code.

    For the IBM DB2 driver, com.ibm.db2.jcc.DB2Connection.setDB2ClientUser(client) is used for older releases (prior to version 9.5). This specifies the current client user name for the connection. Note that the current client user name can change during a connection (unlike the user). This value is also available in the CURRENT CLIENT_USERID special register. You can select it using a statement like "select CURRENT CLIENT_USERID from SYSIBM.SYSTABLES".

    When running the IBM DB2 driver with JDBC 4.0 (starting with version 9.5), java.sql.Connection.setClientInfo("ClientUser", client) is used. You can retrieve the value using java.sql.Connection.getClientInfo("ClientUser") instead of the DB2 proprietary API (even if it was set with setDB2ClientUser()).

    Oracle Proxy Session

    Oracle proxy authentication allows one JDBC connection to act as a proxy for multiple (serial) light-weight user connections to an Oracle database with the thin driver. You can configure a WebLogic data source to allow a client to connect to a database through an application server as a proxy user. The client authenticates with the application server and the application server authenticates with the Oracle database. This allows the client's user name to be maintained on the connection with the database.

    Use the following steps to configure proxy authentication on a connection to an Oracle database.
    1. If you have not yet done so, create the necessary database users.
    2. On the Oracle database, provide CONNECT THROUGH privileges. For example: SQL> ALTER USER connectionuser GRANT CONNECT THROUGH dbuser; where "connectionuser" is the name of the application user to be authenticated and "dbuser" is an Oracle database user.
    3. Create a generic or GridLink data source and set the user to the value of dbuser.
    4a. To use WLS credentials, create an entry in the credential map that maps the value of wlsuser to the value of dbuser, as described earlier.
    4b. To use database credentials, enable "Use Database Credentials", as described earlier.
    5. Enable Oracle Proxy Authentication; see "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help.
    6. Log on to a WebLogic Server instance using the value of wlsuser or dbuser.
    7. Get a connection using getConnection(username, password).
    The credentials are based on either the WebLogic user that is mapped to a database user or the database user directly, based on the "use database credentials" setting. You can see the current user and proxy user by executing: "select user, sys_context('USERENV','PROXY_USER') from DUAL".

    Note: getConnection fails if "Use Database Credentials" is not enabled and the value of the user/password is not valid for a WebLogic Server user. Conversely, it fails if "Use Database Credentials" is enabled and the value of the user/password is not valid for a database user.

    A proxy session is opened on the connection based on the user each time a connection request is made on the pool. The proxy session is closed when the connection is returned to the pool. Opening or closing a proxy session has the following impact on JDBC objects.
    - Closes any existing statements (including result sets) from the original connection.
    - Clears the WebLogic Server statement cache.
    - Clears the client identifier, if set.
    - The WebLogic Server test statement for a connection is recreated for every proxy session.
    These behaviors may impact applications that share a connection across instances and expect some state to be associated with the connection.

    Starting in WLS Release 10.3.6, an Oracle proxy session is also implicitly enabled when use-database-credentials is enabled and getConnection(user, password) is called. Remember that this only works when using the Oracle thin driver.

    To summarize, the definition of oracle-proxy-session is as follows.
    - If proxy authentication is enabled and identity-based pooling is also enabled, it is an error.
    - If a user is specified on getConnection() and identity-based-connection-pooling-enabled is false, then oracle-proxy-session is treated as true implicitly (it can also be explicitly true).
    - If a user is specified on getConnection() and identity-based-connection-pooling-enabled is true, then oracle-proxy-session is treated as false.

    Read the article

  • Can't remove GPT data from MBR

    - by user2373121
    I am having difficulty getting the Ubuntu installer (and gparted) to recognize the partitions on my MBR type disk. Other operating systems and disk tools read the disk structure and the files on it fine. I have used fixparts to write a new MBR but the issue persists. I assume the issue stems from the Protective MBR data still present on the disk, but I am at a loss as to how to remove it while preserving my NTFS data partition.

        Microsoft Windows [Version 6.1.7601]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        c:\Users\mike\Desktop\fixparts>fixparts 3:
        FixParts 0.8.8

        Loading MBR data from 3:

        Warning: 0xEE partition doesn't start on sector 1. This can cause problems
        in some OSes.

        MBR command (? for help):

    Running gdisk shows

        Microsoft Windows [Version 6.1.7601]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        c:\Users\mike\Desktop\fixparts>gdisk 3:
        GPT fdisk (gdisk) version 0.8.7

        Partition table scan:
          MBR: MBR only
          BSD: not present
          APM: not present
          GPT: not present

        ***************************************************************
        Found invalid GPT and valid MBR; converting MBR to GPT format
        in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by
        typing 'q' if you don't want to convert your MBR partitions
        to GPT format!
        ***************************************************************

        ************************************************************************
        Most versions of Windows cannot boot from a GPT disk, and most varieties
        prior to Vista cannot read GPT disks. Therefore, you should exit now
        unless you understand the implications of converting MBR to GPT or creating
        a new GPT disk layout!
        ************************************************************************

        Are you SURE you want to continue? (Y/N): y

        Command (? for help): p
        Disk 3:: 2930277168 sectors, 1.4 TiB
        Logical sector size: 512 bytes
        Disk identifier (GUID): BFE92CE8-F93D-4141-82B8-816AD06FB36E
        Partition table holds up to 128 entries
        First usable sector is 34, last usable sector is 2930277134
        Partitions will be aligned on 2048-sector boundaries
        Total free space is 163846893 sectors (78.1 GiB)

        Number  Start (sector)    End (sector)  Size       Code  Name
           1        163842048      2930272255   1.3 TiB    0700  Microsoft basic data

        Command (? for help): r

        Recovery/transformation command (? for help): o
        Disk size is 2930277168 sectors (1.4 TiB)
        MBR disk identifier: 0x00000000
        MBR partitions:

        Number  Boot  Start Sector   End Sector   Status      Code
           1                     1   2930277167   primary     0xEE

        Recovery/transformation command (? for help): q

    Read the article

  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
    When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection. I've demonstrated how this can be parallelized using the Task Parallel Library's support for imperative data parallelism via the Parallel class. While this provides a huge step forward in terms of power and capabilities, in many cases special care must still be given for relatively common scenarios.

    C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project. When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task. By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain. Version 4.0 of the .NET framework extends this concept into the parallel computation space by introducing Parallel LINQ.

    Before we delve into PLINQ, let's begin with a short discussion of LINQ. LINQ, the extensions to the .NET Framework which implement language integrated query, set, and transform operations, is implemented in many flavors. For our purposes, we are interested in LINQ to Objects. When parallelizing a routine, we typically are dealing with in-memory data storage. More data-access oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework, fall outside of our concern, since the parallelism there is the concern of the database engine processing the query itself.

    LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>. The language enhancements use these extension methods to create a very concise, readable alternative to the traditional foreach statement. For example, let's revisit the minimum aggregation routine we wrote in Part 4:

        double min = double.MaxValue;
        foreach (var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    Here, we're doing a very simple computation, but writing it in an imperative style. This can be loosely translated to English as:
    - Create a very large number, and save it in min
    - Loop through each item in the collection. For every item:
      - Perform some computation, and save the result
      - If the computation is less than min, set min to the computation
    Although this is fairly easy to follow, it's quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer. We can rework this same statement using LINQ:

        double min = collection.Min(item => item.PerformComputation());

    Here, we're after the same information. However, this is written using a declarative programming style.
    When we see this code, we'd naturally translate it to English as: Save the Min value of collection, determined by calling item.PerformComputation(). That's it – instead of multiple logical steps, we have one single, declarative request. This makes the developer's intentions very clear and very easy to follow. The system is free to implement this using whatever method is required.

    Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations. This is a perfect fit in many cases when you have a problem that can be decomposed by data. To show this, let's again refer to our minimum aggregation routine from Part 4, but this time, let's review our final, parallelized version:

        // Safe, and fast!
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock (syncObject)
                    min = System.Math.Min(min, localState);
            }
        );

    Here, we're doing the same computation as above, but fully parallelized. Describing this in English becomes quite a feat:
    - Create a very large number, and save it in min
    - Create a temporary object we can use for locking
    - Call Parallel.ForEach, specifying three delegates
      - For the first delegate: Initialize a local variable to hold the local state to a very large number
      - For the second delegate: For each item in the collection, perform some computation and save the result. If the result is less than our local state, save the result in local state
      - For the final delegate: Take a lock on our temporary object to protect our min variable. Save the min of our min and local state variables
    Although this solves our problem, and does it in a very efficient way, we've created a set of code that is quite a bit more difficult to understand and maintain.

    PLINQ provides us with a very nice alternative. In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T> – ParallelEnumerable.AsParallel(). That's all we need to learn in order to use PLINQ: one single method. We can write our minimum aggregation in PLINQ very simply:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    By simply adding ".AsParallel()" to our LINQ to Objects query, we converted this to using PLINQ and running this computation in parallel! This can be loosely translated into English easily, as well:
    - Process the collection in parallel
    - Get the Minimum value, determined by calling PerformComputation on each item
    Here, our intention is very clear and easy to understand. We just want to perform the same operation we did in serial, but run it "as parallel". PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available. By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel. This is simple, safe, and incredibly useful.
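    Because PLINQ exposes the full LINQ to Objects surface, the AsParallel() call composes with any other query operators. A small sketch (not from the article; it reuses the hypothetical collection and PerformComputation from above, and IsCandidate is an invented property) of a filtered, ordered, parallel query:

        // Filter, transform, and collect in parallel. AsOrdered() preserves the source
        // ordering of the projected results, at some cost in throughput, and
        // WithDegreeOfParallelism caps how many cores the query may use.
        var expensiveResults = collection
            .AsParallel()
            .AsOrdered()
            .WithDegreeOfParallelism(Environment.ProcessorCount)
            .Where(item => item.IsCandidate)
            .Select(item => item.PerformComputation())
            .ToList();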

    Read the article

  • Zend Pdf generation from an action

    - by Nisanth
    Hi all, I have a controller Employee with an action detail. The detail action renders a page like the attached image. Suppose there is a "save PDF" button; I need to output this page as a .pdf. How can I do this? Please help me. Thanks in advance, Nisanth

    Read the article

  • Validating a linked item's data template in Sitecore

    - by Kyle Burns
    I've been doing quite a bit of work in Sitecore recently, and last week I encountered a situation that it appears many others have hit. I was working with a field that had been configured originally as a grouped droplink, but now needed to be updated to support additional levels of hierarchy in the folder structure. If you've done any work in Sitecore, that statement makes sense, but if not it may seem a bit cryptic. Sitecore offers a number of different field types, and a subset of these field types focus on providing links either to other items on the content tree or to content that is not stored in Sitecore. In the case of the grouped droplink, the field is configured with a "root" folder, and each direct descendant of this folder is considered to be a header for a grouping of other items and displayed in a dropdown. A picture is worth a thousand words, so consider the following piece of a content tree:

    If I configure a grouped droplink field to use the "Current" folder as its datasource, the control that gets to my content author looks like this:

    This presents a nicely organized display and limits the user to selecting only the direct grandchildren of the folder root. It also presents the limitation that struck us as we were thinking through the content architecture and how it would hold up over time – the authors cannot further organize content under the root folder because of the structure required for the dropdown to work. Over time, not allowing the hierarchy to go any deeper would prevent our authors from being able to organize their content in a way that it would be found when needed, so the grouped droplink data type was not going to fit the bill.

    I needed to look for an alternative data type that allowed for selection of a single item and limited my choices to descendants of a specific node on the content tree. After looking at the options available for links in Sitecore and considering them against each other, one option stood out as nearly perfect – the droptree. This field type stores its data identically to the droplink and allows for the selection of zero or one items under a specific node in the content tree. By changing my data template to use droptree instead of grouped droplink, the author is now presented with the following when selecting a linked item:

    Sounds great, but I did say almost perfect – there's still one flaw. The code intended to display the linked item is expecting the selection to use a specific data template (or more precisely, it makes certain assumptions about the fields that will be present), but the droptree does nothing to prevent the author from selecting a folder (since folders are items too) instead of one of the items contained within a folder. I looked to see if anyone had already solved this problem. I found many people discussing the problem, but the closest that I found to a solution was the statement "the best thing would probably be to create a custom validator" with no further discussion regarding what this validator might look like. I needed to create my own validator to ensure that the user had not selected a folder. Since so many people had the same issue, I decided to make the validator as reusable as possible and share it here.

    The validator that I created inherits from StandardValidator. In order to make the validator more intuitive to developers who are familiar with the TreeList controls in Sitecore, I chose to implement the following parameters: ExcludeTemplatesForSelection serves as a "deny list" –
    if the data template of the selected item is in this list, it will not validate. IncludeTemplatesForSelection can either be empty, to indicate that any template not contained in the exclusion list is acceptable, or it can contain the list of acceptable templates. Now that I've explained the parameters and the purpose of the validator, I'll let the code do the rest of the talking:

        /// <summary>
        /// Validates that a link field value meets template requirements
        /// specified using the following parameters:
        ///  - ExcludeTemplatesForSelection: If present, the item being
        ///    based on an excluded template will cause validation to fail.
        ///  - IncludeTemplatesForSelection: If present, the item not being
        ///    based on an included template will cause validation to fail.
        ///
        /// ExcludeTemplatesForSelection trumps IncludeTemplatesForSelection
        /// if the same value appears in both lists. Lists are comma separated.
        /// </summary>
        [Serializable]
        public class LinkItemTemplateValidator : StandardValidator
        {
            public LinkItemTemplateValidator()
            {
            }

            /// <summary>
            /// Serialization constructor is required by the runtime
            /// </summary>
            /// <param name="info"></param>
            /// <param name="context"></param>
            public LinkItemTemplateValidator(SerializationInfo info, StreamingContext context) : base(info, context) { }

            /// <summary>
            /// Returns whether the linked item meets the template
            /// constraints specified in the parameters
            /// </summary>
            /// <returns>
            /// The result of the evaluation.
            /// </returns>
            protected override ValidatorResult Evaluate()
            {
                if (string.IsNullOrWhiteSpace(ControlValidationValue))
                {
                    return ValidatorResult.Valid; // let "required" validation handle
                }

                var excludeString = Parameters["ExcludeTemplatesForSelection"];
                var includeString = Parameters["IncludeTemplatesForSelection"];
                if (string.IsNullOrWhiteSpace(excludeString) && string.IsNullOrWhiteSpace(includeString))
                {
                    return ValidatorResult.Valid; // "allow anything" if no params
                }

                Guid linkedItemGuid;
                if (!Guid.TryParse(ControlValidationValue, out linkedItemGuid))
                {
                    return ValidatorResult.Valid; // probably put validator on wrong field
                }

                var item = GetItem();
                var linkedItem = item.Database.GetItem(new ID(linkedItemGuid));

                if (linkedItem == null)
                {
                    return ValidatorResult.Valid; // this validator isn't for broken links
                }

                // Empty entries are removed so that an empty include list means "allow any template"
                var exclusionList = (excludeString ?? string.Empty).Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
                var inclusionList = (includeString ?? string.Empty).Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

                if ((inclusionList.Length == 0 || inclusionList.Contains(linkedItem.TemplateName))
                    && !exclusionList.Contains(linkedItem.TemplateName))
                {
                    return ValidatorResult.Valid;
                }

                Text = GetText("The field \"{0}\" specifies an item which is based on template \"{1}\". This template is not valid for selection", GetFieldDisplayName(), linkedItem.TemplateName);

                return GetFailedResult(ValidatorResult.FatalError);
            }

            protected override ValidatorResult GetMaxValidatorResult()
            {
                return ValidatorResult.FatalError;
            }

            public override string Name
            {
                get { return @"LinkItemTemplateValidator"; }
            }
        }
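    To wire this up you would typically register the class as a validation rule item and reference that rule from the field or template you want to protect. The sketch below is an addition to the original post and the exact item path, field names, and parameter syntax may differ by Sitecore version, so treat it as an assumption to verify; the assembly name is a placeholder:

        Item:        /sitecore/system/Settings/Validation Rules/Field Rules/Link Item Template Validator
        Type:        MyAssembly.LinkItemTemplateValidator, MyAssembly
        Parameters:  ExcludeTemplatesForSelection=Folder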
    In this blog entry, I have shared some code that I found useful in solving a problem that seemed fairly common. Hopefully the next person who is looking for this answer finds it useful as well.

    Read the article

  • Easy Listening = CRM On Demand Podcasts

    - by Anne
    OK, here's my NEW favorite resource for CRM On Demand info -- podcasts! Specifically, the CRM On Demand Podcast site -- signed, sealed, and delivered with humor and know-how. Yes, I admit, I know the cast of characters. But let's face it, sometimes dealing with software is just soooo dry! Not so when discussed by the two main commentators, Louis Peters and Robert Davidson, whom someone once referred to as CRM On Demand's "Click and Clack." (Thought that was too good not to pass along!) Anyhow, another huge plus about the site is the option to listen OR to read. Out walking my dog or doing the dishes? Just turn up the podcast. Listening to music or watching TV? I'll read Louis's entertaining write-ups to glean great info about CRM On Demand in a very short period of time. So that you get a better understanding of why I like this site so much, here's a sampling of what's discussed: Five Things about Books of Business As Louis Peters put it in his entry, when you see "Five Things" in the title, "you'll know you're going to get some concrete advice that you can put to work right away." Well, Louis and Robert do just that, pointing you in the right direction when using Books of Business to segment data. Moving to Indexed Fields - A Rough Guide (only an article, not a podcast) I've read all about performance and even helped develop material around it. But nowhere have I heard indexed custom fields referred to as "super heroes." Louis and Robert use imaginative language to describe the process for moving your data to indexed fields for optimal performance. Data Access QA from the Forums I think that everyone would admit that data access and visibility is the most difficult topic to understand in CRM On Demand. Following up on their previous podcast on the same topic, Louis and Robert answer a few key questions from the many postings on the Oracle CRM On Demand forums. And I bet that the scenarios match many companies' business requirements...maybe even yours! We Need to Talk About Adoption Another expert, Tim Koehler, joins Louis to talk about how to drive user adoption: aligning product usage with business results, communicating why and how to use the product, getting feedback on usability, and so on. Hope I've made my point -- turn to these podcasts to hear knowledgeable folks discuss CRM On Demand tips and tricks in entertaining ways. One podcast is even called "SaaS Talk"!

    Read the article

  • Mapping Your Data with Bing Maps and SQL Server 2008 – Part 1

    Jonas Stawski takes you step by step through a sample project that demonstrates how to create an application that can get GeoSpatial coordinate data for addresses within a SQL Server database, and then use that data to locate those addresses on a Bing Map on a website as pushpins, either grouped or ungrouped. And there is full source code too, in the speech-bubble.

    Read the article

  • VB6 ImageList Frm Code Generation

    - by DAC
    Note: This is probably a shot in the dark, and it's purely out of curiosity that I'm asking. When using the ImageList control from the Microsoft Common Control lib (mscomctl.ocx), I have found that VB6 generates FRM code that doesn't resolve to real property/method names, and I am curious as to how the resolution is made. An example of the generated FRM code is given below with an ImageList containing 3 images:

        Begin MSComctlLib.ImageList ImageList1
           BackColor       =   -2147483643
           ImageWidth      =   100
           ImageHeight     =   45
           MaskColor       =   12632256
           BeginProperty Images {2C247F25-8591-11D1-B16A-00C0F0283628}
              NumListImages   =   3
              BeginProperty ListImage1 {2C247F27-8591-11D1-B16A-00C0F0283628}
                 Picture         =   "Form1.frx":0054
                 Key             =   ""
              EndProperty
              BeginProperty ListImage2 {2C247F27-8591-11D1-B16A-00C0F0283628}
                 Picture         =   "Form1.frx":3562
                 Key             =   ""
              EndProperty
              BeginProperty ListImage3 {2C247F27-8591-11D1-B16A-00C0F0283628}
                 Picture         =   "Form1.frx":6A70
                 Key             =   ""
              EndProperty
           EndProperty
        End

    From my experience, a BeginProperty tag typically means a compound property (an object) is being assigned to, such as the Font object of most controls, for example:

        Begin VB.Form Form1
           Caption         =   "Form1"
           ClientHeight    =   10950
           ClientLeft      =   60
           ClientTop       =   450
           ClientWidth     =   7215
           BeginProperty Font
              Name            =   "MS Serif"
              Size            =   8.25
              Charset         =   0
              Weight          =   400
              Underline       =   0   'False
              Italic          =   -1  'True
              Strikethrough   =   0   'False
           EndProperty
        End

    Which can easily be seen to resolve to VB.Form.Font.<Property Name>. With ImageList, there is no property called Images. The GUID associated with property Images indicates type ListImages, which implements interface IImages. This type makes sense, as the ImageList control has a property called ListImages which is of type IImages. Secondly, properties ListImage1, ListImage2 and ListImage3 don't exist on type IImages, but the GUID associated with these properties indicates type ListImage, which implements interface IImage. This type also makes sense, as IImages is in fact a collection of IImage. What doesn't make sense to me is how VB6 makes these associations. How does VB6 know to make the association between the name Images and ListImages purely because of an associated type (provided by the GUID) – perhaps because it's the only property of that type? Secondly, how does it resolve ListImage1, ListImage2 and ListImage3 into additions to the collection IImages, and does it use the Add method? Or perhaps the ControlDefault property? Perhaps VB6 has specific knowledge of this control and no logical resolution exists?

    Read the article

  • ASP.NET Web API - Screencast series Part 2: Getting Data

    - by Jon Galloway
    We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools.

    This second screencast starts to build out the Comments example - a JSON API that's accessed via jQuery. This sample uses a simple in-memory repository. At this early stage, the GET /api/values/ just returns an IEnumerable<Comment>. In Part 4 we'll add on paging and filtering, and it gets more interesting.

    The get by id (e.g. GET /api/values/5) case is a little more interesting. The method just returns a Comment if the Comment ID is valid, but if it's not found we throw an HttpResponseException with the correct HTTP status code (HTTP 404 Not Found). This is an important thing to get - HTTP defines common response status codes, so there's no need to implement any custom messaging here - we tell the requestor that the resource they requested wasn't there.

        public Comment GetComment(int id)
        {
            Comment comment;
            if (!repository.TryGet(id, out comment))
                throw new HttpResponseException(HttpStatusCode.NotFound);
            return comment;
        }

    This is great because it's standard, and any client should know how to handle it. There's no need to invent custom messaging here, and we can talk to any client that understands HTTP - not just jQuery, and not just browsers.

    But it's crazy easy to consume an HTTP API that returns JSON via jQuery. The example uses Knockout to bind the JSON values to HTML elements, but the thing to notice is that calling into this /api/comments is really simple, and the return from the $.get() method is just JSON data, which is really easy to work with in JavaScript (since JSON stands for JavaScript Object Notation and is the native serialization format in JavaScript).

        $(function() {
            $("#getComments").click(function () {
                // We're using a Knockout model. This clears out the existing comments.
                viewModel.comments([]);

                $.get('/api/comments', function (data) {
                    // Update the Knockout model (and thus the UI) with the comments received back
                    // from the Web API call.
                    viewModel.comments(data);
                });
            });
        });

    That's it! Easy, huh? In Part 3, we'll start modifying data on the server using POST and DELETE.
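    For context, the GetComment action above assumes some kind of in-memory comment store exposing a TryGet(id, out comment) method. The actual sample's repository and Comment type may look different; a minimal sketch of that shape (the class names and Comment properties here are placeholders) could be:

        using System.Collections.Generic;

        // Hypothetical in-memory repository matching the TryGet(id, out comment) call
        // used by GetComment above; the real sample's implementation may differ.
        public class Comment
        {
            public int ID { get; set; }
            public string Author { get; set; }
            public string Text { get; set; }
        }

        public class CommentRepository
        {
            private readonly Dictionary<int, Comment> comments = new Dictionary<int, Comment>();
            private int nextId = 1;

            public IEnumerable<Comment> GetAll()
            {
                return comments.Values;
            }

            public Comment Add(Comment comment)
            {
                comment.ID = nextId++;
                comments[comment.ID] = comment;
                return comment;
            }

            public bool TryGet(int id, out Comment comment)
            {
                return comments.TryGetValue(id, out comment);
            }
        }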

    Read the article

  • Automatic INotifyPropertyChanged Implementation through T4 code generation?

    - by chrischu
    I'm currently working on setting up a new project of mine and was wondering how I could give my ViewModel classes INotifyPropertyChanged support without having to hand-code all the properties myself. I looked into AOP frameworks, but I think they would just bloat my project with another dependency. So I thought about generating the property implementations with T4. The setup would be this: I have a ViewModel class that declares just the backing fields for its properties, and then I use T4 to generate the property implementations from it. For example, this would be my ViewModel:

        public partial class ViewModel
        {
            private string p_SomeProperty;
        }

    Then T4 would go over the source file, look for member declarations whose names start with "p_", and generate a file like this:

        public partial class ViewModel
        {
            public string SomeProperty
            {
                get { return p_SomeProperty; }
                set
                {
                    p_SomeProperty = value;
                    NotifyPropertyChanged("SomeProperty");
                }
            }
        }

    This approach has some advantages, but I'm not sure if it can really work. So I wanted to post my idea here on StackOverflow as a question to get some feedback on it and maybe some advice on how it can be done better/easier/safer.
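    For the generated setters to compile, one of the partials would also need the usual INotifyPropertyChanged plumbing: the event declaration and the NotifyPropertyChanged helper. A minimal sketch of what that hand-written (or likewise generated) half might contain, added here for illustration and not part of the original question:

        using System.ComponentModel;

        // The other half of the partial class: declares the event and the helper
        // method that the generated property setters call.
        public partial class ViewModel : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            private void NotifyPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                {
                    handler(this, new PropertyChangedEventArgs(propertyName));
                }
            }
        }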

    Read the article

  • WCF Proxy generation

    - by dragonfly
    Hi, I'm generating a proxy using the svcutil tool. My contract methods return objects of a particular type. However, the generated proxy client interface has a return value of type object. What is more, I get an exception with the message:

        System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail] :
        The formatter threw an exception while trying to deserialize the message: There was an error
        while trying to deserialize parameter http://tempuri.org/:name. The InnerException message was
        'XML 'Element' 'http://tempuri.org/:name' does not contain expected attribute
        'http://schemas.microsoft.com/2003/10/Serialization/:Type'. The deserializer has no knowledge
        of which type to deserialize. Check that the type being serialized has the same contract as
        the type being deserialized.'. Please see InnerException for more details.

    Any ideas what's going on?

    Read the article

  • No search data in Google Analytics or Webmasters

    - by cjk
    I have a domain that has been registered in Google Webmasters and using Google Analytics for over 4 months. I get lots of analytics data, but am getting no information on Google searches in Webmasters, or Queries in Search Engine Optimisation in Analytics, even though I am getting keywords for traffic coming to my site from search engines. I have a test sub-domain with the same setup (except not HTTPS) that is getting some of this information through, even with much less data and visits. What could be wrong to stop me getting this information?

    Read the article

  • ASP.NET MVC 2 generation of the List/Index view

    - by Klas Mellbourn
    ASP.NET MVC 2 has powerful features for generating the model-dependent content of the Edit view (using EditorForModel) and the Details view (using DisplayForModel) that automatically utilize metadata and editor (or display) templates:

        <% using (Html.BeginForm()) { %>
            <%= Html.ValidationSummary(true) %>
            <fieldset>
                <legend><%= Html.LabelForModel() %></legend>
                <%= Html.EditorForModel() %>
                <p>
                    <input type="submit" value="Save" />
                </p>
            </fieldset>
        <% } %>

    However, I cannot find any comparable tools for the "last" step of generating the Index view (a.k.a. the List view). There I have to hard-code the columns, first in the row representing the headers and then inside the foreach loop:

        <h2>Index</h2>
        <table>
            <tr>
                <th></th>
                <th>ID</th>
                <th>Foo</th>
                <th>Bar</th>
            </tr>
            <% foreach (var item in Model) { %>
            <tr>
                <td>
                    <%= Html.ActionLink("Edit", "Edit", new { id = item.ID }) %> |
                    <%= Html.ActionLink("Details", "Details", new { id = item.ID }) %> |
                    <%= Html.ActionLink("Delete", "Delete", new { id = item.ID }) %>
                </td>
                <td><%= Html.Encode(item.ID) %></td>
                <td><%= Html.Encode(item.Foo) %></td>
                <td><%= Html.Encode(String.Format("{0:g}", item.Bar)) %></td>
            </tr>
            <% } %>
        </table>

    What would be the best way to generate the columns (utilizing metadata such as HiddenInput), with the aim of making the Index view as free of model particulars as Edit and Details?
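    One direction worth sketching (this is an illustrative addition, not part of the original question, and it assumes the item type is named Item with the ID/Foo/Bar properties shown above): drive both the header cells and the data cells from ModelMetadata, so the view itself no longer names individual properties.

        <table>
            <tr>
                <th></th>
                <%-- Header cells come from the type-level metadata --%>
                <% foreach (var prop in ModelMetadataProviders.Current
                       .GetMetadataForType(null, typeof(Item)).Properties) { %>
                    <% if (prop.ShowForDisplay) { %>
                        <th><%= Html.Encode(prop.GetDisplayName()) %></th>
                    <% } %>
                <% } %>
            </tr>
            <% foreach (var item in Model) { %>
            <tr>
                <td><%= Html.ActionLink("Edit", "Edit", new { id = item.ID }) %></td>
                <%-- Data cells come from per-instance property metadata --%>
                <% foreach (var prop in ModelMetadataProviders.Current
                       .GetMetadataForProperties(item, typeof(Item))) { %>
                    <% if (prop.ShowForDisplay) { %>
                        <td><%= Html.Encode(prop.SimpleDisplayText) %></td>
                    <% } %>
                <% } %>
            </tr>
            <% } %>
        </table>

    Properties hidden via metadata such as [ScaffoldColumn(false)] drop out through ShowForDisplay; whether this honors HiddenInput exactly the way you want would need checking.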

    Read the article

  • The Basics of Desktop Data Recovery

    Desktop data recovery is an important part of computer repairs, as it is pretty common for a hard drive or server RAID to fail and lose major amounts of data. With desktops especially, it is sometime... [Author: Richard Cuthbertson - Computers and Internet - April 07, 2010]

    Read the article

  • Seeking a faster $(':data(key)')

    - by PoltoS
    I'm writing an extension to jQuery that adds data to DOM elements using el.data('lalala', my_data); and then uses that data to update elements dynamically. Each time I get new data from the server, I need to update all elements having el.data('lalala') != null. To get all needed elements I use an extension by James Padolsey:

        $(':data(lalala)').each(...);

    Everything was great until I came to the situation where I need to run that code 50 times – it is very slow! It takes about 8 seconds to execute on my page with 3640 DOM elements:

        var x, t = (new Date).getTime();
        for (n = 0; n < 50; n++) {
            jQuery(':data(lalala)').each(function() { x++; });
        };
        console.log(((new Date).getTime() - t) / 1000);

    Since I don't need RegExp as a parameter of the :data selector, I've tried to replace this with:

        var x, t = (new Date).getTime();
        for (n = 0; n < 50; n++) {
            jQuery('*').each(function() {
                if ($(this).data('lalala'))
                    x++;
            });
        };
        console.log(((new Date).getTime() - t) / 1000);

    This code is faster (5 sec), but I want to get more.

    Q: Is there a faster way to get all elements with this data key?

    In fact, I can keep an array with all the elements I need, since I execute .data('key') in my module. Checking 100 elements that have the desired .data('lalala') is better than checking 3640 :) So the solution would be something like:

        for (i in elements) {
            el = elements[i];
            ....

    But sometimes elements are removed from the page (using jQuery .remove()). Both solutions described above [the $(':data(lalala)') solution and if ($(this).data('lalala'))] will skip removed items (as I need), while the solution with the array will still point to the removed element (in fact, the element would not really be deleted – it will only be removed from the DOM tree – because my array will still hold a reference). I found that .remove() also removes data from the node, so my solution changes into:

        var toRemove = [];
        for (var i in elements) {
            var el = elements[i];
            if ($(el).data('lalala'))
                ....
            else
                toRemove.push(i);
        };
        for (var ii in toRemove)
            elements.splice(toRemove[ii], 1); // remove element from array

    This solution is 100 times faster!

    Q: Will the garbage collector release the memory taken by DOM elements deleted from that array? Remember, the elements were referenced by the DOM tree, we made a new reference in our array, then removed them with .remove(), and then removed them from the array.

    Is there a better way to do this?

    Read the article
