Search Results

Search found 15169 results on 607 pages for 'virtual attribute'.


  • New Features in ASP.NET Web API 2 - Part I

    - by dwahlin
    I’m a big fan of ASP.NET Web API. It provides a quick yet powerful way to build RESTful HTTP services that can easily be consumed by a variety of clients. While it’s simple to get started with, it has a wealth of features such as filters, formatters, and message handlers that can be used to extend it when needed. In this post I’m going to provide a quick walk-through of some of the key new features in version 2. I’ll focus on two of my favorite features that are related to routing and HTTP responses and cover additional features in a future post.
    Attribute Routing
    Routing has been a core feature of Web API since its initial release and something that’s built into new Web API projects out-of-the-box. However, there are a few scenarios where defining routes can be challenging, such as nested routes (more on that in a moment) and any situation where a lot of custom routes have to be defined. For this example, let’s assume that you’d like to define the following nested route:
    /customers/1/orders
    This type of route would select a customer with an Id of 1 and then return all of their orders. Defining this type of route in the standard WebApiConfig class is certainly possible, but it isn’t the easiest thing to do for people who don’t understand routing well. Here’s an example of how the route shown above could be defined:
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.Routes.MapHttpRoute(
                name: "CustomerOrdersApiGet",
                routeTemplate: "api/customers/{custID}/orders",
                defaults: new { custID = 0, controller = "Customers", action = "Orders" }
            );
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
            GlobalConfiguration.Configuration.Formatters.Insert(0, new JsonpFormatter());
        }
    }
    With attribute based routing, defining these types of nested routes is greatly simplified. To get started you first need to make a call to the new MapHttpAttributeRoutes() method in the standard WebApiConfig class (or a custom class that you may have created that defines your routes) as shown next:
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Allow for attribute based routes
            config.MapHttpAttributeRoutes();
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
    Once attribute based routes are configured, you can apply the Route attribute to one or more controller actions.
    Here’s an example:
    [HttpGet]
    [Route("customers/{custId:int}/orders")]
    public List<Order> Orders(int custId)
    {
        var orders = _Repository.GetOrders(custId);
        if (orders == null)
        {
            throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
        }
        return orders;
    }
    This example maps the custId route parameter to the custId parameter in the Orders() method and also ensures that the route parameter is typed as an integer. The Orders() method can be called using the following route:
    /customers/2/orders
    While this is extremely easy to use and gets the job done, it doesn’t include the default “api” string on the front of the route that you might be used to seeing. You could add “api” in front of the route and make it “api/customers/{custId:int}/orders” but then you’d have to repeat that across other attribute-based routes as well. To simplify this type of task you can add the RoutePrefix attribute above the controller class as shown next so that “api” (or whatever the custom starting point of your route is) is applied to all attribute routes:
    [RoutePrefix("api")]
    public class CustomersController : ApiController
    {
        [HttpGet]
        [Route("customers/{custId:int}/orders")]
        public List<Order> Orders(int custId)
        {
            var orders = _Repository.GetOrders(custId);
            if (orders == null)
            {
                throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
            }
            return orders;
        }
    }
    There’s much more that you can do with attribute-based routing in ASP.NET. Check out the following post by Mike Wasson for more details.
    Returning Responses with IHttpActionResult
    The first version of Web API provided a way to return custom HttpResponseMessage objects which were pretty easy to use overall. However, Web API 2 now wraps some of the functionality available in version 1 to simplify the process even more. A new interface named IHttpActionResult (similar to ActionResult in ASP.NET MVC) has been introduced which can be used as the return type for Web API controller actions. To return a custom response you can use new helper methods exposed through ApiController such as Ok, NotFound, Exception, Unauthorized, BadRequest, Conflict, Redirect, and InvalidModelState. Here’s an example of how IHttpActionResult and the helper methods can be used to clean up code. This is the typical way to return a custom HTTP response in version 1:
    public HttpResponseMessage Delete(int id)
    {
        var status = _Repository.DeleteCustomer(id);
        if (status)
        {
            return new HttpResponseMessage(HttpStatusCode.OK);
        }
        else
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
    }
    With version 2 we can replace HttpResponseMessage with IHttpActionResult and simplify the code quite a bit:
    public IHttpActionResult Delete(int id)
    {
        var status = _Repository.DeleteCustomer(id);
        if (status)
        {
            //return new HttpResponseMessage(HttpStatusCode.OK);
            return Ok();
        }
        else
        {
            //throw new HttpResponseException(HttpStatusCode.NotFound);
            return NotFound();
        }
    }
    You can also clean up post (insert) operations using the helper methods.
    Here’s a version 1 post action:
    public HttpResponseMessage Post([FromBody]Customer cust)
    {
        var newCust = _Repository.InsertCustomer(cust);
        if (newCust != null)
        {
            var msg = new HttpResponseMessage(HttpStatusCode.Created);
            msg.Headers.Location = new Uri(Request.RequestUri + newCust.ID.ToString());
            return msg;
        }
        else
        {
            throw new HttpResponseException(HttpStatusCode.Conflict);
        }
    }
    This is what the code looks like in version 2:
    public IHttpActionResult Post([FromBody]Customer cust)
    {
        var newCust = _Repository.InsertCustomer(cust);
        if (newCust != null)
        {
            return Created<Customer>(Request.RequestUri + newCust.ID.ToString(), newCust);
        }
        else
        {
            return Conflict();
        }
    }
    More details on IHttpActionResult and the different helper methods provided by the ApiController base class can be found here.
    Conclusion
    Although there are several additional features available in Web API 2 that I could cover (CORS support for example), this post focused on two of my favorite features. If you have .NET 4.5.1 available then I definitely recommend checking the new features out. Additional articles that cover features in ASP.NET Web API 2 can be found here.

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact.
    Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job ID’s to integers, and convert the salary to a decimal value.
    Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.”  Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.
    Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column.  I also provide the name for the table.
    This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Sync custom AD properties to SharePoint Profile

    - by KunaalKapoor
    Here are some step-by-step instructions regarding configuring SharePoint to sync with custom AD attributes:
    1. Add the custom attribute in Active Directory. This part will have to be your doing; here is some documentation regarding creating custom attributes in AD:
       http://msdn.microsoft.com/en-us/library/ms675085(VS.85).aspx
       http://technet.microsoft.com/en-us/magazine/2008.05.schema.aspx
       http://blogs.technet.com/b/isingh/archive/2007/02/18/adding-custom-attributes-in-active-directory.aspx
    2. Open up miisclient.exe (C:\Program Files\Microsoft Office Servers\14.0\Synchronization Service\UIShell\miisclient.exe)
       a. This will have to be opened up with the farm admin account
    3. Click on "Management Agents" in the ribbon
    4. Right-click the Active Directory Management Agent ("MOSS-<name of sync connection>") and click "Refresh Schema"
       a. When prompted, enter the credentials for the farm account
    5. Once complete, close out of miisclient.exe
    6. Go into Central Admin --> Application Management --> Manage Service Applications --> Go into the User Profile Service Application
    7. Click on "Manage User Properties"
    8. Click on "New Property"
    9. Put in the correct information regarding the attribute that was created
    10. At the bottom of this page, under the "Source Data Connection" drop down, select the AD synchronization connection you have already configured
    11. For the "Attribute" drop down, select the new attribute you have created
    12. For the "Direction" drop down, select "Import"
    13. Click "OK"
    14. Run a full synchronization for the user profile service application and the custom property will get synced (as long as the attribute is set in Active Directory for the desired users)

    Read the article

  • Update records in OAF Page

    - by PRajkumar
    1. Create a Search Page. To create a page, please go through the following link: https://blogs.oracle.com/prajkumar/entry/create_oaf_search_page
    2. Implement Update Action in SearchPG. Right click on ResultTable in SearchPG > New > Item, and set the following properties for the new item:
       ID – UpdateAction; Item Style – image; Image URI – updateicon_enabled.gif; Attribute Set – /oracle/apps/fnd/attributesets/Buttons/Update; Prompt – Update; Additional Text – Update record; Height – 24; Width – 24; Action Type – fireAction; Event – update; Submit – True; Parameters – Name: PColumn1, Value: ${oa.SearchVO1.Column1}; Name: PColumn2, Value: ${oa.SearchVO1.Column2}
    3. Create an Update Page. Right click on SearchDemo > New > Web Tier > OA Components > Page. Name – UpdatePG; Package – prajkumar.oracle.apps.fnd.searchdemo.webui
    4. Select the UpdatePG and go to the structure pane, where a default region has been created.
    5. Select region1 and set the following properties:
       ID – PageLayoutRN; Region Style – PageLayout; AM Definition – prajkumar.oracle.apps.fnd.searchdemo.server.SearchAM; Window Title – Update Page; Title – Update Page; Auto Footer – True
    6. Create the second region (main content region). Select PageLayoutRN, right click > New > Region. ID – MainRN; Region Style – messageComponentLayout
    7. Create the first item (empty field): MainRN > New > messageTextInput.
       ID – Column1; Style Property – messageTextInput; Prompt – Column1; Data Type – VARCHAR2; Length – 20; Maximum Length – 100; View Instance – SearchVO1; View Attribute – Column1
    8. Create the second item (empty field): MainRN > New > messageTextInput.
       ID – Column2; Style Property – messageTextInput; Prompt – Column2; Data Type – VARCHAR2; Length – 20; Maximum Length – 100; View Instance – SearchVO1; View Attribute – Column2
    9. Create a container region for the Apply and Cancel buttons in UpdatePG. Select MainRN of UpdatePG, MainRN > messageLayout. Region – ButtonLayout
    10. Create the Apply button. Select ButtonLayout > New > Item. ID – Apply; Item Style – submitButton; Attribute – /oracle/apps/fnd/attributesets/Buttons/Apply
    11. Create the Cancel button. Select ButtonLayout > New > Item. ID – Cancel; Item Style – submitButton; Attribute – /oracle/apps/fnd/attributesets/Buttons/Cancel
    12.
Add Page Controller for SearchPG Right Click on PageLayoutRN of SearchPG > Set New Controller Name – SearchCO Package -- prajkumar.oracle.apps.fnd.searchdemo.webui   Add Following code in Search Page controller SearchCO    import oracle.apps.fnd.framework.webui.OAPageContext; import oracle.apps.fnd.framework.webui.beans.OAWebBean; import oracle.apps.fnd.framework.webui.OAWebBeanConstants; import oracle.apps.fnd.framework.webui.beans.layout.OAQueryBean; public void processRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processRequest(pageContext, webBean);  OAQueryBean queryBean = (OAQueryBean)webBean.findChildRecursive("QueryRN");  queryBean.clearSearchPersistenceCache(pageContext); }   public void processFormRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processFormRequest(pageContext, webBean);     if ("update".equals(pageContext.getParameter(EVENT_PARAM)))  {   pageContext.setForwardURL("OA.jsp?page=/prajkumar/oracle/apps/fnd/searchdemo/webui/UpdatePG",                                     null,                                     OAWebBeanConstants.KEEP_MENU_CONTEXT,                                                                 null,                                                                                        null,                                     true,                                                                 OAWebBeanConstants.ADD_BREAD_CRUMB_NO,                                     OAWebBeanConstants.IGNORE_MESSAGES);  }  } 13. Add Page Controller for UpdatePG Right Click on PageLayoutRN of UpdatePG > Set New Controller Name – UpdateCO Package -- prajkumar.oracle.apps.fnd.searchdemo.webui   Add Following code in Update Page controller UpdateCO    import oracle.apps.fnd.framework.webui.OAPageContext; import oracle.apps.fnd.framework.webui.beans.OAWebBean; import oracle.apps.fnd.framework.webui.OAWebBeanConstants; import oracle.apps.fnd.framework.OAApplicationModule; import java.io.Serializable;  public void processRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processRequest(pageContext, webBean);  OAApplicationModule am = pageContext.getApplicationModule(webBean);  String Column1 = pageContext.getParameter("PColumn1");  String Column2 = pageContext.getParameter("PColumn2");  Serializable[] params = { Column1, Column2 };  am.invokeMethod("updateRow", params); } public void processFormRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processFormRequest(pageContext, webBean);  OAApplicationModule am = pageContext.getApplicationModule(webBean);         if (pageContext.getParameter("Apply") != null)  {    am.invokeMethod("apply");   pageContext.forwardImmediately("OA.jsp?page=/prajkumar/oracle/apps/fnd/searchdemo/webui/SearchPG",                                          null,                                          OAWebBeanConstants.KEEP_MENU_CONTEXT,                                          null,                                          null,                                          false, // retain AM                                          OAWebBeanConstants.ADD_BREAD_CRUMB_NO);  }  else if (pageContext.getParameter("Cancel") != null)  {    am.invokeMethod("rollback");   pageContext.forwardImmediately("OA.jsp?page=/prajkumar/oracle/apps/fnd/searchdemo/webui/SearchPG",                                          null,                                          OAWebBeanConstants.KEEP_MENU_CONTEXT,                                          null,                                          null,                      
                    false, // retain AM                                          OAWebBeanConstants.ADD_BREAD_CRUMB_NO);  } }   14. Add following Code in SearchAMImpl   import oracle.apps.fnd.framework.server.OAApplicationModuleImpl; import oracle.apps.fnd.framework.server.OAViewObjectImpl;     public void updateRow(String Column1, String Column2) {  SearchVOImpl vo = (SearchVOImpl)getSearchVO1();  vo.initQuery(Column1, Column2); }     public void apply() {  getTransaction().commit(); } public void rollback() {  getTransaction().rollback(); }   15. Add following Code in SearchVOImpl   import oracle.apps.fnd.framework.server.OAViewObjectImpl;     public void initQuery(String Column1, String Column2) {  if ((Column1 != null) && (!("".equals(Column1.trim()))))  {   setWhereClause("column1 = :1 AND column2 = :2");   setWhereClauseParams(null); // Always reset   setWhereClauseParam(0, Column1);   setWhereClauseParam(1, Column2);   executeQuery();  } }   16. Congratulation you have successfully finished. Run Your Search page and Test Your Work                          

    Read the article

  • SSAS: Utility to check you have the correct data types and sizes in your cube definition

    - by DrJohn
    This blog describes a tool I developed which allows you to compare the data types and data sizes found in the cube’s data source view with the data types/sizes of the corresponding dimensional attribute.  Why is this important?  Well, when creating named queries in a cube’s data source view, it is often necessary to use the SQL CAST or CONVERT operation to change the data type to something more appropriate for SSAS.  This is particularly important when your cube is based on an Oracle data source or using custom SQL queries rather than views in the relational database.  The problem with BIDS is that if you change the underlying SQL query, then the size of the data type in the dimension does not update automatically.  This then causes problems during deployment whereby processing the dimension fails because the data in the relational database is wider than that allowed by the dimensional attribute.
    In particular, if you use some string manipulation functions provided by SQL Server or Oracle in your queries, you may find that the 10 character string you expect suddenly turns into an 8,000 character monster.  For example, the SQL Server function REPLACE returns a column with a width of 8,000 characters.  So if you use this function in the named query in your DSV, you will get a column width of 8,000 characters.  Although the Oracle REPLACE function is far more intelligent, the generated column size could still be way bigger than the maximum length of the data actually in the field.
    Now this may not be a problem when prototyping, but in your production cubes you really should clean up this kind of thing as these massive strings will add to processing times and storage space. Similarly, you do not want to forget to change the size of the dimension attribute if your database columns increase in size.
    Introducing the CheckCubeDataTypes Utility
    The CheckCubeDataTypes application extracts all the data types and data sizes for all attributes in the cube and compares them to the data types and data sizes in the cube’s data source view.  It then generates an Excel CSV file which contains all this metadata along with a flag indicating if there is a mismatch between the DSV and the dimensional attribute.  Note that the app not only checks all the attribute keys but also the name and value columns for each attribute. Another benefit of having the metadata held in a CSV text file format is that you can place the file under source code control.  This allows you to compare the metadata of the previous cube release with your new release to highlight problems introduced by new development.
    You can download the C# source code from here: CheckCubeDataTypes.zip
    A typical example of the output Excel CSV file is shown below - note that the last column shows a data size mismatch by TRUE appearing in the column
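    Since the post only links to the downloadable C# source, here is a minimal sketch of the cube-side half of the idea, assuming the AMO library (Microsoft.AnalysisServices) and hypothetical server/database names; it simply dumps each dimension attribute's key and name column types and sizes in CSV form so they can be diffed against the DSV metadata (the DSV comparison itself is omitted here).

        using System;
        using Microsoft.AnalysisServices;

        class DumpAttributeSizes
        {
            static void Main()
            {
                Server server = new Server();
                server.Connect("localhost");                                  // hypothetical server name
                Database db = server.Databases.FindByName("MySsasDatabase");  // hypothetical database name

                Console.WriteLine("Dimension,Attribute,Column,DataType,DataSize");
                foreach (Dimension dim in db.Dimensions)
                {
                    foreach (DimensionAttribute attr in dim.Attributes)
                    {
                        // An attribute can have a composite key, so loop over all key columns
                        foreach (DataItem key in attr.KeyColumns)
                            Console.WriteLine("{0},{1},Key,{2},{3}", dim.Name, attr.Name, key.DataType, key.DataSize);

                        // The name column, if one is defined, is checked by the real utility too
                        if (attr.NameColumn != null)
                            Console.WriteLine("{0},{1},Name,{2},{3}", dim.Name, attr.Name, attr.NameColumn.DataType, attr.NameColumn.DataSize);
                    }
                }
                server.Disconnect();
            }
        }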

    Read the article

  • Separate php.ini file for each Apache virtual host?

    - by Calvin L
    Is it possible to have a separate php.ini file that overrides the default php.ini file for each virtual host? I'm running Apache/2.2.14, PHP 5.3.2-1. For example I have several vhosts pointing to domains in my /var/www/ directory: /var/www/website1.com /var/www/website2.com What I'd like is to be able to place a custom php.ini file in each directory that would override the default values only for that vhost, but keep the original defaults if the value isn't specified: /var/www/website1.com/htdocs/ /var/www/website1.com/php.ini

    Read the article

  • Setup Convergence Address Book DisplayName Lookup

    - by user13332755
    At Convergence Address Book the default lookup for 'Display Name' is the LDAP attribute 'cn', which leads to confusion if you start to set up 'displayName' LDAP attributes for your users.
    LDAP user entry with displayName attribute:
    This behavior can be controlled by the configuration file xlate-inetorgperson.xml. Change the default value of the XPATH abperson\entry\displayname from 'cn' to 'displayName'.
    <convergence_deploy_location>/config/templates/ab/corp-dir/xlate-inetorgperson.xml
    Note: In case the user has no 'displayName' attribute in LDAP, you might notice '...' at the user entry, if it was found via the 'mail' attribute.

    Read the article

  • How bad is it to use a virtual file system with VMWare? [closed]

    - by user30997
    IT is running a series of VMs that we'd like to see optimized further: if the VMs are Windows XP, storing their NTFS images on the virtual disk (ext3) provided by Linux/VMWare, how much of a hit are we taking, as opposed to having a partition of the host hard drive formatted NTFS to eliminate the translation layer and the extra level of operating system IO preparation?

    Read the article

  • C#: How would you only draw certain ListView Items while in Virtual Mode?

    - by Jonathan Richter
    I am trying to create a filter-like feature to use in a ListView so that if the user selects an image index from 0-5, it will loop through the ListView items and display only the items with the matching image index, hiding the others. How would I go about creating such a routine?
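    One common way to do this in virtual mode (shown here as a hedged sketch rather than the only approach, with hypothetical names) is to keep a master list of items, rebuild a filtered list whenever the chosen image index changes, and serve rows from that filtered list in the RetrieveVirtualItem handler while updating VirtualListSize:

        using System.Collections.Generic;
        using System.Windows.Forms;

        public class FilteredListView
        {
            private readonly ListView listView;
            private readonly List<ListViewItem> allItems = new List<ListViewItem>();
            private List<ListViewItem> visibleItems = new List<ListViewItem>();

            public FilteredListView(ListView lv)
            {
                listView = lv;
                listView.VirtualMode = true;
                listView.RetrieveVirtualItem += OnRetrieveVirtualItem;
            }

            public void AddItem(ListViewItem item)
            {
                allItems.Add(item);   // the master list lives here, not in listView.Items
            }

            // Pass -1 to show everything, or 0-5 to show only items with that ImageIndex
            public void ApplyFilter(int imageIndex)
            {
                visibleItems = (imageIndex < 0)
                    ? new List<ListViewItem>(allItems)
                    : allItems.FindAll(item => item.ImageIndex == imageIndex);

                listView.VirtualListSize = visibleItems.Count;   // hidden items simply fall out of the virtual range
                listView.Invalidate();
            }

            private void OnRetrieveVirtualItem(object sender, RetrieveVirtualItemEventArgs e)
            {
                // The control asks for rows by index; hand back rows from the filtered view only
                e.Item = visibleItems[e.ItemIndex];
            }
        }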

    Read the article

  • How do I set a single virtual directory of my web app to not inherit web.config?

    - by Ryan
    I have a virtual directory set up in one of my web apps that needs to not inherit the web.config of the main app so it can run on its own. I am wondering how I can do this, because right now when I hit it (mainwebapp.domain.com/virdir) it throws an error saying it can't find some dependencies that are listed in the main app's web.config (it shows the main app's web.config in the error message). This virdir contains its own little app that needs to just run standalone.

    Read the article

  • Can I Move a MediaWiki site to a new virtual folder?

    - by Tyler
    I had MediaWiki installed as the default site on my server using IIS 6, and I created a virtual directory and pointed it to the Wiki folder. The page loads, but all the links point to the original location, which no longer points to the Wiki folder.
    Info: Server Name: tech; Path to Wiki: http://tech/wiki/
    Example links: the current (wrong) link displayed is http://tech/index.php?title=Main_Page, but the link should look like http://tech/wiki/index.php?title=Main_Page
    None of the links include the /wiki prefix. Any ideas?

    Read the article

  • Caller Info Attributes in C# 5.0

    - by Jalpesh P. Vadgama
    In C# 5.0 Microsoft has introduced caller information attributes. This new feature is very useful if you want to log your code's activity, and with its help you can implement logging functionality very easily. It can help any programmer with tracing, debugging and diagnostics of any application. With caller information we can get the following details:
    CallerFilePathAttribute: gives the full path of the source file that contains the caller, as known at compile time.
    CallerLineNumberAttribute: gives the line number in the source file at which the method is called.
    CallerMemberNameAttribute: gives the method or property name of the caller.
    Read more
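    As a quick illustration of how these attributes are typically consumed (a minimal sketch with a hypothetical TraceMessage helper, not code from the original post), the compiler fills in the optional parameters at each call site:

        using System;
        using System.Runtime.CompilerServices;

        static class Logger
        {
            // The compiler substitutes the caller's member name, file path and line number
            // into the optional parameters; callers just write Logger.TraceMessage("text").
            public static void TraceMessage(string message,
                [CallerMemberName] string memberName = "",
                [CallerFilePath] string sourceFilePath = "",
                [CallerLineNumber] int sourceLineNumber = 0)
            {
                Console.WriteLine("{0} ({1}:{2}) - {3}", memberName, sourceFilePath, sourceLineNumber, message);
            }
        }

        class Program
        {
            static void Main()
            {
                Logger.TraceMessage("Starting up");   // logs something like "Main (...\Program.cs:22) - Starting up"
            }
        }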

    Read the article

  • c++ compile error

    - by Niranjan
    Hi, I am trying to develop abstract design pattern code for one of my project as below.. But, I am not able to compile the code ..giving some compile errors(like "conversion from 'ProductA1 *' to 'ProductA *' exists, but is inaccessible" ).. Can any one please help me out in this... #include "stdafx.h" #include <iostream> using namespace std; class ProductA { public: virtual void Operation1()=0; virtual void Operation2()=0; }; class ProductA1 : ProductA { public: virtual void Operation1() {cout<<"PD ProductA1 Operation1"<<endl; } virtual void Operation2() {cout<<"PD ProductA1 Operation2"<<endl; } }; class ProductA2 : ProductA { public: virtual void Operation1() {cout<<"DT ProductA2 Operation1"<<endl; } virtual void Operation2() {cout<<"DT ProductA2 Operation2"<<endl; } }; //------------------------------------------------------------- class ProductB { public: virtual void Operation3()=0; virtual void Operation4()=0; }; class ProductB1 : ProductB { public: void Operation3() { cout<<"PD ProductB1 Operation3"<<endl; } void Operation4() { cout<<"PD ProductB1 Operation4"<<endl; } }; class ProductB2 : ProductB { public: void Operation3() { cout<<"DT ProductB2 Operation3"<<endl; } void Operation4() { cout<<"DT ProductB2 Operation4"<<endl; } }; //--------------- abstrct factory --------------------------- class Factory { public: virtual ProductA* CreateA () =0; virtual ProductB* CreateB ()=0; }; class Factory1 : Factory { public: ProductA* CreateA () { return new ProductA1(); } ProductB* CreateB () { return new ProductB1(); } }; class Factory2 : Factory { public: ProductA* CreateA () { return new ProductA2(); } ProductB* CreateB () { return new ProductB2(); } }; //--------------------- client -------------------------------- int _tmain(int argc, _TCHAR* argv[]) { Factory* pf = new Factory1(); ProductA *pa = pf->CreateA(); pa->Operation1(); pa->Operation2(); ProductB *pb = pf->CreateB(); pb->Operation3(); pb->Operation4(); return 0; }

    Read the article

  • Problem creating levels using inherited classes/polymorphism

    - by Adam
    I'm trying to write my level classes by having a base class that each level class inherits from...The base class uses pure virtual functions. My base class is only going to be used as a vector that'll have the inherited level classes pushed onto it...This is what my code looks like at the moment, I've tried various things and get the same result (segmentation fault). //level.h class Level { protected: Mix_Music *music; SDL_Surface *background; SDL_Surface *background2; vector<Enemy> enemy; bool loaded; int time; public: Level(); virtual ~Level(); int bgX, bgY; int bg2X, bg2Y; int width, height; virtual void load(); virtual void unload(); virtual void update(); virtual void draw(); }; //level.cpp Level::Level() { bgX = 0; bgY = 0; bg2X = 0; bg2Y = 0; width = 2048; height = 480; loaded = false; time = 0; } Level::~Level() { } //virtual functions are empty... I'm not sure exactly what I'm supposed to include in the inherited class structure, but this is what I have at the moment... //level1.h class Level1: public Level { public: Level1(); ~Level1(); void load(); void unload(); void update(); void draw(); }; //level1.cpp Level1::Level1() { } Level1::~Level1() { enemy.clear(); Mix_FreeMusic(music); SDL_FreeSurface(background); SDL_FreeSurface(background2); music = NULL; background = NULL; background2 = NULL; Mix_CloseAudio(); } void Level1::load() { music = Mix_LoadMUS("music/song1.xm"); background = loadImage("image/background.png"); background2 = loadImage("image/background2.png"); Mix_OpenAudio(48000, MIX_DEFAULT_FORMAT, 2, 4096); Mix_PlayMusic(music, -1); } void Level1::unload() { } //functions have level-specific code in them... Right now for testing purposes, I just have the main loop call Level1 level1; and use the functions, but when I run the game I get a segmentation fault. This is the first time I've tried writing inherited classes, so I know I'm doing something wrong, but I can't seem to figure out what exactly.

    Read the article

  • Header Guard Issues - Getting Swallowed Alive

    - by gjnave
    I'm totally at wit's end: I can't figure out how my dependency issues. I've read countless posts and blogs and reworked my code so many times that I can't even remember what almost worked and what didnt. I continually get not only redefinition errors, but class not defined errors. I rework the header guards and remove some errors simply to find others. I somehow got everything down to one error but then even that got broke while trying to fix it. Would you please help me figure out the problem? card.cpp #include <iostream> #include <cctype> #include "card.h" using namespace std; // ====DECL====== Card::Card() { abilities = 0; flavorText = 0; keywords = 0; artifact = 0; classType = new char[strlen("Card") + 1]; classType = "Card"; } Card::~Card (){ delete name; delete abilities; delete flavorText; artifact = NULL; } // ------------ Card::Card(const Card & to_copy) { name = new char[strlen(to_copy.name) +1]; // creating dynamic array strcpy(to_copy.name, name); type = to_copy.type; color = to_copy.color; manaCost = to_copy.manaCost; abilities = new char[strlen(to_copy.abilities) +1]; strcpy(abilities, to_copy.abilities); flavorText = new char[strlen(to_copy.flavorText) +1]; strcpy(flavorText, to_copy.flavorText); keywords = new char[strlen(to_copy.keywords) +1]; strcpy(keywords, to_copy.keywords); inPlay = to_copy.inPlay; tapped = to_copy.tapped; enchanted = to_copy.enchanted; cursed = to_copy.cursed; if (to_copy.type != ARTIFACT) artifact = to_copy.artifact; } // ====DECL===== int Card::equipArtifact(Artifact* to_equip){ artifact = to_equip; } Artifact * Card::unequipArtifact(Card * unequip_from){ Artifact * to_remove = artifact; artifact = NULL; return to_remove; // put card in hand or in graveyard } int Card::enchant( Card * to_enchant){ to_enchant->enchanted = true; cout << "enchanted" << endl; } int Card::disenchant( Card * to_disenchant){ to_disenchant->enchanted = false; cout << "Enchantment Removed" << endl; } // ========DECL===== Spell::Spell() { currPower = basePower; currToughness = baseToughness; classType = new char[strlen("Spell") + 1]; classType = "Spell"; } Spell::~Spell(){} // --------------- Spell::Spell(const Spell & to_copy){ currPower = to_copy.currPower; basePower = to_copy.basePower; currToughness = to_copy.currToughness; baseToughness = to_copy.baseToughness; } // ========= int Spell::attack( Spell *& blocker ){ blocker->currToughness -= currPower; currToughness -= blocker->currToughness; } //========== int Spell::counter (Spell *& to_counter){ cout << to_counter->name << " was countered by " << name << endl; } // ============ int Spell::heal (Spell *& to_heal, int amountOfHealth){ to_heal->currToughness += amountOfHealth; } // ------- Creature::Creature(){ summoningSick = true; } // =====DECL====== Land::Land(){ color = NON; classType = new char[strlen("Land") + 1]; classType = "Land"; } // ------ int Land::generateMana(int mana){ // ... 
// } card.h #ifndef CARD_H #define CARD_H #include <cctype> #include <iostream> #include "conception.h" class Artifact; class Spell; class Card : public Conception { public: Card(); Card(const Card &); ~Card(); protected: char* name; enum CardType { INSTANT, CREATURE, LAND, ENCHANTMENT, ARTIFACT, PLANESWALKER}; enum CardColor { WHITE, BLUE, BLACK, RED, GREEN, NON }; CardType type; CardColor color; int manaCost; char* abilities; char* flavorText; char* keywords; bool inPlay; bool tapped; bool cursed; bool enchanted; Artifact* artifact; virtual int enchant( Card * ); virtual int disenchant (Card * ); virtual int equipArtifact( Artifact* ); virtual Artifact* unequipArtifact(Card * ); }; // ------------ class Spell: public Card { public: Spell(); ~Spell(); Spell(const Spell &); protected: virtual int heal( Spell *&, int ); virtual int attack( Spell *& ); virtual int counter( Spell*& ); int currToughness; int baseToughness; int currPower; int basePower; }; class Land: public Card { public: Land(); ~Land(); protected: virtual int generateMana(int); }; class Forest: public Land { public: Forest(); ~Forest(); protected: int generateMana(); }; class Creature: public Spell { public: Creature(); ~Creature(); protected: bool summoningSick; }; class Sorcery: public Spell { public: Sorcery(); ~Sorcery(); protected: }; #endif conception.h -- this is an "uber class" from which everything derives class Conception{ public: Conception(); ~Conception(); protected: char* classType; }; conception.cpp Conception::Conception{ Conception(){ classType = new char[11]; char = "Conception"; } game.cpp -- this is an incomplete class as of this code #include <iostream> #include <cctype> #include "game.h" #include "player.h" Battlefield::Battlefield(){ card = 0; } Battlefield::~Battlefield(){ delete card; } Battlefield::Battlefield(const Battlefield & to_copy){ } // =========== /* class Game(){ public: Game(); ~Game(); protected: Player** player; // for multiple players Battlefield* root; // for battlefield getPlayerMove(); // ask player what to do addToBattlefield(); removeFromBattlefield(); sendAttack(); } */ #endif game.h #ifndef GAME_H #define GAME_H #include "list.h" class CardList(); class Battlefield : CardList{ public: Battlefield(); ~Battlefield(); protected: Card* card; // make an array }; class Game : Conception{ public: Game(); ~Game(); protected: Player** player; // for multiple players Battlefield* root; // for battlefield getPlayerMove(); // ask player what to do addToBattlefield(); removeFromBattlefield(); sendAttack(); Battlefield* field; }; list.cpp #include <iostream> #include <cctype> #include "list.h" // ========== LinkedList::LinkedList(){ root = new Node; classType = new char[strlen("LinkedList") + 1]; classType = "LinkedList"; }; LinkedList::~LinkedList(){ delete root; } LinkedList::LinkedList(const LinkedList & obj) { // code to copy } // --------- // ========= int LinkedList::delete_all(Node* root){ if (root = 0) return 0; delete_all(root->next); root = 0; } int LinkedList::add( Conception*& is){ if (root == 0){ root = new Node; root->next = 0; } else { Node * curr = root; root = new Node; root->next=curr; root->it = is; } } int LinkedList::remove(Node * root, Node * prev, Conception* is){ if (root = 0) return -1; if (root->it == is){ root->next = root->next; return 0; } remove(root->next, root, is); return 0; } Conception* LinkedList::find(Node*& root, const Conception* is, Conception* holder = NULL) { if (root==0) return NULL; if (root->it == is){ return root-> it; } holder = find(root->next, 
is); return holder; } Node* LinkedList::goForward(Node * root){ if (root==0) return root; if (root->next == 0) return root; else return root->next; } // ============ Node* LinkedList::goBackward(Node * root){ root = root->prev; } list.h #ifndef LIST_H #define LIST_H #include <iostream> #include "conception.h" class Node : public Conception { public: Node() : next(0), prev(0), it(0) { it = 0; classType = new char[strlen("Node") + 1]; classType = "Node"; }; ~Node(){ delete it; delete next; delete prev; } Node* next; Node* prev; Conception* it; // generic object }; // ---------------------- class LinkedList : public Conception { public: LinkedList(); ~LinkedList(); LinkedList(const LinkedList&); friend bool operator== (Conception& thing_1, Conception& thing_2 ); protected: virtual int delete_all(Node*); virtual int add( Conception*& ); // virtual Conception* find(Node *&, const Conception*, Conception* ); // virtual int remove( Node *, Node *, Conception* ); // removes question with keyword int display_all(node*& ); virtual Node* goForward(Node *); virtual Node* goBackward(Node *); Node* root; // write copy constrcutor }; // ============= class CircularLinkedList : public LinkedList { public: // CircularLinkedList(); // ~CircularLinkedList(); // CircularLinkedList(const CircularLinkedList &); }; class DoubleLinkedList : public LinkedList { public: // DoubleLinkedList(); // ~DoubleLinkedList(); // DoubleLinkedList(const DoubleLinkedList &); protected: }; // END OF LIST Hierarchy #endif player.cpp #include <iostream> #include "player.h" #include "list.h" using namespace std; Library::Library(){ root = 0; } Library::~Library(){ delete card; } // ====DECL========= Player::~Player(){ delete fname; delete lname; delete deck; } Wizard::~Wizard(){ delete mana; delete rootL; delete rootH; } // =====Player====== void Player::changeName(const char[] first, const char[] last){ char* backup1 = new char[strlen(fname) + 1]; strcpy(backup1, fname); char* backup2 = new char[strlen(lname) + 1]; strcpy(backup1, lname); if (first != NULL){ fname = new char[strlen(first) +1]; strcpy(fname, first); } if (last != NULL){ lname = new char[strlen(last) +1]; strcpy(lname, last); } return 0; } // ========== void Player::seeStats(Stats*& to_put){ to_put->wins = stats->wins; to_put->losses = stats->losses; to_put->winRatio = stats->winRatio; } // ---------- void Player::displayDeck(const LinkedList* deck){ } // ================ void CardList::findCard(Node* root, int id, NodeCard*& is){ if (root == NULL) return; if (root->it.id == id){ copyCard(root->it, is); return; } else findCard(root->next, id, is); } // -------- void CardList::deleteAll(Node* root){ if (root == NULL) return; deleteAll(root->next); root->next = NULL; } // --------- void CardList::removeCard(Node* root, int id){ if (root == NULL) return; if (root->id = id){ root->prev->next = root->next; // the prev link of root, looks back to next of prev node, and sets to where root next is pointing } return; } // --------- void CardList::addCard(Card* to_add){ if (!root){ root = new Node; root->next = NULL; root->prev = NULL; root->it = &to_add; return; } else { Node* original = root; root = new Node; root->next = original; root->prev = NULL; original->prev = root; } } // ----------- void CardList::displayAll(Node*& root){ if (root == NULL) return; cout << "Card Name: " << root->it.cardName; cout << " || Type: " << root->it.type << endl; cout << " --------------- " << endl; if (root->classType == "Spell"){ cout << "Base Power: " << root->it.basePower; cout << " || 
Current Power: " << root->it.currPower << endl; cout << "Base Toughness: " << root->it.baseToughness; cout << " || Current Toughness: " << root->it.currToughness << endl; } cout << "Card Type: " << root->it.currPower; cout << " || Card Color: " << root->it.color << endl; cout << "Mana Cost" << root->it.manaCost << endl; cout << "Keywords: " << root->it.keywords << endl; cout << "Flavor Text: " << root->it.flavorText << endl; cout << " ----- Class Type: " << root->it.classType << " || ID: " << root->it.id << " ----- " << endl; cout << " ******************************************" << endl; cout << endl; // ------- void CardList::copyCard(const Card& to_get, Card& put_to){ put_to.type = to_get.type; put_to.color = to_get.color; put_to.manaCost = to_get.manaCost; put_to.inPlay = to_get.inPlay; put_to.tapped = to_get.tapped; put_to.class = to_get.class; put_to.id = to_get.id; put_to.enchanted = to_get.enchanted; put_to.artifact = to_get.artifact; put_to.class = to_get.class; put.to.abilities = new char[strlen(to_get.abilities) +1]; strcpy(put_to.abilities, to_get.abilities); put.to.keywords = new char[strlen(to_get.keywords) +1]; strcpy(put_to.keywords, to_get.keywords); put.to.flavorText = new char[strlen(to_get.flavorText) +1]; strcpy(put_to.flavorText, to_get.flavorText); if (to_get.class = "Spell"){ put_to.baseToughness = to_get.baseToughness; put_to.basePower = to_get.basePower; put_to.currToughness = to_get.currToughness; put_to.currPower = to_get.currPower; } } // ---------- player.h #ifndef player.h #define player.h #include "list.h" // ============ class CardList() : public LinkedList(){ public: CardList(); ~CardList(); protected: virtual void findCard(Card&); virtual void addCard(Card* ); virtual void removeCard(Node* root, int id); virtual void deleteAll(); virtual void displayAll(); virtual void copyCard(const Conception*, Node*&); Node* root; } // --------- class Library() : public CardList(){ public: Library(); ~Library(); protected: Card* card; int numCards; findCard(Card&); // get Card and fill empty template } // ----------- class Deck() : public CardList(){ public: Deck(); ~Deck(); protected: enum deckColor { WHITE, BLUE, BLACK, RED, GREEN, MIXED }; char* deckName; } // =============== class Mana(int amount) : public Conception { public: Mana() : displayTotal(0), classType(0) { displayTotal = 0; classType = new char[strlen("Mana") + 1]; classType = "Mana"; }; protected: int accrued; void add(); void remove(); int displayTotal(); } inline Mana::add(){ accrued += 1; } inline Mana::remove(){ accrued -= 1; } inline Mana::displayTotal(){ return accrued; } // ================ class Stats() : public Conception { public: friend class Player; friend class Game; Stats() : wins(0), losses(0), winRatio(0) { wins = 0; losses = 0; if ( (wins + losses != 0) winRatio = wins / (wins + losses); else winRatio = 0; classType = new char[strlen("Stats") + 1]; classType = "Stats"; } protected: int wins; int losses; float winRatio; void int getStats(Stats*& ); } // ================== class Player() : public Conception{ public: Player() : wins(0), losses(0), winRatio(0) { fname = NULL; lname = NULL; stats = NULL; CardList = NULL; classType = new char[strlen("Player") + 1]; classType = "Player"; }; ~Player(); Player(const Player & obj); protected: // member variables char* fname; char* lname; Stats stats; // holds previous game statistics CardList* deck[]; // hold multiple decks that player might use - put ll in this private: // member functions void changeName(const char[], const char[]); void 
shuffleDeck(int); void seeStats(Stats*& ); void displayDeck(int); chooseDeck(); } // -------------------- class Wizard(Card) : public Player(){ public: Wizard() : { mana = NULL; rootL = NULL; rootH = NULL}; ~Wizard(); protected: playCard(const Card &); removeCard(Card &); attackWithCard(Card &); enchantWithCard(Card &); disenchantWithCard(Card &); healWithCard(Card &); equipWithCard(Card &); Mana* mana[]; Library* rootL; // Library Library* rootH; // Hand } #endif

    Read the article

  • get-wmiobject sql join in powershell - trying to find physical memory vs. virtual memory of remote s

    - by Willy
    get-wmiobject -query "Select TotalPhysicalMemory from Win32_LogicalMemoryConfiguration" -computer COMPUTERNAME > output.csv
    get-wmiobject -query "Select TotalPageFileSpace from Win32_LogicalMemoryConfiguration" -computer COMPUTERNAME > output.csv
    I am trying to complete this script with output such as:
    Computer    Physical Memory    Virtual Memory
    server1     4096mb             8000mb
    server2     2048mb             4000mb

    Read the article

  • c++ property class structure

    - by Without me Its just Aweso
    I have a c++ project being developed in QT. The problem I'm running in to is I am wanting to have a single base class that all my property classes inherit from so that I can store them all together. Right now I have: class AbstractProperty { public: AbstractProperty(QString propertyName); virtual QString toString() const = 0; virtual QString getName() = 0; virtual void fromString(QString str) = 0; virtual int toInteger() = 0; virtual bool operator==(const AbstractProperty &rightHand) = 0; virtual bool operator!=(const AbstractProperty &rightHand) = 0; virtual bool operator<(const AbstractProperty &rightHand) = 0; virtual bool operator>(const AbstractProperty &rightHand) = 0; virtual bool operator>=(const AbstractProperty &rightHand) = 0; virtual bool operator<=(const AbstractProperty &rightHand) = 0; protected: QString name; }; then I am implementing classes such as PropertyFloat and PropertyString and providing implementation for the comparator operators based on the assumption that only strings are being compared with strings and so on. However the problem with this is there would be no compiletime error thrown if i did if(propertyfloat a < propertystring b) however my implementation of the operators for each derived class relies on them both being the same derived class. So my problem is I cant figure out how to implement a property structure so that I can have them all inherit from some base type but code like what I have above would throw a compile time error. Any ideas on how this can be done? For those familiar with QT I tried using also a implementation with QVariant however QVariant doesn't have operators < and defined in itself only in some of its derived classes so it didn't work out. What my end goal is, is to be able to generically refer to properties. I have an element class that holds a hashmap of properties with string 'name' as key and the AbstractProperty as value. I want to be able to generically operate on the properties. i.e. if I want to get the max and min values of a property given its string name I have methods that are completely generic that will pull out the associated AbstactProperty from each element and find the max/min no matter what the type is. so properties although initially declared as PropertyFloat/PropertyString they will be held generically.

    Read the article

  • In Visual C# 2010 Express, what is the most reliable way to detect Windows OS architecture (x86, x64)?

    - by NightsEvil
    Hi, I am using Visual C# 2010 Express and I need the most reliable way (on a button click, and in the .NET 2.0 framework) to detect whether Windows is currently x86 or x64 and show it in a message box. Up till now I have been using this code, but I need to know if there is a more accurate way:
    string target = @"C:\Windows\SysWow64";
    if (Directory.Exists(target))
    {
        MessageBox.Show("x64");
    }
    else
    {
        MessageBox.Show("x86");
    }
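    For reference, a commonly used alternative to probing for the SysWow64 directory is to combine IntPtr.Size with the IsWow64Process Win32 API; the sketch below is only an illustration under the assumption that kernel32's IsWow64Process is available (XP SP2 / Server 2003 SP1 and later), not a definitive answer to the question:

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        static class OsArchitecture
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            [return: MarshalAs(UnmanagedType.Bool)]
            private static extern bool IsWow64Process(IntPtr hProcess, out bool wow64Process);

            public static bool Is64BitOperatingSystem()
            {
                if (IntPtr.Size == 8)
                    return true;              // a 64-bit process can only be running on a 64-bit OS

                bool isWow64;                 // 32-bit process: ask Windows whether it runs under WOW64
                return IsWow64Process(Process.GetCurrentProcess().Handle, out isWow64) && isWow64;
            }
        }

        // Usage from the button click handler:
        //   MessageBox.Show(OsArchitecture.Is64BitOperatingSystem() ? "x64" : "x86");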

    Read the article
