Search Results

Search found 8286 results on 332 pages for 'defined'.


  • SOA Suite 11g Native Format Builder Complex Format Example

    - by bob.webster
    This rather long posting details the steps required to process a grouping of fixed length records using Format Builder.   If it’s 10 pm and you’re feeling beat you might want to leave this until tomorrow.  But if it’s 10 pm and you need to get a Format Builder Complex template done, read on… The goal is to process individual orders from a file using the 11g File Adapter and Format Builder Sample Data =========== 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 301Hexagon Widget         1150.98 ORD6735502/01/2011 The records are fixed length records representing a number of logical Order records. Each order record consists of a number of item records starting with a 3 digit number, followed by a single Summary Record which starts with the constant ORD. How can this file be processed so that the first poll returns the first order? 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 And the second poll returns the second order? 301Hexagon Widget           1150.98 ORD6735502/01/2011 Note: if you need more than one order per poll, that’s also possible, see the “Multiple Messages” field in the “File Adapter Step 6 of 9” snapshot further down.   To follow along with this example you will need - Studio Edition Version 11.1.1.4.0    with the   - SOA Extension for JDeveloper 11.1.1.4.0 installed Both can be downloaded from here:  http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html You will not need a running WebLogic Server domain to complete the steps and Format Builder tests in this article.     Start with a SOA Composite containing a File Adapter The Format Builder is part of the File Adapter so start by creating a new SOA Project and Composite. Here is a quick summary for those not familiar with these steps - Start JDeveloper - From the Main Menu choose File->New - In the New Gallery window that opens Expand the “General” category and Select the Applications node.   Then choose SOA Application from the Items section on the right.  Finally press the OK button. - In Step 1 of the “Create SOA Application wizard” that appears enter an Application Name and an Directory of your     choice,   then press the Next button. - In Step 2 of the “Create SOA Application wizard”, press the Next button leaving all entries as defaulted. - In Step 3 of the “Create SOA Application wizard”, Enter a composite name of your choice and Press the Finish   Button These steps result in a new Application and SOA Project. The SOA Project contains a composite.xml file which is opened and shown below. For our example we have not defined a Mediator or a BPEL process to minimize the steps, but one or the other would eventually be needed to use the File Adapter we are about to create. Drag and drop the File Adapter icon from the Component Pallette onto either the LEFT side of the diagram under “Exposed Services” or the right side under “External References”.  (See the Green Circle in the image below).  Placing the adapter on the left side would indicate the file being processed is inbound to the composite, if the adapter is placed on the right side then the data is outbound to a file.     Note that the same Format Builder definition can be used in both directions.  
For example we could use the format with a File Adapter on the left side of the composite to parse fixed data into XML, modify the data in our Composite or BPEL process and then use the same Format Builder definition with a File adapter on the right side of the composite to write the data back out in the same fixed data format When the File Adapter is dropped on the Composite the File Adapter Wizard Appears. Skip Past the first page, Step 1 of 9 by pressing the Next button. In Step 2 enter a service name of your choice as shown below, then press Next   When the Native Format Builder appears, skip the welcome page by pressing next. Also press the Next button to accept the settings on Step 3 of 9 On Step 4, select Read File and press the Next button as shown below.   On Step 5 enter a directory that will contain a file with the input data, then  Press the Next button as shown below. In step 6, enter *.txt or another file format to select input files from the input directory mentioned in step 5. ALSO check the “Files contain Multiple Messages” checkbox and set the “Publish Messages in Batches of” field to 1.  The value can be set higher to increase the number of logical order group records returned on each poll of the file adapter.  In other words, it determines the number of Orders that will be sent to each instance of a Mediator or Composite processing using the File Adapter.   Skip Step 7 by pressing the Next button In Step 8 press the Gear Icon on the right side to load the Native Format Builder.       Native Format Builder  appears Before diving into the format, here is an overview of the process. Approach - Bottom up Assuming an Order is a grouping of item records and a summary record…. - Define a separate  Complex Type for each Record Type found in the group.    (One for itemRecord and one for summaryRecord) - Define a Complex Type to contain the Group of Record types defined above   (LogicalOrderRecord) - Define a top level element to represent an order.  (order)   The order element will be of type LogicalOrderRecord   Defining the Format In Step 1 select   “Create new”  and  “Complex Type” and “Next”   In Step two browse to and select a file containing the test data shown at the start of this article. A link is provided at the end of this article to download a file containing the test data. Press the Next button     In Step 3 Complex types must be define for each type of input record. Select the Root-Element and Click on the Add Complex Type icon This creates a new empty complex type definition shown below. The fastest way to create the definition is to highlight the first line of the Sample File data and drag the line onto the  <new_complex_type> Format Builder introspects the data and provides a grid to define additional fields. Change the “Complex Type Name” to  “itemRecord” Then click on the ruler to indicate the position of fixed columns.  Drag the red triangle icons to the exact columns if necessary. Double click on an existing red triangle to remove an unwanted entry. In the case below fields are define in columns 0-3, 4-28, 29-eol When the field definitions are correct, press the “Generate Fields” button. Field entries named C1, C2 and C3 will be created as shown below. Click on the field names and rename them from C1->itemNum, C2->itemDesc and C3->itemCost  When all the fields are correctly defined press OK to save the complex type.        Next, the process is repeated to define a Complex Type for the SummaryRecord. 
Select the Root-Element in the schema tree and press the new complex type icon Then highlight and drag the Summary Record from the sample data onto the <new_complex_type>   Change the complex type name to “summaryRecord” Mark the fixed fields for Order Number and Order Date. Press the Generate Fields button and rename C1 and C2 to itemNum and orderDate respectively.   The last complex type to be defined is a type to hold the group of items and the summary record. Select the Root-Element in the schema tree and click the new complex type icon Select the “<new_complex_type>” entry and click the pencil icon   On the Complex Type Details page change the name and type of each input field. Change line 1 to be named item and set the Type  to “itemRecord” Change line 2 to be named summary and set the Type to “summaryRecord” We also need to indicate that itemRecords repeat in the input file. Click the pencil icon at the right side of the item line. On the Edit Details page change the “Max Occurs” entry from 1 to UNBOUNDED. We also need to indicate how to identify an itemRecord.  Since each item record has “.” in column 32 we can use this fact to differentiate an item record from a summary record. Change the “Look Ahead” field to value 32 and enter a period in the “Look For” field Press the OK button to save entry.     Finally, its time to create a top level element to represent an order. Select the “Root-Element” in the schema tree and press the New element icon Click on the <new_element> and press the pencil icon.   Set the Element Name to “order” and change the Data Type to “logicalOrderRecord” Press the OK button to save the element definition.   The final definition should match the screenshot below. Press the Next Button to view the definition source.     Press the Test Button to test the definition   Press the Green Triangle Icon to run the test.   And we are presented with an unwelcome error. The error states that the processor ran out of data while working through the definition. The processor was unable to differentiate between itemRecords and summaryRecords and therefore treated the entire file as a list of itemRecords.  At end of file, the “summary” portion of the logicalOrderRecord remained unprocessed but mandatory.   This root cause of this error is the loss of our “lookAhead” definition used to identify itemRecords. This appears to be a bug in the  Native Format Builder 11.1.1.4.0 Luckily, a simple workaround exists. Press the Cancel button and return to the “Step 4 of 4” Window. Manually add    nxsd:lookAhead="32" nxsd:lookFor="."   attributes after the maxOccurs attribute of the item element. as shown in the highlighted text below.   When the lookAhead and lookFor attributes have been added Press the Test button and on the Test page press the Green Triangle. The test is now successful, the first order in the file is returned by the File Adapter.     Below is a complete listing of the Result XML from the right column of the screen above   Try running it The downloaded input test file and completed schema file can be used for testing without following all the Native Format Builder steps in this example. Use the following link to download a file containing the sample data. Download Sample Input Data This is the best approach rather than cutting and pasting the input data at the top of the article.  Since the data is fixed length it’s very important to watch out for trailing spaces in the data and to ensure an eol character at the end of every line. 
The download file is correctly formatted. The final schema definition can be downloaded at the following link: Download Completed Schema Definition

- Save the inputData.txt file to a known location, such as the xsd folder in your project.
- Save the inputData_6.xsd file to the xsd folder in your project.
- At step 1 of the Native Format Builder wizard (as shown above) check the “Edit existing” radio button, then browse to and select the inputData_6.xsd file.
- At step 2 of the Format Builder configuration wizard (as shown above) supply the path and filename for the inputData.txt file.
- You can then proceed to the test page and run a test.
- Remember that the wizard bug will drop the lookAhead and lookFor attributes; you will need to manually add nxsd:lookAhead="32" nxsd:lookFor="." after the maxOccurs attribute of the item element in the LogicalOrderRecord complex type (as shown above).

Good luck with your Format Builder project!
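For orientation, here is a rough sketch of what the finished schema can look like once the lookAhead/lookFor workaround has been re-applied. This is not the downloadable inputData_6.xsd itself: the namespaces, field widths and nxsd style attributes below are illustrative assumptions based on the steps described above, so treat the downloaded schema as the authoritative version.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative sketch only; the generated inputData_6.xsd is authoritative. -->
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                xmlns:tns="http://TargetNamespace.com/order"
                targetNamespace="http://TargetNamespace.com/order"
                elementFormDefault="qualified"
                nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="US-ASCII">

      <!-- One item line: 3-digit number, description, cost (widths should follow the ruler marks you set). -->
      <xsd:complexType name="itemRecord">
        <xsd:sequence>
          <xsd:element name="itemNum"  type="xsd:string" nxsd:style="fixedLength" nxsd:length="3"/>
          <xsd:element name="itemDesc" type="xsd:string" nxsd:style="fixedLength" nxsd:length="25"/>
          <xsd:element name="itemCost" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
        </xsd:sequence>
      </xsd:complexType>

      <!-- One summary line: the ORD marker plus order number, then the order date. -->
      <xsd:complexType name="summaryRecord">
        <xsd:sequence>
          <xsd:element name="itemNum"   type="xsd:string" nxsd:style="fixedLength" nxsd:length="8"/>
          <xsd:element name="orderDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
        </xsd:sequence>
      </xsd:complexType>

      <!-- The logical order: repeating item records followed by one summary record.
           The nxsd:lookAhead/nxsd:lookFor pair is the workaround the wizard keeps dropping. -->
      <xsd:complexType name="logicalOrderRecord">
        <xsd:sequence>
          <xsd:element name="item" type="tns:itemRecord" minOccurs="1" maxOccurs="unbounded"
                       nxsd:lookAhead="32" nxsd:lookFor="."/>
          <xsd:element name="summary" type="tns:summaryRecord"/>
        </xsd:sequence>
      </xsd:complexType>

      <!-- Top level element representing one order. -->
      <xsd:element name="order" type="tns:logicalOrderRecord"/>

    </xsd:schema>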

    Read the article

  • Is your team a high-performing team?

    As a child I can remember looking out of the car window as my father drove along the Interstate in Florida while seeing prisoners wearing bright orange jump suits and prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain the state highway infrastructure. The prisoner’s primary responsibilities are to pick up trash and debris from the roadway. This is a prime example of a work group or working group used by most prison systems in the United States. Work groups or working groups can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are only established for a short period of time and are dissolved once the desired outcome has been achieved. More often than not group members usually feel as though they are expendable to the group and some even dread that they are even in the group. "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993) So how do you determine that a team is a high-performing team?  This can be determined by three base line criteria that include: consistently high quality output, the promotion of personal growth and well being of all team members, and most importantly the ability to learn and grow as a unit. Initially, a team can successfully create high-performing output without meeting all three criteria, however this will erode over time because team members will feel detached from the group or that they are not growing then the quality of the output will decline. High performing teams are similar to work groups because they both utilize a collection of individuals or entities to accomplish tasks. What distinguish a high-performing team from a work group are its characteristics. High-performing teams contain five core characteristics. These characteristics are what separate a group from a team. The five characteristics of a high-performing team include: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice. A high-performing team is much more than a work group, and typically has a life cycle that can vary from team to team. The standard team lifecycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team lifecycle include: Formulating, Storming, Normalizing, Performing, and Adjourning. The Formulating State of a team is first realized when the team members are first defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader because team members are normally unclear about specific roles, specific obstacles and goals that my lay ahead of them.  Finally, this stage is where most team members initially meet one another prior to working as a team unless the team members already know each other. The Storming State normally arrives directly after the formulation of a new team because there are still a lot of unknowns amongst the newly formed assembly. 
As a general rule most of the parties involved in the team are still getting used to the workload, pace of work, deadlines and the validity of various tasks that need to be performed by the group.  In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of a project, and the leadership abilities of the team leader.  This can be exemplified by looking at the interactions between animals when they first meet.  If we look at a scenario where two people are walking directly toward each other with their dogs. The dogs will automatically enter the Storming State because they do not know the other dog. Typically in this situation, they attempt to define which is more dominating via play or fighting depending on how the dogs interact with each other. Once dominance has been defined and accepted by both dogs then they will either want to play or leave depending on how the dogs interacted and other environmental variables. Once the Storming State has been realized then the Normalizing State takes over. This state is entered by a team once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects.  Typically, participants in the team are filled with energy, and comradery, and a strong alliance with team goals and objectives.  A high school football team is a perfect example of the Normalizing State when they start their season.  The player positions have been assigned, the depth chart has been filled and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities and are united by competition from other teams. The Performing State is achieved by a team when its history, working habits, and culture solidify the team as one working unit. In this state team members can anticipate specific behaviors, attitudes, reactions, and challenges are seen as opportunities and not problems. Additionally, each team member knows their role in the team’s success, and the roles of others. This is the most productive state of a group and is where all the time invested working together really pays off. If you look at an Olympic figure skating team skate you can easily see how the time spent working together benefits their performance. They skate as one unit even though it is comprised of two skaters. Each skater has their routine completely memorized as well as their partners. This allows them to anticipate each other’s moves on the ice makes their skating look effortless. The final state of a team is the Adjourning State. This state is where accomplishments by the team and each individual team member are recognized. Additionally, this state also allows for reflection of the interactions between team members, work accomplished and challenges that were faced. Finally, the team celebrates the challenges they have faced and overcome as a unit. Currently in the workplace teams are divided into two different types: Co-located and Distributed Teams. Co-located teams defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of a team has dominated business in the past due to inadequate technology, which forced workers to primarily interact with one another via face to face meetings.  Team meetings are primarily lead by the person with the highest status in the company. 
Having personally, participated in meetings of this type, usually a select few of the team members dominate the flow of communication which reduces the input of others in group discussions. Since discussions are dominated by a select few individuals the discussions and group discussion are skewed in favor of the individuals who communicate the most in meetings. In addition, Team members might not give their full opinions on a topic of discussion in part not to offend or create controversy amongst the team and can alter decision made in meetings towards those of the opinions of the dominating team members. Distributed teams are by definition spread across an area or subdivided into separate sections. That is exactly what distributed teams when compared to a more traditional team. It is common place for distributed teams to have team members across town, in the next state, across the country and even with the advances in technology over the last 20 year across the world. These teams allow for more diversity compared to the other type of teams because they allow for more flexibility regarding location. A team could consist of a 30 year old male Italian project manager from New York, a 50 year old female Hispanic from California and a collection of programmers from India because technology allows them to communicate as if they were standing next to one another.  In addition, distributed team members consult with more team members prior to making decisions compared to traditional teams, and take longer to come to decisions due to the changes in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team and to notify others of potential issues regarding the work that the team is doing. Virtual teams which are a subset of the distributed team type is changing organizational strategies due to the fact that a team can now in essence be working 24 hrs a day because of utilizing employees in various time zones and locations.  A primary example of this is with customer services departments, a company can have multiple call centers spread across multiple time zones allowing them to appear to be open 24 hours a day while all a employees work from 9AM to 5 PM every day. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee works because they will be a part of a virtual team all that is need is the proper technology to be setup to allow everyone to communicate. In addition to allowing employees to work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, those team members that randomly come into the office can actually share one desk amongst multiple people. This is definitely a cost cutting plus given the current state of the economy. One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to focus on investing in ongoing personal development, provide team members with direction, structure, and resources needed to accomplish their work, make the right interventions at the right time, and help the team manage boundaries between the team and various external parties involved in the teams work. A team leader needs to invest in ongoing personal development in order to effectively manage their team. People have said that attitude is everything; this is very true about leaders and leadership. 
A team takes on the attitudes and behaviors of its leaders. This can potentially harm the team and the team’s output. Leaders must concentrate on self-awareness, and understanding their team’s group dynamics to fully understand how to lead them. In addition, always learning new leadership techniques from other effective leaders is also very beneficial. Providing team members with direction, structure, and resources that they need to accomplish their work collectively sounds easy, but it is not.  Leaders need to be able to effectively communicate with their team on how their work helps the company reach for its organizational vision. Conversely, the leader needs to allow his team to work autonomously within specific guidelines to turn the company’s vision into a reality.  This being said the team must be appropriately staffed according to the size of the team’s tasks and their complexity. These tasks should be clear, and be meaningful to the company’s objectives and allow for feedback to be exchanged with the leader and the team member and the leader and upper management. Now if the team is properly staffed, and has a clear and full understanding of what is to be done; the company also must supply the workers with the proper tools to achieve the tasks that they are asked to do. No one should be asked to dig a hole without being given a shovel.  Finally, leaders must reward their team members for accomplishments that they achieve. Awards could range from just a simple congratulatory email, a party to close the completion of a large project, or other monetary rewards. Managing boundaries is very important for team leaders because it can alter attitudes of team members and can add undue stress to the team which will force them to loose focus on the tasks at hand for the group. Team leaders should promote communication between team members so that burdens are shared amongst the team and solutions can be derived from hearing the opinions of multiple sources. This also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions as to not create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves. This could really increase further and undue interventions by the team leader. Typically, the best time for interventions is when the team is just starting to form so that all unproductive behaviors are removed from the team and that it can retain focus on its agenda. If an intervention is effectively executed the team will feel energized about the work that they are doing, promote communication and interaction amongst the group and improve moral overall. High-performing teams are very import to organizations because they consistently produce high quality output and develop a collective purpose for their work. This drive to succeed allows team members to utilize specific talents allowing for growth in these areas.  In addition, these team members usually take on a sense of ownership with their projects and feel that the other team members are irreplaceable. References: http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-performance Organization. Boston: Harvard Business School.

    Read the article

  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) One of the things one might take for granted but which has a huge impact on the time spent in an entity modeling environment is the way the system creates names for elements out of the information provided, in short: automatic element name construction. Element names are created in both directions of modeling: database first and model first and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine grained system for creating element names out of the meta-data available, which I'll describe more in detail below. First the model element related element naming features are highlighted, in the section Automatic model element naming features and after that I'll go more into detail about the relational model element naming features LLBLGen Pro has to offer in the section Automatic relational model element naming features. Automatic model element naming features When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also takes into effect with model first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings. Open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction. Strippers! The above example 'TBL_ORDER_LINES' shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro offers you to define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names. Underscores Be Gone Another thing you might get rid of are underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow for underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when ordered to to so. PasCal everywhere... or not, your call LLBLGen Pro can automatically PasCal case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break and a case difference marks a word break. It will remove spaces in all cases, and based on the underscore removal setting, keep or remove the underscores, and upper-case the first character of a word break fragment, and lower case the rest. 
Say, we keep the defaults, which is remove underscores and PasCal case always and strip the TBL_ fragment, we get with our example TBL_ORDER_LINES, after stripping TBL_ from the table name two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased, the rest lower-cased, so this results in OrderLines. Almost there! Pluralization and Singularization In general entity names are singular, like Customer or OrderLine so LLBLGen Pro offers a way to singularize the names. This will convert OrderLines, the result we got after the PasCal casing functionality, into OrderLine, exactly what we're after. Show me the patterns! There are other situations in which you want more flexibility. Say, you have an entity Customer and an entity Order and there's a foreign key constraint defined from the target of Order and the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto the relationship in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define how the auto-created navigator names will look like. For example, if you rather have Customer.OrderCollection, you can do so, by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed. When working model first, it's a given you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities: Customer and Order, and they have their fields setup properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate the foreign key fields in your classes, on NHibernate and EF these can be hidden in the generated code if you want to). A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as PK field, you might want to have a different name than Id as foreign key field. In our Customer - Order example, you might want to have CustomerId instead as foreign key name in Order. The pattern for foreign key fields gives you that freedom. Abbreviations... make sense of OrdNr and friends I already described word breaks in the PasCal casing paragraph, how they're used for the PasCal casing in the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office has a hate-hate relationship with his keyboard: he can't stand it: typing is something he avoids like the plague. This has resulted in tables and fields which have names which are very short, but also very unreadable. Example: our TBL_ORDER_LINES example has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you also like is to not have to rename that field manually. 
There are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is to define some abbreviation - full word pairs and during reverse engineering model elements from tables/views, LLBLGen Pro will take care of the rest. For the ORD_NR field, you need two values: ORD as abbreviation and Order as full word, and NR as abbreviation and Number as full word. LLBLGen Pro will now convert every word fragment found with the word breaks which matches an abbreviation to the given full word. They're case sensitive and can be found in the Project Settings: Navigate to Conventions -> Element Name Construction -> Abbreviations. Automatic relational model element naming features Not everyone works database first: it may very well be the case you start from scratch, or have to add additional tables to an existing database. For these situations, it's key you have the flexibility that you can control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development. Underscores, welcome back! Not every database is case insensitive, and not every organization requires PasCal cased table/field names, some demand all lower or all uppercase names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with: case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower case characters there and you need all caps. No worries, see below Casing directives so everyone can sleep well at night For case sensitive databases and case insensitive databases there is one setting for each of them which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example, we need to get all upper case characters, so we select AllUpperCase for the setting for case sensitive databases. This will produce the name ORDER_LINE. Sequence naming after a pattern Some databases support sequences, and using model-first development it's key to have sequences, when needed, to be created automatically and if possible using a name which shows where they're used. Say you have an entity Order and you want to have the PK values be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle) and as you want all numeric PK fields to be sequenced, you have enabled this by the setting Auto assign sequences to integer pks. 
When you're using LLBLGen Pro's auto-map feature, to create new tables and constraints from the model, it will create a new table, ORDER, based on your settings I previously discussed above, with a PK field ID and it also creates a sequence, SEQ_ORDER, which is auto-assigns to the ID field mapping. The name of the sequence is created by using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like with the other patterns previously discussed. Grouping and schemas When you start from scratch, and you're working model first, the tables created by LLBLGen Pro will be in a catalog and / or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique for these groups, e.g. Employee in HRM, Customer in CRM. When auto-mapping this model to create tables, you might want to have the table created for Employee in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you check the setting Set schema name after group name to true (default). This gives you total control over where what is placed in the database from your model. But I want plural table names... and TBL_ prefixes! For now we follow best practices which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making it possible to have that in a future version. Conclusion LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and that it has given you new insights in the smaller features available to you in LLBLGen Pro, ones you might not have thought off in the first place. Enjoy!
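As a small appendix to the post above (this is not LLBLGen Pro code or its API, just an invented illustration): the sketch below walks through the name-construction steps described earlier, i.e. strip configured prefixes, break on underscores, expand abbreviation pairs, PasCal-case the fragments and singularize. Class, method and pattern names are made up for the example, and real singularization and case-difference word breaks are more involved than shown here.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class ElementNameDemo
    {
        // Abbreviation pairs as you might enter them under
        // Conventions -> Element Name Construction -> Abbreviations (case sensitive).
        static readonly Dictionary<string, string> Abbreviations =
            new Dictionary<string, string> { { "ORD", "Order" }, { "NR", "Number" } };

        // Prefix strip patterns, e.g. the "TBL_" fragment from the example above.
        static readonly string[] PrefixStripPatterns = { "TBL_" };

        static string BuildName(string dbName, bool singularize)
        {
            // 1. Strip configured prefixes (suffix patterns would be handled the same way).
            foreach (var prefix in PrefixStripPatterns)
            {
                if (dbName.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
                {
                    dbName = dbName.Substring(prefix.Length);
                }
            }

            // 2. Word breaks: underscores and spaces mark fragments
            //    (case-difference breaks are omitted here for brevity).
            var fragments = dbName.Split(new[] { '_', ' ' }, StringSplitOptions.RemoveEmptyEntries);

            // 3. Expand abbreviations to full words, then PasCal-case each fragment.
            var words = fragments
                .Select(f => Abbreviations.TryGetValue(f, out var full) ? full : f)
                .Select(w => char.ToUpperInvariant(w[0]) + w.Substring(1).ToLowerInvariant())
                .ToList();

            // 4. Naive singularization of the last word (the real designer is smarter than this).
            if (singularize && words[words.Count - 1].EndsWith("s"))
            {
                words[words.Count - 1] = words[words.Count - 1].TrimEnd('s');
            }

            return string.Concat(words);
        }

        static void Main()
        {
            Console.WriteLine(BuildName("TBL_ORDER_LINES", singularize: true));  // OrderLine
            Console.WriteLine(BuildName("ORD_NR", singularize: false));          // OrderNumber
        }
    }

In the real product every one of these steps is driven by the Project Settings described above; the only point of the sketch is to make the order of operations concrete.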

    Read the article

  • C#/.NET Little Wonders: The Joy of Anonymous Types

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the .NET 3 Framework, Microsoft introduced the concept of anonymous types, which provide a way to create a quick, compiler-generated types at the point of instantiation.  These may seem trivial, but are very handy for concisely creating lightweight, strongly-typed objects containing only read-only properties that can be used within a given scope. Creating an Anonymous Type In short, an anonymous type is a reference type that derives directly from object and is defined by its set of properties base on their names, number, types, and order given at initialization.  In addition to just holding these properties, it is also given appropriate overridden implementations for Equals() and GetHashCode() that take into account all of the properties to correctly perform property comparisons and hashing.  Also overridden is an implementation of ToString() which makes it easy to display the contents of an anonymous type instance in a fairly concise manner. To construct an anonymous type instance, you use basically the same initialization syntax as with a regular type.  So, for example, if we wanted to create an anonymous type to represent a particular point, we could do this: 1: var point = new { X = 13, Y = 7 }; Note the similarity between anonymous type initialization and regular initialization.  The main difference is that the compiler generates the type name and the properties (as readonly) based on the names and order provided, and inferring their types from the expressions they are assigned to. It is key to remember that all of those factors (number, names, types, order of properties) determine the anonymous type.  This is important, because while these two instances share the same anonymous type: 1: // same names, types, and order 2: var point1 = new { X = 13, Y = 7 }; 3: var point2 = new { X = 5, Y = 0 }; These similar ones do not: 1: var point3 = new { Y = 3, X = 5 }; // different order 2: var point4 = new { X = 3, Y = 5.0 }; // different type for Y 3: var point5 = new {MyX = 3, MyY = 5 }; // different names 4: var point6 = new { X = 1, Y = 2, Z = 3 }; // different count Limitations on Property Initialization Expressions The expression for a property in an anonymous type initialization cannot be null (though it can evaluate to null) or an anonymous function.  For example, the following are illegal: 1: // Null can't be used directly. Null reference of what type? 2: var cantUseNull = new { Value = null }; 3:  4: // Anonymous methods cannot be used. 5: var cantUseAnonymousFxn = new { Value = () => Console.WriteLine(“Can’t.”) }; Note that the restriction on null is just that you can’t use it directly as the expression, because otherwise how would it be able to determine the type?  You can, however, use it indirectly assigning a null expression such as a typed variable with the value null, or by casting null to a specific type: 1: string str = null; 2: var fineIndirectly = new { Value = str }; 3: var fineCast = new { Value = (string)null }; All of the examples above name the properties explicitly, but you can also implicitly name properties if they are being set from a property, field, or variable.  
In these cases, when a field, property, or variable is used alone, and you don’t specify a property name assigned to it, the new property will have the same name.  For example: 1: int variable = 42; 2:  3: // creates two properties named varriable and Now 4: var implicitProperties = new { variable, DateTime.Now }; Is the same type as: 1: var explicitProperties = new { variable = variable, Now = DateTime.Now }; But this only works if you are using an existing field, variable, or property directly as the expression.  If you use a more complex expression then the name cannot be inferred: 1: // can't infer the name variable from variable * 2, must name explicitly 2: var wontWork = new { variable * 2, DateTime.Now }; In the example above, since we typed variable * 2, it is no longer just a variable and thus we would have to assign the property a name explicitly. ToString() on Anonymous Types One of the more trivial overrides that an anonymous type provides you is a ToString() method that prints the value of the anonymous type instance in much the same format as it was initialized (except actual values instead of expressions as appropriate of course). For example, if you had: 1: var point = new { X = 13, Y = 42 }; And then print it out: 1: Console.WriteLine(point.ToString()); You will get: 1: { X = 13, Y = 42 } While this isn’t necessarily the most stunning feature of anonymous types, it can be handy for debugging or logging values in a fairly easy to read format. Comparing Anonymous Type Instances Because anonymous types automatically create appropriate overrides of Equals() and GetHashCode() based on the underlying properties, we can reliably compare two instances or get hash codes.  For example, if we had the following 3 points: 1: var point1 = new { X = 1, Y = 2 }; 2: var point2 = new { X = 1, Y = 2 }; 3: var point3 = new { Y = 2, X = 1 }; If we compare point1 and point2 we’ll see that Equals() returns true because they overridden version of Equals() sees that the types are the same (same number, names, types, and order of properties) and that the values are the same.   In addition, because all equal objects should have the same hash code, we’ll see that the hash codes evaluate to the same as well: 1: // true, same type, same values 2: Console.WriteLine(point1.Equals(point2)); 3:  4: // true, equal anonymous type instances always have same hash code 5: Console.WriteLine(point1.GetHashCode() == point2.GetHashCode()); However, if we compare point2 and point3 we get false.  Even though the names, types, and values of the properties are the same, the order is not, thus they are two different types and cannot be compared (and thus return false).  And, since they are not equal objects (even though they have the same value) there is a good chance their hash codes are different as well (though not guaranteed): 1: // false, different types 2: Console.WriteLine(point2.Equals(point3)); 3:  4: // quite possibly false (was false on my machine) 5: Console.WriteLine(point2.GetHashCode() == point3.GetHashCode()); Using Anonymous Types Now that we’ve created instances of anonymous types, let’s actually use them.  The property names (whether implicit or explicit) are used to access the individual properties of the anonymous type.  
The main thing, once again, to keep in mind is that the properties are readonly, so you cannot assign the properties a new value (note: this does not mean that instances referred to by a property are immutable – for more information check out C#/.NET Fundamentals: Returning Data Immutably in a Mutable World). Thus, if we have the following anonymous type instance: 1: var point = new { X = 13, Y = 42 }; We can get the properties as you’d expect: 1: Console.WriteLine(“The point is: ({0},{1})”, point.X, point.Y); But we cannot alter the property values: 1: // compiler error, properties are readonly 2: point.X = 99; Further, since the anonymous type name is only known by the compiler, there is no easy way to pass anonymous type instances outside of a given scope.  The only real choices are to pass them as object or dynamic.  But really that is not the intention of using anonymous types.  If you find yourself needing to pass an anonymous type outside of a given scope, you should really consider making a POCO (Plain Old CLR Type – i.e. a class that contains just properties to hold data with little/no business logic) instead. Given that, why use them at all?  Couldn’t you always just create a POCO to represent every anonymous type you needed?  Sure you could, but then you might litter your solution with many small POCO classes that have very localized uses. It turns out this is the key to when to use anonymous types to your advantage: when you just need a lightweight type in a local context to store intermediate results, consider an anonymous type – but when that result is more long-lived and used outside of the current scope, consider a POCO instead. So what do we mean by intermediate results in a local context?  Well, a classic example would be filtering down results from a LINQ expression.  For example, let’s say we had a List<Transaction>, where Transaction is defined something like: 1: public class Transaction 2: { 3: public string UserId { get; set; } 4: public DateTime At { get; set; } 5: public decimal Amount { get; set; } 6: // … 7: } And let’s say we had this data in our List<Transaction>: 1: var transactions = new List<Transaction> 2: { 3: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m }, 4: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = -1100.00m }, 5: new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m }, 6: new Transaction { UserId = "John", At = DateTime.Now.AddDays(-2), Amount = 300.00m }, 7: new Transaction { UserId = "John", At = DateTime.Now, Amount = -10.00m }, 8: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m }, 9: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = -50.00m }, 10: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = -100.00m }, 11: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = 300.00m }, 12: }; So let’s say we wanted to get the transactions for each day for each user.  That is, for each day we’d want to see the transactions each user performed.  We could do this very simply with a nice LINQ expression, without the need of creating any POCOs: 1: // group the transactions based on an anonymous type with properties UserId and Date: 2: byUserAndDay = transactions 3: .GroupBy(tx => new { tx.UserId, tx.At.Date }) 4: .OrderBy(grp => grp.Key.Date) 5: .ThenBy(grp => grp.Key.UserId); Now, those of you who have attempted to use custom classes as a grouping type before (such as GroupBy(), Distinct(), etc.) 
may have discovered the hard way that LINQ gets a lot of its speed by utilizing not on Equals(), but also GetHashCode() on the type you are grouping by.  Thus, when you use custom types for these purposes, you generally end up having to write custom Equals() and GetHashCode() implementations or you won’t get the results you were expecting (the default implementations of Equals() and GetHashCode() are reference equality and reference identity based respectively). As we said before, it turns out that anonymous types already do these critical overrides for you.  This makes them even more convenient to use!  Instead of creating a small POCO to handle this grouping, and then having to implement a custom Equals() and GetHashCode() every time, we can just take advantage of the fact that anonymous types automatically override these methods with appropriate implementations that take into account the values of all of the properties. Now, we can look at our results: 1: foreach (var group in byUserAndDay) 2: { 3: // the group’s Key is an instance of our anonymous type 4: Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date); 5:  6: // each grouping contains a sequence of the items. 7: foreach (var tx in group) 8: { 9: Console.WriteLine("\t{0}", tx.Amount); 10: } 11: } And see: 1: Jaime on 06/18/2012 did: 2: -100.00 3: 300.00 4:  5: John on 06/19/2012 did: 6: 300.00 7:  8: Jim on 06/20/2012 did: 9: 900.00 10:  11: Jane on 06/21/2012 did: 12: 200.00 13: -50.00 14:  15: Jim on 06/21/2012 did: 16: 2200.00 17: -1100.00 18:  19: John on 06/21/2012 did: 20: -10.00 Again, sure we could have just built a POCO to do this, given it an appropriate Equals() and GetHashCode() method, but that would have bloated our code with so many extra lines and been more difficult to maintain if the properties change.  Summary Anonymous types are one of those Little Wonders of the .NET language that are perfect at exactly that time when you need a temporary type to hold a set of properties together for an intermediate result.  While they are not very useful beyond the scope in which they are defined, they are excellent in LINQ expressions as a way to create and us intermediary values for further expressions and analysis. Anonymous types are defined by the compiler based on the number, type, names, and order of properties created, and they automatically implement appropriate Equals() and GetHashCode() overrides (as well as ToString()) which makes them ideal for LINQ expressions where you need to create a set of properties to group, evaluate, etc. Technorati Tags: C#,CSharp,.NET,Little Wonders,Anonymous Types,LINQ
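A small addendum, not from the original post: a quick way to see the "number, names, types, and order" rule in action at runtime, reusing the point examples from above. The printed type name is compiler-generated and will vary between builds.

    using System;

    class AnonymousTypeIdentityDemo
    {
        static void Main()
        {
            var point1 = new { X = 13, Y = 7 };
            var point2 = new { X = 5, Y = 0 };   // same names, types, and order as point1
            var point3 = new { Y = 3, X = 5 };   // same names and types, different order

            // True: point1 and point2 are instances of the same compiler-generated type.
            Console.WriteLine(point1.GetType() == point2.GetType());

            // False: swapping the property order yields a different anonymous type.
            Console.WriteLine(point1.GetType() == point3.GetType());

            // Compiler-internal name, e.g. something like "<>f__AnonymousType0`2".
            Console.WriteLine(point1.GetType().Name);
        }
    }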

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
1) Compress the daily data from the GL_DAILY_RATES table into range records. The incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter.
2) Generate the Previous Rate, Previous Date and Next Date for each of the daily records from the OLTP.
3) Filter out the records whose Conversion Rate is the same as the Previous Rate, or whose Conversion Date lies within a single-day range.
4) Mark the records as 'Keep' or 'Filter' and also derive the final End Date for the single range record (unique combination of From Date, To Date, Rate and Conversion Date).
5) Filter the records marked as 'Filter' in the INFA map.
The above steps load W_EXCH_RATE_GS; step 0 updates/deletes W_EXCH_RATE_G directly. The SIL map then inserts/updates the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES into range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G, which stores data at a daily granular level. However, we do need to pivot the data, because data present in multiple rows in the source tables needs to be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this.

Fusion
Fusion has extraction logic very similar to EBS. The only difference is that the cleanup logic mentioned in step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring in all the exchange rates in the incremental flow as well and do the cleanup; the SIL then takes care of inserts/updates accordingly.

PeopleSoft
PeopleSoft does not have From Date and To Date explicitly in the source tables. Let's look at an example (note that this is achieved from PS1 onwards only):
1 Jan 2010 – USD to INR – 45
31 Jan 2010 – USD to INR – 46
PSFT stores records in the above fashion. This means that the exchange rate of 45 for USD to INR applies from 1 Jan 2010 to 30 Jan 2010, and we need to store the data in this fashion in the DW. Also, PSFT stores the exchange rate as RATE_MULT and RATE_DIV, and we need to compute RATE_MULT/RATE_DIV to get the correct exchange rate. We generate the From Date and To Date while extracting data from the source, and this has certain assumptions: if a record gets updated or inserted in the source, it will be extracted in the incremental flow. Also, if this updated/inserted record falls between other dates, then we also extract the preceding and succeeding records (based on dates), because we need to generate range records and we now have 3 records whose ranges have changed. Taking the same example as above, if a new record gets inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan, and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we still extract them because their ranges are affected. Similar logic is used for the Global Exchange Rate extraction: we create the range records and get them into a temporary table, then join to the Day Dimension, create individual records, and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type.

Siebel
Siebel facts depend heavily on Global Exchange Rates, and almost none of them use individual exchange rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used for Siebel from the PS1 release onwards. As of January 2002, the Euro triangulation method for converting between currencies belonging to EMU members is no longer needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Because of this, there are multiple extraction flows in SEBL, i.e. EUR to EMU, EUR to Non-EMU, EUR to DMC, and so on. We load W_EXCH_RATE_G through multiple flows with this data; this has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, does not have such needs. However, like PSFT, SEBL does not have From Date and To Date columns in the source tables, so we use extraction logic similar to that explained in the PSFT section for SEBL as well.

What if all 5 Global Currencies configured are the same?
As mentioned in previous sections, from PS1 onwards we store the Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with the 5 Global Exchange Rates in 5 columns; as mentioned, we use CASE and GROUP BY logic to achieve this. This approach poses a unique problem when all 5 Global Currencies chosen are the same. For example, if the user configures all 5 Global Currencies as 'USD', the extract logic will not be able to generate a record for From Currency = USD, because not all source systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case: they generate a record with Conversion Rate = 1.

Reusable Lookups
Before PS1, we had a Mapplet for currency conversions. In PS1, we only have reusable Lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in the various fact mappings. Anyone who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G must use these Lookups; a direct join or lookup on the tables might return wrong data.

Toggling Currency Preferences in the Dashboard
In the 7.9.6.x series, all amount metrics in OBIA showed the Global Currency 1 amount, and a customer who wanted them in another currency preference had to change the metric definitions. Project Analytics, however, has supported currency preferences since the 7.9.6 release, and a Tech note was published so that customers of other modules could add toggling between currency preferences to their solution.

List of Currency Preferences
Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the XML file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder under <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml. This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", …, which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
In Static mode, all users see the full list as defined in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD so that the list of currency preferences is based on the user's roles. BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

Adding Currency to an Amount Field
When the user selects one of the items from the currency prompt, all the amounts on that page will show in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data will be shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields will show in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as its functional currency and another has EUR), then subtotals will not be available (USD and EUR amounts cannot be added in one field), and depending on the setup (see the next paragraph), the user may receive an error. There are two ways to add the currency field to an amount metric:
1) In the form of a currency code, like USD, EUR, … For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".
2) In the form of a currency symbol ($ for USD, € for EUR, …). For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option of the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of the metric. In the Data Format tab, select Custom as "Treat Number As". Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] where column is the "BI Common Currency Code" column, defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.
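For readers who want to picture the daily-to-range compression described in the EBS section above, here is a minimal, illustrative C# sketch of the idea: consecutive days that share the same rate for a given From Currency, To Currency and Rate Type are collapsed into one range record, and a new range starts whenever the rate changes. This is only a sketch of the concept; the DailyRate and RateRange types are invented for the example, and the real logic lives in the OBIA/Informatica mappings.

using System;
using System.Collections.Generic;
using System.Linq;

public class DailyRate
{
    public string FromCur, ToCur, RateType;
    public DateTime Day;
    public decimal Rate;
}

public class RateRange
{
    public string FromCur, ToCur, RateType;
    public DateTime FromDate, ToDate;
    public decimal Rate;
}

public static class RateCompressor
{
    // Collapse daily rates into range records: a new range starts whenever the
    // rate changes for a given From Currency / To Currency / Rate Type.
    public static List<RateRange> Compress(IEnumerable<DailyRate> daily)
    {
        var ranges = new List<RateRange>();
        foreach (var grp in daily.GroupBy(d => new { d.FromCur, d.ToCur, d.RateType }))
        {
            RateRange current = null;
            foreach (var d in grp.OrderBy(d => d.Day))
            {
                if (current != null && current.Rate == d.Rate)
                {
                    current.ToDate = d.Day;            // same rate: extend the open range
                }
                else
                {
                    current = new RateRange
                    {
                        FromCur = d.FromCur, ToCur = d.ToCur, RateType = d.RateType,
                        FromDate = d.Day, ToDate = d.Day, Rate = d.Rate
                    };
                    ranges.Add(current);               // rate changed: start a new range
                }
            }
        }
        return ranges;
    }
}

Given daily records at rate 45 from 1 Jan to 30 Jan and rate 46 on 31 Jan, the sketch yields two range records (1–30 Jan at 45, and 31 Jan at 46), which is the shape the EBS flow produces for W_EXCH_RATE_G.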

    Read the article

  • CodePlex Daily Summary for Tuesday, November 01, 2011

    CodePlex Daily Summary for Tuesday, November 01, 2011Popular ReleasesAgFx Windows Phone Application Data Caching Framework: AgFx November Update: This update is primarily a bug fix release, which provides a number of stability and performance enhancements to AgFx. Available as a NuGet package here. Release Notes Fix IsoStore exceptions during shutdown (Changelist 81729) Thread safety and sequencings issues (Changelists 78682,79252, 80707) Fix landscape layout issue in auth sample Fix async keyword collision Supporting overlapping/multiple notifications for Load requestsiTuner - The iTunes Companion: iTuner 1.4.4322: Added German (unverified, apologies if incorrect) Properly source invariant resources with correct resIDs Replaced obsolete lyric providers with working providers Fix Pseudolater to correctly morph every third char Fix null reference in CatalogBaseDevpad: 4.6: Whats new for Devpad 4.6: New Recent Files New Run in Safari Minor Bug Fix's, improvements and speed upsWindows Workflow Foundation on Codeplex: Microsoft.Activities v1.8.8: Microsoft.Activities Overview How do I install Microsoft.Activities? Updates in this release9318Technical Analysis Engine for .NET: Technical Analysis Engine 1.25: What's new in the 1.25 release (2011-11-01): - Added Williams %R indicator - Added Moving Average Envelopes indicatorBF3Rcon.NET: BF3Rcon 3.0: This release is targeted for RCON documentation based on R3. Everything should be beta stable, but it's alpha because I haven't been able to fully test it. When a stable release is ready, a proper changelog will be kept. Important Edit: There's one method that will keep this from working in Mono. GeneratePasswordHash uses void HashAlgorithm.Dispose(), which isn't in Mono. This will have to be changed to Clear() in the next release. If anyone needs a Mono version of this immediately, you can...BoxWorld: BoxWorld_2011.10.30: BoxWorld - 8.0.1110.30 This is the initial release of BoxWorld. I'd recommend downloading the installer as it contains the compiled code and everything all nicely contained. By default, you end up with this directory structure: C:\Program Files\ViperWorks\BoxWorld C:\Program Files\ViperWorks\BoxWorld\Data C:\Program Files\ViperWorks\BoxWorld\Interface C:\Program Files\ViperWorks\BoxWorld\Source In the root you have the compiled EXE files, one for the main release, one for the LITE release ...VidCoder: 1.2.1: Fixed a couple regressions: video encoder was blank in queue and crashes with the High Profile preset when opening the Settings window. Fixed problem with auto-update introduced in 1.2.0. If you have 1.2.0 you will need to update manually to get this.AssaultCube Reloaded: Release 2.3: THE RELEASE YOU'VE ALL BEEN WAITING FOR! IT CAN NOW BE CONSIDERED STABLE Linux has Debian 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, download the Linux package. The server pack is ready for both Windows and Linux, but you might need to compile your own for linux (source included) If you are using Windows and require the source code, download the source package!Koober: Koober - The Ebook Creator 0.2: The official release of Koober as Open source. Koober is a ebook creator for Windows, and Koob Reader is the reader.A Microblog API (SINA weibo.com open API in C#, .Net???????API): AMicroblogAPI v1.0: AMicroBlogAPI is a C# implementation, a strong-typed deep encapsulation, a easy-to-use .net wrapper of SINA microblog API v1.0. 
App developers no longer need to parse various HTTP responses -- all responses are parsed into strong types; No longer need to construt the request query strings -- just simply gives the values of parameters; No longer need to implement the OAuth -- a single call could logs the user on. In this release (AMicroblogAPI v1.0), all basic data APIs are implemented. For ...patterns & practices: Enterprise Library Contrib: Enterprise Library Contrib - 5.0 (Oct 2011): This release of Enterprise Library Contrib is based on the Microsoft patterns & practices Enterprise Library 5.0 core and contains the following: Common extensionsTypeConfigurationElement<T> - A Polymorphic Configuration Element without having to be part of a PolymorphicConfigurationElementCollection. AnonymousConfigurationElement - A Configuration element that can be uniquely identified without having to define its name explicitly. Data Access Application Block extensionsMySql Provider - ...Network Monitor Open Source Parsers: Network Monitor Parsers 3.4.2748: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, NetowrkMonitor_Parsers.msi continues to improve quality and fix bugs. It has included the fo...Duckworth Lewis Professional Edition Calculator: DLcalc 3.0: DLcalc 3.0 can perform Duckworth/Lewis Professional Edition calculations 100% accurately. It also produces over-by-over and ball-by-ball PAR score tables.Media Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Movies Fixed: Fanart and poster scraping issues TV Shows (Re)Added: Rebuild single show Fixed: Issue when shows are moved from original location Ability to handle " for actor nicknames Crash when episode name contains "<" (does not scrape yet) Clears fanart when switch...patterns & practices - Unity: Unity 3.0 for .NET4.5 Preview: The Unity 3.0.1026.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. The major changes include: Unity projects updated to target .NET 4.5. Dynamic build plans modified to use compiled lambda expressions instead of Reflection.Emit Converting reflection to use the new TypeInfo for reflection. Projects updated to work with the Microsoft Visual Studio 2011 Preview Notes/Known Issues: The Microsoft.Practices.Unity.UnityServiceLocator class cannot be use...Managed Extensibility Framework: MEF 2 Preview 4: Detailed information on this release is available on the BCL team blog.AcDown????? - Anime&Comic Downloader: AcDown????? v3.6: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6?? 
??“????”...MVC Controls Toolkit: Mvc Controls Toolkit 1.5.0: Added: The new Client Blocks feaure of Views A new "move" js method for the TreeViews The NewHtmlCreated js event to the DataGrid Improved the ChoiceList structure that now allows also the selection list of a dropdown to be chosen with a lambda expression Improved the AcceptViewHintAttribute controller filter. Now a client can specify not only the name of a View or Partial View it prefers, but also to receive just the rough data in Json format. Now the the SMinimum and SMaximum par...Free SharePoint Master Pages: Buried Alive (Halloween) Theme: Release Notes *Created for Halloween, you will find theme file, custom css file and images. *Created by Al Roome @AlstarRoome Features: Custom styling for web part Custom background *Screenshot https://s3.amazonaws.com/kkhipple/post/sharepoint-showcase-halloween.pngNew ProjectsarlTH: "Airport Link Thailand" Application for Windows Phone 7.1Audio Tags Editor For Excel: The purpose of the project is to create a tool to edit tags of audio files through the excel application. The program is an Excel add-in.AW Load Tester: AW Load Tester is a simple and easy to use website load tester that very closely mimics a regular users browsing session on your website. Many popular load tester only download HTML. AW Load Tester downloads all the related items on a page simultaneously. AzMan - Be Good Do Right !: The AzMan is a collection of reusable software components designed to assist software developers with common enterprise development challenges. http://bbs.fengshupo.com/BestWayWP7UI: The User interface of best wat wp7 applicationCheckout Subsystem Optimizer: Checkout Subsystem OptimizerDBScripter - Library for scripting SQL Server database objects: This project is library that allows users to script SQL Server database objects. Library uses dynamic management views for extracting data about databases objects. This library could be used in various situations. The most interesting areas are comparing database objects and generation database documentation. For both cases examples have been prepared.Dependency Checker: Dependency Checker is a utility for verifying that a set of prerequisites is present on a machine. The set of dependencies is defined in a configuration file. For more information, see http://dependencychecker.codeplex.com/documentation DibTer: DibTer is a brand new CMS for ASP.NET Mainly features are NHibernate MVC JQuery HTML5Dynamics CRM Entity Based Stress Tool: Entity Based Stress Tool, is command line program that creates configurable server agents in order to stress a defined entity at a defined Message/Stage. This tool allows CRM Application Stressing, Plugin Stressing, and enlaborate an execution report.Edinger: Edinger is a simple text editor.FreeboxHDVideoPlayer: FreeboxHDVideoPlayer makes it easier for free.fr french ISP users to play and manage videos stored on the HDD of the Freebox HD. FreeboxHDVideoPlayer simplifie, pour les utilisateurs de free.fr, la lecture et la gestion des vidéos stockées sur le disque du boitier Freebox HD.guglex - google docs xtraz: my studying project for asp.net mvc 3. now displaying spreadsheet rows in card viewIP-Camera MJPEG HTTP Web-Server Emulator: Turns web-camera into IP-camera with mjpeg streaming and access it over network. Usage of i-Lids (Imagery Library for Intelligent Detection Systems) and other data for surveillance systems development. 
Sends WMV, folder with images, web-camera data over the wire.LIM: LIM is .net software for archive manage.Luminji.CorWeb: luminji's company web.Map Generator: Map Generator for your Civ/Col/Roguelike games in VB.NET.Microformat Parsers for .NET: Microformat's Parsers for .NET helps you to collect information you run into on the web, that is stored by means of microformats. It's written in C# 3.0.Mini Rx: Mini Rx provides LINQ like queries over events for C# and F# using extension methods, somewhat like the Reactive Extensions (Rx) but in a lightweight open source package with one non-strong named assembly.MVC Music Store by NNT: Demo Music Store Follow tut from Microsoft. Team Explorer VS 2010 .NET 4PHP MySQL Web Site Creator: PHP MySQL Web Site Creator helps you create a basic template for your web site using a UI form. I creates all PHP, CSS, HTML basic pages to connect to MySQL database.PQLRD: Paco KLk!Project Web And Application: Project MobileStore in ASP use ASP.NET MVC 3RoboZip: RoboZip combines MS Robocopy and Zip-functionality. Create a job and RoboZip will zip all files (from a list of filetypes) within a time span in the past. The zip file name can contain date and time values of the time period it covers. It's developed in C#. The zip component is the DotNetZip Library from http://dotnetzip.codeplex.com. The logging is done using Apache log4net from http://logging.apache.org/log4net. The idea here is not to only present the code, but also to document how the de...Scriptster Gearbox: Scriptster Gearbox is a full C# scripting system based on Scriptster (also on CodePlex). It is aimed at Visual Studio developers and power users who need to automate the sorts of things that DOS is generally good at, but which using C# can make more powerful and more familiar.Technical Analysis Engine for .NET: Technical Analysis Engine is a .NET based library for technical analysis. It is fast, free, easy to use and provides functionality for financial market analysis.ValidationDSL: A Fluent validation engine with ease to use and reusable around the multi-tenant applicationsWebFormsUtilities: WebFormsUtilities is a collection of tools that assist in using MVC/MVP functionality to webforms in an unobtrusive manner. This differs from a hybrid MVC/WebForms page in that you can use methods like UpdateModel() or TryValidateModel() in a WebForms Page class.Wen - WPF 3D Navigator: Control the camera and the lights of a WPF 3D application without any code modification.WindStudios: WindStudiosWrite My Expect Statement: Have Rhino Mocks automatically generate your expect() and return() statements for you.zion xtraz: some basic helpful classes for use in every project

    Read the article

  • Simple GET operation with JSON data in ADF Mobile

    - by PadmajaBhat
    Usecase: This sample uses a RESTful service which contains a GET method that fetches employee details for an employee with given employee ID along with other methods. The data is fetched in JSON format. This RESTful service is then invoked via ADF Mobile and the JSON data thus obtained is parsed and rendered in mobile in a table. Prerequisite: Download JDev build JDEVADF_11.1.2.4.0_GENERIC_130421.1600.6436.1 or higher with mobile support.  Steps: Run EmployeeService.java in JSONService.zip. This is a simple service with a method, getEmpById(id) that takes employee ID as parameter and produces employee details in JSON format. Copy the target URL generated on running this service. The target URL will be as shown below: http://127.0.0.1:7101/JSONService-Project1-context-root/jersey/project1 Now, let us invoke this service in our mobile application. For this, create an ADF Mobile application.  Name the application JSON_SearchByEmpID and finish the wizard. Now, let us create a connection to our service. To do this, we create a URL Connection. Invoke new gallery wizard on ApplicationController project.  Select URL Connection option. In the Create URL Connection window, enter connection name as ‘conn’. For URL endpoint, supply the URL you copied earlier on running the service. Remember to use your system IP instead of localhost. Test the connection and click OK. At this point, a connection to the REST service has been created. Since JSON data is not supported directly in WSDC wizard, we need to invoke the operation through Java code using RestServiceAdapter. For this, in the ApplicationController project, create a Java class called ‘EmployeeDC’. We will be creating DC from this class. Add the following code to the newly created class to invoke the getEmpById method. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 public Employee fetchEmpDetails(){ RestServiceAdapter restServiceAdapter = Model.createRestServiceAdapter(); restServiceAdapter.clearRequestProperties(); restServiceAdapter.setConnectionName("conn"); //URL connection created with this name restServiceAdapter.setRequestType(RestServiceAdapter.REQUEST_TYPE_GET); restServiceAdapter.addRequestProperty("Content-Type", "application/json"); restServiceAdapter.addRequestProperty("Accept", "application/json; charset=UTF-8"); restServiceAdapter.setRetryLimit(0); restServiceAdapter.setRequestURI("/getById/"+inputEmpID); String response = ""; JSONBeanSerializationHelper jsonHelper = new JSONBeanSerializationHelper(); try { response = restServiceAdapter.send(""); //Invoke the GET operation System.out.println("Response received!"); Employee responseObject = (Employee) jsonHelper.fromJSON(Employee.class, response); return responseObject; } catch (Exception e) { } return null; } Here, in lines 2 to 9, we create the RestServiceAdapter and set various properties required to invoke the web service. At line 4, we are pointing to the connection ‘conn’ created previously. Since we want to invoke getEmpById method of the service, which is defined by the URL http://IP:7101/REST_Sanity_JSON-Project1-context-root/resources/project1/getById/{id} we are updating the request URI to point to this URI at line 9. inputEmpID is a variable that will hold the value input by the user for employee ID. This we will be creating in a while. As the method we are invoking is a GET operation and consumes json data, these properties are being set in lines 5 through 7. Finally, we are sending the request in line 13. 
In line 15, we use jsonHelper.fromJSON to convert received JSON data to a Java object. The required Java objects' structure is defined in class Employee.java whose structure is provided later. Since the response from our service is a simple response consisting of attributes like employee Id, name, design etc, we will just return this parsed response (line 16) and use it to create DC. As mentioned previously, we would like the user to input the employee ID for which he/she wants to perform search. So, in the same class, define a variable inputEmpID which will hold the value input by the user. Generate accessors for this variable. Lastly, we need to create Employee class. Employee class will define how we want to structure the JSON object received from the service. To design the Employee class, run the services’ method in the browser or via analyzer using path parameter as 1. This will give you the output JSON structure. Ours is a simple service that returns a JSONObject with a set of data. Hence, Employee class will just contain this set of data defined with the proper data types. Create Employee.java in the same project as EmployeeDC.java and write the below code: package application; import oracle.adfmf.java.beans.PropertyChangeListener; import oracle.adfmf.java.beans.PropertyChangeSupport; public class Employee { private String dept; private String desig; private int id; private String name; private int salary; private PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this); public void setDept(String dept) {         String oldDept = this.dept; this.dept = dept; propertyChangeSupport.firePropertyChange("dept", oldDept, dept); } public String getDept() { return dept; } public void setDesig(String desig) { String oldDesig = this.desig; this.desig = desig; propertyChangeSupport.firePropertyChange("desig", oldDesig, desig); } public String getDesig() { return desig; } public void setId(int id) { int oldId = this.id; this.id = id; propertyChangeSupport.firePropertyChange("id", oldId, id); } public int getId() { return id; } public void setName(String name) { String oldName = this.name; this.name = name; propertyChangeSupport.firePropertyChange("name", oldName, name); } public String getName() { return name; } public void setSalary(int salary) { int oldSalary = this.salary; this.salary = salary; propertyChangeSupport.firePropertyChange("salary", oldSalary, salary); } public int getSalary() { return salary; } public void addPropertyChangeListener(PropertyChangeListener l) { propertyChangeSupport.addPropertyChangeListener(l); } public void removePropertyChangeListener(PropertyChangeListener l) { propertyChangeSupport.removePropertyChangeListener(l);     } } Now, let us create a DC out of EmployeeDC.java.  DC as shown below is created. Now, you can design the mobile page as usual and invoke the operation of the service. To design the page, go to ViewController project and locate adfmf-feature.xml. Create a new feature called ‘SearchFeature’ by clicking the plus icon. Go the content tab and add an amx page. Call it SearchPage.amx. Call it SearchPage.amx. Remove primary and secondary buttons as we don’t need them and rename the header. Drag and drop inputEmpID from the DC palette onto Panel Page in the structure pane as input text with label. Next, drop fetchEmpDetails method as an ADF button. For a change, let us display the output in a table component instead of the usual form. 
However, you will notice that if you drag and drop Employee onto the structure pane, there is no option for ADF Mobile Table. Hence, we will need to create the table on our own. To do this, let us first drop Employee as an ADF Read -Only form. This step is needed to get the required bindings. We will be deleting this form in a while. Now, from the Component palette, search for ‘Table Layout’. Drag and drop this below the command button.  Within the tablelayout, insert ‘Row Layout’ and ‘Cell Format’ components. Final table structure should be as shown below. Here, we have also defined some inline styling to render the UI in a nice manner. <amx:tableLayout id="tl1" borderWidth="2" halign="center" inlineStyle="vertical-align:middle;" width="100%" cellPadding="10"> <amx:rowLayout id="rl1" > <amx:cellFormat id="cf1" width="30%"> <amx:outputText value="#{bindings.dept.hints.label}" id="ot7" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf2"> <amx:outputText value="#{bindings.dept.inputValue}" id="ot8" /> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl2"> <amx:cellFormat id="cf3" width="30%"> <amx:outputText value="#{bindings.desig.hints.label}" id="ot9" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf4" > <amx:outputText value="#{bindings.desig.inputValue}" id="ot10"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl3"> <amx:cellFormat id="cf5" width="30%"> <amx:outputText value="#{bindings.id.hints.label}" id="ot11" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf6" > <amx:outputText value="#{bindings.id.inputValue}" id="ot12"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl4"> <amx:cellFormat id="cf7" width="30%"> <amx:outputText value="#{bindings.name.hints.label}" id="ot13" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf8"> <amx:outputText value="#{bindings.name.inputValue}" id="ot14"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl5"> <amx:cellFormat id="cf9" width="30%"> <amx:outputText value="#{bindings.salary.hints.label}" id="ot15" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf10"> <amx:outputText value="#{bindings.salary.inputValue}" id="ot16"/> </amx:cellFormat> </amx:rowLayout>     </amx:tableLayout> The values used in the output text of the table come from the bindings obtained from the ADF Form created earlier. As we have used the bindings and don’t need the form anymore, let us delete the form.  One last thing before we deploy. When user changes employee ID, we want to clear the table contents. For this we associate a value change listener with the input text box. Click New in the resulting dialog to create a managed bean. Next, we create a method within the managed bean. For this, click on the New button associated with method. Call the method ‘empIDChange’. Open myClass.java and write the below code in empIDChange(). public void empIDChange(ValueChangeEvent valueChangeEvent) { // Add event code here... 
//Resetting the values to blank values when employee id changes AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext(); ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{bindings.dept.inputValue}", String.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.desig.inputValue}", String.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.id.inputValue}", int.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.name.inputValue}", String.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.salary.inputValue}", int.class); ve.setValue(adfELContext, ""); } That’s it. Deploy the application to android emulator or device. Some snippets from the app.

    Read the article

  • Ninject WithConstructorArgument : No matching bindings are available, and the type is not self-bindable

    - by Jean-François Beauchamp
    My understanding of WithConstructorArgument is probably erroneous, because the following is not working: I have a service, lets call it MyService, whose constructor is taking multiple objects, and a string parameter called testEmail. For this string parameter, I added the following Ninject binding: string testEmail = "[email protected]"; kernel.Bind<IMyService>().To<MyService>().WithConstructorArgument("testEmail", testEmail); However, when executing the following line of code, I get an exception: var myService = kernel.Get<MyService>(); Here is the exception I get: Error activating string No matching bindings are available, and the type is not self-bindable. Activation path: 2) Injection of dependency string into parameter testEmail of constructor of type MyService 1) Request for MyService Suggestions: 1) Ensure that you have defined a binding for string. 2) If the binding was defined in a module, ensure that the module has been loaded into the kernel. 3) Ensure you have not accidentally created more than one kernel. 4) If you are using constructor arguments, ensure that the parameter name matches the constructors parameter name. 5) If you are using automatic module loading, ensure the search path and filters are correct. What am I doing wrong here? UPDATE: Here is the MyService constructor: [Ninject.Inject] public MyService(IMyRepository myRepository, IMyEventService myEventService, IUnitOfWork unitOfWork, ILoggingService log, IEmailService emailService, IConfigurationManager config, HttpContextBase httpContext, string testEmail) { this.myRepository = myRepository; this.myEventService = myEventService; this.unitOfWork = unitOfWork; this.log = log; this.emailService = emailService; this.config = config; this.httpContext = httpContext; this.testEmail = testEmail; } I have standard bindings for all the constructor parameter types. Only 'string' has no binding, and HttpContextBase has a binding that is a bit different: kernel.Bind<HttpContextBase>().ToMethod(context => new HttpContextWrapper(new HttpContext(new MyHttpRequest("", "", "", null, new StringWriter())))); and MyHttpRequest is defined as follows: public class MyHttpRequest : SimpleWorkerRequest { public string UserHostAddress; public string RawUrl; public MyHttpRequest(string appVirtualDir, string appPhysicalDir, string page, string query, TextWriter output) : base(appVirtualDir, appPhysicalDir, page, query, output) { this.UserHostAddress = "127.0.0.1"; this.RawUrl = null; } }
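A suggestion, not from the original thread: the WithConstructorArgument binding above is registered for IMyService, but kernel.Get<MyService>() asks for the concrete type. Ninject then falls back to an implicit self-binding of MyService, which knows nothing about the testEmail argument, so it tries to resolve the string parameter on its own and fails with the error shown. Assuming Ninject 2.x/3.x syntax as used in the question, either of the following sketches should avoid that:

// Option 1: resolve through the interface that actually carries the binding.
var myService = kernel.Get<IMyService>();

// Option 2: if the concrete type must be requested directly, register an explicit
// self-binding that also supplies the constructor argument.
kernel.Bind<MyService>().ToSelf().WithConstructorArgument("testEmail", testEmail);
var myServiceConcrete = kernel.Get<MyService>();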

    Read the article

  • Deserializing JSON data to C# using JSON.NET

    - by Derek Utah
    I'm relatively new to working with C# and JSON data and am seeking guidance. I'm using C# 3.0, with .NET3.5SP1, and JSON.NET 3.5r6. I have a defined C# class that I need to populate from a JSON structure. However, not every JSON structure for an entry that is retrieved from the web service contains all possible attributes that are defined within the C# class. I've been being doing what seems to be the wrong, hard way and just picking out each value one by one from the JObject and transforming the string into the desired class property. JsonSerializer serializer = new JsonSerializer(); var o = (JObject)serializer.Deserialize(myjsondata); MyAccount.EmployeeID = (string)o["employeeid"][0]; What is the best way to deserialize a JSON structure into the C# class and handling possible missing data from the JSON source? My class is defined as: public class MyAccount { [JsonProperty(PropertyName = "username")] public string UserID { get; set; } [JsonProperty(PropertyName = "givenname")] public string GivenName { get; set; } [JsonProperty(PropertyName = "sn")] public string Surname { get; set; } [JsonProperty(PropertyName = "passwordexpired")] public DateTime PasswordExpire { get; set; } [JsonProperty(PropertyName = "primaryaffiliation")] public string PrimaryAffiliation { get; set; } [JsonProperty(PropertyName = "affiliation")] public string[] Affiliation { get; set; } [JsonProperty(PropertyName = "affiliationstatus")] public string AffiliationStatus { get; set; } [JsonProperty(PropertyName = "affiliationmodifytimestamp")] public DateTime AffiliationLastModified { get; set; } [JsonProperty(PropertyName = "employeeid")] public string EmployeeID { get; set; } [JsonProperty(PropertyName = "accountstatus")] public string AccountStatus { get; set; } [JsonProperty(PropertyName = "accountstatusexpiration")] public DateTime AccountStatusExpiration { get; set; } [JsonProperty(PropertyName = "accountstatusexpmaxdate")] public DateTime AccountStatusExpirationMaxDate { get; set; } [JsonProperty(PropertyName = "accountstatusmodifytimestamp")] public DateTime AccountStatusModified { get; set; } [JsonProperty(PropertyName = "accountstatusexpnotice")] public string AccountStatusExpNotice { get; set; } [JsonProperty(PropertyName = "accountstatusmodifiedby")] public Dictionary<DateTime, string> AccountStatusModifiedBy { get; set; } [JsonProperty(PropertyName = "entrycreatedate")] public DateTime EntryCreatedate { get; set; } [JsonProperty(PropertyName = "entrydeactivationdate")] public DateTime EntryDeactivationDate { get; set; } } And a sample of the JSON to parse is: { "givenname": [ "Robert" ], "passwordexpired": "20091031041550Z", "accountstatus": [ "active" ], "accountstatusexpiration": [ "20100612000000Z" ], "accountstatusexpmaxdate": [ "20110410000000Z" ], "accountstatusmodifiedby": { "20100214173242Z": "tdecker", "20100304003242Z": "jsmith", "20100324103242Z": "jsmith", "20100325000005Z": "rjones", "20100326210634Z": "jsmith", "20100326211130Z": "jsmith" }, "accountstatusmodifytimestamp": [ "20100312001213Z" ], "affiliation": [ "Employee", "Contractor", "Staff" ], "affiliationmodifytimestamp": [ "20100312001213Z" ], "affiliationstatus": [ "detached" ], "entrycreatedate": [ "20000922072747Z" ], "username": [ "rjohnson" ], "primaryaffiliation": [ "Staff" ], "employeeid": [ "999777666" ], "sn": [ "Johnson" ] }
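A suggestion, not from the original answer set: with the [JsonProperty] attributes already in place, Json.NET can populate MyAccount directly via JsonConvert.DeserializeObject, and any attribute that is absent from a given JSON response simply leaves the corresponding property at its default value, so no per-field JObject picking is needed. Two caveats about the sample JSON shown: most scalar values are wrapped in single-element arrays (e.g. "employeeid": ["999777666"]), so those properties would need to be declared as arrays (as Affiliation already is) or unwrapped by a custom JsonConverter; likewise the "20091031041550Z" style timestamps would need a custom converter (or string properties) rather than DateTime. A minimal sketch under those assumptions:

using Newtonsoft.Json;

// JSON members with no matching property are skipped (MissingMemberHandling.Ignore
// is also Json.NET's default); properties with no matching JSON member keep their defaults.
var settings = new JsonSerializerSettings
{
    MissingMemberHandling = MissingMemberHandling.Ignore
};

MyAccount account = JsonConvert.DeserializeObject<MyAccount>(myjsondata, settings);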

    Read the article

  • WPF DataGrid binding to UserControl

    - by Trindaz
    I have a DataGrid with one column using a UserControl via a styled DataGridTemplateColumn. I can't seem to get the UserControl to 'see' the object that is in it's containing DataGridCell though. What kind of bindings can I create on the TextBox in my UserControl so that it can look up and see that object?! My UserControl and TemplateColumn Style are defined as: <Window.Resources> <local:UCTest x:Key="UCTest" /> <Style x:Key="TestStyle" TargetType="{x:Type WpfToolkit:DataGridCell}"> <Style.Setters> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type WpfToolkit:DataGridCell}"> <Grid Background="{Binding RelativeSource={RelativeSource TemplatedParent}, Converter={StaticResource drc}, Path=DataContext}"> <Grid.ColumnDefinitions> <ColumnDefinition/> <ColumnDefinition/> </Grid.ColumnDefinitions> <local:UCTest /> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style.Setters> </Style> </Window.Resources> and my sample DataGrid is defined as: <WpfToolkit:DataGrid Name="dgSampleData" ItemsSource="{Binding}" AutoGenerateColumns="True" Margin="0,75,0,0"> <WpfToolkit:DataGrid.Columns> <WpfToolkit:DataGridTemplateColumn Header="Col2" CellStyle="{StaticResource TestStyle}" /> </WpfToolkit:DataGrid.Columns> </WpfToolkit:DataGrid> and my User Control is defined in a separate file as: <UserControl x:Class="UCTest" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:local="clr-namespace:WpfApplication1" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Width="104" Height="51"> <UserControl.Resources> <local:DataRowConverter x:Key="drc" /> </UserControl.Resources> <Grid> <TextBox Margin="12,12,-155,16" Name="TextBox1" Text="" /> </Grid> EDIT: My implementation of TestClass, which has the Test Property, which I want UCTest.TextBox1 to bind do: Public Class TestClass Private _Test As String = "Hello World Property!" Public Property Test() As String Get Return _Test End Get Set(ByVal value As String) _Test = value End Set End Property End Class Thanks in advance!
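A suggestion, not from the original thread: inside a cell of a DataGridTemplateColumn (or a re-templated DataGridCell, as above) the row's data item is inherited as the DataContext, so the TextBox in UCTest only needs a plain relative binding to the Test property rather than any explicit lookup of the containing cell. In XAML that is simply Text="{Binding Test}" on TextBox1; an equivalent code-behind sketch, written here in C# although the posted TestClass is VB, and assuming the row items are TestClass instances, would be:

using System.Windows.Controls;
using System.Windows.Data;

public partial class UCTest : UserControl
{
    public UCTest()
    {
        InitializeComponent();
        // Bind against whatever object the surrounding cell supplies as DataContext;
        // for a DataGrid cell that is the row's data item (TestClass here).
        TextBox1.SetBinding(TextBox.TextProperty, new Binding("Test"));
    }
}

Note that this only works if UCTest does not set its own DataContext; also, for a template column it is usually simpler to supply a CellTemplate containing the UserControl than to replace the DataGridCell's ControlTemplate.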

    Read the article

  • Problem with Android emulator

    - by benasio
    Projects do not run, on screen emulator only "ANDROID" WinXP pro SP3/Eclipse Galileo java version "1.6.0_20" Java(TM) SE Runtime Environment (build 1.6.0_20-b02) Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing) My actions: 1.Start the emulator(Platform:2.1 API Level:7), wait until the window DDMS status will change to ONLINE 2.Launches helloandroid from examples - Run as Android Application Console: Android Launch! [2010-05-03 21:44:34 - HelloAndroid] adb is running normally. [2010-05-03 21:44:34 - HelloAndroid] Performing com.example.helloandroid.HelloAndroid activity launch [2010-05-03 21:44:34 - HelloAndroid] Automatic Target Mode: using existing emulator 'emulator-5554' running compatible AVD 'my_vm' [2010-05-03 21:44:34 - HelloAndroid] WARNING: Application does not specify an API level requirement! [2010-05-03 21:44:34 - HelloAndroid] Device API version is 7 (Android 2.1) [2010-05-03 21:44:34 - HelloAndroid] Uploading HelloAndroid.apk onto device 'emulator-5554' [2010-05-03 21:44:35 - HelloAndroid] Installing HelloAndroid.apk... [2010-05-03 21:45:07 - HelloAndroid] Success! [2010-05-03 21:45:08 - HelloAndroid] Starting activity com.example.helloandroid.HelloAndroid on device [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: DDM dispatch reg wait timeout [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 52454151: no handler defined [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 48454c4f: no handler defined [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 46454154: no handler defined [2010-05-03 21:45:28 - HelloAndroid] ActivityManager: Can't dispatch DDM chunk 4d505251: no handler defined [2010-05-03 21:45:52 - HelloAndroid] Device not ready. Waiting 3 seconds before next attempt. [2010-05-03 21:45:52 - HelloAndroid] ActivityManager: android.util.AndroidException: Can't connect to activity manager; is the system running? [2010-05-03 21:45:55 - HelloAndroid] Starting activity com.example.helloandroid.HelloAndroid on device [2010-05-03 21:46:11 - HelloAndroid] ActivityManager: DDM dispatch reg wait timeout ...... DDMS console (only errors and warnings) 05-03 17:43:52.437: ERROR/vold(26): Error opening switch name path '/sys/class/switch/test2' (No such file or directory) 05-03 17:43:52.437: ERROR/vold(26): Error bootstrapping switch '/sys/class/switch/test2' (No such file or directory) 05-03 17:43:52.437: ERROR/vold(26): Error opening switch name path '/sys/class/switch/test' (No such file or directory) 05-03 17:43:52.437: ERROR/vold(26): Error bootstrapping switch '/sys/class/switch/test' (No such file or directory) 05-03 17:48:34.036: WARN/Zygote(29): Preloaded drawable resource #0x1080093 (res/drawable-mdpi/sym_def_app_icon.png) that varies with configuration!! 05-03 17:48:34.406: WARN/Zygote(29): Preloaded drawable resource #0x1080002 (res/drawable-mdpi/arrow_down_float.png) that varies with configuration!! 05-03 17:48:35.836: WARN/Zygote(29): Preloaded drawable resource #0x10800b4 (res/drawable/btn_check.xml) that varies with configuration!! 05-03 17:48:36.076: WARN/Zygote(29): Preloaded drawable resource #0x10800b7 (res/drawable-mdpi/btn_check_label_background.9.png) that varies with configuration!! 05-03 17:48:36.106: WARN/Zygote(29): Preloaded drawable resource #0x10800b8 (res/drawable-mdpi/btn_check_off.png) that varies with configuration!! 
05-03 17:48:36.147: WARN/Zygote(29): Preloaded drawable resource #0x10800bd (res/drawable-mdpi/btn_check_on.png) that varies with configuration!! 05-03 17:48:36.437: WARN/Zygote(29): Preloaded drawable resource #0x1080004 (res/drawable/btn_default.xml) that varies with configuration!! 05-03 17:48:36.716: WARN/Zygote(29): Preloaded drawable resource #0x1080005 (res/drawable/btn_default_small.xml) that varies with configuration!! 05-03 17:48:36.966: WARN/Zygote(29): Preloaded drawable resource #0x1080006 (res/drawable/btn_dropdown.xml) that varies with configuration!! 05-03 17:48:37.326: WARN/Zygote(29): Preloaded drawable resource #0x1080008 (res/drawable/btn_plus.xml) that varies with configuration!! 05-03 17:48:37.707: WARN/Zygote(29): Preloaded drawable resource #0x1080007 (res/drawable/btn_minus.xml) that varies with configuration!! 05-03 17:48:38.057: WARN/Zygote(29): Preloaded drawable resource #0x1080009 (res/drawable/btn_radio.xml) that varies with configuration!! 05-03 17:48:38.776: WARN/Zygote(29): Preloaded drawable resource #0x108000a (res/drawable/btn_star.xml) that varies with configuration!! 05-03 17:48:39.327: WARN/Zygote(29): Preloaded drawable resource #0x1080125 (res/drawable/btn_toggle.xml) that varies with configuration!! 05-03 17:48:39.416: WARN/Zygote(29): Preloaded drawable resource #0x1080187 (res/drawable-mdpi/ic_emergency.png) that varies with configuration!! 05-03 17:48:39.506: WARN/Zygote(29): Preloaded drawable resource #0x1080012 (res/drawable-mdpi/divider_horizontal_bright.9.png) that varies with configuration!! 05-03 17:48:39.576: WARN/Zygote(29): Preloaded drawable resource #0x1080014 (res/drawable-mdpi/divider_horizontal_dark.9.png) that varies with configuration!! 05-03 17:48:40.126: WARN/Zygote(29): Preloaded drawable resource #0x1080016 (res/drawable/edit_text.xml) that varies with configuration!! 05-03 17:48:40.507: WARN/Zygote(29): Preloaded drawable resource #0x1080161 (res/drawable/expander_group.xml) that varies with configuration!! 05-03 17:48:41.036: WARN/Zygote(29): Preloaded drawable resource #0x1080062 (res/drawable/list_selector_background.xml) that varies with configuration!! 05-03 17:48:41.177: WARN/Zygote(29): Preloaded drawable resource #0x1080217 (res/drawable-mdpi/menu_background.9.png) that varies with configuration!! 05-03 17:48:41.256: WARN/Zygote(29): Preloaded drawable resource #0x1080218 (res/drawable-mdpi/menu_background_fill_parent_width.9.png) that varies with configuration!! 05-03 17:48:41.567: WARN/Zygote(29): Preloaded drawable resource #0x1080219 (res/drawable/menu_selector.xml) that varies with configuration!! 05-03 17:48:41.706: WARN/Zygote(29): Preloaded drawable resource #0x1080224 (res/drawable-mdpi/panel_background.9.png) that varies with configuration!! 05-03 17:48:41.849: WARN/Zygote(29): Preloaded drawable resource #0x108022e (res/drawable-mdpi/popup_bottom_bright.9.png) that varies with configuration!! 05-03 17:48:42.026: WARN/Zygote(29): Preloaded drawable resource #0x108022f (res/drawable-mdpi/popup_bottom_dark.9.png) that varies with configuration!! 05-03 17:48:42.156: WARN/Zygote(29): Preloaded drawable resource #0x1080230 (res/drawable-mdpi/popup_bottom_medium.9.png) that varies with configuration!! 05-03 17:48:42.276: WARN/Zygote(29): Preloaded drawable resource #0x1080231 (res/drawable-mdpi/popup_center_bright.9.png) that varies with configuration!! 05-03 17:48:42.376: WARN/Zygote(29): Preloaded drawable resource #0x1080232 (res/drawable-mdpi/popup_center_dark.9.png) that varies with configuration!! 
05-03 17:48:42.507: WARN/Zygote(29): Preloaded drawable resource #0x1080235 (res/drawable-mdpi/popup_full_dark.9.png) that varies with configuration!! 05-03 17:48:42.606: WARN/Zygote(29): Preloaded drawable resource #0x1080238 (res/drawable-mdpi/popup_top_bright.9.png) that varies with configuration!! 05-03 17:48:42.696: WARN/Zygote(29): Preloaded drawable resource #0x1080239 (res/drawable-mdpi/popup_top_dark.9.png) that varies with configuration!! 05-03 17:48:42.946: WARN/Zygote(29): Preloaded drawable resource #0x108006d (res/drawable/progress_indeterminate_horizontal.xml) that varies with configuration!! 05-03 17:48:43.076: WARN/Zygote(29): Preloaded drawable resource #0x108023f (res/drawable/progress_small.xml) that varies with configuration!! 05-03 17:48:43.456: WARN/Zygote(29): Preloaded drawable resource #0x1080240 (res/drawable/progress_small_titlebar.xml) that varies with configuration!! 05-03 17:48:43.957: WARN/Zygote(29): Preloaded drawable resource #0x1080262 (res/drawable-mdpi/scrollbar_handle_horizontal.9.png) that varies with configuration!! 05-03 17:48:44.036: WARN/Zygote(29): Preloaded drawable resource #0x1080263 (res/drawable-mdpi/scrollbar_handle_vertical.9.png) that varies with configuration!! 05-03 17:48:44.176: WARN/Zygote(29): Preloaded drawable resource #0x1080071 (res/drawable/spinner_dropdown_background.xml) that varies with configuration!! 05-03 17:48:44.317: WARN/Zygote(29): Preloaded drawable resource #0x1080326 (res/drawable-mdpi/title_bar_shadow.9.png) that varies with configuration!! 05-03 17:48:44.496: WARN/Zygote(29): Preloaded drawable resource #0x10801c6 (res/drawable-mdpi/indicator_code_lock_drag_direction_green_up.png) that varies with configuration!! 05-03 17:48:44.607: WARN/Zygote(29): Preloaded drawable resource #0x10801c7 (res/drawable-mdpi/indicator_code_lock_drag_direction_red_up.png) that varies with configuration!! 05-03 17:48:45.956: WARN/Zygote(29): Preloaded drawable resource #0x10801c8 (res/drawable-mdpi/indicator_code_lock_point_area_default.png) that varies with configuration!! 05-03 17:48:46.407: WARN/Zygote(29): Preloaded drawable resource #0x10801c9 (res/drawable-mdpi/indicator_code_lock_point_area_green.png) that varies with configuration!! 05-03 17:48:46.696: WARN/Zygote(29): Preloaded drawable resource #0x10801ca (res/drawable-mdpi/indicator_code_lock_point_area_red.png) that varies with configuration!! 05-03 17:48:56.307: ERROR/BatteryService(170): usbOnlinePath not found 05-03 17:48:56.336: ERROR/BatteryService(170): batteryVoltagePath not found 05-03 17:48:56.350: ERROR/BatteryService(170): batteryTemperaturePath not found 05-03 17:48:56.696: ERROR/SurfaceFlinger(170): Couldn't open /sys/power/wait_for_fb_sleep or /sys/power/wait_for_fb_wake 05-03 17:48:57.847: WARN/SurfaceFlinger(170): ro.sf.lcd_density not defined, using 160 dpi by default. 05-03 17:49:02.116: WARN/UsageStats(170): Usage stats version changed; dropping 05-03 17:49:05.036: WARN/zipro(182): Unable to open zip '/data/local/bootanimation.zip': No such file or directory 05-03 17:49:06.297: WARN/zipro(182): Unable to open zip '/system/media/bootanimation.zip': No such file or directory 05-03 17:49:50.637: WARN/PackageManager(170): Running ENG build: no pre-dexopt! 
05-03 17:53:59.196: WARN/PackageManager(170): Unknown permission com.google.android.providers.gmail.permission.WRITE_GMAIL in package com.android.settings 05-03 17:53:59.238: WARN/PackageManager(170): Unknown permission com.google.android.providers.gmail.permission.READ_GMAIL in package com.android.settings 05-03 17:53:59.286: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.settings 05-03 17:53:59.517: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.providers.contacts 05-03 17:53:59.656: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.cp in package com.android.providers.contacts 05-03 17:53:59.717: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.mail in package com.android.contacts 05-03 17:53:59.796: WARN/PackageManager(170): Unknown permission android.permission.ADD_SYSTEM_SERVICE in package com.android.phone 05-03 17:54:00.126: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.development 05-03 17:54:00.206: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.ALL_SERVICES in package com.android.development 05-03 17:54:00.206: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.YouTubeUser in package com.android.development 05-03 17:54:00.237: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.ACCESS_GOOGLE_PASSWORD in package com.android.development 05-03 17:54:00.258: WARN/PackageManager(170): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.browser 05-03 17:54:25.456: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f0700e5 05-03 17:54:25.486: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f020031 05-03 17:54:25.536: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f020030 05-03 17:54:25.576: WARN/ResourceType(170): Resources don't contain package for resource number 0x7f050000 05-03 17:54:38.708: WARN/SharedBufferStack(182): waitForCondition(LockCondition) timed out (identity=0, status=0). CPU may be pegged. trying again.

    Read the article

  • Compat Wireless Drivers Centrino N-2230

    - by user2699451
    So I am using linux and am having trouble installing the Compat Wireless drivers Hardware: Intel Centrino N-2230 OS: Linux Mint 64bit (kernel 13.08-generic) I followed this link http://www.mathyvanhoef.com/2012/09/compat-wireless-injection-patch-for.html Output: apt-get install linux-headers-$(uname -r) Reading package lists... Done Building dependency tree Reading state information... Done linux-headers-3.8.0-19-generic is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 19 not upgraded. charles-W55xEU compat-wireless-2010-10-16 # cd ~ charles-W55xEU ~ # dir adt-bundle-linux-x86_64-20130917.zip Desktop known_hosts_backup charles-W55xEU ~ # wget http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.6/compat-wireless-3.6.2-1-snp.tar.bz2 --2013-10-29 10:28:23-- http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.6/compat-wireless-3.6.2-1-snp.tar.bz2 Resolving www.orbit-lab.org (www.orbit-lab.org)... 128.6.192.131 Connecting to www.orbit-lab.org (www.orbit-lab.org)|128.6.192.131|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 4443700 (4,2M) [application/x-bzip2] Saving to: ‘compat-wireless-3.6.2-1-snp.tar.bz2’ 100%[======================================>] 4 443 700 13,5KB/s in 11m 3s 2013-10-29 10:39:27 (6,55 KB/s) - ‘compat-wireless-3.6.2-1-snp.tar.bz2’ saved [4443700/4443700] charles-W55xEU ~ # tar -xf compat-wireless-3.6.2-1-snp.tar.bz2 charles-W55xEU ~ # cd compat-wireless-3.6-rc6-1 bash: cd: compat-wireless-3.6-rc6-1: No such file or directory charles-W55xEU ~ # dir adt-bundle-linux-x86_64-20130917.zip Desktop compat-wireless-3.6.2-1-snp known_hosts_backup compat-wireless-3.6.2-1-snp.tar.bz2 charles-W55xEU ~ # cd compat-wireless-3.6.2-1-snp/ charles-W55xEU compat-wireless-3.6.2-1-snp # dir code-metrics.txt defconfigs linux-next-pending pending-stable compat drivers MAINTAINERS README config.mk enable-older-kernels Makefile scripts COPYRIGHT include net udev crap linux-next-cherry-picks patches charles-W55xEU compat-wireless-3.6.2-1-snp # wget http://patches.aircrack-ng.org/mac80211.compat08082009.wl_frag+ack_v1.patch --2013-10-29 10:40:52-- http://patches.aircrack-ng.org/mac80211.compat08082009.wl_frag+ack_v1.patch Resolving patches.aircrack-ng.org (patches.aircrack-ng.org)... 213.186.33.2, 2001:41d0:1:1b00:213:186:33:2 Connecting to patches.aircrack-ng.org (patches.aircrack-ng.org)|213.186.33.2|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 1049 (1,0K) [text/plain] Saving to: ‘mac80211.compat08082009.wl_frag+ack_v1.patch’ 100%[======================================>] 1 049 --.-K/s in 0s 2013-10-29 10:40:56 (180 MB/s) - ‘mac80211.compat08082009.wl_frag+ack_v1.patch’ saved [1049/1049] charles-W55xEU compat-wireless-3.6.2-1-snp # patch -p1 < mac80211.compat08082009.wl_frag+ack_v1.patch patching file net/mac80211/tx.c Hunk #1 succeeded at 792 (offset 115 lines). charles-W55xEU compat-wireless-3.6.2-1-snp # wget -Ocompatwireless_chan_qos_frag.patch http://pastie.textmate.org/pastes/4882675/download --2013-10-29 10:43:18-- http://pastie.textmate.org/pastes/4882675/download Resolving pastie.textmate.org (pastie.textmate.org)... 178.79.137.125 Connecting to pastie.textmate.org (pastie.textmate.org)|178.79.137.125|:80... connected. HTTP request sent, awaiting response... 301 Moved Permanently Location: http://pastie.org/pastes/4882675/download [following] --2013-10-29 10:43:20-- http://pastie.org/pastes/4882675/download Resolving pastie.org (pastie.org)... 
96.126.119.119 Connecting to pastie.org (pastie.org)|96.126.119.119|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 2036 (2,0K) [application/octet-stream] Saving to: ‘compatwireless_chan_qos_frag.patch’ 100%[======================================>] 2 036 --.-K/s in 0,001s 2013-10-29 10:43:21 (3,35 MB/s) - ‘compatwireless_chan_qos_frag.patch’ saved [2036/2036] charles-W55xEU compat-wireless-3.6.2-1-snp # patch -p1 < compatwireless_chan_qos_frag.patch patching file drivers/net/wireless/rtl818x/rtl8187/dev.c patching file net/mac80211/tx.c Hunk #1 succeeded at 1495 (offset 8 lines). patching file net/wireless/chan.c charles-W55xEU compat-wireless-3.6.2-1-snp # make ./scripts/gen-compat-autoconf.sh /root/compat-wireless-3.6.2-1-snp/.config /root/compat-wireless-3.6.2-1-snp/config.mk > include/linux/compat_autoconf.h make -C /lib/modules/3.8.0-19-generic/build M=/root/compat-wireless-3.6.2-1-snp modules make[1]: Entering directory `/usr/src/linux-headers-3.8.0-19-generic' CC [M] /root/compat-wireless-3.6.2-1-snp/compat/main.o LD [M] /root/compat-wireless-3.6.2-1-snp/compat/compat.o CC [M] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:8:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_pci.h:217:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_pci_init’ In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:10:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_gmac_cmn.h:95:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_gmac_cmn_init’ In file included from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:0: /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:25:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:152:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:17:21: warning: ‘bcma_bus_next_num’ defined but not used [-Wunused-variable] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:93:12: warning: ‘bcma_register_cores’ defined but not used [-Wunused-function] make[3]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o] Error 1 make[2]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma] Error 2 make[1]: *** [_module_/root/compat-wireless-3.6.2-1-snp] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-19-generic' make: *** [modules] Error 2 charles-W55xEU compat-wireless-3.6.2-1-snp # make install Warning: You may or may not need to update your initframfs, you should if any of the modules installed are part of your initramfs. To add support for your distribution to do this automatically send a patch against ./scripts/update-initramfs. If your distribution does not require this send a patch against the '/usr/bin/lsb_release -i -s': LinuxMint tag for your distribution to avoid this warning. 
make -C /lib/modules/3.8.0-19-generic/build M=/root/compat-wireless-3.6.2-1-snp modules make[1]: Entering directory `/usr/src/linux-headers-3.8.0-19-generic' CC [M] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:8:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_pci.h:217:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_pci_init’ In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:10:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_gmac_cmn.h:95:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_gmac_cmn_init’ In file included from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:0: /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:25:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:152:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:17:21: warning: ‘bcma_bus_next_num’ defined but not used [-Wunused-variable] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:93:12: warning: ‘bcma_register_cores’ defined but not used [-Wunused-function] make[3]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o] Error 1 make[2]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma] Error 2 make[1]: *** [_module_/root/compat-wireless-3.6.2-1-snp] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-19-generic' make: *** [modules] Error 2 charles-W55xEU compat-wireless-3.6.2-1-snp # It keeps giving errors, same with other sites, I get the same errors??? I am lost, help needed

    Read the article

  • Do I need afxres.h, if I am not using MFC? How do I remove it from the .RC script?

    - by Cheeso
    I don't know RC scripts. I want to include Product version, File version, etc. metadata in a DLL I'm building. I'm using an .rc file to do that. The build is makefile-driven. I'm following along with an example .rc script I found. The template .rc file includes afxres.h, but I don't think I need that. But if I just remove it I get a bunch of compile errors. What does a basic, non-MFC RC script look like? Can I remove all the stuff like this: ///////////////////////////////////////////////////////////////////////////// // English (U.S.) resources #if !defined(AFX_RESOURCE_DLL) || defined(AFX_TARG_ENU) #ifdef _WIN32 LANGUAGE LANG_ENGLISH, SUBLANG_ENGLISH_US #pragma code_page(1252) #endif //_WIN32 ....

    Read the article

  • Cannot find System.Web.Script.Service namespace error after upgrading to Visual Studio 2010

    - by Gavin
    I've just upgraded a VS 2008 project to VS 2010, converting the project but keeping the target as .NET 3.5 (SP1 is installed). My project worked without issue under VS 2008 on another machine. I've added references to System.Web.Extensions.dll but I'm still getting the following errors from code in the App_Code folder: 1) Cannot find System.Web.Script.Service namespace 2) Type 'System.Web.Script.Services.ScriptService' is not defined. 3) Type 'System.Runtime.Serialization.Json.DataContractJsonSerializer' is not defined. Does anyone have any idea what the problem might be? I'm pretty stumped. :(
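
    For reference, a minimal C# sketch of the kind of code that depends on these types; the service name and method are made up for illustration. The [ScriptService]/[ScriptMethod] attributes live in the System.Web.Extensions assembly, while DataContractJsonSerializer normally comes from System.ServiceModel.Web when targeting .NET 3.5, so both references are typically needed:

        using System.Web.Services;
        using System.Web.Script.Services;            // System.Web.Extensions.dll
        using System.Runtime.Serialization.Json;     // System.ServiceModel.Web.dll on .NET 3.5

        [WebService(Namespace = "http://example.com/")]
        [ScriptService]   // marks the service as callable from client-side script
        public class ExampleService : WebService
        {
            [WebMethod]
            [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
            public string Ping()
            {
                return "pong";
            }
        }

    For a web site project (code in App_Code), the referenced assemblies also have to be listed under the compilation/assemblies section of web.config, which is worth checking after an upgrade.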

    Read the article

  • TFS 2010 build config transform problem

    - by Zdenek
    Hi all, I'm facing quite a problem while setting up automated TFS builds. Basically I created a new configuration called Tests, added a config transform, and defined different connection strings for the database. Then I defined a TFS build that builds the whole solution with the MSBuild arguments /p:DeployOnBuild=True /p:Configuration=Tests. The problem is that in the drop location (Build_PublishedWebsites\Project) I get web.config, web.debug.config, web.release.config and web.tests.config, however I would expect just one transformed web.config. I already checked the PDC presentation Web Deployment Painkillers: Microsoft Visual Studio 2010 & MS Deploy but it didn't help. Thanks for any answer.

    Read the article

  • Having trouble compiling with GDI+ (VC++ 2008)

    - by user146780
    I just simply include gdiplus.h and get all these errors: Warning 32 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Warning 38 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Warning 49 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Warning 55 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Warning 61 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2224 Warning 68 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Warning 74 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2310 Warning 82 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2321 Error 112 fatal error C1003: error count exceeds 100; stopping compilation c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 236 Error 1 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 74 Error 7 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 280 Error 8 error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 280 Error 94 error C2761: '{ctor}' : member function redeclaration not allowed c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 195 Error 102 error C2761: '{ctor}' : member function redeclaration not allowed c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 212 Error 110 error C2761: '{ctor}' : member function redeclaration not allowed c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 231 Error 21 error C2535: 'Gdiplus::Metafile::Metafile(void)' : member function already defined or declared c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 813 Error 23 error C2535: 'Gdiplus::Metafile::Metafile(void)' : member function already defined or declared c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 820 Error 25 error C2535: 'Gdiplus::Metafile::Metafile(void)' : member function already defined or declared c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 829 Error 27 error C2535: 'Gdiplus::Metafile::Metafile(void)' : member function already defined or declared c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 923 Error 16 error C2535: 'Gdiplus::Image::Image(void)' : member function already defined or declared c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 471 Error 4 error C2470: 'IImageBytes' : looks like a function definition, but there is no parameter list; skipping apparent body c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 74 Error 89 error C2448: 'Gdiplus::Metafile::{ctor}' : function-style initializer appears to be a function definition c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 76 Error 97 error C2447: '{' : missing function header (old-style formal list?) c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 199 Error 105 error C2447: '{' : missing function header (old-style formal list?) 
c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 218 Error 2 error C2440: 'initializing' : cannot convert from 'const char [37]' to 'int' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 74 Error 72 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2310 Error 76 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2310 Error 80 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2321 Error 84 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2321 Error 92 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 195 Error 100 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 212 Error 108 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 231 Error 60 error C2275: 'Gdiplus::MetafileHeader' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2224 Error 67 error C2275: 'Gdiplus::GpMetafile' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Error 31 error C2275: 'Gdiplus::GpImage' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Error 37 error C2275: 'Gdiplus::GpImage' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Error 48 error C2275: 'Gdiplus::GpBitmap' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Error 54 error C2275: 'Gdiplus::GpBitmap' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Error 3 error C2146: syntax error : missing ';' before identifier 'IImageBytes' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 74 Error 6 error C2146: syntax error : missing ';' before identifier 'id' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 280 Error 73 error C2146: syntax error : missing ')' before identifier 'referenceHdc' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2310 Error 81 error C2146: syntax error : missing ')' before identifier 'referenceHdc' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2321 Error 93 error C2146: syntax error : missing ')' before identifier 'referenceHdc' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 195 Error 101 error C2146: syntax error : missing ')' before identifier 'referenceHdc' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 212 Error 109 error C2146: syntax error : missing ')' before identifier 'referenceHdc' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 231 Error 96 error C2143: syntax error : missing ';' before '{' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 199 Error 104 error C2143: syntax error : missing ';' before '{' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 
218 Error 33 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Error 39 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Error 50 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Error 56 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Error 62 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2224 Error 69 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Error 75 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2310 Error 83 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2321 Error 29 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Error 35 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Error 46 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Error 52 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Error 58 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2222 Error 65 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Error 71 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2309 Error 79 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2320 Error 88 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 75 Error 91 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 194 Error 99 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 211 Error 107 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 230 Error 66 error C2065: 'metafile' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Error 28 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Error 34 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Error 45 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Error 51 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Error 57 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2222 Error 64 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Error 70 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2309 Error 78 error C2065: 'IStream' : undeclared identifier 
c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2320 Error 87 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 75 Error 90 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 194 Error 98 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 211 Error 106 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 230 Error 30 error C2065: 'image' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Error 36 error C2065: 'image' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Error 59 error C2065: 'header' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2224 Error 47 error C2065: 'bitmap' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Error 53 error C2065: 'bitmap' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Error 12 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 443 Error 13 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 444 Error 14 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 445 Error 15 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 453 Error 41 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1244 Error 42 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1247 Error 43 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1250 Error 44 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1262 Error 9 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 384 Error 10 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 395 Error 11 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 405 Error 17 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 505 Error 18 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 516 Error 19 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 758 Error 20 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 813 Error 22 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 820 Error 24 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 829 Error 26 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft 
sdks\windows\v7.0\include\gdiplusheaders.h 855 Error 40 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1156 Error 63 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2242 Error 86 error C2061: syntax error : identifier 'byte' c:\program files\microsoft sdks\windows\v7.0\include\gdipluspath.h 133 Error 5 error C2059: syntax error : 'public' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 74 Error 77 error C2059: syntax error : ')' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2316 Error 85 error C2059: syntax error : ')' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2327 Error 95 error C2059: syntax error : ')' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 198 Error 103 error C2059: syntax error : ')' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 217 Error 111 error C2059: syntax error : ')' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 236 I tried updating my sdk to 7.0 but it did not help. I'm not even making any calls to the API. Thanks

    Read the article

  • JavaScript || (or) operator with an undefined variable

    - by bryan sammon
    I have been doing some reading lately; one article I read was from Opera: http://dev.opera.com/articles/view/javascript-best-practices/ In that article they write this: Another common situation in JavaScript is providing a preset value for a variable if it is not defined, like so: if(v){ var x = v; } else { var x = 10; } The shortcut notation for this is the double pipe character: var x = v || 10; For some reason I can't get this to work for me. Is it really possible to check whether v is defined and, if not, set x to 10? --Thanks, Bryan

    Read the article

  • Memory leak on CollectionView.View.Refresh

    - by Dabblernl
    I have defined my binding thus: <TreeView ItemsSource="{Binding UsersView.View}" ItemTemplate="{StaticResource MyDataTemplate}" /> The CollectionViewSource is defined thus: private ObservableCollection<UserData> users; public CollectionViewSource UsersView{get;set;} UsersView=new CollectionViewSource{Source=users}; UsersView.SortDescriptions.Add( new SortDescription("IsLoggedOn",ListSortDirection.Descending)); UsersView.SortDescriptions.Add( new SortDescription("Username",ListSortDirection.Ascending)); So far, so good, this works as expected: the view shows first the users that are logged on, in alphabetical order, then the ones that are not. However, the IsLoggedIn property of the UserData is updated every few seconds by a background worker thread and then the code calls UsersView.View.Refresh(); on the UI thread. Again this works as expected: users that log on are moved from the bottom of the view to the top and vice versa. However, every time I call the Refresh method on the view the application hoards 3.5 MB of extra memory, which is only released after application shutdown (or after an OutOfMemoryException...). I did some research, and below is a list of fixes that did NOT work: the UserData class implements INotifyPropertyChanged; changing the underlying users collection does not make any difference at all (any IEnumerable<UserData> as a source for the CollectionViewSource causes the problem); changing the CollectionViewSource source to a List<UserData> (and refreshing the binding) or inheriting from ObservableCollection to get access to the underlying Items collection and sort that in place does not work. I am out of ideas! Help? EDIT: I found it: the resource MyDataTemplate contains a Label that is bound to a UserData object to show one of its properties, the UserData objects being handed down by the TreeView's ItemsSource. The Label has a ContextMenu defined thus: <ContextMenu Background="Transparent" Width="325" Opacity=".8" HasDropShadow="True"> <PrivateMessengerUI:MyUserData IsReadOnly="True" > <PrivateMessengerUI:MyUserData.DataContext> <Binding Path="."/> </PrivateMessengerUI:MyUserData.DataContext> </PrivateMessengerUI:MyUserData> </ContextMenu> The MyUserData object is a UserControl that shows all properties of the UserData object. In this way the user first only sees one piece of data for a user and on a right click sees all of it. When I remove the MyUserData UserControl from the DataTemplate the memory leak disappears! How can I still implement the behaviour as specified above?
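
    As a point of reference, here is a minimal sketch of the refresh pattern described above, assuming the background worker marshals the call onto the UI thread through the Dispatcher; the class and method names around the Refresh call are made up for illustration:

        using System;
        using System.Windows;
        using System.Windows.Data;

        public class UserListViewModel
        {
            public CollectionViewSource UsersView { get; set; }

            // Called by the background worker every few seconds, after the
            // IsLoggedOn flags on the UserData items have been updated.
            public void RefreshUsers()
            {
                Application.Current.Dispatcher.Invoke(
                    new Action(() => UsersView.View.Refresh()));
            }
        }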

    Read the article

  • How can I do something ~after~ an event has fired in C#?

    - by Siracuse
    I'm using the following project to handle global keyboard and mouse hooking in my C# application. This project is basically a wrapper around the Win API call SetWindowsHookEx using either the WH_MOUSE_LL or WH_KEYBOARD_LL constants. It also manages certain state and generally makes this kind of hooking pretty pain-free. I'm using this for mouse gesture recognition software I'm working on. Basically, I have it set up so it detects when a global hotkey is pressed down (say CTRL), then the user moves the mouse in the shape of a pre-defined gesture and then releases the global hotkey. The KeyDown event is processed and tells my program to start recording the mouse locations until it receives the KeyUp event. This is working fine and it gives users an easy way to enter a mouse-gesture mode. Once the KeyUp event fires and it detects the appropriate gesture, it is supposed to send the keystrokes that the user has defined for that particular gesture to the active window. I'm using the SendKeys.Send/SendWait methods to send output to the current window. My problem is this: when the user releases their global hotkey (say CTRL), it fires the KeyUp event. My program takes its recorded mouse points, detects the relevant gesture and attempts to send the correct input via SendKeys. However, because all of this is in the KeyUp event, that global hotkey hasn't finished being processed. So, for example, if I defined a gesture to send the key "A" when it is detected, and my global hotkey is CTRL, when it is detected SendKeys will send "A" while CTRL is still "down". So, instead of just sending A, I'm getting CTRL-A. So, in this example, instead of physically sending the single character "A" it is selecting-all via the CTRL-A shortcut. Even though the user has released CTRL (the global hotkey), it is still being considered down by the system. Once my KeyUp event fires, how can I have my program wait some period of time, or for some event, so I can be sure that the global hotkey is truly no longer being registered by the system, and only then send the correct input via SendKeys?
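
    One way to approach this, sketched below as a rough illustration rather than a drop-in fix, is to poll the physical key state from the KeyUp handler and only call SendKeys once the hotkey actually reads as released. The helper name, the choice of CTRL as the hotkey and the polling interval are all assumptions for the example; GetAsyncKeyState is a real user32 call declared via P/Invoke:

        using System.Runtime.InteropServices;
        using System.Threading;
        using System.Windows.Forms;

        static class GestureOutput
        {
            [DllImport("user32.dll")]
            private static extern short GetAsyncKeyState(int vKey);

            private const int VK_CONTROL = 0x11;

            // Wait until CTRL is physically up, then send the gesture's keystrokes.
            public static void SendWhenHotkeyReleased(string keys)
            {
                while ((GetAsyncKeyState(VK_CONTROL) & 0x8000) != 0)
                {
                    Thread.Sleep(10);   // hotkey still reported as down; wait a little longer
                }
                SendKeys.SendWait(keys);
            }
        }

    If the KeyUp notification arrives inside the low-level hook callback itself, it may be safer to hand this off to another thread (or BeginInvoke it) so the hook chain is not blocked while waiting.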

    Read the article

  • Weak-linking with static libraries

    - by Jaakko L.
    I have declared an external function with a GCC weak attribute in a .c file: extern int weakFunction( ) __attribute__ ((weak)); The compiled object file has weakFunction defined as a weak symbol. Output of nm: 1791: w weakFunction I am calling the weakly defined function as follows: if (weakFunction != NULL) { weakFunction(); } When I link the program by passing the object files as parameters to GCC (gcc main.o weakf.o -o main.exe), weak symbols work fine. If I leave weakf.o out of the linking, the function address is NULL in main.c and the function won't be called. The problem is that when weakf.o is inside a static library, for some reason the linker doesn't find the function and the function address always ends up being NULL. The static library is created with ar: ar rcs weaklibrary weakf.o Has anyone had similar problems?

    Read the article

  • Kohana 3: Auth module

    - by Thomas
    Hi, I'm trying to learn Kohana's Auth module but the login method always returns false. Controller: <?php defined('SYSPATH') OR die('No Direct Script Access'); class Controller_Auth extends Controller { public function action_index() { if($_POST) { $this->login(); } $this->template = View::factory('login'); echo $this->template; } private function login() { $user = ORM::factory('user'); $data = array('username' => 'wilson', 'password' => '123'); if(!$user->login($data)) { echo 'FAILED!'; } } private function logout() { } } ?> Model: <?php defined('SYSPATH') or die('No direct script access.'); class Model_User extends Model_Auth_User { } ?>

    Read the article

  • Moving a point along a vector

    - by Chris
    Hello- I have a point defined by x,y and a vector defined by heading, speed. I am attempting to move the point x,y along this vector, at a distance of 'speed'. Below is the code I am currently using: self.x += self.speed * cos(self.heading); self.y += self.speed * sin(self.heading); Heading can be any angle in a full circle, 0 to 2π (0-360 degrees). The problem with the above code is that it only moves along the x or y axis when the angle is 0-270 (for example, when the avatar is facing the top-right corner, 45 degrees relative, it moves straight up), and it does not move at all when the angle is 270-360. heading, speed, x and y are all doubles, and heading is reported by the user touching a direction-pad in the lower corner. I know the heading is correct because the avatar rotates to the correct direction, it is just the actual movement I am having problems with. Thanks for any help - Chris
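
    A minimal C# sketch of the same update, written under the assumption that the heading arrives in degrees and therefore has to be converted to radians before being passed to the trig functions (Math.Cos and Math.Sin expect radians); the type and member names are illustrative only:

        using System;

        struct Avatar
        {
            public double X, Y;
            public double HeadingDegrees;   // 0-360, as reported by the direction pad
            public double Speed;

            // Advance the point one step along its heading.
            public void Move()
            {
                double headingRadians = HeadingDegrees * Math.PI / 180.0;
                X += Speed * Math.Cos(headingRadians);
                Y += Speed * Math.Sin(headingRadians);
            }
        }

    If the heading really is already in radians, the same symptom (movement snapping to an axis, or stopping over part of the circle) is still worth checking against the unit and range the direction pad actually reports.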

    Read the article

  • Getting Types in Win32 Dll

    - by Usman
    Hello, I want to know the types and details in a plain Win32 DLL, just like we can in the case of COM. In COM everything is embedded in the IDL and results in a TLB, from which we get everything; MSFT exposes APIs by which we can extract the types. In the case of Win32 I strongly need the types defined in the DLL and all details of each type (e.g. what members it has and their types as well). Parsing the PE file and looking up the export table only gives the exported functions. I want all custom types (Win32 interfaces, classes and member details with types) defined in it. How? Regards Usman

    Read the article

  • ASP.NET MVC Route Default values

    - by Sadegh
    Hi, I defined two routes in global.asax like below: context.MapRoute("HomeRedirect", "", new { controller = "Home", action = "redirect" }); context.MapRoute("UrlResolver", "{culture}/some", new { culture = "en-gb", controller = "someController", action = "someAction" }, new { culture = new CultureRouteConstraint() }); According to the above definition, when a user requests mysite.com/ the Redirect action of HomeController should be called, and in it: public class HomeController : Controller { public ActionResult Redirect() { return RedirectToRoute("UrlResolver"); } } I want to redirect the user to the second route defined above, so I also specified default values for it and a constraint for each of them. But when RedirectToRoute("UrlResolver") runs, no default values are passed to the route constraints of the second route, and "No route in the route table matches the supplied values" is shown. Update: my CultureRouteConstraint: public class CultureRouteConstraint : IRouteConstraint { bool IRouteConstraint.Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection) { try { var parameter = values[parameterName] as string; return (someCondition(parameter)); } catch { return false; } } } Now the values parameter doesn't contain the culture key/value, but the route parameter does.
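
    For illustration, a minimal sketch of the RedirectToRoute overload that supplies the route values explicitly, so the culture value is present in the values dictionary the constraint sees; the culture string is simply the same default the question already uses, not a recommendation:

        using System.Web.Mvc;

        public class HomeController : Controller
        {
            public ActionResult Redirect()
            {
                // Pass the culture explicitly instead of relying on the target
                // route's defaults being applied before the constraint runs.
                return RedirectToRoute("UrlResolver", new { culture = "en-gb" });
            }
        }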

    Read the article

  • Filter Queryset in Django inlineformset_factory

    - by Dave
    I am trying to use inlineformset_factory to generate a formset. My models are defined as: class Measurement(models.Model): subject = models.ForeignKey(Animal) experiment = models.ForeignKey(Experiment) assay = models.ForeignKey(Assay) values = models.CommaSeparatedIntegerField(blank=True, null=True) class Experiment(models.Model): date = models.DateField() notes = models.TextField(max_length = 500, blank=True) subjects = models.ManyToManyField(Subject) In my view I have: def add_measurement(request, experiment_id): experiment = get_object_or_404(Experiment, pk=experiment_id) MeasurementFormSet = inlineformset_factory(Experiment, Measurement, extra=10, exclude=('experiment')) if request.method == 'POST': formset = MeasurementFormSet(request.POST,instance=experiment) if formset.is_valid(): formset.save() return HttpResponseRedirect( experiment.get_absolute_url() ) else: formset = MeasurementFormSet(instance=experiment) return render_to_response("data_entry_form.html", {"formset": formset, "experiment": experiment }, context_instance=RequestContext(request)) But I want to restrict the Measurement.subject field to only the subjects defined in the Experiment.subjects queryset. I have tried a couple of different ways of doing this but I am a little unsure what the best way to accomplish this is. I tried to override the BaseInlineFormSet class with a new queryset, but couldn't figure out how to correctly pass the experiment parameter.

    Read the article
