Search Results

Search found 14448 results on 578 pages for 'schema org'.

Page 83/578

  • BNF – how to read syntax?

    - by Piotr Rodak
    A few days ago I read a post by Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it’s not exciting or sexy. I noticed that many of the questions that appear on forums and other media arise from the simple fact that the people asking them didn’t bother to read and understand the manual – Books Online. Jen came up with a brilliant, concise acronym that describes very well the category of posts about Books Online – RTFM365. I take the liberty of tagging this post with the same acronym.

    I often come across questions of the type: ‘Hey, I am trying to create a table, but I am getting an error’. The error often says that the syntax is invalid.

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    The answer is usually(1): ‘Ok, let me check it out… Ah yes – you have to put the name of the DEFAULT constraint before the type of the constraint:

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    Why do so many people stumble on syntax errors? Is the syntax poorly documented? No, the issue is that the correct syntax of the CREATE TABLE statement is documented very well in Books Online and is… intimidating. Many people can be taken aback by the rather complex block of code that describes all the intricacies of the statement. However, I don’t know of a better way of defining the syntax of a statement or command. The notation that is used to describe syntax in Books Online is a form of Backus-Naur notation, called BNF for short. This is a notation that was invented around 50 years ago – and some say even earlier, around 400 BC, would you believe? Originally it was used to define the syntax of the, rather ancient now, ALGOL programming language (in the 1950s, not in ancient India). If you look closer at the definition of BNF, it turns out that the principles of this syntax are pretty simple. Here are a few bullet points:

    - italic_text is a placeholder for your identifier
    - <italic_text_in_angle_brackets> is a definition which is described further
    - [everything in square brackets] is optional
    - {everything in curly brackets} is obligatory
    - everything | separated | by | operator is an alternative
    - ::= “assigns” a definition to an identifier

    Yes, it looks like these six simple points give you the key to understanding even the most complicated syntax definitions in Books Online. Books Online contains an article about syntax conventions – have you ever read it? Let’s have a look at a fragment of the CREATE TABLE statement:

        1 CREATE TABLE
        2 [ database_name . [ schema_name ] . | schema_name . ] table_name
        3 ( { <column_definition> | <computed_column_definition>
        4 | <column_set_definition> }
        5 [ <table_constraint> ] [ ,...n ] )
        6 [ ON { partition_scheme_name ( partition_column_name ) | filegroup
        7 | "default" } ]
        8 [ { TEXTIMAGE_ON { filegroup | "default" } ]
        9 [ FILESTREAM_ON { partition_scheme_name | filegroup
        10 | "default" } ]
        11 [ WITH ( <table_option> [ ,...n ] ) ]
        12 [ ; ]

    Let’s look at line 2 of the above snippet. This line uses rules 3 and 5 from the list, so you know that you can create a table whose name is specified in one of the following ways:

    - just the table name – the table will be created in the user’s default schema
    - schema name and table name – the table will be created in the specified schema
    - database name, schema name and table name – the table will be created in the specified database, in the specified schema
    - database name, .., table name – the table will be created in the specified database, in the default schema of the user

    Note that this single line of the notation describes each of the naming schemes in a deterministic way. The ‘optionality’ of the schema_name element is nested within the database_name. section. You can use either database_name and an optional schema name, or just the schema name – this is specified by the pipe character ‘|’.

    The error that the user gets when executing the first script fragment in this post is as follows:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'DEFAULT'.

    Ok, let’s have a look at how to find out the correct syntax. Line number 3 of the BNF fragment above contains a reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to a notion described further in the code. And indeed, the very next fragment of BNF contains the syntax of the column definition.

        1 <column_definition> ::=
        2 column_name <data_type>
        3 [ FILESTREAM ]
        4 [ COLLATE collation_name ]
        5 [ NULL | NOT NULL ]
        6 [
        7 [ CONSTRAINT constraint_name ] DEFAULT constant_expression ]
        8 | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ]
        9 ]
        10 [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ]
        11 [ SPARSE ]

    Look at line 7 in the above fragment. It says that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with the [CONSTRAINT constraint_name] sequence. The name of the constraint is optional, but I strongly recommend that you make the effort of coming up with a meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is like this:

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    That is practically everything you should know about BNF. I encourage you to study the syntax definitions for various statements and commands in Books Online – you can find really interesting things hidden there.

    Technorati Tags: SQL Server, t-sql, BNF, syntax

    (1) No, my answer is usually a question – ‘What error message? What does it say?’. You’d be surprised to know how many people think I can go through time and space and look at their screen at the moment they received the error.

    Read the article

  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguish the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application.  An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose.  For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated.

    Much like a class in higher level programming languages, you can also use the composite type as a way to enforce business rules onto your data by encapsulating a datum’s name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data cannot assume an invalid value.  To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you.

    Let’s take a look at how these features are used.  Suppose you want to create a group of applications that maintain the employee table in your human resources database. Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database.  This table includes two columns, EmployeeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table. Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creation of a dedicated application. By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table’s content.

    When developing this application within expressor Studio, your first task is to create a schema artifact for the database table.  This process is completely driven by a wizard, only requiring that you select the desired database schema and table.  The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application.  The structure of the record within the expressor application is a composite type that is given the default name CompositeType1.  As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type.

    If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns.  A typical approach would be to drop this unwanted data in a downstream operator.  But using an alternative composite type provides a better approach in which the unwanted data never enters your application. While working in expressor Studio’s schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns.

    The value of an alternative composite type is even more apparent when you want to insert into or update the table. In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns since these values are provided by the relational database management system. And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns. By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes and there is no risk of unintentionally overwriting a value in a column that does not need to be updated.

    Now, what about the option to use the composite type to enforce business rules?  If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications.  For example, the maximum number of characters in the NationalIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character as required by the table column definition.  If your application code leads to a violation of these constraints, an error will be raised.  The expressor design paradigm then allows you to handle the error in a way suitable for your application.  For example, a string value could be truncated or a numeric value could be rounded. Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition.

    Let’s assume that the only acceptable values for marital status are S, M, and D.  Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window.  Then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box. The schema editor is updated accordingly.

    There is one more option that the expressor semantic type paradigm supports.  Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator. Again, double-click on the MaritalStatus attribute and in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type.  There’s no requirement that you give the shared type the same name as the attribute from which it was derived.  You should supply a name that makes it obvious what the shared type represents.

    In this posting, I’ve given an overview of the expressor semantic type paradigm and shown how it can be used to make your application development process more productive.  The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I’m certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly.  As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA Integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process. For this example I will use the File Adapter to load a file of users and emails to update the user object within the Oracle Utilities Application Framework. Remember you can repeat this process with other objects and other file types. Again I am illustrating the ease of integration.

    The first thing is to create an empty BPEL process that will hold our flow. In Oracle JDeveloper this can be achieved by specifying the Define Service Later template (as other templates have predefined inputs and outputs, and in this case we want to specify those). So I will create a simpleFileLoad process to house our process.

    You will start with an empty canvas, so you need to first specify the load part of the process using the File Adapter. Select the File Adapter from the Component Palette under BPEL Services and drag and drop it to the left side Partner Links (left is input). You name the Service. In this case I chose LoadFile. Press Next. We will define the interface as part of the wizard, so select Define from operation and schema (specified later). Press Next. We are going to choose Read File to denote that we will read the file, and specify the default Operation Name as Read. Press Next.

    The next step is to tell the Adapter the location of the files, how to process them and what to do with them after they have been processed. I am using hardcoded locations in this example but you can have logical locations as well. Press Next. I am now going to tell the adapter how to recognize the files I want to load. In my case I am using CSV files and, more importantly, I am telling the adapter to run the process for each record in the file it encounters. Press Next. Now, I tell the adapter how often I want to poll for the files. I have taken the defaults. Press Next.

    At this stage I have no explanation of the format of the input. So I am going to invoke the Native Format Wizard, which will guide me through the process of creating the file input format. Clicking the purple cog icon will start the wizard. After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types. For this example, I will use Delimited as I am going to load a CSV file. Press Next.

    The best way for the wizard to work is with a sample. I have a sample file and the wizard will ask how much of the file to use as a template. I will use the defaults. Note: If you are using a language with characters outside US-ASCII, it is at this point that you specify the character set to use. Press Next.

    The sample contains multiple instances of a single record type. The wizard supports complex types as well. We will use the appropriate setting for our file. Press Next. You have to specify the file element and the record element. This will be used by the input wizard to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file delimiter (root element) and User Record as my record root element. Press Next. As the file is CSV the delimiter is ",", so I will also specify that the End Of Line (EOL) indicator indicates the end of a record. Press Next.

    Up until this point you have not given the columns their names. In my case my sample includes the column names in the first record. This is not always the case, but you can specify the names and formats of columns in this dialog (not shown). Press Next. The wizard now generates the schema for the input file. You can specify a name for the schema. I have used userupdate.xsd. We want to verify the schema, so press Test. You can test the schema by specifying an input sample and pressing the green play button. You will see the delimiters you specified earlier for the file and the records. Press Ok to continue. A confirmation screen will be displayed showing you the location of the schema in your project. Press Finish to return to the File Adapter configuration. You will now see the schema and elements prepopulated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish.

    Now you need to receive the input from the LoadFile component, so we need to place a Receive node in the BPEL process by dragging and dropping the Receive component from the Component Palette under BPEL Constructs onto the BPEL process. We link the Receive node with the LoadFile component by dragging the left-most connect node of the Receive node to the LoadFile component. Once the link is established you need to name the Receive node appropriately and, as in the previous post in this series, you need to generate input variables for the BPEL process to hold the input records in.

    You now need to add the product Web Service. The process is the same as described in the previous post in this series. You drop the Web Service BPEL Service onto the right side of the process and fill in the details of the WSDL URL. You also have to add an Invoke node to call the service and generate the input and output variables for the call in the Invoke node.

    Now, to get the inputs from the file to the service, you have to use a Transform (you can use an Assign action, but a Transform action is more flexible). You drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. We name the Transform node Mapper File and associate the source of the mapping with the schema from the Receive node; the output will be the input variable from the Invoke node.

    We now build the transform. We first map the user and email attributes by dragging and dropping the elements from the left to the right. The reason we needed to use the transform is that we will be telling the AS-User service that we want to issue an update action. Remember when we registered the service we actually used Read as the default. If we do not otherwise inform the service to use the Update action it will use the Read action instead (which is not desired). To specify the update action you need to click on the transactionType node on the right and select Set Text to set the action. You need to specify the transactionType of UPD (for update). The mapping is now complete.

    The final BPEL process is ready for deployment. You then deploy the BPEL process to the server and test the service by simply dropping a file, in the same pattern/name as you specified, in the directory you specified in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console. You can now load files into the product. You can repeat this process for each type of file to process. While this was a simple example, it illustrates how loading data can be achieved using SOA Suite in conjunction with our products.

    Read the article

  • java AbstractMethodError

    - by Akhil
    How do I handle this error in Lucene?

        java.lang.AbstractMethodError: org.apache.lucene.store.Directory.listAll()[Ljava/lang/String;
            at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:568)
            at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:69)
            at org.apache.lucene.index.IndexReader.open(IndexReader.java:316)
            at org.apache.lucene.index.IndexReader.open(IndexReader.java:188)

    I am making a Lucene function call, but unfortunately it itself calls an abstract method of some class, as is evident from the error above. What is the workaround for this? Thanks, --Akhil
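
    A common cause of an AbstractMethodError like this is having two different Lucene versions on the classpath: listAll() is a newer addition to Directory, so a Directory class loaded from an older lucene-core jar (or an old custom Directory subclass) will not implement the method the newer calling code expects. The hedged sketch below makes no assumptions about the actual deployment; it simply prints which jar each Lucene class is really loaded from, which usually exposes the duplicate.

        import org.apache.lucene.index.IndexReader;
        import org.apache.lucene.store.Directory;

        import java.security.CodeSource;

        public class LuceneClasspathCheck {
            public static void main(String[] args) {
                // Print the jar each Lucene class is loaded from; two different
                // locations here usually explains the AbstractMethodError.
                for (Class<?> c : new Class<?>[] { Directory.class, IndexReader.class }) {
                    CodeSource src = c.getProtectionDomain().getCodeSource();
                    System.out.println(c.getName() + " -> "
                            + (src != null ? src.getLocation() : "<bootstrap classpath>"));
                }
            }
        }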

    Read the article

  • How to fix "could not find a base address that matches schema http"... in WCF

    - by Craig Shearer
    I'm trying to deploy a WCF service to my server, hosted in IIS. Naturally it works on my machine :) But when I deploy it, I get the following error: This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Googling on this, I find that I have to put a serviceHostingEnvironment element into the web.config file: <serviceHostingEnvironment> <baseAddressPrefixFilters> <add prefix="http://mywebsiteurl"/> </baseAddressPrefixFilters> </serviceHostingEnvironment> But once I have done this, I get the following: Could not find a base address that matches scheme http for the endpoint with binding BasicHttpBinding. Registered base address schemes are [https]. It seems it doesn't know what the base address is, but how do I specify it? Here's the relevant section of my web.config file: <system.serviceModel> <serviceHostingEnvironment> <baseAddressPrefixFilters> <add prefix="http://mywebsiteurl"/> </baseAddressPrefixFilters> </serviceHostingEnvironment> <behaviors> <serviceBehaviors> <behavior name="WcfPortalBehavior"> <serviceMetadata httpGetEnabled="true"/> <serviceDebug includeExceptionDetailInFaults="true"/> </behavior> </serviceBehaviors> </behaviors> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_IWcfPortal" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" receiveTimeout="00:10:00" sendTimeout="00:10:00" openTimeout="00:10:00" closeTimeout="00:10:00"> <readerQuotas maxBytesPerRead="2147483647" maxArrayLength="2147483647" maxStringContentLength="2147483647"/> </binding> </basicHttpBinding> </bindings> <services> <service behaviorConfiguration="WcfPortalBehavior" name="Csla.Server.Hosts.Silverlight.WcfPortal"> <endpoint address="" binding="basicHttpBinding" contract="Csla.Server.Hosts.Silverlight.IWcfPortal" bindingConfiguration="BasicHttpBinding_IWcfPortal"> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> </system.serviceModel> Can anybody shed some light on what's going on and how to fix it? Thanks! Craig

    Read the article

  • How to validate my Alexa code <noscript> tag in Head section

    - by Naveen Valecha
    The doctype and HTML tag of my page is below: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.1//EN" "http://www.w3.org/MarkUp/DTD/xhtml-rdfa-2.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" version="XHTML+RDFa 1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/1999/xhtml http://www.w3.org/MarkUp/SCHEMA/xhtml-rdfa-2.xsd" xmlns:og="http://ogp.me/ns#" xml:lang="en" lang="en" dir="ltr">

    Read the article

  • Microsoft Sync Framework - How to reprovision a table (or entire scope) after schema changes?

    - by Rabbi
    B"H I have already set up syncing with Microsoft Sync Framework, and now I need to add fields to a table. How do I re-provision the databases? The setup is exceedingly simple:

    - Two SQL Express 2008 servers
    - The scope includes the entire database
    - Using Microsoft Sync Framework 2.0
    - Synchronizing by direct access
    - Using the standard new SqlSyncProvider

    Do I make the structural changes at both ends? Or do I only change one server and let Sync Framework somehow propagate the change? Do I need to delete the _tracking tables and/or the stored procedures? How about the triggers? Has anyone been using the Sync Framework? Please help.

    Read the article

  • ‘Empty’ results from MQL Query. Freebase Schema: /film/film/starring & /film/actor/film

    - by user1879631
    First post here, so I hope this is enough detail. I started using freebase-python today to get film information for a program that I’m working on. One thing that I need to grab is a list of actors that starred in a film. I’ve followed some tutorials and guides on the way to do this, and can get a list of films for a director or the director of a film, but when I try to do the same with an actor or a film’s cast, I get ‘null’ results. I have the same problem in both Python and the Freebase MQL Query Editor, and you can see what I've tried below. Links to all of the examples below written in the editor can be found here, as Stack Overflow wouldn't let me post links underneath each example on my first post! Working director query in Python: import freebase fb = freebase.mqlread q = {'type':'/film/film', 'name':'Inception', 'directed_by':[]} fb(q) Working director's filmography query in Python: import freebase fb = freebase.mqlread q = {'type':'/film/director', 'name': 'Christopher Nolan', 'film':[]} fb(q) Based on these tests, I tried to do the same with actors, but with odd results: Not working cast list query in Python: import freebase fb = freebase.mqlread q = {'type':'/film/film', 'name':'Inception', 'starring':[]} fb(q) Not working actor's filmography query in Python: import freebase fb = freebase.mqlread q = {'type':'/film/actor', 'name':'Leonardo DiCaprio', 'film':[]} fb(q) Strangely, I get an accurate number of actors/films back, but no names. Does anyone have any idea what the problem might be? Thanks a lot, I'd appreciate any advice.

    Read the article

  • Strange exception of Httpcore nio in java

    - by bruce dou
    Exception in thread "Thread-0" java.lang.NullPointerException at org.apache.http.impl.nio.reactor.AbstractIOReactor.closeActiveChannels(AbstractIOReactor.java:532) at org.apache.http.impl.nio.reactor.AbstractIOReactor.hardShutdown(AbstractIOReactor.java:564) at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.doShutdown(AbstractMultiworkerIOReactor.java:411) at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:340) at com.***.clawer.Clawer$1.run(Clawer.java:81) at java.lang.Thread.run(Unknown Source) Exception in thread "Thread-1" java.lang.IllegalStateException: I/O reactor has been shut down at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.connect(DefaultConnectingIOReactor.java:190) at com.***.clawer.Run.run(Run.java:29)

    Read the article

  • Is there a rake task for advancing or retreating your schema version by exactly one?

    - by user30997
    Back when migration version numbers were simply incremented as you created migrations, it was easy enough to do: rake migrate VERSION=097 rake migrate VERSION=098 rake migrate VERSION=099 rake migrate VERSION=100 ...but we now have migration numbers that are something like YYYYMMDDtimeofday. Not that this is a bad thing - it keeps the migration version collisions to a minimum - but when I have 50 migrations and want to step through them one-at-a-time, it is a hassle: rake migrate VERSION=20090129215142 rake migrate VERSION=20090129219783 ...etc. I have to have a list of all the migrations open in front of me, typing out the version numbers to advance by one. Is there anything that would have an easier syntax, like: rake migrate VERSION=NEXT or rake migrate VERSION=PREV ?

    Read the article

  • SolrException: Internal Server Error

    - by Priya
    Hi All, I am working on Solr in my application. I am using apache-solr-solrj-1.4.0.jar. When I try to call add(SolrInputDocument doc) of CommonsHttpSolrServer I am getting the following exception:

        org.apache.solr.common.SolrException: Internal Server Error
        Internal Server Error
            at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:424)
            at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:243)
            at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
            at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:64)

    Can anyone please help me resolve this problem? The following are attribute values from my solrconfig.xml: native false true
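
    For what it is worth, the client-side message here is generic: with SolrJ, "Internal Server Error" is just the HTTP status relayed back from the Solr server, and the real stack trace (a missing required field, a schema mismatch, a locked index, and so on) ends up in the server's own log. A minimal SolrJ 1.4 sketch of the add/commit call, with a hypothetical URL and example field names, looks like this:

        import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
        import org.apache.solr.common.SolrInputDocument;

        public class SolrAddSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical core URL - adjust to your own deployment.
                CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

                SolrInputDocument doc = new SolrInputDocument();
                // Example field names only; they must exist in your schema.xml.
                doc.addField("id", "doc-1");
                doc.addField("name", "test document");

                server.add(doc);    // the call that fails with "Internal Server Error"
                server.commit();    // check the Solr server log if either call throws
            }
        }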

    Read the article

  • Is there a GUI that I can use to create XML documents based on my schema?

    - by David Conlisk
    Hi all, I want to create a simple graphical user interface to allow non-technical users to create an XML file without having to manually edit the XML source. Ideally I'd like a drag and drop interface, but failing that, anything really. The contents of the XML file are similar to an encoded flow chart of a binary tree, so maybe something like Visio, with a save as xml option? Here's a quick sample of the XML output that is required: <?xml version="1.0" encoding="utf-8"?> <steps> <step id="1" type="prompt"> <prompt> Welcome. </prompt> <next>1.1</next> </step> <step id="1.1" type="question"> <prompt> Do you have what you need? </prompt> <yes>1.2</yes> <no>1.1.1</no> </step> ... </steps> Are there any existing tools out there that you can recommend for this purpose? Ideally open-source or with a free personal license, but I'm interested in hearing about all options. Thanks, David

    Read the article

  • Jboss AS 7 - Dependency Injection

    - by Nic Willemse
    Im attempting to make use of dependency injection in Jboss AS 7 and im having huge difficulties. I have setup a EAR which contains both a EJB jar and a war. The war contains a richfaces web app. Im attempting to inject an EJB from the ejb jar into a faces managed bean with the code below : public class UserController { @EJB(mappedName="UserService") private UserFacadeService userService; public String getService(){ if(userService == null){ however when i deploy jboss puts the error in the console : rolled back with failure message {"Services with missing/unavailable dependencies" => ["jboss.deployment.subunit.\"GoodByeJohnEAR.ear\".\"GoodByeJohnWeb-1.0-SNAPSHOT.war\".component.\"managed-bean.za.co.gbj.UserController\".START missing [ jboss.naming.context.java.module.GoodByeJohnEAR.\"GoodByeJohnWeb-1.0-SNAPSHOT\".\"env/za.co.gbj.UserController/userService\" ]","jboss.deployment.subunit.\"GoodByeJohnEAR.ear\".\"GoodByeJohnWeb-1.0-SNAPSHOT.war\".jndiDependencyService missing [ jboss.naming.context.java.module.GoodByeJohnEAR.\"GoodByeJohnWeb-1.0-SNAPSHOT\".\"env/za.co.gbj.UserController/userService\" ]","jboss.naming.context.java.module.GoodByeJohnEAR.\"GoodByeJohnWeb-1.0-SNAPSHOT\".\"env/za.co.gbj.UserController/userService\".jboss.deployment.subunit.\"GoodByeJohnEAR.ear\".\"GoodByeJohnWeb-1.0-SNAPSHOT.war\".module.GoodByeJohnEAR.\"GoodByeJohnWeb-1.0-SNAPSHOT\".2 missing [ jboss.naming.context.java.module.GoodByeJohnEAR.\"GoodByeJohnWeb-1.0-SNAPSHOT\".env/UserService ]"]} 09:03:50,576 INFO [org.jboss.as.server.deployment] (MSC service thread 1-8) Starting deployment of "GoodByeJohnEAR.ear" 09:03:50,670 INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) Starting deployment of "GoodByeJohnWeb-1.0-SNAPSHOT.war" 09:03:50,670 INFO [org.jboss.as.server.deployment] (MSC service thread 1-8) Starting deployment of "GoodByeJohnEJB-1.0-SNAPSHOT.jar" 09:03:51,367 WARN [org.jboss.as.server.deployment.service-loader] (MSC service thread 1-2) Encountered invalid class name "com.sun.faces.vendor.Tomcat6InjectionProvider:org.apache.catalina.util.DefaultAnnotationProcessor" for service type "com.sun.faces.spi.injectionprovider" 09:03:51,367 WARN [org.jboss.as.server.deployment.service-loader] (MSC service thread 1-2) Encountered invalid class name "com.sun.faces.vendor.Jetty6InjectionProvider:org.mortbay.jetty.plus.annotation.InjectionCollection" for service type "com.sun.faces.spi.injectionprovider" 09:03:51,375 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-8) JNDI bindings for session bean named UserFacadeBean in deployment unit subdeployment "GoodByeJohnEJB-1.0-SNAPSHOT.jar" of deployment "GoodByeJohnEAR.ear" are as follows: java:global/GoodByeJohnEAR/GoodByeJohnEJB-1.0-SNAPSHOT/UserFacadeBean!za.co.gbj.UserFacadeService java:app/GoodByeJohnEJB-1.0-SNAPSHOT/UserFacadeBean!za.co.gbj.UserFacadeService java:module/UserFacadeBean!za.co.gbj.UserFacadeService java:global/GoodByeJohnEAR/GoodByeJohnEJB-1.0-SNAPSHOT/UserFacadeBean java:app/GoodByeJohnEJB-1.0-SNAPSHOT/UserFacadeBean java:module/UserFacadeBean 09:03:51,406 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-4) JNDI bindings for session bean named UserFacadeBean in deployment unit subdeployment "GoodByeJohnWeb-1.0-SNAPSHOT.war" of deployment "GoodByeJohnEAR.ear" are as follows: java:global/GoodByeJohnEAR/GoodByeJohnWeb-1.0-SNAPSHOT/UserFacadeBean!za.co.gbj.UserFacadeService 
java:app/GoodByeJohnWeb-1.0-SNAPSHOT/UserFacadeBean!za.co.gbj.UserFacadeService java:module/UserFacadeBean!za.co.gbj.UserFacadeService java:global/GoodByeJohnEAR/GoodByeJohnWeb-1.0-SNAPSHOT/UserFacadeBean java:app/GoodByeJohnWeb-1.0-SNAPSHOT/UserFacadeBean java:module/UserFacadeBean 09:03:51,577 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 1) Service status report New missing/unsatisfied dependencies: service jboss.naming.context.java.module.GoodByeJohnEAR."GoodByeJohnWeb-1.0-SNAPSHOT".env/UserService (missing) service jboss.naming.context.java.module.GoodByeJohnEAR."GoodByeJohnWeb-1.0-SNAPSHOT"."env/za.co.gbj.UserController/userService" (missing) Please assist!
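
    Reading the report above, the missing dependency is env/UserService, which appears to come from mappedName="UserService" – a binding that does not exist – while the log itself lists the JNDI names the bean was actually bound under. A minimal sketch of the managed bean (assuming a JSF 2 annotation-registered bean; adjust if the bean is declared in faces-config.xml) would either drop mappedName and inject by type, or use one of the portable names from the log:

        package za.co.gbj;

        import javax.ejb.EJB;
        import javax.faces.bean.ManagedBean;

        @ManagedBean
        public class UserController {

            // Simplest option: let the container resolve the EJB by its business interface.
            @EJB
            private UserFacadeService userService;

            // Alternative: use the portable JNDI name reported in the deployment log.
            // @EJB(lookup = "java:app/GoodByeJohnEJB-1.0-SNAPSHOT/UserFacadeBean!za.co.gbj.UserFacadeService")
            // private UserFacadeService userService;

            public String getService() {
                return userService == null ? "not injected" : "injected";
            }
        }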

    Read the article

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have this following pig script, which works perfectly using grunt shell (stored the results to HDFS without any issues); however, the last job (ORDER BY) failed if I ran the same script using Java EmbeddedPig. If I replace the ORDER BY job by others, such as GROUP or FOREACH GENERATE, the whole script then succeeded in Java EmbeddedPig. So I think it's the ORDER BY which causes the issue. Anyone has any experience with this? Any help would be appreciated! The Pig script: REGISTER pig-udf-0.0.1-SNAPSHOT.jar; user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float); simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score; grouped_user_similarity = GROUP simplified_user_similarity BY user_id; ordered_user_similarity = FOREACH grouped_user_similarity { sorted = ORDER simplified_user_similarity BY sim_score DESC; top = LIMIT sorted 10; GENERATE group, top; }; top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10); all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0); grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id; influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score; ordered_influence_scores = ORDER influence_scores BY influence_score DESC; STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage(); The error log from Pig: 12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job 12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3 12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job 12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission. 12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir. 12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- 
/var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml 12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null 12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100 12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720 12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680 12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004 java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117) at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210) Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248) at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153) at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112) ... 6 more 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete 12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed! 12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics: HadoopVersion PigVersion UserId StartedAt FinishedAt Features 0.20.2-cdh3u3 0.8.1-cdh3u3 cchuang 2012-04-05 10:00:34 2012-04-05 10:01:04 GROUP_BY,ORDER_BY Some jobs have failed! Stop running all dependent jobs Job Stats (time in seconds): JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MaxReduceTime MinReduceTime AvgReduceTime Alias Feature Outputs job_local_0001 0 0 0 0 0 0 0 0 all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity GROUP_BY job_local_0002 0 0 0 0 0 0 0 0 grouped_influence_scores,influence_scores GROUP_BY,COMBINER job_local_0003 0 0 0 0 0 0 0 0 ordered_influence_scores SAMPLER Failed Jobs: JobId Alias Feature Message Outputs job_local_0004 ordered_influence_scores ORDER_BY Message: Job failed! 
Error - NA /tmp/cc-test-results-1, Input(s): Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000" Output(s): Failed to produce result in "/tmp/cc-test-results-1" Counters: Total records written : 0 Total bytes written : 0 Spillable Memory Manager spill count : 0 Total bags proactively spilled: 0 Total records proactively spilled: 0 Job DAG: job_local_0001 -> job_local_0002, job_local_0002 -> job_local_0003, job_local_0003 -> job_local_0004, job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs
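
    One thing that stands out in the log is that the ORDER BY sampler output is written to hdfs://localhost/tmp/..., while the failing job runs under LocalJobRunner and looks for that file under file:/Users/..., which suggests the embedded run is not picking up the cluster configuration the way the grunt shell does. A hedged sketch of driving the same script from Java while passing the cluster settings explicitly (the addresses and script file name below are placeholders) would be:

        import java.util.Properties;

        import org.apache.pig.ExecType;
        import org.apache.pig.PigServer;

        public class RunSimilarityScript {
            public static void main(String[] args) throws Exception {
                // Placeholder addresses - use the same fs.default.name / job tracker
                // settings the grunt shell uses, so the ORDER BY sampler job does not
                // fall back to the local job runner.
                Properties props = new Properties();
                props.setProperty("fs.default.name", "hdfs://localhost:8020");
                props.setProperty("mapred.job.tracker", "localhost:8021");

                PigServer pigServer = new PigServer(ExecType.MAPREDUCE, props);
                pigServer.setBatchOn();
                pigServer.registerScript("similarity.pig");  // hypothetical script file name
                pigServer.executeBatch();
            }
        }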

    Read the article

  • Maven does not resolve a local Grails plug-in

    - by Drew
    My goal is to take a Grails web application and build it into a Web ARchive (WAR file) using Maven, and the key is that it must populate the "plugins" folder without live access to the internet. An "out of the box" Grails webapp will already have the plugins folder populated with JAR files, but the maven build script should take care of populating it, just like it does for any traditional WAR projects (such as WEB-INF/lib/ if it's empty) This is an error when executing mvn grails:run-app with Grails 1.1 using Maven 2.0.10 and org.grails:grails-maven-plugin:1.0. (This "hibernate-1.1" plugin is needed to do GORM.) [INFO] [grails:run-app] Running pre-compiled script Environment set to development Plugin [hibernate-1.1] not installed, resolving.. Reading remote plugin list ... Error reading remote plugin list [svn.codehaus.org], building locally... Unable to list plugins, please check you have a valid internet connection: svn.codehaus.org Reading remote plugin list ... Error reading remote plugin list [plugins.grails.org], building locally... Unable to list plugins, please check you have a valid internet connection: plugins.grails.org Plugin 'hibernate' was not found in repository. If it is not stored in a configured repository you will need to install it manually. Type 'grails list-plugins' to find out what plugins are available. The build machine does not have access to the internet and must use an internal/enterprise repository, so this error is just saying that maven can't find the required artifact anywhere. That dependency is already included with the stock Grails software that's installed locally, so I just need to figure out how to get my POM file to unpackage that ZIP file into my webapp's "plugins" folder. I've tried installing the plugin manually to my local repository and making it an explicit dependency in POM.xml, but it's still not being recognized. Maybe you can't pull down grails plugins like you would a standard maven reference? mvn install:install-file -DgroupId=org.grails -DartifactId=grails-hibernate -Dversion=1.1 -Dpackaging=zip -Dfile=%GRAILS_HOME%/plugins/grails-hibernate-1.1.zip I can manually setup the Grails webapp from the command-line, which creates that local ./plugins folder properly. This is a step in the right direction, so maybe the question is: how can I incorporate this goal into my POM? mvn grails:install-plugin -DpluginUrl=%GRAILS_HOME%/plugins/grails-hibernate-1.1.zip Here is a copy of my POM.xml file, which was generated using an archetype. 
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.samples</groupId> <artifactId>sample-grails</artifactId> <packaging>war</packaging> <name>Sample Grails webapp</name> <properties> <sourceComplianceLevel>1.5</sourceComplianceLevel> </properties> <version>0.0.1-SNAPSHOT</version> <dependencies> <dependency> <groupId>org.grails</groupId> <artifactId>grails-crud</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>org.grails</groupId> <artifactId>grails-gorm</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>opensymphony</groupId> <artifactId>oscache</artifactId> <version>2.4</version> <exclusions> <exclusion> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> </exclusion> <exclusion> <groupId>javax.jms</groupId> <artifactId>jms</artifactId> </exclusion> <exclusion> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>hsqldb</groupId> <artifactId>hsqldb</artifactId> <version>1.8.0.7</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.5.6</version> <scope>runtime</scope> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>jstl</artifactId> <version>1.2</version> </dependency> <!-- <dependency> <groupId>org.grails</groupId> <artifactId>grails-hibernate</artifactId> <version>1.1</version> <type>zip</type> </dependency> --> </dependencies> <build> <pluginManagement /> <plugins> <plugin> <groupId>org.grails</groupId> <artifactId>grails-maven-plugin</artifactId> <version>1.0</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>init</goal> <goal>maven-clean</goal> <goal>validate</goal> <goal>config-directories</goal> <goal>maven-compile</goal> <goal>maven-test</goal> <goal>maven-war</goal> <goal>maven-functional-test</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>${sourceComplianceLevel}</source> <target>${sourceComplianceLevel}</target> </configuration> </plugin> </plugins> </build> </project>

    Read the article

  • Hierarchy flattening of interfaces in WCF

    - by nmarun
    Alright, so say I have my service contract interface as below: 1: [ServiceContract] 2: public interface ILearnWcfService 3: { 4: [OperationContract(Name = "AddInt")] 5: int Add(int arg1, int arg2); 6: } Say I decided to add another interface with a similar add “feature”. 1: [ServiceContract] 2: public interface ILearnWcfServiceExtend : ILearnWcfService 3: { 4: [OperationContract(Name = "AddDouble")] 5: double Add(double arg1, double arg2); 6: } My class implementing the ILearnWcfServiceExtend ends up as: 1: public class LearnWcfService : ILearnWcfServiceExtend 2: { 3: public int Add(int arg1, int arg2) 4: { 5: return arg1 + arg2; 6: } 7:  8: public double Add(double arg1, double arg2) 9: { 10: return arg1 + arg2; 11: } 12: } Now when I consume this service and look at the proxy that gets generated, here’s what I see: 1: public interface ILearnWcfServiceExtend 2: { 3: [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfService/AddInt", ReplyAction="http://tempuri.org/ILearnWcfService/AddIntResponse")] 4: int AddInt(int arg1, int arg2); 5: 6: [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/AddDouble", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/AddDoubleResponse")] 7: double AddDouble(double arg1, double arg2); 8: } Only the ILearnWcfServiceExtend gets ‘listed’ in the proxy class and not the (base interface) ILearnWcfService interface. But then to uniquely identify the operations that the service exposes, the Action and ReplyAction properties are set. So in the above example, the AddInt operation has the Action property set to ‘http://tempuri.org/ILearnWcfService/AddInt’ and the AddDouble operation has the Action property of ‘http://tempuri.org/ILearnWcfServiceExtend/AddDouble’. Similarly the ReplyAction properties are set corresponding to the namespace that they’re declared in. The ‘http://tempuri.org’ is chosen as the default namespace, since the Namespace property on the ServiceContract is not defined. The other thing is the service contract itself – the Add() method. You’ll see that in both interfaces, the method names are the same. As you might know, this is not allowed in WSDL-based environments, even though the arguments are of different types. This is allowed only if the Name attribute of the ServiceContract is set (as done above). This causes a change in the name of the service contract itself in the proxy class. See that their names are changed to AddInt / AddDouble respectively. Lesson learned: The interface hierarchy gets ‘flattened’ when the WCF service proxy class gets generated.

    Read the article

  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module to add some extra debug output. The kernel module in question is mvsas and not necessary to boot. For that reason I don't see any need to update any initrd images. I have read a lot of information (as shown below) and find the setup and build process confusion. I need two recipes: to setup/configure the build environment once steps to do after editing any source file of this kernel module (.c and .h) and converting that edit into a new kernel module (.ko) The sources that have been used are: build one kernel module - Google search http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/ http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module http://www.pixelbeat.org/docs/rebuild_kernel_module.html How do I build a single in-tree kernel module? http://ubuntuforums.org/showthread.php?t=1153067 http://ubuntuforums.org/showthread.php?t=2112166 http://ubuntuforums.org/showthread.php?t=1115593 build one kernel module ubuntu - Google search 'make +single +kernel +module' - Ask Ubuntu 'make +kernel +module' - Ask Ubuntu My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed '"Invalid module format"' - Ask Ubuntu Driver installation: compiling source code for newer kernel Modprobe: 'Invalid nodule format', yet works after insmod "Symbol version dump" "is missing" - Google search http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server? "no symbol version for module_layout" when trying to load usbhid.ko Broken links inside Linux header file folder 'make modules_install' - Ask Ubuntu 'modules_install' - Ask Ubuntu Empty build directory in custom compiled kernel Not able to see pr_info output In which directory are the kernel source files and how can I recompile it? How can I compile and install that patched libata-eh.c file? 'modules_install +depmod' - Ask Ubuntu modules_install depmod - Google search "make modules_install" - Google search http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process https://wiki.ubuntu.com/KernelCustomBuild http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html "make prepare" - Google search "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search http://ubuntuforums.org/showthread.php?t=1963515 ubuntu "make prepare" version - Google search http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source https://help.ubuntu.com/community/Kernel/Compile How do I compile a kernel module? How to add a custom driver to my kernel? Compile and loading kernel module without compiling the kernel

    Read the article

  • How can I dump my MS SQL Server Database Schema to a human readable & printable format?

    - by Kyle West
    I want to generate something like the following:

        LineItems
            Id
            ItemId
            OrderId
            Price
        Orders
            Id
            CustomerId
            DateCreated
        Customers
            Id
            FirstName
            LastName
            Email

    I don't need all the relationships, the diagram that will never print correctly, the metadata, anything. Just a list of the tables and their columns in a simple text format. Has anyone done this before? Is there a simple solution? Thanks, Kyle
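
    If a small utility program is acceptable, one way to get exactly this kind of plain-text listing is to walk the JDBC metadata. A hedged sketch follows; the connection string, schema name ("dbo") and credentials are placeholders, and it assumes the Microsoft SQL Server JDBC driver is on the classpath.

        import java.sql.Connection;
        import java.sql.DatabaseMetaData;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        public class SchemaDump {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection string - adjust server, database and credentials.
                Connection conn = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=MyShop;user=sa;password=secret");

                DatabaseMetaData meta = conn.getMetaData();
                try (ResultSet tables = meta.getTables(null, "dbo", "%", new String[] { "TABLE" })) {
                    while (tables.next()) {
                        String table = tables.getString("TABLE_NAME");
                        System.out.println(table);
                        try (ResultSet cols = meta.getColumns(null, "dbo", table, "%")) {
                            while (cols.next()) {
                                // Indent each column name under its table.
                                System.out.println("    " + cols.getString("COLUMN_NAME"));
                            }
                        }
                    }
                }
                conn.close();
            }
        }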

    Read the article

  • How can I dump my SQL Server Database Schema to a human readable & printable format?

    - by Kyle West
    I want to generate something like the following:

        LineItems
            Id
            ItemId
            OrderId
            Price
        Orders
            Id
            CustomerId
            DateCreated
        Customers
            Id
            FirstName
            LastName
            Email

    I don't need all the relationships, the diagram that will never print correctly, the metadata, anything. Just a list of the tables and their columns in a simple text format. Has anyone done this before? Is there a simple solution? Thanks, Kyle

    Read the article

  • Microsoft Sync Framework - Local DB and Remote DB have to have the same schema?

    - by Josh
    When using MSF, is it implied in the technology that the sync tables are supposed to be 1-1? The reason I'm wondering is that if I'm synching from a SQL2005 database to a SQLCE, I might want the CE one to be a little more flattened out so I can get data out with a simpler SELECT statement (as CE does not support sprocs). For example, I might have a tblCustomer, tblOrder, and tblCustomerOrder in the central database, but in the local databases one table with all the data might be preferred. Of course I'd still want the updates to reflect back and forth between the two databases. Does MSF make this possible, or does the local DB have to have the same tables as the central?

    Read the article

  • Adding custom filter in spring framework problem?

    - by user298768
    Hello, I am trying to make a custom AuthenticationProcessingFilter to save some user data in the session after a successful login. Here's my filter:
    Code:
    package projects.internal;

    import java.io.IOException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.springframework.security.Authentication;
    import org.springframework.security.ui.webapp.AuthenticationProcessingFilter;

    public class MyAuthenticationProcessingFilter extends AuthenticationProcessingFilter {

        protected void onSuccessfulAuthentication(HttpServletRequest request,
                HttpServletResponse response, Authentication authResult) throws IOException {
            super.onSuccessfulAuthentication(request, response, authResult);
            request.getSession().setAttribute("myValue", "My value is set");
        }
    }
    And here's my security.xml file:
    Code:
    <beans:beans xmlns="http://www.springframework.org/schema/security"
        xmlns:beans="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
            http://www.springframework.org/schema/security
            http://www.springframework.org/schema/security/spring-security-3.0.xsd">

        <global-method-security pre-post-annotations="enabled">
        </global-method-security>

        <http use-expressions="true" auto-config="false"
            entry-point-ref="authenticationProcessingFilterEntryPoint">
            <intercept-url pattern="/" access="permitAll" />
            <intercept-url pattern="/images/**" filters="none" />
            <intercept-url pattern="/scripts/**" filters="none" />
            <intercept-url pattern="/styles/**" filters="none" />
            <intercept-url pattern="/p/login.jsp" filters="none" />
            <intercept-url pattern="/p/register" filters="none" />
            <intercept-url pattern="/p/**" access="isAuthenticated()" />
            <form-login login-processing-url="/j_spring_security_check"
                login-page="/p/login.jsp"
                authentication-failure-url="/p/login_error.jsp" />
            <logout />
        </http>

        <authentication-manager alias="authenticationManager">
            <authentication-provider>
                <jdbc-user-service data-source-ref="dataSource" />
            </authentication-provider>
        </authentication-manager>

        <beans:bean id="authenticationProcessingFilter"
            class="projects.internal.MyAuthenticationProcessingFilter">
            <custom-filter position="AUTHENTICATION_PROCESSING_FILTER" />
        </beans:bean>

        <beans:bean id="authenticationProcessingFilterEntryPoint"
            class="org.springframework.security.ui.webapp.AuthenticationProcessingFilterEntryPoint">
        </beans:bean>
    </beans:beans>
    It gives an error on this line:
    Code:
    <custom-filter position="AUTHENTICATION_PROCESSING_FILTER" />
    Multiple annotations found at this line: cvc-attribute.3, cvc-complex-type.4, cvc-enumeration-valid. What is the problem? Thanks in advance.
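
    A pattern often used with the Spring Security 3.0 namespace referenced in the configuration above is to leave the standard filter chain untouched and populate the session from a custom AuthenticationSuccessHandler instead of subclassing the processing filter. The following is only a sketch, not a confirmed fix for the validation error; the class name and session attribute are illustrative:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.springframework.security.core.Authentication;
        import org.springframework.security.web.authentication.SimpleUrlAuthenticationSuccessHandler;

        // Stores a value in the HTTP session after a successful login and then
        // delegates to the default redirect behaviour.
        public class SessionPopulatingSuccessHandler extends SimpleUrlAuthenticationSuccessHandler {
            @Override
            public void onAuthenticationSuccess(HttpServletRequest request,
                    HttpServletResponse response, Authentication authentication)
                    throws IOException, ServletException {
                request.getSession().setAttribute("myValue", "My value is set");
                super.onAuthenticationSuccess(request, response, authentication);
            }
        }

    Such a handler would be declared as a bean and referenced from <form-login authentication-success-handler-ref="..."/> rather than registered through <custom-filter> inside a bean definition.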

    Read the article

  • What to do after a servicing fails on TFS 2010

    - by Martin Hinshelwood
    What do you do if you run a couple of hotfixes against your TFS 2010 server and you start to see some odd behaviour? A customer of mine encountered that very problem, but they could not just, or at least not easily, go back a version.

    You see, around the time of the TFS 2010 launch this company decided to upgrade their entire 250+ development team from TFS 2008 to TFS 2010. They encountered a few problems, owing mainly to the size of their TFS deployment and the way they were using TFS. They were not doing anything wrong, but when you have the largest deployment of TFS outside of Microsoft you tend to run into problems that most people will never encounter. We are talking half a terabyte of source control in TFS with over 80 proxy servers. It's certainly the largest deployment I have ever heard of. When they did their upgrade way back in April, they found two major flaws in the product that meant that they had to back out of the upgrade and wait for a couple of hotfixes.

    KB983504 – Hotfix
    KB983578 – Patch
    KB2401992 – Hotfix

    In the time since they got the hotfixes they have run 6 successful trial migrations, but we are not talking minutes or hours here. When you have 400+ GB of data it takes time to copy it around. It takes time to do the upgrade and it takes time to do a backup. Well, last week it was crunch time: with their developers off for Christmas they had a window of opportunity to complete the upgrade. Now these guys are good, but they wanted Northwest Cadence to be available "just in case". They did not expect any problems as they already had 6 successful trial upgrades.

    The problems surfaced around 20 hours in, after the first set of hotfixes had been applied. The new Team Project Collection, the only thing of importance, had disappeared from the Team Foundation Server Administration console. The collection would not reattach either. It would not even list the new collection as attachable!

    Figure: We know there is a database there, but it does not

    This was a dire situation, as 20+ hours to repeat would leave the customer over their time window with 250+ developers sitting around doing nothing. We tried everything, and then we stumbled upon the command of last resort.

    TFSConfig Recover /ConfigurationDB:SQLServer\InstanceName;TFS_ConfigurationDBName /CollectionDB:SQLServer\instanceName;"Collection Name"
    - http://msdn.microsoft.com/en-us/library/ff407077.aspx

    WARNING: Never run this command! Now this command does something a little nasty. It assumes that there really should not be anything wrong and sets about fixing it. It ignores any servicing levels in the Team Project Collection database and forcibly applies the latest version of the schema. I am sure you can imagine the types of problems this may cause when the schema is updated leaving the data behind. That said, as far as we could see this collection looked good, and we were even able to find and attach the team project collection to the Configuration database.

    Figure: After attaching the TPC it enters a servicing mode

    After reattaching the team project collection we found the message "Re-Attaching". Well, fair enough, that sounds like something that may need to happen, and after checking that there was disk IO we left it to it. 14+ hours later it was still not done, so the customer raised a priority support call with MSFT and an engineer helped them out.

    Figure: Everything looks good, it is just offline.

    Tip: Did you know that these logs are not represented in the ~/Logs/* folder until they are opened once?
    The engineer dug around a bit and listened to our situation. He knew that we had run the dreaded "tfsconfig recover", but was not fazed.

    Figure: This message looks suspiciously like the wrong servicing version

    As it turns out, the servicing version was slightly out of sync with the schema.

    KB          Schema   Successful
    KB983504    341      Yes
    KB983578    344      sort of
    KB2401992   360      nope

    Figure: KB, Schema table with notation to its success

    The Schema version above represents the final end-of-run version for that hotfix or patch.

    The only way forward

    The problem was that the version was somewhere between 341 and 344. This is not a nice place to be in, and the engineer gave us the only way forward: removing the servicing number from the database so that the re-attach process would apply the latest schema. If this sounds a little like the "tfsconfig recover" command then you are exactly right.

    Figure: Sneakily changing that 3 to a 1 should do the trick
    Figure: Changing the status and dropping the version should do it

    Now that we have done that we should be able to safely reattach and enable the Team Project Collection.

    Figure: The TPC is now all attached and running

    You may think that this is the end of the story, but it is not. After a while of mulling and seeking expert advice we came to the opinion that the database was, for want of a better term, "hosed". There could well be orphaned data in there, and the likelihood that we would have problems later down the line is pretty high. We contacted the customer back and made them aware that in all likelihood the repaired database was more like a "cut and shut" than anything else, and at the first sign of trouble later down the line it was likely to split in two. So with 40+ hours invested in getting this new database ready, the customer threw it away and started again. What would you do? Would you take the "cut and shut" to production and hope for the best?

    Read the article

  • maven .Net build plugin clean, compile problem

    - by senzacionale
    I am using http://maven-dotnet-plugin.appspot.com/ but I get the following when I run the clean goal:
    [INFO] Internal error in the plugin manager executing goal 'org.codehaus.sonar-plugins.dotnet:maven-dotnet-plugin:0.1:clean': Unable to load the mojo 'org.codehaus.sonar-plugins.dotnet:maven-dotnet-plugin:0.1:clean' in the plugin 'org.codehaus.sonar-plugins.dotnet:maven-dotnet-plugin'. A required class is missing: org/codehaus/plexus/util/cli/CommandLineException
    org.codehaus.plexus.util.cli.CommandLineException
    [INFO]

    Read the article

  • When doing a Schema Export with hbm2ddl, is there a way to specify that you DO NOT want Nullable Foreign Keys?

    - by Jon Erickson
    The DDL that is being created is putting all of my many-to-many associations into one table, but I actually want each many-to-many association in its own table (for other reasons). Right now hbm2ddl is creating this table (only Table1Key OR Table2Key OR Table3Key should be filled out for any given record, causing this table to have nullable foreign keys):

    +-----------+
    |   xRef    |
    +-----------+
    | Table1Key |
    | Table2Key |
    | Table3Key |
    | RiskKey   |
    +-----------+

    I want hbm2ddl to create the following 3 tables so that there are no nullable foreign keys.

    +-----------+  +-----------+  +-----------+
    |   xRef1   |  |   xRef2   |  |   xRef3   |
    +-----------+  +-----------+  +-----------+
    | Table1Key |  | Table2Key |  | Table3Key |
    | RiskKey   |  | RiskKey   |  | RiskKey   |
    +-----------+  +-----------+  +-----------+
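
    If the mappings are expressed as Hibernate/JPA annotations (a sketch under that assumption; the entity and property names below are illustrative, and hbm.xml mappings offer the same control through the table attribute on the collection element), giving each many-to-many association its own explicitly named join table produces separate xRef tables with no nullable foreign keys:

        import java.util.Set;
        import javax.persistence.*;

        @Entity
        class Table1 { @Id Long table1Key; }

        @Entity
        class Table2 { @Id Long table2Key; }

        @Entity
        public class Risk {
            @Id
            Long riskKey;

            // Each association maps to its own join table, so the schema export
            // emits xRef1 and xRef2 instead of one shared xRef table.
            @ManyToMany
            @JoinTable(name = "xRef1",
                       joinColumns = @JoinColumn(name = "RiskKey"),
                       inverseJoinColumns = @JoinColumn(name = "Table1Key"))
            Set<Table1> table1s;

            @ManyToMany
            @JoinTable(name = "xRef2",
                       joinColumns = @JoinColumn(name = "RiskKey"),
                       inverseJoinColumns = @JoinColumn(name = "Table2Key"))
            Set<Table2> table2s;
        }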

    Read the article

  • NullPointerException using datanucleus-json with S3

    - by Matt
    I'm using DataNucleus 3.2.7 from Maven, trying to use the Amazon S3 JPA provider. I can successfully write data into S3, but querying either with "SELECT u FROM User u" or "SELECT u FROM User u WHERE id = :id" causes a NullPointerException to be thrown. Using the RDBMS provider, everything works perfectly. Is there something I'm doing wrong?

    Main.java

    EntityManagerFactory factory = Persistence.createEntityManagerFactory("MyUnit");
    EntityManager entityManager = factory.createEntityManager();
    Query query = entityManager.createQuery("SELECT u FROM User u", User.class);
    List<User> users = query.getResultList(); // Null pointer exception here
    for (User u : users)
        System.out.println(u);

    User.java

    package test;

    import javax.persistence.*;

    @Entity
    @Table(name = "User")
    public class User {
        @Id
        public String id;
        public String name;

        public User(String id, String name) {
            this.id = id;
            this.name = name;
        }

        public String toString() {
            return id + " : " + name;
        }
    }

    persistence.xml

    <?xml version="1.0" encoding="UTF-8" ?>
    <persistence xmlns="http://java.sun.com/xml/ns/persistence"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
        version="1.0">
        <persistence-unit name="MyUnit">
            <class>test.User</class>
            <exclude-unlisted-classes />
            <properties>
                <property name="datanucleus.ConnectionURL" value="amazons3:http://s3.amazonaws.com/" />
                <property name="datanucleus.ConnectionUserName" value="xxxxx" />
                <property name="datanucleus.ConnectionPassword" value="xxxxx" />
                <property name="datanucleus.cloud.storage.bucket" value="my-bucket" />
                <property name="datanucleus.autoCreateSchema" value="true" />
            </properties>
        </persistence-unit>
    </persistence>

    Exception

    java.lang.NullPointerException
        at org.datanucleus.NucleusContext.isClassWithIdentityCacheable(NucleusContext.java:1840)
        at org.datanucleus.ExecutionContextImpl.getObjectFromLevel2Cache(ExecutionContextImpl.java:5287)
        at org.datanucleus.ExecutionContextImpl.getObjectFromCache(ExecutionContextImpl.java:5191)
        at org.datanucleus.ExecutionContextImpl.findObject(ExecutionContextImpl.java:3137)
        at org.datanucleus.store.json.CloudStoragePersistenceHandler.getObjectsOfCandidateType(CloudStoragePersistenceHandler.java:367)
        at org.datanucleus.store.json.query.JPQLQuery.performExecute(JPQLQuery.java:94)
        at org.datanucleus.store.query.Query.executeQuery(Query.java:1786)
        at org.datanucleus.store.query.Query.executeWithMap(Query.java:1690)
        at org.datanucleus.api.jpa.JPAQuery.getResultList(JPAQuery.java:194)
        at test.Main.main(Main.java:16)

    Read the article
