Search Results

Search found 60037 results on 2402 pages for 'custom field type'.


  • Coherence - How to develop a custom push replication publisher

    - by cosmin.tudor(at)oracle.com
    CoherencePushReplicationDB.zip

    In the example below I describe a way of developing a custom push replication publisher that publishes data to a database via JDBC. This example can easily be changed to publish data to other receivers (JMS, ...) by changing step 2 and making small changes to step 3; both steps are presented below. I used Eclipse as the development tool.

    To develop a custom push replication publisher we need to go through six steps:

    Step 1: Create a custom publisher scheme class.
    Step 2: Create a custom publisher class that defines what the publisher does.
    Step 3: Create a class that performs the actions (publish to JMS, DB, etc.) for the custom publisher.
    Step 4: Register the new publisher against a ContentHandler.
    Step 5: Add the new custom publisher to the cache configuration file.
    Step 6: Add the custom publisher scheme class to the POF configuration file.

    All these steps are detailed below. The Coherence project is attached and conclusions are presented at the end.

    Step 1: In the Coherence Eclipse project create a class called CustomPublisherScheme that extends com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme. In this class define the elements of the custom-publisher-scheme element. For instance, for a CustomPublisherScheme that looks like this:

```xml
<sync:publisher>
   <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name>
   <sync:publisher-scheme>
      <sync:custom-publisher-scheme>
         <sync:jdbc-string>jdbc:oracle:thin:@machine-name:1521:XE</sync:jdbc-string>
         <sync:username>hr</sync:username>
         <sync:password>hr</sync:password>
      </sync:custom-publisher-scheme>
   </sync:publisher-scheme>
</sync:publisher>
```

    the code is:

```java
package com.oracle.coherence;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import com.oracle.coherence.patterns.pushreplication.Publisher;
import com.oracle.coherence.configuration.Configurable;
import com.oracle.coherence.configuration.Mandatory;
import com.oracle.coherence.configuration.Property;
import com.oracle.coherence.configuration.parameters.ParameterScope;
import com.oracle.coherence.environment.Environment;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.util.ExternalizableHelper;

@Configurable
public class CustomPublisherScheme extends
        com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme {

    private static final long serialVersionUID = 1L;

    private String jdbcString;
    private String username;
    private String password;

    public String getJdbcString() {
        return this.jdbcString;
    }

    @Property("jdbc-string")
    @Mandatory
    public void setJdbcString(String jdbcString) {
        this.jdbcString = jdbcString;
    }

    public String getUsername() {
        return username;
    }

    @Property("username")
    @Mandatory
    public void setUsername(String username) {
        this.username = username;
    }

    public String getPassword() {
        return password;
    }

    @Property("password")
    @Mandatory
    public void setPassword(String password) {
        this.password = password;
    }

    public Publisher realize(Environment environment, ClassLoader classLoader,
            ParameterScope parameterScope) {
        return new CustomPublisher(getJdbcString(), getUsername(), getPassword());
    }

    public void readExternal(DataInput in) throws IOException {
        super.readExternal(in);
        this.jdbcString = ExternalizableHelper.readSafeUTF(in);
        this.username = ExternalizableHelper.readSafeUTF(in);
        this.password = ExternalizableHelper.readSafeUTF(in);
    }

    public void writeExternal(DataOutput out) throws IOException {
        super.writeExternal(out);
        ExternalizableHelper.writeSafeUTF(out, this.jdbcString);
        ExternalizableHelper.writeSafeUTF(out, this.username);
        ExternalizableHelper.writeSafeUTF(out, this.password);
    }

    public void readExternal(PofReader reader) throws IOException {
        super.readExternal(reader);
        this.jdbcString = reader.readString(100);
        this.username = reader.readString(101);
        this.password = reader.readString(102);
    }

    public void writeExternal(PofWriter writer) throws IOException {
        super.writeExternal(writer);
        writer.writeString(100, this.jdbcString);
        writer.writeString(101, this.username);
        writer.writeString(102, this.password);
    }
}
```

    Step 2: Define what the CustomPublisher should do by creating a new Java class called CustomPublisher that implements com.oracle.coherence.patterns.pushreplication.Publisher:

```java
package com.oracle.coherence;

import com.oracle.coherence.patterns.pushreplication.EntryOperation;
import com.oracle.coherence.patterns.pushreplication.Publisher;
import com.oracle.coherence.patterns.pushreplication.exceptions.PublisherNotReadyException;

import java.io.BufferedWriter;
import java.util.Iterator;

public class CustomPublisher implements Publisher {

    private String jdbcString;
    private String username;
    private String password;

    private transient BufferedWriter bufferedWriter;

    public CustomPublisher() {
    }

    public CustomPublisher(String jdbcString, String username, String password) {
        this.jdbcString = jdbcString;
        this.username = username;
        this.password = password;
        this.bufferedWriter = null;
    }

    public String getJdbcString() {
        return this.jdbcString;
    }

    public String getUsername() {
        return username;
    }

    public String getPassword() {
        return password;
    }

    public void publishBatch(String cacheName, String publisherName,
            Iterator<EntryOperation> entryOperations) {
        DatabasePersistence databasePersistence = new DatabasePersistence(
                jdbcString, username, password);
        while (entryOperations.hasNext()) {
            EntryOperation entryOperation = (EntryOperation) entryOperations.next();
            databasePersistence.databasePersist(entryOperation);
        }
    }

    public void start(String cacheName, String publisherName)
            throws PublisherNotReadyException {
        System.err.printf(
                "Started: Custom JDBC Publisher for Cache %s with Publisher %s\n",
                new Object[] { cacheName, publisherName });
    }

    public void stop(String cacheName, String publisherName) {
        System.err.printf(
                "Stopped: Custom JDBC Publisher for Cache %s with Publisher %s\n",
                new Object[] { cacheName, publisherName });
    }
}
```

    In the publishBatch method above we tell the publisher that it should persist data to a database:

```java
DatabasePersistence databasePersistence = new DatabasePersistence(
        jdbcString, username, password);
while (entryOperations.hasNext()) {
    EntryOperation entryOperation = (EntryOperation) entryOperations.next();
    databasePersistence.databasePersist(entryOperation);
}
```

    Step 3: The class that deals with the persistence is a very basic one that uses JDBC to perform inserts/updates against a database.

```java
package com.oracle.coherence;

import com.oracle.coherence.patterns.pushreplication.EntryOperation;

import java.sql.*;
import java.text.SimpleDateFormat;

import com.oracle.coherence.Order;

public class DatabasePersistence {

    public static String INSERT_OPERATION = "INSERT";
    public static String UPDATE_OPERATION = "UPDATE";

    public Connection dbConnection;

    public DatabasePersistence(String jdbcString, String username, String password) {
        this.dbConnection = createConnection(jdbcString, username, password);
    }

    public Connection createConnection(String jdbcString, String username,
            String password) {
        Connection connection = null;
        System.err.println("Connecting to: " + jdbcString + " Username: " + username
                + " Password: " + password);
        try {
            // Load the JDBC driver
            String driverName = "oracle.jdbc.driver.OracleDriver";
            Class.forName(driverName);
            // Create a connection to the database
            connection = DriverManager.getConnection(jdbcString, username, password);
            System.err.println("Connected to:" + jdbcString + " Username: " + username
                    + " Password: " + password);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return connection;
    }

    public void databasePersist(EntryOperation entryOperation) {
        if (entryOperation.getOperation().toString()
                .equalsIgnoreCase(INSERT_OPERATION)) {
            insert(((Order) entryOperation.getPublishableEntry().getValue()));
        } else if (entryOperation.getOperation().toString()
                .equalsIgnoreCase(UPDATE_OPERATION)) {
            update(((Order) entryOperation.getPublishableEntry().getValue()));
        }
    }

    public void update(Order order) {
        String update = "UPDATE Orders set QUANTITY= '" + order.getQuantity()
                + "', AMOUNT='" + order.getAmount() + "', ORD_DATE= '"
                + (new SimpleDateFormat("dd-MMM-yyyy")).format(order.getOrdDate())
                + "' WHERE SYMBOL='" + order.getSymbol() + "'";
        System.err.println("UPDATE = " + update);
        try {
            Statement stmt = getDbConnection().createStatement();
            stmt.execute(update);
            stmt.close();
        } catch (SQLException ex) {
            System.err.println("SQLException: " + ex.getMessage());
        }
    }

    public void insert(Order order) {
        String insert = "insert into Orders values('" + order.getSymbol() + "',"
                + order.getQuantity() + "," + order.getAmount() + ",'"
                + (new SimpleDateFormat("dd-MMM-yyyy")).format(order.getOrdDate())
                + "')";
        System.err.println("INSERT = " + insert);
        try {
            Statement stmt = getDbConnection().createStatement();
            stmt.execute(insert);
            stmt.close();
        } catch (SQLException ex) {
            System.err.println("SQLException: " + ex.getMessage());
        }
    }

    public Connection getDbConnection() {
        return dbConnection;
    }

    public void setDbConnection(Connection dbConnection) {
        this.dbConnection = dbConnection;
    }
}
```

    Step 4: Now we need to register our publisher against a ContentHandler. To achieve that, create a new class in the Eclipse project called CustomPushReplicationNamespaceContentHandler that extends com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler. In the constructor of the new class we define a new handler for our custom publisher.

```java
package com.oracle.coherence;

import com.oracle.coherence.configuration.Configurator;
import com.oracle.coherence.environment.extensible.ConfigurationContext;
import com.oracle.coherence.environment.extensible.ConfigurationException;
import com.oracle.coherence.environment.extensible.ElementContentHandler;
import com.oracle.coherence.patterns.pushreplication.PublisherScheme;
import com.oracle.coherence.environment.extensible.QualifiedName;
import com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler;
import com.tangosol.run.xml.XmlElement;

public class CustomPushReplicationNamespaceContentHandler extends
        PushReplicationNamespaceContentHandler {

    public CustomPushReplicationNamespaceContentHandler() {
        super();
        registerContentHandler("custom-publisher-scheme", new ElementContentHandler() {
            public Object onElement(ConfigurationContext context,
                    QualifiedName qualifiedName, XmlElement xmlElement)
                    throws ConfigurationException {
                PublisherScheme publisherScheme = new CustomPublisherScheme();
                Configurator.configure(publisherScheme, context, qualifiedName,
                        xmlElement);
                return publisherScheme;
            }
        });
    }
}
```

    Step 5: Now we define our CustomPublisher in the cache configuration file according to the following documentation.

```xml
<cache-config
   xmlns:sync="class:com.oracle.coherence.CustomPushReplicationNamespaceContentHandler"
   xmlns:cr="class:com.oracle.coherence.environment.extensible.namespaces.InstanceNamespaceContentHandler">
   <caching-schemes>
      <sync:provider pof-enabled="false">
         <sync:coherence-provider />
      </sync:provider>
      <caching-scheme-mapping>
         <cache-mapping>
            <cache-name>publishing-cache</cache-name>
            <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
            <autostart>true</autostart>
            <sync:publisher>
               <sync:publisher-name>Active2 Publisher</sync:publisher-name>
               <sync:publisher-scheme>
                  <sync:remote-cluster-publisher-scheme>
                     <sync:remote-invocation-service-name>remote-site1</sync:remote-invocation-service-name>
                     <sync:remote-publisher-scheme>
                        <sync:local-cache-publisher-scheme>
                           <sync:target-cache-name>publishing-cache</sync:target-cache-name>
                        </sync:local-cache-publisher-scheme>
                     </sync:remote-publisher-scheme>
                     <sync:autostart>true</sync:autostart>
                  </sync:remote-cluster-publisher-scheme>
               </sync:publisher-scheme>
            </sync:publisher>
            <sync:publisher>
               <sync:publisher-name>Active2-Output-Publisher</sync:publisher-name>
               <sync:publisher-scheme>
                  <sync:stderr-publisher-scheme>
                     <sync:autostart>true</sync:autostart>
                     <sync:publish-original-value>true</sync:publish-original-value>
                  </sync:stderr-publisher-scheme>
               </sync:publisher-scheme>
            </sync:publisher>
            <sync:publisher>
               <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name>
               <sync:publisher-scheme>
                  <sync:custom-publisher-scheme>
                     <sync:jdbc-string>jdbc:oracle:thin:@machine_name:1521:XE</sync:jdbc-string>
                     <sync:username>hr</sync:username>
                     <sync:password>hr</sync:password>
                  </sync:custom-publisher-scheme>
               </sync:publisher-scheme>
            </sync:publisher>
         </cache-mapping>
      </caching-scheme-mapping>
      <!-- The following scheme is required for each remote-site when using a RemoteInvocationPublisher -->
      <remote-invocation-scheme>
         <service-name>remote-site1</service-name>
         <initiator-config>
            <tcp-initiator>
               <remote-addresses>
                  <socket-address>
                     <address>localhost</address>
                     <port>20001</port>
                  </socket-address>
               </remote-addresses>
               <connect-timeout>2s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
               <request-timeout>5s</request-timeout>
            </outgoing-message-handler>
         </initiator-config>
      </remote-invocation-scheme>
      <!-- END: com.oracle.coherence.patterns.pushreplication -->
      <proxy-scheme>
         <service-name>ExtendTcpProxyService</service-name>
         <acceptor-config>
            <tcp-acceptor>
               <local-address>
                  <address>localhost</address>
                  <port>20002</port>
               </local-address>
            </tcp-acceptor>
         </acceptor-config>
         <autostart>true</autostart>
      </proxy-scheme>
   </caching-schemes>
</cache-config>
```

    As you can see in the configuration above, I have set the new namespace content handler and defined the new custom publisher so that it works together with the other publishers (the stderr and remote publishers, in our case).

    Step 6: Add the com.oracle.coherence.CustomPublisherScheme to your custom-pof-config file:

```xml
<pof-config>
   <user-type-list>
      <!-- Built in types -->
      <include>coherence-pof-config.xml</include>
      <include>coherence-common-pof-config.xml</include>
      <include>coherence-messagingpattern-pof-config.xml</include>
      <include>coherence-pushreplicationpattern-pof-config.xml</include>
      <!-- Application types -->
      <user-type>
         <type-id>1901</type-id>
         <class-name>com.oracle.coherence.Order</class-name>
         <serializer>
            <class-name>com.oracle.coherence.OrderSerializer</class-name>
         </serializer>
      </user-type>
      <user-type>
         <type-id>1902</type-id>
         <class-name>com.oracle.coherence.CustomPublisherScheme</class-name>
      </user-type>
   </user-type-list>
</pof-config>
```

    CONCLUSIONS

    This approach allows publishers to publish data to almost any receiver (database, JMS, MQ, ...). The only thing that needs to be changed is the DatabasePersistence.java class, which should be adapted to the chosen receiver. Only minor changes are needed in the rest of the code (in the publishBatch method of the CustomPublisher class).
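
    One note on Step 3: the DatabasePersistence class above builds its SQL by concatenating values into the statement text, which is fine for a demo but breaks on quotes in the data and is open to SQL injection. Below is a minimal sketch of the same step using a PreparedStatement; it is not part of the attached project, and it assumes Order exposes numeric quantity/amount values and a java.util.Date order date, so adjust the setters to the real types.

```java
// Hypothetical variation of update()/insert() from Step 3, using bound parameters.
public void persist(Order order) throws SQLException {
    String update = "UPDATE Orders SET QUANTITY = ?, AMOUNT = ?, ORD_DATE = ? WHERE SYMBOL = ?";
    String insert = "INSERT INTO Orders (SYMBOL, QUANTITY, AMOUNT, ORD_DATE) VALUES (?, ?, ?, ?)";

    try (PreparedStatement upd = getDbConnection().prepareStatement(update)) {
        upd.setInt(1, order.getQuantity());     // assumed numeric
        upd.setDouble(2, order.getAmount());    // assumed numeric
        upd.setDate(3, new java.sql.Date(order.getOrdDate().getTime()));
        upd.setString(4, order.getSymbol());

        if (upd.executeUpdate() == 0) {         // no existing row: insert instead
            try (PreparedStatement ins = getDbConnection().prepareStatement(insert)) {
                ins.setString(1, order.getSymbol());
                ins.setInt(2, order.getQuantity());
                ins.setDouble(3, order.getAmount());
                ins.setDate(4, new java.sql.Date(order.getOrdDate().getTime()));
                ins.executeUpdate();
            }
        }
    }
}
```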

    Read the article

  • Developer’s Life – Disaster Lessons – Notes from the Field #039

    - by Pinal Dave
    [Note from Pinal]: This is a 39th episode of Notes from the Field series. What is the best solution do you have when you encounter a disaster in your organization. Now many of you would answer that in this scenario you would have another standby machine or alternative which you will plug in. Now let me ask second question – What would you do if you as an individual faces disaster?  In this episode of the Notes from the Field series database expert Mike Walsh explains a very crucial issue we face in our career, which is not technical but more to relate to human nature. Read on this may be the best blog post you might read in recent times. Howdy! When it was my turn to share the Notes from the Field last time, I took a departure from my normal technical content to talk about Attitude and Communication.(http://blog.sqlauthority.com/2014/05/08/developers-life-attitude-and-communication-they-can-cause-problems-notes-from-the-field-027/) Pinal said it was a popular topic so I hope he won’t mind if I stick with Professional Development for another of my turns at sharing some information here. Like I said last time, the “soft skills” of the IT world are often just as important – sometimes more important – than the technical skills. As a consultant with Linchpin People – I see so many situations where the professional skills I’ve gained and use are more valuable to clients than knowing the best way to tune a query. Today I want to continue talking about professional development and tell you about the way I almost got myself hit by a train – and why that matters in our day jobs. Sometimes we can learn a lot from disasters. Whether we caused them or someone else did. If you are interested in learning about some of my observations in these lessons you can see more where I talk about lessons from disasters on my blog. For now, though, onto how I almost got my vehicle hit by a train… The Train Crash That Almost Was…. My family and I own a little schoolhouse building about a 10 mile drive away from our house. We use it as a free resource for families in the area that homeschool their children – so they can have some class space. I go up there a lot to check in on the property, to take care of the trash and to do work on the property. On the way there, there is a very small Stop Sign controlled railroad intersection. There is only two small freight trains a day passing there. Actually the same train, making a journey south and then back North. That’s it. This road is a small rural road, barely ever a second car driving in the neighborhood there when I am. The stop sign is pretty much there only for the train crossing. When we first bought the building, I was up there a lot doing renovations on the property. Being familiar with the area, I am also familiar with the train schedule and know the tracks are normally free of trains. So I developed a bad habit. You see, I’d approach the stop sign and slow down as I roll through it. Sometimes I’d do a quick look and come to an “almost” stop there but keep on going. I let my impatience and complacency take over. And that is because most of the time I was going there long after the train was done for the day or in between the runs. This habit became pretty well established after a couple years of driving the route. The behavior reinforced a bit by the success ratio. I saw others doing it as well from the neighborhood when I would happen to be there around the time another car was there. Well. You already know where this ends up by the title and backstory here. 
A few months ago I came to that little crossing, and I started to do the normal routine. I’d pretty much stopped looking in some respects because of the pattern I’d gotten into.  For some reason I looked and heard and saw the train slowly approaching and slammed on my brakes and stopped. It was an abrupt stop, and it was close. I probably would have made it okay, but I sat there thinking about lessons for IT professionals from the situation once I started breathing again and watched the cars loaded with sand and propane slowly labored down the tracks… Here are Those Lessons… It’s easy to get stuck into a routine – That isn’t always bad. Except when it’s a bad routine. Momentum and inertia are powerful. Once you have a habit and a routine developed – it’s really hard to break that. Make sure you are setting the right routines and habits TODAY. What almost dangerous things are you doing today? How are you almost messing up your production environment today? Stop doing that. Be Deliberate – (Even when you are the only one) – Like I said – a lot of people roll through that stop sign. Perhaps the neighbors or other drivers think “why is he fully stopping and looking… The train only comes two times a day!” – they can think that all they want. Through deliberate actions and forcing myself to pay attention, I will avoid that oops again. Slow down. Take a deep breath. Be Deliberate in your job. Pay attention to the small stuff and go out of your way to be careful. It will save you later. Be Observant – Keep your eyes open. By looking around, observing the situation and understanding what your servers, databases, users and vendors are doing – you’ll notice when something is out of place. But if you don’t know what is normal, if you don’t look to make sure nothing has changed – that train will come and get you. Where can you be more observant? What warning signs are you ignoring in your environment today? In the IT world – trains are everywhere. Projects move fast. Decisions happen fast. Problems turn from a warning sign to a disaster quickly. If you get stuck in a complacent pattern of “Everything is okay, it always has been and always will be” – that’s the time that you will most likely get stuck in a bad situation. Don’t let yourself get complacent, don’t let your team get complacent. That will lead to being proactive. And a proactive environment spends less money on consultants for troubleshooting problems you should have seen ahead of time. You can spend your money and IT budget on improving for your customers. If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Type dependencies vs directory structure

    - by paul
    Something I've been wondering about recently is how to organize types in directories/namespaces w.r.t. their dependencies. One method I've seen, which I believe is the recommendation for both Haskell and .NET (though I can't find the references for this at the moment), is:

        Type
        Type/ATypeThatUsesType
        Type/AnotherTypeThatUsesType

    My natural inclination, however, is to do the opposite:

        Type
        Type/ATypeUponWhichTypeDepends
        Type/AnotherTypeUponWhichTypeDepends

    Questions: Is my inclination bass-ackwards? Are there any major benefits/pitfalls of one over the other? Is it just something that depends on the application, e.g. whether you're building a library vs doing normal coding?

    Read the article

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world works, and that is why we have the Notes from the Field series, which is extremely popular with developers and DBAs. Let us talk about an interesting problem: how to figure out what has changed in a DELETED database. You may think I am just throwing words around, but in reality this kind of problem is what makes a DBA's life interesting, and in this blog post we have an amazing story from Brian Kelley on exactly that subject. In this episode of the Notes from the Field series database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words.

    Sometimes, one of the hardest questions to answer is, “What changed?” A similar question is, “Did anything change other than what we expected to change?”

    The First Place to Check – Schema Changes History Report: Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off.

    But the Database Has Been Deleted! Pinal cited another issue, and that’s the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it’s a trace, like any other database trace. And the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_. Therefore, you can read the trace files like any other. Tip: Copy the files to a working directory. Otherwise, you may occasionally receive a file in use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as for any other database.

    Testing with a Deleted Database: Here’s a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can’t do a standard Schema Changes History report.

```sql
CREATE DATABASE DeleteMe;
GO
USE DeleteMe;
GO
CREATE SCHEMA Test AUTHORIZATION dbo;
GO
CREATE TABLE Test.Foo (FooID INT);
GO
USE MASTER;
GO
DROP DATABASE DeleteMe;
GO
```

    This sets up the perfect situation where we can’t retrieve the information using the Schema Changes History report but where it’s still available.

    Finding the Information: I’ve sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise, I’m just looking at the trace files using SQL Profiler. As you can see, the information is definitely there. Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when. You can even determine who dropped the database (loginame is captured). The key is to get the default trace files in a timely manner in order to extract the information.

    If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL
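
    As an aside to the walkthrough above: if you prefer T-SQL to Profiler, the copied default trace files can also be read with sys.fn_trace_gettable. The sketch below is illustrative only; the file path is a placeholder for wherever you copied the log_*.trc files.

```sql
-- Sketch: read a copy of the default trace files with T-SQL instead of Profiler.
SELECT t.StartTime,
       te.name      AS EventName,
       t.DatabaseName,
       t.ObjectName,
       t.LoginName
FROM   sys.fn_trace_gettable(N'C:\Working\log_123.trc', DEFAULT) AS t
JOIN   sys.trace_events AS te
       ON te.trace_event_id = t.EventClass
WHERE  t.DatabaseName = N'DeleteMe'
ORDER BY t.StartTime;
```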

    Read the article

  • Accessing type-parameter of a type-parameter

    - by itemState
    I would like to access, in a trait, the type parameter of a type parameter of that trait, without adding this "second-order" type parameter as another "first-order" parameter to the trait. The following illustrates the problem:

```scala
sealed trait A; sealed trait A1 extends A; sealed trait A2 extends A

trait B[ASpecific <: A] { type ASpec = ASpecific }
trait D[ASpecific <: A] extends B[ASpecific]

trait C[+BSpecific <: B[_ <: A]] {
  def unaryOp: C[D[BSpecific#ASpec]]
}

def test(c: C[B[A1]]): C[D[A1]] = c.unaryOp
```

    The test fails to compile because, apparently, c.unaryOp has a result of type C[D[A]] and not C[D[A1]], indicating that ASpec is merely a shortcut for _ <: A and does not refer to the specific type parameter. The two-type-parameter solution is simple:

```scala
sealed trait A; sealed trait A1 extends A; sealed trait A2 extends A

trait B[ASpecific <: A]
trait D[ASpecific <: A] extends B[ASpecific]

trait C[ASpecific <: A, +BSpecific <: B[ASpecific]] {
  def unaryOp: C[ASpecific, D[ASpecific]]
}

def test(c: C[A1, B[A1]]): C[A1, D[A1]] = c.unaryOp
```

    But I don't understand why I need to clutter my source with this second, obviously redundant, parameter. Is there no way to retrieve it from trait B?

    Read the article

  • Developer’s Life – Attitude and Communication – They Can Cause Problems – Notes from the Field #027

    - by Pinal Dave
    [Note from Pinal]: This is a 27th episode of Notes from the Field series. The biggest challenge for anyone is to understand human nature. We human have so many things on our mind at any moment of time. There are cases when what we say is not what we mean and there are cases where what we mean we do not say. We do say and things as per our mood and our agenda in mind. Sometimes there are incidents when our attitude creates confusion in the communication and we end up creating a situation which is absolutely not warranted. In this episode of the Notes from the Field series database expert Mike Walsh explains a very crucial issue we face in our career, which is not technical but more to relate to human nature. Read on this may be the best blog post you might read in recent times. In this week’s note from the field, I’m taking a slight departure from technical knowledge and concepts explained. We’ll be back to it next week, I’m sure. Pinal wanted us to explain some of the issues we bump into and how we see some of our customers arrive at problem situations and how we have helped get them back on the right track. Often it is a technical problem we are officially solving – but in a lot of cases as a consultant, we are really helping fix some communication difficulties. This is a technical blog post and not an “advice column” in a newspaper – but the longer I am a consultant, the more years I add to my experience in technology the more I learn that the vast majority of the problems we encounter have “soft skills” included in the chain of causes for the issue we are helping overcome. This is not going to be exhaustive but I hope that sharing four pieces of advice inspired by real issues starts a process of searching for places where we can be the cause of these challenges and look at fixing them in ourselves. Or perhaps we can begin looking at resolving them in teams that we manage. I’ll share three statements that I’ve either heard, read or said and talk about some of the communication or attitude challenges highlighted by the statement. 1 – “But that’s the SAN Administrator’s responsibility…” I heard that early on in my consulting career when talking with a customer who had serious corruption and no good recent backups – potentially no good backups at all. The statement doesn’t have to be this one exactly, but the attitude here is an attitude of “my job stops here, and I don’t care about the intent or principle of why I’m here.” It’s also a situation of having the attitude that as long as there is someone else to blame, I’m fine…  You see in this case, the DBA had a suspicion that the backups were not being handled right.  They were the DBA and they knew that they had responsibility to ensure SQL backups were good to go – it’s a basic requirement of a production DBA. In my “As A DBA Where Do I start?!” presentation, I argue that is job #1 of a DBA. But in this case, the thought was that there was someone else to blame. Rather than create extra work and take on responsibility it was decided to just let it be another team’s responsibility. This failed the company, the company’s customers and no one won. As technologists – we should strive to go the extra mile. If there is a lack of clarity around roles and responsibilities and we know it – we should push to get it resolved. Especially as the DBAs who should act as the advocates of the data contained in the databases we are responsible for. 2 – “We’ve always done it this way, it’s never caused a problem before!” Complacency. 
I have to say that many failures I’ve been paid good money to help recover from would have not happened had it been for an attitude of complacency. If any thoughts like this have entered your mind about your situation you may be suffering from it. If, while reading this, you get this sinking feeling in your stomach about that one thing you know should be fixed but haven’t done it.. Why don’t you stop and go fix it then come back.. “We should have better backups, but we’re on a SAN so we should be fine really.” “Technically speaking that could happen, but what are the chances?” “We’ll just clean that up as a fast follow” ..and so on. In the age of tightening IT budgets, increased expectations of up time, availability and performance there is no room for complacency. Our customers and business units expect – no demand – the best. Complacency says “we will give you second best or hopefully good enough and we accept the risk and know this may hurt us later. Sometimes an organization will opt for “good enough” and I agree with the concept that at times the perfect can be the enemy of the good. But when we make those decisions in a vacuum and are not reporting them up and discussing them as an organization that is different. That is us unilaterally choosing to do something less than the best and purposefully playing a game of chance. 3 – “This device must accept interference from other devices but not create any” I’ve paraphrased this one – but it’s something the Federal Communications Commission – a federal agency in the United States that regulates electronic communication – requires of all manufacturers of any device that could cause or receive interference electronically. I blogged in depth about this here (http://www.straightpathsql.com/archives/2011/07/relationship-advice-from-the-fcc/) so I won’t go into much detail other than to say this… If we all operated more on the premise that we should do our best to not be the cause of conflict, and to be less easily offended and less upset when we perceive offense life would be easier in many areas! This doesn’t always cause the issues we are called in to help out. Not directly. But where we see it is in unhealthy relationships between the various technology teams at a client. We’ll see teams hoarding knowledge, not sharing well with others and almost working against other teams instead of working with them. If you trace these problems back far enough it often stems from someone or some group of people violating this principle from the FCC. To Sum It Up Technology problems are easy to solve. At Linchpin People we help many customers get past the toughest technological challenge – and at the end of the day it is really just a repeatable process of pattern based troubleshooting, logical thinking and starting at the beginning and carefully stepping through to the end. It’s easy at the end of the day. The tough part of what we do as consultants is the people skills. Being able to help get teams working together, being able to help teams take responsibility, to improve team to team communication? That is the difficult part, and we get to use the soft skills on every engagement. Work on professional development (http://professionaldevelopment.sqlpass.org/) and see continuing improvement here, not just with technology. I can teach just about anyone how to be an excellent DBA and performance tuner, but some of these soft skills are much more difficult to teach. 
If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Change type of control by type

    - by Ruben
    Hi, I'm having the following problem. I need to convert a control to a certain type, and this can be one of multiple types (for example a custom button or a custom label, ...). Here's an example of what I would like to do:

```csharp
private void ConvertToTypeAndUseCustomProperty(Control c)
{
    Type type = c.GetType();
    ((type) c).CustomPropertieOfControl = 234567; // does not compile: 'type' is a runtime Type object, not a compile-time type
}
```

    Thanks in advance.
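
    One common way to handle this is either to test for each concrete type or to set the property by reflection. The sketch below is only illustrative: MyButton and MyLabel are hypothetical custom controls standing in for whatever types expose CustomPropertieOfControl; a shared base class or interface exposing the property would remove the need for casting altogether.

```csharp
// Sketch, not from the original post. MyButton/MyLabel are placeholder custom controls.
private void UseCustomProperty(Control c)
{
    // Option 1: test for each known type explicitly.
    var button = c as MyButton;
    if (button != null)
    {
        button.CustomPropertieOfControl = 234567;
        return;
    }

    var label = c as MyLabel;
    if (label != null)
    {
        label.CustomPropertieOfControl = 234567;
        return;
    }

    // Option 2: look the property up by name via reflection, whatever the runtime type is.
    var prop = c.GetType().GetProperty("CustomPropertieOfControl");
    if (prop != null)
    {
        prop.SetValue(c, 234567, null);
    }
}
```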

    Read the article

  • IIS website is sending multiple content-type headers for zip files

    - by frankadelic
    We have a problem with an IIS5 server. When certain users/browsers click to download .zip files, binary gibberish text sometimes renders in the browser window. The desired behavior is for the file to either download or open with the associated zip application. Initially, we suspected that the wrong content-type header was set on the file. The IIS tech confirmed that .zip files were being served by IIS with the mime-type "application/x-zip-compressed". However, an inspection of the HTTP packets using Wireshark reveals that requests for zip files return two Content-Type headers. Content-Type: text/html; charset=UTF-8 Content-Type: application/x-zip-compressed Any idea why IIS is sending two content-type headers? This doesn't happen for regular HTML or images files. It does happen with ZIP and PDF. Is there a particular place we can ask the IIS tech to look? Or is there a configuration file we can examine?
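
    One quick way to confirm exactly what the server returns, independent of any browser or proxy behavior, is to inspect the response headers from the command line; the URL below is just a placeholder:

```
curl -sI http://yourserver/downloads/example.zip
```

    If both Content-Type headers show up there as well, the duplication is being added on the server side (for example by an ISAPI filter, custom error handling, or some other component touching the response) rather than by the client, which narrows down where to point the IIS tech.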

    Read the article

  • SQL SERVER – Windows File/Folder and Share Permissions – Notes from the Field #029

    - by Pinal Dave
    [Note from Pinal]: This is a 29th episode of Notes from the Field series. Security is the task which we should give it to the experts. If there is a small overlook or misstep, there are good chances that security of the organization is compromised. This is very true, but there are always devils’s advocates who believe everyone should know the security. As a DBA and Administrator, I often see people not taking interest in the Windows Security hiding behind the reason of not expert of Windows Server. We all often miss the important mission statement for the success of any organization – Teamwork. In this blog post Brian tells the story in very interesting lucid language. Read On! In this episode of the Notes from the Field series database expert Brian Kelley explains a very crucial issue DBAs and Developer faces on their production server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Brian in his own words. When I talk security among database professionals, I find that most have at least a working knowledge of how to apply security within a database. When I talk with DBAs in particular, I find that most have at least a working knowledge of security at the server level if we’re speaking of SQL Server. One area I see continually that is weak is in the area of Windows file/folder (NTFS) and share permissions. The typical response is, “I’m a database developer and the Windows system administrator is responsible for that.” That may very well be true – the system administrator may have the primary responsibility and accountability for file/folder and share security for the server. However, if you’re involved in the typical activities surrounding databases and moving data around, you should know these permissions, too. Otherwise, you could be setting yourself up where someone is able to get to data he or she shouldn’t, or you could be opening the door where human error puts bad data in your production system. File/Folder Permission Basics: I wrote about file/folder permissions a few years ago to give the basic permissions that are most often seen. Here’s what you must know as a minimum at the file/folder level: Read - Allows you to read the contents of the file or folder. Having read permissions allows you to copy the file or folder. Write  – Again, as the name implies, it allows you to write to the file or folder. This doesn’t include the ability to delete, however, nothing stops a person with this access from writing an empty file. Delete - Allows the file/folder to be deleted. If you overwrite files, you may need this permission. Modify - Allows read, write, and delete. Full Control - Same as modify + the ability to assign permissions. File/Folder permissions aggregate, unless there is a DENY (where it trumps, just like within SQL Server), meaning if a person is in one group that gives Read and antoher group that gives Write, that person has both Read and Write permissions. As you might expect me to say, always apply the Principle of Least Privilege. This likely means that any additional permission you might add does not need Full Control. Share Permission Basics: At the share level, here are the permissions. Read - Allows you to read the contents on the share. Change - Allows you to read, write, and delete contents on the share. Full control - Change + the ability to modify permissions. Like with file/folder permissions, these permissions aggregate, and DENY trumps. So What Access Does a Person / Process Have? 
Figuring out what someone or some process has depends on how the location is being accessed: Access comes through the share (\\ServerName\Share) – a combination of permissions is considered. Access is through a drive letter (C:\, E:\, S:\, etc.) – only the file/folder permissions are considered. The only complicated one here is access through the share. Here’s what Windows does: Figures out what the aggregated permissions are at the file/folder level. Figures out what the aggregated permissions are at the share level. Takes the most restrictive of the two sets of permissions. You can test this by granting Full Control over a folder (this is likely already in place for the Users local group) and then setting up a share. Give only Read access through the share, and that includes to Administrators (if you’re creating a share, likely you have membership in the Administrators group). Try to read a file through the share. Now try to modify it. The most restrictive permission is the Share level permissions. It’s set to only allow Read. Therefore, if you come through the share, it’s the most restrictive. Does This Knowledge Really Help Me? In my experience, it does. I’ve seen cases where sensitive files were accessible by every authenticated user through a share. Auditors, as you might expect, have a real problem with that. I’ve also seen cases where files to be imported as part of the nightly processing were overwritten by files intended from development. And I’ve seen cases where a process can’t get to the files it needs for a process because someone changed the permissions. If you know file/folder and share permissions, you can spot and correct these types of security flaws. Given that there are a lot of database professionals that don’t understand these permissions, if you know it, you set yourself apart. And if you’re able to help on critical processes, you begin to set yourself up as a linchpin (link to .pdf) for your organization. If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL
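
    To make the layering concrete, here is a small, hypothetical example (folder, share, and account names are made up) of granting NTFS Modify on a folder while keeping the share itself read-only, which is exactly the "most restrictive wins" combination described above:

```
:: Hypothetical example - folder, share and account names are placeholders.
:: NTFS level: grant Modify (M), inherited by subfolders (CI) and files (OI).
icacls D:\ETL\Import /grant CONTOSO\svc_etl:(OI)(CI)M

:: Share level: expose the folder read-only; coming through the share,
:: the more restrictive of the two permission sets (Read) wins.
net share ImportShare=D:\ETL\Import /GRANT:CONTOSO\svc_etl,READ
```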

    Read the article

  • Converting raw data type to enumerated type

    - by Jim Lahman
    There are times when an enumerated type is preferred over using the raw data type. An example is when we need to check the health of x-ray gauges in use on a production line. Rather than using a raw scheme like 0, 1 and 2, we can use an enumerated type:

```csharp
/// <summary>
/// POR Healthy status indicator
/// </summary>
/// <remarks>The healthy status is for each POR x-ray gauge; each has its own status.</remarks>
[Flags]
public enum POR_HEALTH : short
{
    /// <summary>
    /// POR1 healthy status indicator
    /// </summary>
    POR1 = 0,
    /// <summary>
    /// POR2 healthy status indicator
    /// </summary>
    POR2 = 1,
    /// <summary>
    /// Both POR1 and POR2 healthy status indicator
    /// </summary>
    BOTH = 2
}
```

    By using the [Flags] attribute, we are treating the enumerated type as a bit mask. We can then use bitwise operations such as AND, OR, NOT, etc. Now, when we want to check the health of a specific gauge, we would rather use the name of the gauge than the numeric identity; it makes for better reading and programming practice. To translate the numeric identity to the enumerated value, we use the Parse method of the Enum class:

```csharp
POR_HEALTH GaugeHealth = (POR_HEALTH) Enum.Parse(typeof(POR_HEALTH),
                                                 XrayMsg.Gauge_ID.ToString());
```

    The Parse method creates an instance of the enumerated type. Now, we can use the name of the gauge rather than the numeric identity:

```csharp
if (GaugeHealth == POR_HEALTH.POR1 || GaugeHealth == POR_HEALTH.BOTH)
{
    XrayHealthyTag.Name = Properties.Settings.Default.POR1XRayHealthyTag;
}
else if (GaugeHealth == POR_HEALTH.POR2)
{
    XrayHealthyTag.Name = Properties.Settings.Default.POR2XRayHealthyTag;
}
```
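
    If the incoming Gauge_ID could ever carry a value that is not defined in the enum, a defensive variation is to validate it with Enum.IsDefined before converting. The sketch below reuses the names above and assumes Gauge_ID is (or safely converts to) a short, the enum's underlying type; adjust the cast if it is not.

```csharp
// Sketch only: defensive version of the conversion above.
// Enum.Parse happily accepts numeric strings even when the value is not a
// defined member, so check IsDefined against the underlying type first.
short rawId = (short)XrayMsg.Gauge_ID;

if (!Enum.IsDefined(typeof(POR_HEALTH), rawId))
{
    throw new ArgumentOutOfRangeException("Gauge_ID",
        "Unexpected gauge identifier: " + rawId);
}

POR_HEALTH gaugeHealth = (POR_HEALTH)rawId;
```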

    Read the article

  • Why to say, my function is of IFly type rather than saying it's Airplane type

    - by Vishwas Gagrani
    Say I have two classes, Airplane and Bird, and both of them fly. Both implement the interface IFly. IFly declares a function StartFlying(). Thus both Airplane and Bird have to define the function and use it as per their requirements. Now, when I write a class-reference manual, what should I say about the function StartFlying? 1) StartFlying is a function of type IFly. 2) StartFlying is a function of type Airplane. 3) StartFlying is a function of type Bird. My opinion is that 2 and 3 are more informative. But what I see is that class references use the first one: they say which interface the function is declared in. The problem is that I don't really get any usable information from knowing StartFlying is of type IFly. However, knowing that StartFlying is a function inside Airplane and Bird is more informative, as I can decide which instance (Airplane or Bird) to use. Any light on this: how does saying StartFlying is a function of type IFly help a programmer understand how to use the function?

    Read the article

  • determine complex type from a primitive type using reflection

    - by Nilotpal Das
    I am writing a tool where I need to reflect upon methods, and if the parameters of the methods are of a complex type, then I need to perform certain actions on them, such as instantiating them. I saw the IsPrimitive property on the Type class; however, it reports string and decimal as complex types, which technically isn't incorrect. What I really want is to be able to distinguish developer-created class types from system-defined data types. Is there any way I can do this?
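
    One common heuristic, sketched below, is to widen the IsPrimitive check with the handful of framework types you want to treat as "simple" (string, decimal, enums, and whichever value types such as DateTime or Guid matter to you); anything that falls outside that list is treated as a developer-defined complex type. The helper names here are made up for illustration.

```csharp
using System;
using System.Linq;
using System.Reflection;

static class TypeClassifier
{
    // Treats primitives, enums, string, decimal and a few common value types as "simple";
    // everything else is considered a complex type. Adjust the list to taste.
    public static bool IsSimple(Type t)
    {
        return t.IsPrimitive
            || t.IsEnum
            || t == typeof(string)
            || t == typeof(decimal)
            || t == typeof(DateTime)
            || t == typeof(Guid);
    }

    // Example use: list the complex parameter types of a method so they can be instantiated.
    public static Type[] ComplexParameterTypes(MethodInfo method)
    {
        return method.GetParameters()
                     .Select(p => p.ParameterType)
                     .Where(t => !IsSimple(t))
                     .ToArray();
    }
}
```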

    Read the article

  • How to get generic type from object type

    - by Murat
    My classes are:

```csharp
class BaseClass { }
class DerivedClass1 : BaseClass { }
class GenericClass<T> { }
class DerivedClass2 : BaseClass
{
    GenericClass<DerivedClass1> subItem;
}
```

    I want to access all fields of the DerivedClass2 class. I use System.Reflection and the FieldInfo.GetValue() method, but I can't get the subItem field. The FieldInfo.GetValue() method's return type is "object", and I can't cast it to GenericClass<DerivedClass1>, nor can I get the DerivedClass1 type. I tried this with BaseClass:

```csharp
BaseClass instance = FieldInfo.GetValue(this) as GenericClass<BaseClass>;
```

    but instance is null. How do I get the instance with its type, or how do I get only the type?
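
    One way to approach this, sketched below on the assumption that the goal is to discover the T inside GenericClass<T> for each field: inspect the field's declared type rather than the returned value. Note that subItem is private, so BindingFlags.NonPublic is needed, and that GenericClass<DerivedClass1> is not a GenericClass<BaseClass> (generic classes are not covariant), which is why the as-cast above returns null.

```csharp
using System;
using System.Reflection;

class Program
{
    static void Main()
    {
        var obj = new DerivedClass2();

        foreach (FieldInfo field in obj.GetType().GetFields(
                     BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
        {
            Type fieldType = field.FieldType;

            // Is this field some GenericClass<T>? If so, pull T out of the declared field type.
            if (fieldType.IsGenericType &&
                fieldType.GetGenericTypeDefinition() == typeof(GenericClass<>))
            {
                Type itemType = fieldType.GetGenericArguments()[0];   // DerivedClass1
                Console.WriteLine("{0} holds GenericClass<{1}>", field.Name, itemType.Name);

                // The value itself can only be handled as object (or through a
                // non-generic base class or interface, if you add one).
                object value = field.GetValue(obj);
            }
        }
    }
}
```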

    Read the article

  • SQL SERVER – Planned and Unplanned Availability Group Failovers – Notes from the Field #031

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of Notes from the Fields series. AlwaysOn is a very complex subject and not everyone knows many things about this. The matter of the fact is there is very little information available on this subject online and not everyone knows everything about this. This is why when a very common question related to AlwaysOn comes, people get confused. In this episode of the Notes from the Field series database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and Developer faces in their career and is related to Planned and Unplanned Availablity Group Failovers. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words. Whenever a disaster occurs it will be a stressful scenario regardless of how small or big the disaster is. This gets multiplied when it is your first time working with newer technology or the first time you are going through a disaster without a proper run book. Today, were going to help you establish a run book for creating a planned failover with availability groups. To make today’s session simple were going to have two instances of SQL Server 2012 included in an availability group and walk through the steps of doing an unplanned failover.  We will focus on using the user interface and T-SQL to complete the failovers. We are going to use a two replica Availability Group where each replica is in another location. Therefore, we will be covering Asynchronous (non automatic failover) the following is a breakdown of our availability group utilized today. Seeing the following screen might be scary the first time you come across an unplanned failover.  It looks like our test database used in this Availability Group is not functional and it currently isn’t. The database status is not synchronizing which makes sense because the primary replica went down so it couldn’t synchronize. With that said, we can still failover and make it functional while we troubleshoot why we lost our primary replica. To start we are going to right click on the availability group that needs to be restarted and select failover. This will bring up the following wizard, which will walk you through several steps needed to complete the failover using the graphical user interface provided with SQL Server Management Studio (SSMS). You are going to see warning messages simply because we are in Asynchronous commit mode and can not guarantee ‘no data loss’ when we do failover. Just incase you missed it; you get another screen warning you about potential data loss because we are in Asynchronous mode. Next we get to connect to the specific replica we want to become the primary replica after the failover occurs. In our case, we only have two replicas so this is trivial. In order to failover, it’s required to connect to the replica that will become primary.  The following screen shows that the connection has been made successfully. Next, you will see the final summary screen. Once again, this reminds you that the failover action will cause data loss as were using Asynchronous commit mode due to the distance between instances used for disaster recovery. Finally, once the failover is completed you will see the following screen. If you followed along this long you might be wondering what T-SQL scripts are generated for clicking through all the sections of the wizard. If you have used Database Mirroring in the past you might be surprised.  
It’s not too different, which makes sense because the data is being replicated via SQL Server endpoints just like the good old database mirroring. Now were going to take a look at how to do a failover with just T-SQL. First, were going to need to open a new query window and run our query in SQLCMD mode. Just incase you haven’t used SQLCMD mode before we will show you how to enable it below. Now you can run the following statement. Notice, we connect to the replica we want to become primary after failover and specify to force failover to allow data loss. We can use the following script to failback over when our primary instance comes back online. -- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE. :Connect SQL2012PROD1 ALTER AVAILABILITY GROUP [AGSQL2] FORCE_FAILOVER_ALLOW_DATA_LOSS; GO Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

    - by Pinal Dave
    [Notes from Pinal]: SSIS is very well explored subject, however, there are so many interesting elements when we read, we learn something new. A similar concept has been Parent-Child ETL architecture’s relationship in SSIS. Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Fields series database expert Tim Mitchell (partner at Linchpin People) shares very interesting conversation related to how to understand SSIS Parameters in Parent-Child ETL Architectures. In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures. A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern.  Simply put, this is a pattern in which packages are executed by other packages.  An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages.  For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic. When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package.  In older versions of SSIS, this process was possible but not necessarily simple.  When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages.  Package configurations, while effective, were not the easiest tool to work with.  Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose. In the example I will use for this demonstration, I’ll create two packages: one intended for use as a child package, and the other configured to execute said child package.  In the parent package I’m going to build a for each loop container in SSIS, and use package parameters to pass in a value – specifically, a ClientID – for each iteration of the loop.  The child package will be executed from within the for each loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package. Configuring the Child and Parent Packages When you create a new package, you’ll see the Parameters tab at the package level.  Clicking over to that tab allows you to add, edit, or delete package parameters. As shown above, the sample package has two parameters.  Note that I’ve set the name, data type, and default value for each of these.  Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution.  In this example, I have one parameter that is required, and the other is not. Let’s shift over to the parent package briefly, and demonstrate how to supply values to these parameters in the child package.  Using the execute package task, you can easily map variable values in the parent package to parameters in the child package. The execute package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package.  Note that there is no value mapped to the child package parameter named pOutputFolder.  
Since this parameter has the Required property set to False, we don’t have to specify a value for it, which will cause that parameter to use the default value we supplied when designing the child pacakge. The last step in the parent package is to create the for each loop container I mentioned earlier, and place the execute package task inside it.  I’m using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this more in depth here).  For each iteration of the loop, a different client ID value will be passed into the child package parameter. The final step is to configure the child package to actually do something meaningful with the parameter values passed into it.  In this case, I’ve modified the OleDB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client’s data.  Additionally, I’ll use both the pClientID and pOutputFolder parameters to dynamically build the output filename. As shown, the pClientID is used in the WHERE clause, so we only get the current client’s invoices for each iteration of the loop. For the flat file connection, I’m setting the Connection String property using an expression that engages both of the parameters for this package, as shown above. Parting Thoughts There are many uses for package parameters beyond a simple parent-child design pattern.  For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters.  Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL. Also, you can also have project parameters as well as package parameters.  Project parameters work in much the same way as package parameters, but the parameters apply to all packages in a project, not just a single package. Conclusion Of the numerous advantages of using catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list.  Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation. If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
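
    As a follow-on to the runtime options mentioned above: when the project is deployed to the SSIS catalog, parameter values can also be supplied from T-SQL when starting an execution. The sketch below reuses the pClientID parameter from this example; the folder, project, and package names and the client ID value are placeholders, and the literal passed as @parameter_value must match the parameter's data type.

```sql
-- Sketch: run the child package from the SSIS catalog and pass a package parameter.
DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
     @folder_name  = N'ETL',              -- placeholder folder/project/package names
     @project_name = N'ParentChildDemo',
     @package_name = N'ChildPackage.dtsx',
     @execution_id = @execution_id OUTPUT;

-- object_type 30 = package parameter (20 would be a project parameter).
EXEC SSISDB.catalog.set_execution_parameter_value
     @execution_id,
     @object_type     = 30,
     @parameter_name  = N'pClientID',
     @parameter_value = 42;               -- placeholder ClientID

EXEC SSISDB.catalog.start_execution @execution_id;
```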

    Read the article

  • SQL SERVER – SQL Server High Availability Options – Notes from the Field #032

    - by Pinal Dave
    [Notes from Pinal]: When it is about High Availability or Disaster Recovery, I often see people getting confused. There are so many options available that when users have to select the most optimal solution for their organization, they are often unsure. Most people even know the salient features of the various options, but when they have to settle on one single option they are often not sure which one to use. I like to ask my dear friend Tim all these kinds of complicated questions. He has a skill for making a complex subject very simple and easy to understand. Linchpin People are database coaches and wellness experts for a data driven world. In this 26th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words the best High Availability option for your SQL Server. Working with SQL Server, a common challenge we are faced with is providing the maximum uptime possible. To meet these demands we have to design a solution to provide High Availability (HA). Microsoft SQL Server, depending on your edition, provides you with several options. This could be database mirroring, log shipping, failover clusters, availability groups or replication. Each possible solution comes with pros and cons. No single solution fits all scenarios, so understanding which solution meets which need is important. As with anything IT related, you need to fully understand your requirements before trying to solve the problem. When it comes to building an HA solution, you need to understand the risk your organization needs to mitigate the most. I have found that most are concerned about hardware failure and OS failures. Other common concerns are data corruption or storage issues. For data corruption or storage issues you can mitigate those concerns by having a second copy of the databases. That can be accomplished with database mirroring, log shipping, replication or availability groups with a secondary replica. Failover clustering and virtualization with shared storage do not provide redundancy of the data. I recently created a chart outlining some pros and cons of each of the technologies that I posted on my blog. I like to use this chart to help illustrate how each technology provides a certain number of benefits. Each of these solutions carries with it some level of cost and complexity. As database professionals we should all be familiar with these technologies so we can make the best possible choice for our organization. If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must-have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Shrinking Database

    Read the article

  • Scala type system : basic type mismatch

    - by SiM
    I have a basic type system mismatch problem. I have a class with a method

    def Create(nodeItem : NodeItem) = { p_nodeStart.addEndNode(nodeItem) }

    where p_nodeStart is a NodeCache:

    class NodeCache[END_T<:BaseNode] private(node: Node) extends BaseNode {
      def addEndNode(endNode : END_T) = { this.CACHE_HAS_ENDNODES.Create(endNode) }

    and the error it's giving me is:

    error: type mismatch; found : nodes.NodeItem required: Nothing
    def Create(nodeItem : NodeItem) = { p_nodeStart.addEndNode(nodeItem) }

    The cache is created in object NodeTrigger:

    object NodeTrigger { def Create() { val nodeTimeCache = NodeCache.Create[NodeItem](node)

    and in object NodeCache:

    object NodeCache { def Create[END_T<:BaseNode]() { val nodeCache = new NodeCache[END_T](node);

    Any ideas how to fix the error?
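    A likely cause of the "required: Nothing" error is that the type parameter END_T is lost where p_nodeStart is declared or where NodeCache.Create returns (NodeCache.Create above has no declared return type), so the compiler infers Nothing. Below is a minimal, self-contained sketch, with simplified and partly hypothetical member names, showing how carrying the parameter through the field declaration and the factory's return type makes the call compile:

    import scala.collection.mutable.ListBuffer

    trait BaseNode
    class NodeItem extends BaseNode

    class NodeCache[END_T <: BaseNode] private () extends BaseNode {
      private val endNodes = ListBuffer[END_T]()               // stands in for CACHE_HAS_ENDNODES
      def addEndNode(endNode: END_T): Unit = { endNodes += endNode }
    }

    object NodeCache {
      // Returning NodeCache[END_T] (rather than Unit or a bare NodeCache) preserves the parameter.
      def create[END_T <: BaseNode](): NodeCache[END_T] = new NodeCache[END_T]()
    }

    object NodeTrigger {
      // Declaring the field with its type parameter avoids the inferred Nothing.
      private val p_nodeStart: NodeCache[NodeItem] = NodeCache.create[NodeItem]()
      def create(nodeItem: NodeItem): Unit = p_nodeStart.addEndNode(nodeItem)
    }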

    Read the article

  • A follow up on type coercion in C++, as it may be construed by type conversion

    - by David
    This is a follow-up to my previous question. Consider that I write a function with the following prototype: int a_function(Foo val); where Foo is believed to be a typedef for unsigned int. This is unfortunately not verifiable for lack of documentation. So, someone comes along and uses a_function, but calls it with an unsigned int as an argument. Here the story takes a turn: Foo turns out to actually be a class that can be constructed from a single unsigned int argument via an explicit constructor. Is it standard and reliable behavior for the compiler to make the function call work by doing a type conversion on the argument? I.e., is the compiler supposed to recognize the mismatch and insert the constructor? Or should I get a compile-time error reporting the type mismatch?
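    For reference, here is a minimal sketch (with hypothetical Foo types) of the two cases: a non-explicit converting constructor lets the compiler insert the conversion implicitly, while marking it explicit turns the same call into a compile-time error unless the caller performs the conversion themselves.

    // Minimal sketch of implicit vs. explicit converting constructors (hypothetical types).
    struct FooImplicit {
        unsigned int value;
        FooImplicit(unsigned int v) : value(v) {}            // converting constructor
    };

    struct FooExplicit {
        unsigned int value;
        explicit FooExplicit(unsigned int v) : value(v) {}   // no implicit conversions allowed
    };

    int a_function(FooImplicit val) { return static_cast<int>(val.value); }
    int b_function(FooExplicit val) { return static_cast<int>(val.value); }

    int main() {
        unsigned int raw = 42u;
        int a = a_function(raw);                  // OK: compiler inserts FooImplicit(raw)
        // int b = b_function(raw);               // error: no implicit unsigned int -> FooExplicit
        int b = b_function(FooExplicit(raw));     // OK: conversion requested explicitly
        return a + b;
    }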

    Read the article

  • Use date operations on a computed field filter in a view

    - by stephenhay
    I'd like to expose a filter in Views based on a Computed Field. This field should use the date operation "less than". Apparently, a date type is not available to a computed field (varchar is). However, using varchar results in Views treating the field as a string rather than as a date, without the operation I'm looking for. So I tried using a timestamp instead of a date, which allowed me to use an integer data type. The exposed view gives me a "less than" operation, but as it's still not a date, the user must type in a timestamp. Not handy. So I'm thinking the best way to do this is to create a new (hidden) CCK field which is a date field, and set its value to the value of the Computed Field. I can then use the new CCK field for my exposed filter. But I'm not getting anywhere with this, as I'm unsure how to set another CCK field from a Computed Field. To return the value to the computed field itself, I'm using the standard: $node_field[0]['value'] = $newestdate; I would like to set the value of the new CCK field to $newestdate as well. Can anyone help?

    Read the article

  • Silverlight Custom Control Create Custom Event

    - by PlayKid
    Hi there, How do I create an event that handles the click event of one of the controls inside my custom control? Here is the setup of what I've got: a textbox and a button (custom control), and a Silverlight application (which uses the above custom control). I would like to expose the click event of the button from the custom control to the main application. How do I do that? Thanks

    Read the article

  • drupal how to show custom profile field

    - by Arun
    I added a profile field to the registration form. How do I show it in the edit registration (account) form? I wrote a module for editing the account, but in it $form [function editregistration_form_user_profile_form_alter(&$form, &$form_state)] doesn't contain the values of the custom profile fields.

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is absolutely correct, but the implementation is not as easy as it seems. Similarly, most people think the lookup component is just a component that looks up additional information, and they do not pay much attention to it. For that very reason they eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting discussion on how to configure the lookup component’s cache mode well. In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast. Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection will determine whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages, and in some cases, incorrect results.
Full Cache
The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. Like the name implies, full cache mode will cause the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS. The most commonly used cache mode is the full cache setting, and for good reason. The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution. There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues – memory pressure in particular – when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there’s going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case sensitive, depending on collation).
There’s a relatively easy workaround in which you can use the UPPER() or LOWER() function on both the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation (a small example appears at the end of this entry). Again, neither of these gotchas is a reason to avoid full cache mode, but they should be weighed when deciding whether full cache mode is appropriate in a given situation. Full cache mode is ideally useful when one or all of the following conditions exist:
- The size of the reference data set is small to moderately sized
- The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable
- Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data
Partial Cache
When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value will be retrieved individually from the specified source, and then cached. To be clear, this is a row-by-row lookup for each distinct key value(s). This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode. Using partial cache mode is ideally suited for the conditions below:
- The size of the data in the pipeline (more specifically, the number of distinct key column values) is relatively small
- The size of the lookup data is too large to effectively store in cache
- The lookup source is well indexed to allow for fast retrieval of row-by-row values
No Cache
As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set will require a query against the lookup source. Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused. In the real world, I don’t see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful. As such, it’s critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline. I would recommend considering the no cache mode only when all of the below conditions are true:
- The reference data set is too large to reasonably be loaded into SSIS memory
- The pipeline data set is small and is not expected to grow
- There are expected to be very few or no duplicates of the key value(s) in the pipeline data set (i.e., there would be no benefit from caching these values)
Conclusion
The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.
Know how this selection impacts your ETL loads, and you’ll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS
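As a small illustration of the case-sensitivity workaround mentioned above, here is a minimal sketch (table and column names are hypothetical) of a lookup reference query that normalizes case on the reference side; the pipeline side would need the same treatment, for example a Derived Column applying UPPER() to the incoming column before the lookup.

    -- Hypothetical reference query for a full-cache lookup transformation.
    -- UPPER() keeps the cached .NET string comparison from failing on case differences,
    -- provided the pipeline column is upper-cased the same way before the lookup.
    SELECT CustomerKey,
           UPPER(EmailAddress) AS EmailAddress
    FROM   dbo.DimCustomer;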

    Read the article

  • Which statically typed languages support intersection types for function return values?

    - by stakx
    Initial note: This question got closed after several edits because I lacked the proper terminology to state accurately what I was looking for. Sam Tobin-Hochstadt then posted a comment which made me recognise exactly what that was: programming languages that support intersection types for function return values. Now that the question has been re-opened, I've decided to improve it by rewriting it in a (hopefully) more precise manner. Therefore, some answers and comments below might no longer make sense because they refer to previous edits. (Please see the question's edit history in such cases.) Are there any popular statically & strongly typed programming languages (such as Haskell, generic Java, C#, F#, etc.) that support intersection types for function return values? If so, which, and how? (If I'm honest, I would really love to see someone demonstrate a way to express intersection types in a mainstream language such as C# or Java.) I'll give a quick example of what intersection types might look like, using some pseudocode similar to C#:

    interface IX { … }
    interface IY { … }
    interface IB { … }
    class A : IX, IY { … }
    class B : IX, IY, IB { … }
    T fn() where T : IX, IY { return … ? new A() : new B(); }

    That is, the function fn returns an instance of some type T, of which the caller knows only that it implements interfaces IX and IY. (That is, unlike with generics, the caller doesn't get to choose the concrete type of T — the function does. From this I would suppose that T is in fact not a universal type, but an existential type.) P.S.: I'm aware that one could simply define an interface IXY : IX, IY and change the return type of fn to IXY. However, that is not really the same thing, because often you cannot bolt on an additional interface IXY to a previously defined type A which only implements IX and IY separately. Footnote: Some resources about intersection types:
    - The Wikipedia article for "Type system" has a subsection about intersection types.
    - Report by Benjamin C. Pierce (1991), "Programming With Intersection Types, Union Types, and Polymorphism"
    - David P. Cunningham (2005), "Intersection types in practice", which contains a case study about the Forsythe language, which is mentioned in the Wikipedia article.
    - A Stack Overflow question, "Union types and intersection types", which got several good answers, among them this one which gives a pseudocode example of intersection types similar to mine above.
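    For what it's worth, here is a minimal sketch in Scala (not one of the languages the question names, so offered only as an existence proof): Scala 2 spells an intersection return type as the compound type IX with IY (Scala 3 writes IX & IY), and the caller sees both interfaces without getting to choose the concrete type.

    trait IX { def x: Int }
    trait IY { def y: Int }
    trait IB { def b: Int }

    class A extends IX with IY { val x = 1; val y = 2 }
    class B extends IX with IY with IB { val x = 3; val y = 4; val b = 5 }

    object IntersectionDemo {
      // The declared return type is the intersection IX with IY; the function,
      // not the caller, decides whether an A or a B comes back.
      def fn(flag: Boolean): IX with IY = if (flag) new A else new B

      def demo(): Int = {
        val r = fn(flag = true)
        r.x + r.y   // both IX and IY members are usable; IB members are not visible
      }
    }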

    Read the article

  • kernel module compiling error

    - by wati
    sh@ubuntu:/home/ccpp/helloworld$ make gcc-4.6 -O2 -DMODULE -D_KERNEL_ -W -Wall -Wstrict-prototypes -Wmissing-prototypes -isystem /lib/modules/`uname -r`/build/include -c -o hello-1.o hello-1.c hello-1.c:4:0: warning: "MODULE" redefined [enabled by default] <command-line>:0:0: note: this is the location of the previous definition hello-1.c:6:0: warning: "_KERNEL_" redefined [enabled by default] <command-line>:0:0: note: this is the location of the previous definition In file included from /lib/modules/3.2.0-25-generic/build/include/linux/list.h:4:0, from /lib/modules/3.2.0-25-generic/build/include/linux/module.h:9, from hello-1.c:7: /lib/modules/3.2.0-25-generic/build/include/linux/types.h:13:2: warning: #warning "Attempt to use kernel headers from user space, see http://kernelnewbies.org/KernelHeaders" [-Wcpp] In file included from /lib/modules/3.2.0-25-generic/build/include/linux/module.h:9:0, from hello-1.c:7: /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘INIT_LIST_HEAD’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:26:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:27:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘__list_add’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:41:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:42:5: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:43:5: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:44:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_add’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:62:28: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_add_tail’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:76:22: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘__list_del’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:88:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:89:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘__list_del_entry’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:101:18: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:101:31: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_del’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:106:18: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:106:31: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:107:7: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:108:7: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_replace’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:125:5: error: dereferencing pointer to incomplete type 
/lib/modules/3.2.0-25-generic/build/include/linux/list.h:125:17: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:126:5: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:127:5: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:127:17: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:128:5: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_is_last’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:179:13: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_empty’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:188:13: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_empty_careful’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:206:31: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:207:40: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_rotate_left’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:219:15: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_is_singular’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:230:35: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:230:49: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘__list_cut_position’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:236:37: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:237:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:237:19: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:238:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:239:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:240:7: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:241:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:242:11: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_cut_position’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:265:8: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘__list_splice’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:277:32: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:278:31: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:280:7: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:281:6: error: dereferencing pointer to incomplete type 
/lib/modules/3.2.0-25-generic/build/include/linux/list.h:283:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:284:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_splice’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:296:33: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_splice_tail’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:308:27: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_splice_init’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:322:33: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘list_splice_tail_init’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:339:27: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘INIT_HLIST_NODE’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:572:3: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:573:3: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘hlist_unhashed’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:578:11: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘hlist_empty’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:583:11: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘__hlist_del’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:588:29: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:589:31: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:592:7: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘hlist_del’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:598:3: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:599:3: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘hlist_add_head’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:612:30: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:613:3: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:615:8: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:615:20: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:616:3: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:617:3: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:617:15: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘hlist_add_before’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:624:3: error: dereferencing pointer to incomplete type 
/lib/modules/3.2.0-25-generic/build/include/linux/list.h:624:17: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:625:3: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:626:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:626:18: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:627:5: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘hlist_add_after’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:633:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:633:16: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:634:3: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:635:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:635:18: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:637:9: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:638:7: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:638:29: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘hlist_add_fake’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:644:3: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:644:15: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h: In function ‘hlist_move_list’: /lib/modules/3.2.0-25-generic/build/include/linux/list.h:654:5: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:654:18: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:655:9: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:656:6: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:656:27: error: dereferencing pointer to incomplete type /lib/modules/3.2.0-25-generic/build/include/linux/list.h:657:5: error: dereferencing pointer to incomplete type In file included from /lib/modules/3.2.0-25-generic/build/include/linux/module.h:12:0, from hello-1.c:7: /lib/modules/3.2.0-25-generic/build/include/linux/cache.h: At top level: /lib/modules/3.2.0-25-generic/build/include/linux/cache.h:5:23: fatal error: asm/cache.h: No such file or directory compilation terminated. 
make: *** [hello-1.o] Error 1

I got this error when compiling a hello world program. My program is:

    #define MODULE
    #define LINUX
    #define _KERNEL_
    #include <linux/module.h>
    #include <linux/kernel.h>
    int init_module(void) { printk("<1>hello World 1.\n"); return 0; }
    void cleanup_module(void) { printk(KERN_ALERT "goodbye world 1.\n"); }
    MODULE_LICENSE("GPL");

My makefile is:

    TARGET := hello-1
    WARN := -W -Wall -Wstrict-prototypes -Wmissing-prototypes
    INCLUDE := -isystem /lib/modules/`uname -r`/build/include
    CFLAGS := -O2 -DMODULE -D_KERNEL_ ${WARN} ${INCLUDE}
    CC := gcc-4.6
    ${TARGET}.o: ${TARGET}.c
    .PHONY: clean
    clean:
    	rm -rf ${TARGET}.o

I am using kernel 3.2.0-25. As a novice I can't figure out where the problem is. I have searched everything I can about this error but I can't understand it, and I keep getting irrelevant docs. Can anybody help me, please?
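The errors above are the usual symptom of compiling a kernel module directly with gcc against the headers under build/include instead of going through the kernel's own build system. As a point of comparison, here is a minimal kbuild-style Makefile sketch (it assumes the source file hello-1.c is in the current directory; recipe lines must start with a tab, and the manual #define MODULE / _KERNEL_ lines in the C file become unnecessary because kbuild supplies the right definitions and include paths):

    # Minimal kbuild Makefile sketch for an out-of-tree module built from hello-1.c
    obj-m += hello-1.o

    all:
    	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

    clean:
    	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean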

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >