Search Results

Search found 881 results on 36 pages for 'audit trail'.

Page 32/36 | < Previous Page | 28 29 30 31 32 33 34 35 36  | Next Page >

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2

    - by pinaldave
    Social media has created an always-connected world for us. Recently I enrolled myself as a student to learn new technologies. I had decided to focus on learning and not to stay connected to the internet while I was in the learning sessions. On the second day of the event, after the learning was over, I noticed lots of notifications from my friend on my various social media handles. He had reached out to me on Twitter, Facebook, Google+, LinkedIn and YouTube, as well as by SMS, WhatsApp, Skype messages and, not to forget, a few emails. I called him up right away. The problem was very unusual – let us hear it in his own words. “Pinal – we are in big trouble and we are not able to figure out what is going on. Our product details table is continuously losing rows. Lots of rows have disappeared since this morning and we are unable to find out why the rows are getting deleted. We have made sure that no DELETE command is executed on the table. As a matter of fact, we have removed the code from every single place that references the table. We have done so many crazy things out of desperation, but no luck. The rows are continuously being deleted in a random pattern. Do you think we have an intrusion or virus problem?” After describing the problem he pasted a few rants about why I was not available during the day. I think it would not be smart to post those exact words here (for many reasons). Well, my immediate reaction was to get online with him. The problem was new to him and his team had been all out trying to fix it since morning. As he said, they had done quite a lot out of desperation. I started asking questions about auditing, policy management and profiling the data. Very soon I realized that the problem was not as advanced as it looked. There was no intrusion, SQL injection or virus issue. Long story short – it was a very simple case of a foreign key created with ON UPDATE CASCADE and ON DELETE CASCADE.  CASCADE allows deletions or updates of key values to cascade through the tables defined to have foreign key relationships that can be traced back to the table on which the modification is performed. ON DELETE CASCADE specifies that if an attempt is made to delete a row with a key referenced by foreign keys in existing rows in other tables, all rows containing those foreign keys are also deleted. ON UPDATE CASCADE specifies that if an attempt is made to update a key value in a row, where the key value is referenced by foreign keys in existing rows in other tables, all of the foreign key values are also updated to the new value specified for the key. (Reference: BOL) In simple words – when ON DELETE CASCADE is specified and data is deleted from Table A, any rows that reference it through a foreign key in another table are deleted as well. In my friend’s case, they had two tables, Products and ProductDetails, with foreign key referential integrity on the product id between the two tables. As the fall season was coming up they were updating their catalogue, and while doing so they were deleting products which were no longer available. Because the changes were cascading, the corresponding rows were also deleted from the other table. This is CORRECT. As a matter of fact, there is no error of any kind and SQL Server is behaving exactly how it should behave. The problem was in the understanding and the inappropriate implementation of the business logic.
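    To make the behaviour concrete, here is a minimal T-SQL sketch of a cascading foreign key between the two tables mentioned above (the column layout is illustrative, not the actual schema from the story):

    ```sql
    CREATE TABLE Products (
        ProductID INT PRIMARY KEY,
        Name NVARCHAR(100) NOT NULL
    );

    CREATE TABLE ProductDetails (
        DetailID INT PRIMARY KEY,
        ProductID INT NOT NULL,
        Detail NVARCHAR(200) NOT NULL,
        CONSTRAINT FK_ProductDetails_Products
            FOREIGN KEY (ProductID) REFERENCES Products (ProductID)
            ON UPDATE CASCADE
            ON DELETE CASCADE  -- deleting a product silently removes its detail rows
    );

    INSERT INTO Products VALUES (1, N'Widget'), (2, N'Gadget');
    INSERT INTO ProductDetails VALUES (10, 1, N'Widget detail'), (20, 2, N'Gadget detail');

    -- Only Products is touched here, yet the matching ProductDetails row disappears as well
    DELETE FROM Products WHERE ProductID = 2;

    SELECT * FROM ProductDetails;  -- returns only the row for ProductID = 1
    ```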
    What they needed were Product Master, Current Product Catalogue, and Product Order Details History tables. However, they were using only two tables, and the relation between them had been built with foreign keys without a proper understanding of the consequences. If they were going to stay with only two tables, they should have used a soft delete, which does not actually delete the record but just hides it in the original product table. That workaround would have saved them from the cascading delete issue. I will be writing a detailed post on the design implications in the future, as the three lines above cannot cover every issue related to the design, and it is also outside the scope of this blog post. More about designing in future blog posts. Once they understood their mistake they were happy that there was no intrusion, but trust me, sometimes we are our own enemy and this is a great example of it. In tomorrow’s blog post we will go over their code and workarounds. Feel free to share your opinions, experiences and comments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
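    For illustration only, a soft delete along the lines suggested above could look like this; the IsDiscontinued flag and the view name are assumptions, not part of the original post:

    ```sql
    -- Flag products instead of deleting them, so no cascade ever fires
    ALTER TABLE Products ADD IsDiscontinued BIT NOT NULL DEFAULT 0;
    GO

    -- "Deleting" from the catalogue becomes an update that leaves child rows untouched
    UPDATE Products SET IsDiscontinued = 1 WHERE ProductID = 2;
    GO

    -- The live catalogue reads through a view that hides discontinued products
    CREATE VIEW CurrentProductCatalogue AS
        SELECT ProductID, Name
        FROM Products
        WHERE IsDiscontinued = 0;
    GO
    ```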

    Read the article

  • .NET Security Part 4

    - by Simon Cooper
    Finally, in this series, I am going to cover some of the security issues that can trip you up when using sandboxed appdomains. DISCLAIMER: I am not a security expert, and this is by no means an exhaustive list. If you actually are writing security-critical code, then get a proper security audit of your code by a professional. The examples below are just illustrations of the sort of things that can go wrong. 1. AppDomainSetup.ApplicationBase The most obvious one is the issue covered in the MSDN documentation on creating a sandbox, in step 3 – the sandboxed appdomain has the same ApplicationBase as the controlling appdomain. So let's explore what happens when they are the same, and an exception is thrown. In the sandboxed assembly, Sandboxed.dll (IPlugin is an interface in a partially-trusted assembly, with a single method, MethodToDoThings, on it): public class UntrustedPlugin : MarshalByRefObject, IPlugin { // implements IPlugin.MethodToDoThings() public void MethodToDoThings() { throw new EvilException(); } } [Serializable] internal class EvilException : Exception { public override string ToString() { // show we have read access to C:\Windows // read the first 5 directories Console.WriteLine("Pwned! Mwuahahah!"); foreach (var d in Directory.EnumerateDirectories(@"C:\Windows").Take(5)) { Console.WriteLine(d); } return base.ToString(); } } And in the controlling assembly: // what can possibly go wrong? AppDomainSetup appDomainSetup = new AppDomainSetup { ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase }; // only grant permissions to execute // and to read the application base, nothing else PermissionSet restrictedPerms = new PermissionSet(PermissionState.None); restrictedPerms.AddPermission( new SecurityPermission(SecurityPermissionFlag.Execution)); restrictedPerms.AddPermission( new FileIOPermission(FileIOPermissionAccess.Read, appDomainSetup.ApplicationBase)); restrictedPerms.AddPermission( new FileIOPermission(FileIOPermissionAccess.PathDiscovery, appDomainSetup.ApplicationBase)); // create the sandbox AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, appDomainSetup, restrictedPerms); // execute UntrustedPlugin in the sandbox // don't crash the application if the sandbox throws an exception IPlugin o = (IPlugin)sandbox.CreateInstanceFromAndUnwrap("Sandboxed.dll", "UntrustedPlugin"); try { o.MethodToDoThings(); } catch (Exception e) { Console.WriteLine(e.ToString()); } And the result? Oops. We've allowed a class that should be sandboxed to execute code with fully-trusted permissions! How did this happen? Well, the key is the exact meaning of the ApplicationBase property: the application base directory is where the assembly manager begins probing for assemblies. When EvilException is thrown, it propagates from the sandboxed appdomain into the controlling assembly's appdomain (as it's marked as Serializable). To deserialize the exception, the CLR has to load the assembly that defines it. Since the controlling appdomain's ApplicationBase directory contains the sandboxed assembly, the CLR finds and loads Sandboxed.dll into the full-trust appdomain, and the evil code in EvilException.ToString is executed with full trust. So the problem isn't exactly that the sandboxed appdomain's ApplicationBase is the same as the controlling appdomain's, it's that the sandboxed dll was in a place where the controlling appdomain could find it as part of the standard assembly resolution mechanism.
The sandbox then forced the assembly to load in the controlling appdomain by throwing a serializable exception that propagated outside the sandbox. The easiest fix for this is to keep the sandbox ApplicationBase well away from the ApplicationBase of the controlling appdomain, and don’t allow the sandbox permissions to access the controlling appdomain’s ApplicationBase directory. If you do this, then the sandboxed assembly can’t be accidentally loaded into the fully-trusted appdomain, and the code can’t be executed. If the plugin does try to induce the controlling appdomain to load an assembly it shouldn’t, a SerializationException will be thrown when it tries to load the assembly to deserialize the exception, and no damage will be done. 2. Loading the sandboxed dll into the application appdomain As an extension of the previous point, you shouldn’t directly reference types or methods in the sandboxed dll from your application code. That loads the assembly into the fully-trusted appdomain, and from there code in the assembly could be executed. Instead, pull out methods you want the sandboxed dll to have into an interface or class in a partially-trusted assembly you control, and execute methods via that instead (similar to the example above with the IPlugin interface). If you need to have a look at the assembly before executing it in the sandbox, either examine the assembly using reflection from within the sandbox, or load the assembly into the Reflection-only context in the application’s appdomain. The code in assemblies in the reflection-only context can’t be executed, it can only be reflected upon, thus protecting your appdomain from malicious code. 3. Incorrectly asserting permissions You should only assert permissions when you are absolutely sure they’re safe. For example, this method allows a caller read-access to any file they call this method with, including your documents, any network shares, the C:\Windows directory, etc: [SecuritySafeCritical] public static string GetFileText(string filePath) { new FileIOPermission(FileIOPermissionAccess.Read, filePath).Assert(); return File.ReadAllText(filePath); } Be careful when asserting permissions, and ensure you’re not providing a loophole sandboxed dlls can use to gain access to things they shouldn’t be able to. Conclusion Hopefully, that’s given you an idea of some of the ways it’s possible to get past the .NET security system. As I said before, this post is not exhaustive, and you certainly shouldn’t base any security-critical applications on the contents of this blog post. What this series should help with is understanding the possibilities of the security system, and what all the security attributes and classes mean and what they are used for, if you were to use the security system in the future.
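    A minimal sketch of the fix described in point 1 – give the sandbox its own ApplicationBase and only grant file permissions on that directory. The plugin directory location and helper name are assumptions for illustration, not code from the original post:

    ```csharp
    using System;
    using System.IO;
    using System.Security;
    using System.Security.Permissions;

    static AppDomain CreateSandbox()
    {
        // Keep untrusted assemblies in their own directory, well away from the host's ApplicationBase
        string pluginDir = Path.Combine(Path.GetTempPath(), "plugins");

        AppDomainSetup sandboxSetup = new AppDomainSetup
        {
            ApplicationBase = pluginDir  // NOT the controlling appdomain's ApplicationBase
        };

        PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);
        restrictedPerms.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
        // Read/path-discovery only on the plugin directory, not on the host's directory
        restrictedPerms.AddPermission(new FileIOPermission(FileIOPermissionAccess.Read, pluginDir));
        restrictedPerms.AddPermission(new FileIOPermission(FileIOPermissionAccess.PathDiscovery, pluginDir));

        // If a plugin now throws a serializable exception defined in its own assembly, the
        // controlling appdomain cannot resolve that assembly, so a SerializationException is
        // raised instead of the untrusted code running with full trust.
        return AppDomain.CreateDomain("Sandbox", null, sandboxSetup, restrictedPerms);
    }
    ```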

    Read the article

  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What’s the most safe and efficient way to get the job done? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them: I have old CDs/DVDs which have some backups, these backups have some work and personal files. I always had problems when I needed to physically destroy them to make sure no one will reuse them. Breaking them is dangerous, pieces could fly fast and may cause harm. Scratching them badly is what I always do but it takes long time and I managed to read some of the data in the scratched CDs/DVDs. What’s the way to physically destroy a CD/DVD safely? How should he approach the problem? The Answer SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution: The proper way is to get yourself a shredder that also handles cds – look online for cd shredders. This is the right option if you end up doing this routinely. I don’t do this very often – For small scale destruction I favour a pair of tin snips – they have enough force to cut through a cd, yet are blunt enough to cause small cracks along the sheer line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing along the plastic, and these work magnificently. Do it in a bag, cause this generates sparkly bits. There’s also the fun, and probably dangerous way – find yourself an old microwave, and microwave them. I would suggest doing this in a well ventilated area of course, and not using your mother’s good microwave. There’s a lot of videos of this on YouTube – such as this (who’s done this in a kitchen… and using his mom’s microwave). This results in a very much destroyed cd in every respect. If I was an evil hacker mastermind, this is what I’d do. The other options are better for the rest of us. Another contributor, Keltari, notes that the only safe (and DoD approved) way to dispose of data is total destruction: The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase “Good enough for government work” does not apply – depending on which part of the government. It is technically possible to recover data from shredded/broken/etc CDs and DVDs. If you have a microscope handy, put the disc in it and you can see the pits. The disc can be reassembled and the data can be reconstructed — minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While technically, that would completely eliminate the data, it leaves no record of the disc having existed. And in some places, like DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data, yet keep the label surface of the disc intact. Basically, it’s no different than using sandpaper on the writable side, till the data is gone. Have something to add to the explanation? Sound off in the the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here. 
    

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of the project. To reiterate: does it seem sane to break 1NF in this way? I am not looking for debugging of the code so much as for where people see problems that this might lead to. The Problem In double entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits, or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex. The Write Model The journal entries are append only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task. My Proposed Solution My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.
    CREATE TABLE journal_line ( entry_id bigserial primary key, account_id int not null references account(id), journal_entry_id bigint not null, -- adding references later amount numeric not null ); I would then add "table methods" to extract debits and credits for reporting purposes: CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END; $$; CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END; $$; Then the journal entry table (simplified for this example): CREATE TABLE journal_entry ( entry_id bigserial primary key, -- no natural keys :-( journal_id int not null references journal(id), date_posted date not null, reference text not null, description text not null, journal_lines journal_line[] not null ); Then a table method and check constraints: CREATE OR REPLACE FUNCTION running_total(journal_entry) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$ SELECT sum(amount) FROM unnest($1.journal_lines); $$; ALTER TABLE journal_entry ADD CHECK ((journal_entry.running_total) = 0); ALTER TABLE journal_line ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id); And finally we'd have a breakout trigger: CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE plpgsql AS $$ BEGIN IF TG_OP = 'INSERT' THEN INSERT INTO journal_line (journal_entry_id, account_id, amount) SELECT NEW.entry_id, account_id, amount FROM unnest(NEW.journal_lines); RETURN NEW; ELSE RAISE EXCEPTION 'Operation Not Allowed'; END IF; END; $$; And finally: CREATE TRIGGER je_breakout_trigger AFTER INSERT OR UPDATE OR DELETE ON journal_entry FOR EACH ROW EXECUTE PROCEDURE je_breakout(); Of course the example above is simplified. There will be a status table that tracks approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane? Standard Solutions? In getting to this point I have to say I have looked at four different current ERP solutions to this problem: (1) Represent every line item as a debit and a credit against different accounts. (2) Use foreign keys against the line item table to enforce an eventual running total of 0. (3) Use constraint triggers in PostgreSQL. (4) Force all validation solely through the app logic. My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent and so it strikes me as being difficult to work with in the future. The second strikes me as very complex, requiring a series of constraints and foreign keys against self to make it work; it is hard to sort out, at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this approach in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution....
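    For comparison, option (3) from the list of standard solutions above – a PostgreSQL constraint trigger – is roughly sketched below. It keeps journal_line in 1NF and defers the balance check to commit time; the function and trigger names are illustrative and the code is untested:

    ```sql
    CREATE OR REPLACE FUNCTION check_entry_balanced() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        IF (SELECT sum(amount) FROM journal_line
             WHERE journal_entry_id = NEW.journal_entry_id) <> 0 THEN
            RAISE EXCEPTION 'journal entry % is not balanced', NEW.journal_entry_id;
        END IF;
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$;

    -- Deferred to commit time so all lines of an entry can be inserted first
    CREATE CONSTRAINT TRIGGER journal_line_balanced
        AFTER INSERT ON journal_line
        DEFERRABLE INITIALLY DEFERRED
        FOR EACH ROW EXECUTE PROCEDURE check_entry_balanced();
    ```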

    Read the article

  • ADF Business Components

    - by Arda Eralp
    ADF Business Components and JDeveloper simplify the development, delivery, and customization of business applications for the Java EE platform. With ADF Business Components, developers aren't required to write the application infrastructure code required by the typical Java EE application to: Connect to the database Retrieve data Lock database records Manage transactions   ADF Business Components addresses these tasks through its library of reusable software components and through the supporting design time facilities in JDeveloper. Most importantly, developers save time using ADF Business Components since the JDeveloper design time makes typical development tasks entirely declarative. In particular, JDeveloper supports declarative development with ADF Business Components to: Author and test business logic in components which automatically integrate with databases Reuse business logic through multiple SQL-based views of data, supporting different application tasks Access and update the views from browser, desktop, mobile, and web service clients Customize application functionality in layers without requiring modification of the delivered application The goal of ADF Business Components is to make the business services developer more productive.   ADF Business Components provides a foundation of Java classes that allow your business-tier application components to leverage the functionality provided in the following areas: Simplifying Data Access Design a data model for client displays, including only necessary data Include master-detail hierarchies of any complexity as part of the data model Implement end-user Query-by-Example data filtering without code Automatically coordinate data model changes with business services layer Automatically validate and save any changes to the database   Enforcing Business Domain Validation and Business Logic Declaratively enforce required fields, primary key uniqueness, data precision-scale, and foreign key references Easily capture and enforce both simple and complex business rules, programmatically or declaratively, with multilevel validation support Navigate relationships between business domain objects and enforce constraints related to compound components   Supporting Sophisticated UIs with Multipage Units of Work Automatically reflect changes made by business service application logic in the user interface Retrieve reference information from related tables, and automatically maintain the information when the user changes foreign-key values Simplify multistep web-based business transactions with automatic web-tier state management Handle images, video, sound, and documents without having to use code Synchronize pending data changes across multiple views of data Consistently apply prompts, tooltips, format masks, and error messages in any application Define custom metadata for any business components to support metadata-driven user interface or application functionality Add dynamic attributes at runtime to simplify per-row state management   Implementing High-Performance Service-Oriented Architecture Support highly functional web service interfaces for business integration without writing code Enforce best-practice interface-based programming style Simplify application security with automatic JAAS integration and audit maintenance "Write once, run anywhere": use the same business service as plain Java class, EJB session bean, or web service   Streamlining Application Customization Extend component functionality after delivery without modifying source 
    code. Globally substitute delivered components with extended ones without modifying the application.   ADF Business Components implements the business service through the following set of cooperating components: Entity object An entity object represents a row in a database table and simplifies modifying its data by handling all data manipulation language (DML) operations for you. These are basically your one-to-one representations of database tables. Each table in the database will have one and only one EO. The EO contains the mapping between columns and attributes. EOs also contain the business logic and validation. These are your core data services. They are responsible for updating, inserting and deleting records. The Attributes tab displays the actual mapping between attributes and columns; the mapping has the following fields: Name : contains the name of the attribute we expose in our data model. Type : defines the data type of the attribute in our application. Column : specifies the column to which we want to map the attribute. Column Type : contains the type of the column in the database.   View object A view object represents a SQL query. You use the full power of the familiar SQL language to join, filter, sort, and aggregate data into exactly the shape required by the end-user task. The attributes in the View Objects actually come from the Entity Objects. In the end the VO will generate a query, but you basically build a VO by selecting which EOs need to participate in the VO and which attributes of those EOs you want to use. That's why you have the Entity Usage column, so you can see the relation between VO and EO. In the Query tab you can clearly see the query that will be generated for the VO. At this stage we don't need it and just use it for information purposes. In later stages we might use it. Application module An application module is the controller of your data layer. It is responsible for keeping hold of the transaction. It exposes the data model to the view layer. You expose the VOs through the Application Module. This is the abstraction of your data layer which you want to show to the outside world. It defines an updatable data model and top-level procedures and functions (called service methods) related to a logical unit of work tied to an end-user task. While the base components handle all the common cases through built-in behavior, customization is always possible and the default behavior provided by the base components can be easily overridden or augmented. When you create EOs, a foreign key will be translated into an association in our model. It defines the type of relation, which side is the master and which is the child, and what the visibility of the association looks like. A similar concept exists to identify relations between view objects. These are called view links. They are almost identical to associations, except that a view link is based upon attributes defined in the view object. It can also be based upon an association. Here's a short summary: Entity Objects: representations of tables. Associations: relations between EOs; representations of foreign keys. View Objects: the logical model. View Links: relationships between view objects. Application Module: the interface to your application.

    Read the article

  • What to leave when you're leaving

    - by BuckWoody
    There's already a post on this topic - sort of. I read this entry, where the author did a good job on a few steps, but I found that a few other tips might be useful, so if you want to check that one out and then this post, you might be able to put together your own plan for when you leave your job.  I once took over the system administrator (of which the Oracle and SQL Server servers were a part) at a mid-sized firm. The outgoing administrator had about a two- week-long scheduled overlap with me, but was angry at the company and told me "hey, I know this is going to be hard on you, but I want them to know how important I was. I'm not telling you where anything is or what the passwords are. Good luck!" He then quit that day. It took me about three days to find all of the servers and crack the passwords. Yes, the company tried to take legal action against the guy and all that, but he moved back to his home country and so largely got away with it. Obviously, this isn't the way to leave a job. Many of us have changed jobs in the past, and most of us try to be very professional about the transition to a new team, regardless of the feelings about a particular company. I've been treated badly at a firm, but that is no reason to leave a mess for someone else. So here's what you should put into place at a minimum before you go. Most of this is common sense - which of course isn't very common these days - and another good rule is just to ask yourself "what would I want to know"? The article I referenced at the top of this post focuses on a lot of documentation of the systems. I think that's fine, but in actuality, I really don't need that. Even with this kind of documentation, I still perform a full audit on the systems, so in the end I create my own system documentation. There are actually only four big items I need to know to get started with the systems: 1. Where is everything/everybody?The first thing I need to know is where all of the systems are. I mean not only the street address, but the closet or room, the rack number, the IU number in the rack, the SAN luns, all that. A picture here is worth a thousand words, which is why I really like Visio. It combines nice graphics, full text and all that. But use whatever you have to tell someone the physical locations of the boxes. Also, tell them the physical location of the folks in charge of those boxes (in case you aren't) or who share that responsibility. And by "where" in this case, I mean names and phones.  2. What do they do?For both the servers and the people, tell them what they do. If it's a database server, detail what each database does and what application goes to that, and who "owns" that application. In my mind, this is one of hte most important things a Data Professional needs to know. In the case of the other administrtors or co-owners, document each person's responsibilities.   3. What are the credentials?Logging on/in and gaining access to the buildings are things that the new Data Professional will need to do to successfully complete their job. This means service accounts, certificates, all of that. The first thing they should do, of course, is change the passwords on all that, but the first thing they need is the ability to do that!  4. What is out of the ordinary?This is the most tricky, and perhaps the next most important thing to know. Did you have to use a "special" driver for that video card on server X? 
    Is the person that co-owns an application with you mentally unstable (like me), or do they have special needs, like "don't talk to Buck before he's had coffee; nothing will make any sense"? Do you have service pack requirements for a specific setup? Write all that down. Anything that took you a day or longer to make work is probably a candidate here. This is my short list - anything you care to add?

    Read the article

  • JMaghreb 2012 Trip Report

    - by arungupta
    JMaghreb is the inaugural Java conference organized by Morocco JUG. It is the biggest Java conference in Maghreb (5 countries in North West Africa). Oracle was the exclusive platinum sponsor with several others. The registrations had to be closed at 1412 for the free conference and several folks were already on the waiting list. Rabat with 531 registrations and Casablanca with 426 were the top cities. Some statistics ... 850+ attendees over 2 days, 500+ every day 30 sessions were delivered by 18 speakers from 10 different countries 10 sessions in French and 20 in English 6 of the speakers spoke at JavaOne 2012 8 will be at Devoxx Attendees from 5 different countries and 57 cities in Morocco 40.9% qualified them as professional and rest as students Topics ranged from HTML5, Java EE 7, ADF, JavaFX, MySQL, JCP, Vaadin, Android, Community, JCP Java EE 6 hands-on lab was sold out within 7 minutes and JavaFX in 12 minutes I gave the keynote along with Simon Ritter which was basically a recap of the Strategy and Technical keynotes presented at JavaOne 2012. An informal survey during the keynote showed the following numbers: 25% using NetBeans, 90% on Eclipse, 3 on JDeveloper, 1 on IntelliJ About 10 subscribers to free online Java magazine. This digital magazine is a comprehensive source of information for everything Java - subscribe for free!! About 10-15% using Java SE 7. Download JDK 7 and get started today! Even JDK 8 builds have been available for a while now. My second talk explained the core concepts of WebSocket and how JSR 356 is providing a standard API to build WebSocket-driven applications in Java EE 7. TOTD #183 explains how you can easily get started with WebSocket in GlassFish 4. The complete slide deck is available: Next day started with a community keynote by Sonya Barry. Some of us live the life of JCP, JSR, EG, EC, RI, etc every day, but not every body is. To address that, Sonya prepared an excellent introductory presentation providing an explanation of these terms and how java.net infrastructure supports Java development. The registration for the lab showed there is a definite demand for these technologies in this part of the world. I delivered the Java EE 6 hands-on lab to a packed room of about 120 attendees. Most of the attendees were able to progress and follow the lab instructions. Some of the attendees did not have a laptop but were taking extensive notes on paper notepads. Several attendees were already using Java EE 6 in their projects and typically they are the ones asking deep dive questions. Also gave out three copies of my recently released Java EE 6 Pocket Guide and new GlassFish t-shirts. Definitely feels happy to coach ~120 more Java developers learn standards-based enterprise Java programming. I also participated in a JCP BoF along with Werner, Sonya, and Badr. Adotp-a-JSR, java.net infrastructure, how to file a JSR, what is an RI, and other similar topics were discussed in a candid manner. You can follow @JMaghrebConf or check out their facebook page. java.net published a timely conversation with Badr El Houari - the fearless leader of the Morocco JUG team. Did you know that Morocco JUG stood for JCP EC elections (ADD LINK) ? Even though they did not get elected but did fairly well. Now some sample tweets from #JMaghreb ... #JMaghreb is over. Impressive for a first edition! Thanks @badrelhouari and all the @MoroccoJUG team ! Since you @speakjava : System.out.println("Thank you so much dear Tech Evangelist ! The JavaFX was pretty amazing !!! 
"); #JMaghreb @YounesVendetta @arungupta @JMaghrebConf Right ! hope he will be back to morocco again and again .. :) @Alji_ @arungupta @JMaghrebConf That dude is a genius ;) Put it on your wall :p @arungupta rocking Java EE 6 at @JMaghrebConf #Java #JavaEE #JMaghreb http://t.co/isl0Iq5p @sonyabarry you are an awesome speaker ;-) #JMaghreb rich more than 550 attendees in day one. Expecting more tomorrow! ongratulations @badrelhouari the organisation was great! The talks were pretty interesting, and the turnout was surprising at #JMaghreb! #JMaghreb is truly awesome... The speakers are unbelievable ! #JavaFX... Just amazing #JMaghreb Charmed by the talk about #javaFX ( nodes architecture, MVC, Lazy loading, binding... ) gotta start using it intead of SWT. #JMaghreb JavaFX is killing JFreeChart. It supports Charts a lot of kind of them ... #JMaghreb The british man is back #JMaghreb I do like him!! #JMaghreb @arungupta rocking @JMaghrebConf. pic.twitter.com/CNohA3PE @arungupta Great talk about the future of Java EE (JEE 7 & JEE 8) Thank you. #JMaghreb JEE7 more mooore power , leeess less code !! #JMaghreb They are simplifying the existing API for Java Message Service 2.0 #JMaghreb good to know , the more the code is simplified the better ! The Glassdoor guy #arungupta is doing it RIGHT ! #JMaghreb Great presentation of The Future of the Java Platform: Java EE 7, Java SE 8 & Beyond #jMaghreb @arungupta is a great Guy apparently #JMaghreb On a personal front, the hotel (Soiftel Jardin des Roses) was pretty nice and the location was perfect. There was a 1.8 mile loop dirt trail right next to it so I managed to squeeze some runs before my upcoming marathon. Also enjoyed some great Moroccan cuisine - Couscous, Tajine, mint tea, and moroccan salad. Visit to Kasbah of the Udayas, Hassan II (one of the tallest mosque in the world), and eating in a restaurant in a kasbah are some of the exciting local experiences. Now some pictures from the event (and around the city) ... And the complete album: Many thanks to Badr, Faisal, and rest of the team for organizing a great conference. They are already thinking about how to improve the content, logisitics, and flow for the next year. I'm certainly looking forward to JMaghreb 2.0 :-)

    Read the article

  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts.  To start this out, I want to talk about how I use VCS in a team environment.  These come in a series of tips or best practices that I try to follow.  Note: This list is subject to change in the future. Always use some form of version control for all aspects of software development. Development is an evolution.  Looking back at where we were is an invaluable asset in that process.  This includes data schemas and documentation. Reverting / reapplying changes is absolutely critical for efficient development. The tools I use: Code: Hg (preferred), SVN Database: TSqlMigrations Documents: Sometimes in code repository, also SharePoint with versioning Always tag a commit (changeset) with comments This is a quick way to describe to someone else (or your future self) what the changeset entails. Be brief but courteous. One or two sentences about the task, not the actual changes. Use precommit hooks or setup the central repository to reject changes without comments. Link changesets to documentation If your project management system integrates with version control, or has a way to externally reference stories, tasks etc then leave a reference in the commit.  This helps locate more information about the commit and/or related changesets. It’s best to have a precommit hook or system that requires this information, otherwise it’s easy to forget. Ability to work offline is required, including commits and history Yes this requires a DVCS locally but doesn’t require the central repository to be a DVCS.  I prefer to use either Git or Hg but if it isn’t possible to migrate the central repository, it’s still possible for a developer to push / pull changes to that repository from a local Hg or Git repository. Never lock resources (files) in a central repository… Rude! We have merge tools for a reason, merging sucked a long time ago, it doesn’t anymore… stop locking files! This is unproductive, rude and annoying to other team members. Always review everything in your commit. Never ever commit a set of files without reviewing the changes in each. Never add a file without asking yourself, deep down inside, does this belong? If you leave to make changes during a review, start the review over when you come back.  Never assume you didn’t touch a file, double check. This is another reason why you want to avoid large, infrequent commits. Requirements for tools Quickly show pending changes for the entire repository. Default action for a resource with pending changes is a diff. Pluggable diff & merge tool Produce a unified diff or a diff of all changes.  This is helpful to bulk review changes instead of opening each file. The central repository is not your own personal dump yard.  Breaking this rule is a sure fire way to get the F bomb dropped in front of your name, multiple times. If you turn on Visual Studio’s commit on closing studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again. Commit (integrate) to the central repository / branch frequently I try to do this before leaving each day, especially without a DVCS.  One never knows when they might need to work from remote the following day. Never commit commented out code If it isn’t needed anymore, delete it! If you aren’t sure if it might be useful in the future, delete it! This is why we have history. 
If you don’t know why it’s commented out, figure it out and then either uncomment it or delete it. Don’t commit build artifacts, user preferences and temporary files. Build artifacts do not belong in VCS, everything in them is present in the code. (ie: bin\*, obj\*, *.dll, *.exe) User preferences are your settings, stop overriding my preferences files! (ie: *.suo and *.user files) Most tools allow you to ignore certain files and Hg/Git allow you to version this as an ignore file.  Set this up as a first step when creating a new repository! Be polite when merging unresolved conflicts. Count to 10, cuss, grab a stress ball and realize it’s not a big deal.  Actually, it’s an opportunity to let you know that someone else is working in the same area and you might want to communicate with them. Following the other rules, especially committing frequently, will reduce the likelihood of this. Suck it up, we all have to deal with this unintended consequence at times.  Just be careful and GET FAMILIAR with your merge tool.  It’s really not as scary as you think.  I personally prefer KDiff3 as its merging capabilities rock. Don’t blindly merge and then blindly commit your changes, this is rude and unprofessional.  Make sure you understand why the conflict occurred and which parts of the code you want to keep.  Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests) Become intimate with your version control system and the tools you use with it. Avoid trial and error as much as is possible, sit down and test the tool out, read some tutorials etc.  Create test repositories and walk through common scenarios. Find the most efficient way to do your work.  These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of both Tortoise Hg and hg cli to get the job efficiently. Always tag releases Create a way to find a given release, whether this be in comments or an explicit tag / branch.  This should be readily discoverable. Create release branches to patch bugs and then merge the changes back to other development branch(es). If using feature branches, strive for periodic integrations. Feature branches often cause forked code that becomes irreconcilable.  Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into.  This will avoid merge conflicts in the future. Feature branches are best when they are mutually exclusive of active development in other branches. Use and abuse local commits , at least one per task in a story. This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete. Never commit a broken build or failing tests to the central repository. It’s ok for a local commit to break the build and/or tests.  In fact, I encourage this if it helps group the changes more logically.  This is one of the main reasons I got excited about DVCS, when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes). If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved. 
    An exception is when maintaining code bases that require shotgun surgery; in that case, it's a design smell :) Don't version sensitive information, especially usernames / passwords.   There is one area I haven't found a solution I like yet: versioning 3rd party libraries and/or code.  I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries.  Please feel free to share your ideas about this below.    -Wes

    Read the article

  • DRY and SRP

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/06/11/dry-and-srp.aspxKent Beck’s XP Simplicity Rules (aka Four Rules of Simple Design) are a prioritized list of rules that when applied to your code generally yield a great design.  As you’ll see from the above link the list has slightly evolved over time.  I find today they are usually listed as: All Tests Pass Don’t Repeat Yourself (DRY) Express Intent Minimalistic These are prioritized.  If your code doesn’t work (rule 1) then everything else is forfeit.  Go back to rule one and get the code working before worrying about anything else. Over the years the community have debated whether the priority of rules 2 and 3 should be reversed.  Some say a little duplication in the code is OK as long as it helps express intent.  I’ve debated it myself.  This recent post got me thinking about this again, hence this post.   I don’t think it is fair to compare “Expressing Intent” against “DRY”.  This is a comparison of apples to oranges.  “Expressing Intent” is a principal of code quality.  “Repeating Yourself” is a code smell.  A code smell is merely an indicator that there might be something wrong with the code.  It takes further investigation to determine if a violation of an underlying principal of code quality has actually occurred. For example “using nouns for method names”, “using verbs for property names”, or “using Booleans for parameters” are all code smells that indicate that code probably isn’t doing a good job at expressing intent.  They are usually very good indicators.  But what principle is the code smell of Duplication pointing to and how good of an indicator is it? Duplication in the code base is bad for a couple reasons.  If you need to make a change and that needs to be made in a number of locations it is difficult to know if you have caught all of them.  This can lead to bugs if/when one of those locations is overlooked.  By refactoring the code to remove all duplication there will be left with only one place to change, thereby eliminating this problem. With most projects the code becomes the single source of truth for a project.  If a production code base is inconsistent with a five year old requirements or design document the production code that people are currently living with is usually declared as the current reality (or truth).  Requirement or design documents at this age in a project life cycle are usually of little value. Although comparing production code to external documentation is usually straight forward, duplication within the code base muddles this declaration of truth.  When code is duplicated small discrepancies will creep in between the two copies over time.  The question then becomes which copy is correct?  As different factions debate how the software should work, trust in the software and the team behind it erodes. The code smell of Duplication points to a violation of the “Single Source of Truth” principle.  Let me define that as: A stakeholder’s requirement for a software change should never cause more than one class to change. Violation of the Single Source of Truth principle will always result in duplication in the code.  However, the inverse is not always true.  Duplication in the code does not necessarily indicate that there is a violation of the Single Source of Truth principle. To illustrate this, let’s look at a retail system where the system will (1) send a transaction to a bank and (2) print a receipt for the customer.  
    Although these are two separate features of the system, they are closely related. The reason for printing the receipt is usually to provide an audit trail back to the bank transaction. Both features use the same data: amount charged, account number, transaction date, customer name, retail store name, etcetera. Because both features use much of the same data, there is likely to be a lot of duplication between them. This duplication can be removed by making both features use the same data access layer. Then the divergent requirements start coming. The receipt stakeholder wants a change so that the account number has the last few digits masked out to protect the customer's privacy. That can be solved with a small IF statement whilst still eliminating all duplication in the system. Then the bank wants to take a picture of the customer as well as capture their signature and/or PIN number for enhanced security. Then the receipt owner wants to pull data from a completely different system to report the customer's loyalty program point total. After a while you realize that the two stakeholders have somewhat similar, but ultimately different responsibilities. They have their own reasons for pulling the data access layer in different directions. Then it dawns on you, the Single Responsibility Principle: There should never be more than one reason for a class to change. In this example we have two stakeholders giving two separate reasons for the data access class to change. It is a clear violation of the Single Responsibility Principle. That's a problem because it can often lead the project owner to pit the two stakeholders against each other in a vain attempt to get them to work out a mutual single source of truth. But that doesn't exist. There are two completely valid truths that the developers need to support. How can this be supported while honouring the Single Responsibility Principle? The solution is to duplicate the data access layer and let each stakeholder control their own copy. The Single Source of Truth and Single Responsibility Principles are very closely related. SST tells you when to remove duplication; SRP tells you when to introduce it. They may seem to be fighting each other, but really they are not. The key is to clearly identify the different responsibilities (or sources of truth) over a system. Sometimes there is a single person with that responsibility; other times there are many. This can be especially difficult if the same person has dual responsibilities. They might not even realize they are wearing multiple hats. In my opinion Single Source of Truth should be listed as the second rule of simple design, with Express Intent at number three. Investigation of the DRY code smell should lead to the proper application of SST, without violating SRP. When necessary, leave duplication in the system and let the class names express the different people that are responsible for controlling them. Knowing all the people with responsibilities over a system is the higher priority, because you'll need to know this before you can express it. Although it may be a code smell when there is duplication in the code, it does not necessarily mean that the coder has chosen to be expressive over DRY or that the code is bad.
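    A minimal sketch of the resolution described above (class and property names are invented for illustration): once the receipt and the bank transaction are recognised as separate responsibilities, each stakeholder owns a type that only they have a reason to change, even though the fields initially overlap:

    ```csharp
    using System;

    // Owned by the bank-integration stakeholder
    public class BankTransactionRecord
    {
        public decimal AmountCharged { get; set; }
        public string AccountNumber { get; set; }       // full number; later gains photo/PIN fields
        public DateTime TransactionDate { get; set; }
    }

    // Owned by the receipt stakeholder
    public class ReceiptRecord
    {
        public decimal AmountCharged { get; set; }
        public string MaskedAccountNumber { get; set; } // e.g. "****1234"; later gains loyalty points
        public DateTime TransactionDate { get; set; }
        public string CustomerName { get; set; }
    }
    ```

    The apparent duplication between the two classes is deliberate: each one now has exactly one reason to change.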

    Read the article

  • RiverTrail - JavaScript GPGPU Data Parallelism

    - by JoshReuben
    Where is WebCL? The Khronos WebCL working group is working on a JavaScript binding to the OpenCL standard so that HTML 5 compliant browsers can host GPGPU web apps – e.g. for image processing or physics for WebGL games - http://www.khronos.org/webcl/ . While Nokia & Samsung have some prototype WebCL APIs, Intel has one-upped them with a higher level of abstraction: RiverTrail. Intro to RiverTrail Intel Labs' JavaScript RiverTrail provides GPU-accelerated SIMD data parallelism in web applications via a familiar JavaScript programming paradigm. It extends JavaScript with simple deterministic data-parallel constructs that are translated at runtime into a low-level hardware abstraction layer. With its high-level JS API, programmers do not have to learn a new language or explicitly manage threads, orchestrate shared data synchronization or handle scheduling. It has been proposed as a draft specification to ECMA (known as an ECMA strawman). RiverTrail runs in all popular browsers (except I.E. of course). To get started, download a prebuilt version https://github.com/downloads/RiverTrail/RiverTrail/rivertrail-0.17.xpi , install Intel's OpenCL SDK http://www.intel.com/go/opencl and try out the interactive River Trail shell http://rivertrail.github.com/interactive For a video overview, see http://www.youtube.com/watch?v=jueg6zB5XaM . ParallelArray The ParallelArray type is the central component of this API & is a JS object that contains ordered collections of scalars – i.e. multidimensional uniform arrays. A shape property describes the dimensionality and size – e.g. a 2D RGBA image will have shape [height, width, 4]. ParallelArrays are immutable & fluent – they are manipulated by invoking methods on them which produce new ParallelArray objects. ParallelArray supports several constructors over arrays, functions & even the canvas. // Create an empty ParallelArray var pa = new ParallelArray(); // pa0 = <>   // Create a ParallelArray out of a nested JS array. // Note that the inner arrays are also ParallelArrays var pa = new ParallelArray([ [0,1], [2,3], [4,5] ]); // pa1 = <<0,1>, <2,3>, <4,5>>   // Create a two-dimensional ParallelArray with shape [3, 2] using the comprehension constructor var pa = new ParallelArray([3, 2], function(iv){return iv[0] * iv[1];}); // pa7 = <<0,0>, <0,1>, <0,2>>   // Create a ParallelArray from canvas.  This creates a PA with shape [w, h, 4]. var pa = new ParallelArray(canvas); // pa8 = CanvasPixelArray   ParallelArray exposes fluent API functions that take an elemental JS function for data manipulation: map, combine, scan, filter, and scatter, each of which returns a new ParallelArray. Other functions are scalar - reduce returns a scalar value & get returns the value located at a given index. The onus is on the developer to ensure that the elemental function does not defeat data parallelization optimization (avoid global var manipulation and recursion). For reduce & scan, order is not guaranteed - the onus is on the dev to provide an elemental function that is commutative and associative so that scan will be deterministic – e.g. Sum is associative, but Avg is not. map Applies a provided elemental function to each element of the source array and stores the result in the corresponding position in the result array. The map method is shape preserving & index free - it cannot inspect neighboring values. // Adding one to each element.
var source = new ParallelArray([1,2,3,4,5]); var plusOne = source.map(function inc(v) {     return v+1; }); //<2,3,4,5,6> combine Combine is similar to map, except an index is provided. This allows elemental functions to access elements from the source array relative to the one at the current index position. While the map method operates on the outermost dimension only, combine, can choose how deep to traverse - it provides a depth argument to specify the number of dimensions it iterates over. The elemental function of combine accesses the source array & the current index within it - element is computed by calling the get method of the source ParallelArray object with index i as argument. It requires more code but is more expressive. var source = new ParallelArray([1,2,3,4,5]); var plusOne = source.combine(function inc(i) { return this.get(i)+1; }); reduce reduces the elements from an array to a single scalar result – e.g. Sum. // Calculate the sum of the elements var source = new ParallelArray([1,2,3,4,5]); var sum = source.reduce(function plus(a,b) { return a+b; }); scan Like reduce, but stores the intermediate results – return a ParallelArray whose ith elements is the results of using the elemental function to reduce the elements between 0 and I in the original ParallelArray. // do a partial sum var source = new ParallelArray([1,2,3,4,5]); var psum = source.scan(function plus(a,b) { return a+b; }); //<1, 3, 6, 10, 15> scatter a reordering function - specify for a certain source index where it should be stored in the result array. An optional conflict function can prevent an exception if two source values are assigned the same position of the result: var source = new ParallelArray([1,2,3,4,5]); var reorder = source.scatter([4,0,3,1,2]); // <2, 4, 5, 3, 1> // if there is a conflict use the max. use 33 as a default value. var reorder = source.scatter([4,0,3,4,2], 33, function max(a, b) {return a>b?a:b; }); //<2, 33, 5, 3, 4> filter // filter out values that are not even var source = new ParallelArray([1,2,3,4,5]); var even = source.filter(function even(iv) { return (this.get(iv) % 2) == 0; }); // <2,4> Flatten used to collapse the outer dimensions of an array into a single dimension. pa = new ParallelArray([ [1,2], [3,4] ]); // <<1,2>,<3,4>> pa.flatten(); // <1,2,3,4> Partition used to restore the original shape of the array. var pa = new ParallelArray([1,2,3,4]); // <1,2,3,4> pa.partition(2); // <<1,2>,<3,4>> Get return value found at the indices or undefined if no such value exists. var pa = new ParallelArray([0,1,2,3,4], [10,11,12,13,14], [20,21,22,23,24]) pa.get([1,1]); // 11 pa.get([1]); // <10,11,12,13,14>
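    As a closing note on the associativity caveat above, an average cannot safely be computed by reducing with an "avg" function, but it can be derived from an associative sum. A small sketch, assuming the RiverTrail extension is available:

    ```javascript
    var source = new ParallelArray([1, 2, 3, 4, 5]);

    // Sum is commutative and associative, so reduce may combine elements in any order
    var sum = source.reduce(function plus(a, b) { return a + b; });

    // Derive the average outside the reduction instead of reducing with a non-associative "avg"
    var avg = sum / source.shape[0];

    console.log(sum, avg);  // 15 3
    ```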

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000; or a business process that handles article reviews in which a group of reviewers need to vote/approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants. There are three common patterns or usages of Human Workflow: 1) Approval Scenarios: manage documents and other transactional data through approval chains . For example: approve expense report, vacation approval, hiring approval, etc. 2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people. 3) Case Management: workflows around work management or case management. For example, processing a service request. This could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing which is unknown at design time. SOA 11g Human Workflow includes the following features: Assignment and routing of tasks to the correct users or groups. Deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task. Presentation of tasks to end users through a variety of mechanisms, including a Worklist application. Organization, filtering, prioritization and other features required for end users to productively perform their tasks. Reports, reassignments, load balancing and other features required by supervisors and business owners to manage the performance of tasks. Human Workflow Architecture The Human Workflow component is divided into 3 modules: the service interface, the task definition and the client interface module. The Service Interface handles the interaction with BPEL and other components. The Client Interface handles the presentation of task data through clients like the Worklist application, portals and notification channels. The task definition module is in charge of managing the lifecycle of a task. Who should get the task assigned? What should happen next with the task? When must the task be completed? Should the task be escalated?, etc Stages and Participants When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or in parallel. You can combine them to create any usage you require. See diagram below: Assignment and Routing There are different ways a task can be assigned and routed: Single Approver: task is assigned to a single user, group or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified with the decision. If the task is assigned to a group then once one of managers acts on it, the task is completed. Parallel : task is assigned to a set of people that must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to be a unanimous vote. 
Serial : participants must work in sequence. The most common scenario for this is management chain escalation. FYI (For Your Information) : task is assigned to participants who can view it, add comments and attachments, but cannot modify or complete the task. Task Actions The following is the list of actions that can be performed on a task: Claim : if a task is assigned to a group or multiple users, then the task must be claimed first to be able to act on it. Escalate : if the participant is not able to complete a task, he/she can escalate it. The task is reassigned to his/her manager (up one level in a hierarchy). Pushback : the task is sent back to the previous assignee. Reassign : if the participant is a manager, he/she can delegate a task to his/her reports. Release : if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete the task. Any of the other assignees can claim and complete the task. Request Information and Submit Information : use when the participant needs to supply more information or to request more information from the task creator or any of the previous assignees. Suspend and Resume : if a task is not relevant, it can be suspended. A suspension is indefinite. It does not expire until Resume is used to resume working on the task. Withdraw : if the creator of a task does not want to continue with it, for example, he wants to cancel a vacation request, he can withdraw the task. The business process determines what happens next. Renew : if a task is about to expire, the participant can renew it. The task expiration date is extended one week. Notifications Human Workflow provides a mechanism for sending notifications to participants to alert them of changes on a task. Notifications can be sent via email, telephone voice message, instant messaging (IM) or short message service (SMS). Notifications can be sent when the task status changes to any of the following: Assigned/renewed/delegated/reassigned/escalated Completed Error Expired Request Info Resume Suspended Added/Updated comments and/or attachments Updated Outcome Withdraw Other Actions (e.g. acquiring a task) Here is an example of an email notification: Worklist Application Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application, a loan agent can review loan applications or a manager can approve employee vacation requests. Through the Worklist Application users can: Perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks and define subtasks. Filter tasks view based on various criteria. Work with standard work queues, such as high priority tasks, tasks due soon and so on. Work queues allow users to create a custom view to group a subset of tasks in the worklist, for example, high priority tasks, tasks due in 24 hours, expense approval tasks and more. Define custom work queues. Gain proxy access to part of another user's tasks. Define custom vacation rules and delegation rules. Enable group owners to define task dispatching rules for shared tasks. Collect a complete workflow history and audit trail. Use digital signatures for tasks. Run reports like Unattended tasks, Tasks productivity, etc. Here is a screenshot of what the Worklist Application looks like. On the right-hand side you can see the tasks that have been assigned to the user and the task's detail. 
References Introduction to SOA Suite 11g Human Workflow Webcast Note 1452937.2 Human Workflow Information Center Using the Human Workflow Service Component 11.1.1.6 Human Workflow Samples Human Workflow APIs Java Docs

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 2 of 2 – Memory Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession to make my code as efficient as possible. Many people might not realize how much of a task or undertaking this might be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind… In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine what is the best way to implement a feature to accomplish the efficiency that I look to achieve. One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory Profiler – gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS memory profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products, in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction – Part 2 In my last post, I reviewed the feature-packed Red Gate ANTS Performance Profiler.  Separate from the Red Gate Performance Profiler is the Red Gate ANTS Memory Profiler – a simple, easy to use utility for checking how your application is handling memory management…  A tool that I wish I had had many times in the past.  This post will be focusing on the ANTS Memory Profiler and its tool set. The memory profiler has a large assortment of features just like the Performance Profiler, with the new session looking nearly exactly alike: ANTS Memory Profiler Memory profiling is not something that I have to do very often…  In the past, in the few cases I’ve had to find a memory leak in an application, I have usually just had to trace the code of the operations being performed to look for oddities…  Sadly, I have come across more undisposed/non-using’ed IDisposable objects, usually from ADO.Net, than I would ever like to see.  Support is not fun; however, using ANTS Memory Profiler makes this task easier.  For this round of testing, I am going to use the same code from my previous example, using the WPF application. This time, I will choose the ‘Profile Memory’ option from the ANTS menu in Visual Studio, which launches the solution in its currently configured state/start-up project, and then launches the ANTS Memory Profiler to help.  It prepopulates all of the fields with the current project information, and all I have to do is select the ‘Start Profiling’ option. When the window comes up, it is actually quite barren, just giving ideas on how to work the profiler.  You start by getting to the point in your application that you want to profile, and then taking a ‘Memory Snapshot’.  This performs a full garbage collection, and snapshots the managed heap.  Using the same WPF app as before, I will go ahead and take a snapshot now. As you can see, ANTS is already giving me lots of information regarding the snapshot, however this is just a snapshot.  
The whole point of the profiler is to perform an action, usually one where a memory problem is being noticed, and then take another snapshot and perform a diff between them to see what has changed.  I am going to go ahead and generate 5000 primes, and then take another snapshot: As you can see, ANTS is already giving me a lot of new information about this snapshot compared to the last.  Information such as difference in memory usage, fragmentation, class usage, etc…  If you take more snapshots, you can use the dropdown at the top to set your actual comparison snapshots. If you look beneath the timeline, you will see a breadcrumb trail showing how best to approach profiling memory using ANTS.  When you first do the comparison, you start on the Summary screen.  You can either use the charts at the bottom, or switch to the class list screen to get to the next step.  Here is the class list screen: As you can see, it lists information about all of the instances between the snapshots, as well as at the bottom giving you a way to filter by telling ANTS what your problem is.  I am going to go ahead and select the Int16[] to look at the Instance Categorizer. Using the instance categorizer, you can travel backwards to see where all of the instances are coming from.  It may be hard to see in this image, but hopefully the lightbox (click on it) will help: I can see that all of these instances are rooted to the application through the UI TextBlock control.  This image will probably be even harder to see, however using the ‘Instance Retention Graph’, you can trace an object's memory inheritance up the chain to see its roots as well.  This is a simple example, as this is simply a known element.  Usually you would be profiling an actual problem, and comparing those differences.  I know in the past, I have spotted a problem where a new context was created per page load, and it was rooted into the application through an event.  As the application began to grow, performance and reliability problems started to emerge.  A tool like this would have been a great way to identify the problem quickly. Overview Overall, I think that the Red Gate ANTS Memory Profiler is a great utility for debugging those pesky leaks.  3 Biggest Pros: Easy to use interface with lots of options for configuring profiling session Intuitive and helpful interface for drilling down from summary, to instance, to root graphs ANTS provides an API for controlling the profiler. Not many options, but still helpful. 2 Biggest Cons: Inability to automatically snapshot the memory by interval Lack of complete integration with Visual Studio via an extension panel Ratings Ease of Use (9/10) – I really do believe that they have brought simplicity to the once difficult task of memory profiling.  I especially liked how it stepped you further into the drilldown by directing you towards the best options. Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Features (7/10) – A really great set of features all around in the application, however, I would like to see some ability for automatically triggering snapshots based on intervals or framework level items such as events. Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy! UI / UX (9/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat! 
Overall (9/10) – Overall, I am happy with the Memory Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  Thank you for reading up to here, or skipping ahead – I told you it would be shorter!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox
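    The undisposed ADO.Net objects mentioned above are a good illustration of what a snapshot diff tends to surface. The sketch below is not the sample code from the review – the connection string and method names are made up – but it contrasts the leaky pattern with the using-block fix that a profiler run usually points you toward:

    using System;
    using System.Data.SqlClient;

    static class ConnectionExamples
    {
        // Hypothetical connection string – substitute your own.
        private const string ConnectionString = "Server=.;Database=Example;Integrated Security=true";

        // Leaky pattern: if ExecuteScalar throws, or the connection is simply never
        // disposed, the pooled resources linger until finalization – exactly the kind
        // of growth a before/after snapshot comparison makes obvious.
        public static object QueryLeaky(string sql)
        {
            var connection = new SqlConnection(ConnectionString);
            connection.Open();
            var command = new SqlCommand(sql, connection);
            return command.ExecuteScalar();
        }

        // Deterministic cleanup: both IDisposable objects are released even on failure.
        public static object QuerySafe(string sql)
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                return command.ExecuteScalar();
            }
        }
    }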

    Read the article

  • Manually adding a second instance to an 11.2 RAC database

    - by JaneZhang
    Normally you would use dbca to add a database instance, but it can also be done by hand, and walking through the manual steps makes it clearer what dbca actually does for an 11.2 RAC database. This example adds a second instance, RACDB2, to an existing 11.2 database whose first instance is RACDB1; the existing node is rac1 and the new node is rac2. In 11.2 the Grid Infrastructure (GI) is owned by the grid user and the database by the oracle user; unless noted otherwise the steps below are run as the oracle user. 1. On the new node, create the dump directories the new instance needs, i.e. the directories pointed to by audit_file_dest, background_dump_dest, user_dump_dest and core_dump_dest. For example, if audit_file_dest=/u01/app/oracle/admin/RACDB/adump does not exist, the instance will fail to start with: ORA-09925: Unable to create audit trail file Linux-x86_64 Error: 2: No such file or directory Additional information: 9925 2. From the existing instance, set the instance-specific initialization parameters for the new instance: SQL> alter system set instance_number=2 scope=spfile sid='RACDB2';SQL> alter system set thread=2 scope=spfile sid='RACDB2';SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='RACDB2';SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.122)(PORT=1521))))' sid='RACDB2'; <===== 192.0.2.122 is the VIP of node 2 3. Copy the existing instance's $ORACLE_HOME/dbs/init<sid>.ora to the new node as the new instance's $ORACLE_HOME/dbs/init<sid>.ora; the init<sid>.ora only needs to point to the spfile: =======================SPFILE='+DATA/racdb/spfileracdb.ora' For example: [oracle@rac1 ~]$ scp $ORACLE_HOME/dbs/initRACDB1.ora rac2:$ORACLE_HOME/dbs/initRACDB2.ora <=== copy to node 2 4. On the new node, add an entry for the new instance to /etc/oratab: RACDB2:/u01/app/oracle/product/11.2.0/dbhome_1:N 5. Copy the password file from the existing instance's $ORACLE_HOME/dbs to the new node, renaming it for the new SID: [oracle@rac1 dbs]$ scp $ORACLE_HOME/dbs/orapwRACDB1 rac2:$ORACLE_HOME/dbs/orapwRACDB2 <== copy to node 2 6. From the existing instance, create the UNDO tablespace for the new instance (when you add an instance with dbca it creates the undo tablespace for you): SQL>CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '/dev/….' SIZE 4096M ; or, if the database is on ASM: SQL>CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '+DATA' SIZE 4096M ; 7. From the existing instance, add the redo thread and redo logs for the new instance: SQL> alter database add logfile thread 2 group 3 ('/dev/...', '/dev/...') size 1024M, group 4 ('/dev/...','dev/...') size 1024M; or, if the database is on ASM: SQL> alter database add logfile thread 2 group 3 ('+DATA','+RECO') size 1024M, group 4 ('+DATA','+RECO') size 1024M; SQL> alter database enable thread 2; <== enable the new thread 8. On the new node, start the new instance: [oracle@rac2 admin]$su - oracle[oracle@rac2 admin]$export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1[oracle@rac2 admin]$export ORACLE_SID=RACDB2[oracle@rac2 admin]$ sqlplus / as sysdbaSQL> startup <== if the instance starts cleanly, the parameters, directories and files for instance 2 are all in place 9. Register the new instance in the OCR so that GI manages it: $srvctl add instance -d <database name> -i <new instance name> -n <new node name>Example of srvctl add instance command:============================[oracle@rac2 ~]$ srvctl add instance -d racdb -i RACDB2 -n rac2  <== after registering, confirm the instance is running with ps -ef|grep smon[oracle@rac2 dbs]$ ps -ef|grep smonroot      3453     1  1 Jun12 ?        04:03:05 /u01/app/11.2.0/grid/bin/osysmond.bingrid      3727     1  0 Jun12 ?        00:00:19 asm_smon_+ASM2oracle    5343  4543  0 14:06 pts/1    00:00:00 grep smonoracle   28736     1  0 Jun25 ?        00:00:03 ora_smon_RACDB2 <======== the new instance 10. 
Check the resource status: $su - grid[grid@rac2 ~]$ crsctl stat res -t...ora.racdb.db      1        ONLINE  ONLINE       rac1                     Open                      2        OFFLINE OFFLINE             rac2 At this point the new instance shows OFFLINE because it was started with sqlplus rather than through the clusterware. Shut it down with sqlplus and start it again with srvctl: [grid@rac2 ~]$ su  - oraclePassword: [oracle@rac2 ~]$ sqlplus / as sysdbaSQL> shutdown immediate;Database closed.Database dismounted.ORACLE instance shut down.SQL> exit[oracle@rac2 ~]$ srvctl start instance -d racdb -i RACDB2[oracle@rac2 ~]$ su - gridPassword: [grid@rac2 ~]$ crsctl stat res -tora.racdb.db      1        ONLINE  ONLINE       rac1                     Open                      2        ONLINE  ONLINE       rac2                     Open                11. Check the resource configuration: [oracle@rac2 ~]$ crsctl stat res ora.racdb.db -pNAME=ora.racdb.dbTYPE=ora.database.typeACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--ACTION_FAILURE_TEMPLATE=ACTION_SCRIPT=ACTIVE_PLACEMENT=1AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%AUTO_START=restoreCARDINALITY=2CHECK_INTERVAL=1CHECK_TIMEOUT=30CLUSTER_DATABASE=trueDATABASE_TYPE=RACDB_UNIQUE_NAME=RACDBDEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) ELEMENT(DATABASE_TYPE= %DATABASE_TYPE%)DEGREE=1DESCRIPTION=Oracle Database resourceENABLED=1FAILOVER_DELAY=0FAILURE_INTERVAL=60FAILURE_THRESHOLD=1GEN_AUDIT_FILE_DEST=/u01/app/oracle/admin/RACDB/adumpGEN_START_OPTIONS=GEN_START_OPTIONS@SERVERNAME(rac1)=openGEN_START_OPTIONS@SERVERNAME(rac2)=openGEN_USR_ORA_INST_NAME=GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=RACDB1HOSTING_MEMBERS=INSTANCE_FAILOVER=0LOAD=1LOGGING_LEVEL=1MANAGEMENT_POLICY=AUTOMATICNLS_LANG=NOT_RESTARTING_TEMPLATE=OFFLINE_CHECK_INTERVAL=0ONLINE_RELOCATION_TIMEOUT=0ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1ORACLE_HOME_OLD=PLACEMENT=restrictedPROFILE_CHANGE_TEMPLATE=RESTART_ATTEMPTS=2ROLE=PRIMARYSCRIPT_TIMEOUT=60SERVER_POOLS=ora.RACDBSPFILE=+DATA/RACDB/spfileRACDB.oraSTART_DEPENDENCIES=hard(ora.DATA.dg,ora.RECO.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(ora.DATA.dg,ora.RECO.dg)START_TIMEOUT=600STATE_CHANGE_TEMPLATE=STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DATA.dg,shutdown:ora.RECO.dg)STOP_TIMEOUT=600TYPE_VERSION=3.2UPTIME_THRESHOLD=1hUSR_ORA_DB_NAME=RACDBUSR_ORA_DOMAIN=USR_ORA_ENV=USR_ORA_FLAGS=USR_ORA_INST_NAME=USR_ORA_INST_NAME@SERVERNAME(rac1)=RACDB1USR_ORA_INST_NAME@SERVERNAME(rac2)=RACDB2USR_ORA_OPEN_MODE=openUSR_ORA_OPI=falseUSR_ORA_STOP_MODE=immediateVERSION=11.2.0.3.0 Note that in 11.2 the whole database is registered in the OCR as a single resource; after srvctl add instance the properties above reflect the new instance (USR_ORA_INST_NAME@SERVERNAME(rac2)=RACDB2) as well as the start/stop dependencies on the ASM disk groups. Note: to add the instance with dbca instead, run dbca as the oracle user on an existing node, choose RAC database, then Instance Management, then Add an instance, select the active RAC database and follow the prompts; dbca creates the undo tablespace and redo thread automatically.

    Read the article

  • Android: Adding static header to the top of a ListActivity

    - by GrandPrix
    Currently I have a class that is extending the ListActivity class. I need to be able to add a few static buttons above the list that are always visible. I've attempted to grab the ListView using getListView() from within the class. Then I used addHeaderView(View) to add a small layout to the top of the screen. Header.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" > <Button android:id="@+id/testButton" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="Income" android:textSize="15dip" android:layout_weight="1" /> </LinearLayout> Before I set the adapter I do: ListView lv = getListView(); lv.addHeaderView(findViewById(R.layout.header)); This results in nothing happening to the ListView except for it being populated from my database. No buttons appear above it. Another approach I tried was adding padding to the top of the ListView. When I did this it successfully moved down; however, if I added anything above it, it pushed the ListView over. No matter what I do it seems as though I cannot put a few buttons above the ListView when I use the ListActivity. Thanks in advance. synic, I tried your suggestion previously. I tried it again just for the sake of sanity, and the button did not display. Below is the layout file for the activity and the code I've implemented in onCreate(). //My listactivity I am trying to add the header to public class AuditActivity extends ListActivity { Budget budget; @Override public void onCreate(Bundle savedInstanceState) { Cursor test; super.onCreate(savedInstanceState); setContentView(R.layout.audit); ListView lv = getListView(); LayoutInflater inflater = getLayoutInflater(); ViewGroup header = (ViewGroup) inflater.inflate(R.layout.header, lv, false); lv.addHeaderView(header); budget = new Budget(this); /* try { test = budget.getTransactions(); showEvents(test); } finally { } */ // switchTabSpecial(); } Layout.xml for activity: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent"> <ListView android:id="@android:id/list" android:layout_width="wrap_content" android:layout_height="wrap_content" /> <TextView android:id="@android:id/empty" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="@string/empty" /> </LinearLayout>

    Read the article

  • C# Silverlight - Delay Child Window Load?!

    - by Goober
    The Scenario Currently I have a C# Silverlight application that uses the domain service class and the ADO.Net Entity Framework to communicate with my database. I want to load a child window upon clicking a button, with some data that I retrieve from a server-side query to the database. The Process The first part of this process involves two load operations to load separate data from 2 tables. The next part of the process involves combining those lists of data to display in a listbox. The Problem The problem with this is that the first two asynchronous load operations haven't returned the data by the time the section of code to combine these lists of data is reached, thus resulting in a null value exception..... Initial Load Operations To Get The Data: public void LoadAudits(Guid jobID) { var context = new InmZenDomainContext(); var imageLoadOperation = context.Load(context.GetImageByIDQuery(jobID)); imageLoadOperation.Completed += (sender3, e3) => { imageList = ((LoadOperation<InmZen.Web.Image>)sender3).Entities.ToList(); }; var auditLoadOperation = context.Load(context.GetAuditByJobIDQuery(jobID)); auditLoadOperation.Completed += (sender2, e2) => { auditList = ((LoadOperation<Audit>)sender2).Entities.ToList(); }; } I Then Want To Execute This Immediately: IEnumerable<JobImageAudit> jobImageAuditList = from a in auditList join ai in imageList on a.ImageID equals ai.ImageID select new JobImageAudit { JobID = a.JobID, ImageID = a.ImageID.Value, CreatedBy = a.CreatedBy, CreatedDate = a.CreatedDate, Comment = a.Comment, LowResUrl = ai.LowResUrl, }; auditTrailList.ItemsSource = jobImageAuditList; However I can't, because the async calls haven't returned with the data yet... Thus I have to do this (perform the load operations, then press a button on the child window to execute the list concatenation and binding): private void LoadAuditsButton_Click(object sender, RoutedEventArgs e) { IEnumerable<JobImageAudit> jobImageAuditList = from a in auditList join ai in imageList on a.ImageID equals ai.ImageID select new JobImageAudit { JobID = a.JobID, ImageID = a.ImageID.Value, CreatedBy = a.CreatedBy, CreatedDate = a.CreatedDate, Comment = a.Comment, LowResUrl = ai.LowResUrl, }; auditTrailList.ItemsSource = jobImageAuditList; } Potential Ideas for Solutions: Delay the child window displaying somehow? Potentially use DomainDataSource and the Activity Load control?! Any thoughts, help, solutions, samples, comments etc. greatly appreciated.
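    One way to avoid pushing the join onto a button click is to count the outstanding load operations and only combine the lists once both Completed handlers have fired. This is only a sketch built from the snippet above – the context, query, entity and control names are taken from the question, and it relies on the Completed callbacks arriving on the UI thread (as they do in Silverlight), so no locking is added:

    public void LoadAudits(Guid jobID)
    {
        var context = new InmZenDomainContext();
        int pending = 2; // number of outstanding load operations

        Action tryBind = () =>
        {
            if (--pending > 0) return; // wait until both loads have completed

            // Both lists are populated now, so the join is safe.
            var jobImageAuditList =
                from a in auditList
                join ai in imageList on a.ImageID equals ai.ImageID
                select new JobImageAudit
                {
                    JobID = a.JobID,
                    ImageID = a.ImageID.Value,
                    CreatedBy = a.CreatedBy,
                    CreatedDate = a.CreatedDate,
                    Comment = a.Comment,
                    LowResUrl = ai.LowResUrl,
                };

            auditTrailList.ItemsSource = jobImageAuditList.ToList();
        };

        var imageLoadOperation = context.Load(context.GetImageByIDQuery(jobID));
        imageLoadOperation.Completed += (sender3, e3) =>
        {
            imageList = ((LoadOperation<InmZen.Web.Image>)sender3).Entities.ToList();
            tryBind();
        };

        var auditLoadOperation = context.Load(context.GetAuditByJobIDQuery(jobID));
        auditLoadOperation.Completed += (sender2, e2) =>
        {
            auditList = ((LoadOperation<Audit>)sender2).Entities.ToList();
            tryBind();
        };
    }

    The same idea scales to more than two loads by raising the initial value of pending, and the child window can simply show a busy indicator until the binding happens.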

    Read the article

  • Many-to-Many Relationship mapping does not trigger the EventListener OnPostInsert or OnPostDelete Ev

    - by san
    I'm doing my auditing using the event listeners that NHibernate provides. It works fine for all mappings apart from the HasManyToMany mapping. My mappings are as such: Table("Order"); Id(x => x.Id, "OrderId"); Map(x => x.Name, "OrderName").Length(150).Not.Nullable(); Map(x => x.Description, "OrderDescription").Length(800).Not.Nullable(); Map(x => x.CreatedOn).Not.Nullable(); Map(x => x.CreatedBy).Length(70).Not.Nullable(); Map(x => x.UpdatedOn).Not.Nullable(); Map(x => x.UpdatedBy).Length(70).Not.Nullable(); HasManyToMany(x => x.Products) .Table("OrderProduct") .ParentKeyColumn("OrderId") .ChildKeyColumn("ProductId") .Cascade.None() .Inverse() .AsSet(); Table("Product"); Id(x => x.Id, "ProductId"); Map(x => x.ProductName).Length(150).Not.Nullable(); Map(x => x.ProductnDescription).Length(800).Not.Nullable(); Map(x => x.Amount).Not.Nullable(); Map(x => x.CreatedOn).Not.Nullable(); Map(x => x.CreatedBy).Length(70).Not.Nullable(); Map(x => x.UpdatedOn).Not.Nullable(); Map(x => x.UpdatedBy).Length(70).Not.Nullable(); HasManyToMany(x => x.Orders) .Table("OrderProduct") .ParentKeyColumn("ProductId") .ChildKeyColumn("OrderId") .Cascade.None() .AsSet(); Whenever I do an update of an order (e.g. change the OrderDescription and delete one of the products associated with it), it works fine, as in it updates the Order table and deletes the row in the OrderProduct table. The event listener that I have associated with it captures the update of the Order table but does NOT capture the associated delete event when the OrderProduct row is deleted. This behaviour is observed only in the case of ManyToMany mapped relationships. Since I would also like to audit the packageproduct deletion, it's kind of an annoyance that the event listeners aren't able to capture the delete event. Any information about it would be greatly appreciated.
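    Rows added to or removed from a many-to-many link table are raised by NHibernate as collection events rather than entity insert/delete events, which is why entity-level listeners never see the OrderProduct changes. Below is a sketch of a collection-level listener; it assumes the NHibernate 3.x collection-event interfaces (IPostCollectionUpdateEventListener and friends) and only writes to the debug output, so treat it as a starting point rather than the actual audit implementation:

    using NHibernate.Cfg;
    using NHibernate.Event;

    // Fired after NHibernate flushes changes to a mapped collection,
    // including rows inserted into or deleted from a many-to-many link table.
    public class CollectionAuditListener :
        IPostCollectionUpdateEventListener,
        IPostCollectionRecreateEventListener,
        IPreCollectionRemoveEventListener
    {
        public void OnPostUpdateCollection(PostCollectionUpdateEvent @event)
        {
            Log("updated", @event.AffectedOwnerEntityName, @event.AffectedOwnerIdOrNull);
        }

        public void OnPostRecreateCollection(PostCollectionRecreateEvent @event)
        {
            Log("recreated", @event.AffectedOwnerEntityName, @event.AffectedOwnerIdOrNull);
        }

        public void OnPreRemoveCollection(PreCollectionRemoveEvent @event)
        {
            Log("removed", @event.AffectedOwnerEntityName, @event.AffectedOwnerIdOrNull);
        }

        private static void Log(string action, string owner, object ownerId)
        {
            // Replace with the real audit sink.
            System.Diagnostics.Debug.WriteLine(
                string.Format("Collection {0} on {1}#{2}", action, owner, ownerId));
        }
    }

    // Registration alongside the existing entity listeners:
    public static class CollectionListenerRegistration
    {
        public static void Register(Configuration cfg)
        {
            var listener = new CollectionAuditListener();
            cfg.EventListeners.PostCollectionUpdateEventListeners =
                new IPostCollectionUpdateEventListener[] { listener };
            cfg.EventListeners.PostCollectionRecreateEventListeners =
                new IPostCollectionRecreateEventListener[] { listener };
            cfg.EventListeners.PreCollectionRemoveEventListeners =
                new IPreCollectionRemoveEventListener[] { listener };
        }
    }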

    Read the article

  • Variables are "lost" somewhere, then reappear... all with no error thrown.

    - by rob - not a robber
    Hi All, I'm probably going to make myself look like a fool with my horrible scripting but here we go. I have a form that I am collecting a bunch of checkbox info from using a binary method. ON/SET=1 !ISSET=0 Anyway, all seems to be going as planned except for the query bit. When I run the script, it runs through and throws no errors, but it's not doing what I think I am telling it to. I've hard coded the values in the query and left them in the script and it DOES update the DB. I've also tried echoing all the needed variables after the script runs and exiting right after so I can audit them... and they're all there. Here's an example. ####FEATURES RECORD UPDATE ### HERE I DECIDE TO RUN THE SCRIPT BASED ON WHETHER AN IMAGE BUTTON WAS USED if (isset($_POST["button_x"])) { ### HERE I AM ASSIGNING 1 OR 0 TO A VAR BASED ON WHTER THE CHECKBOX WAS SET if (isset($_POST["pool"])) $pool=1; if (!isset($_POST["pool"])) $pool=0; if (isset($_POST["darts"])) $darts=1; if (!isset($_POST["darts"])) $darts=0; if (isset($_POST["karaoke"])) $karaoke=1; if (!isset($_POST["karaoke"])) $karaoke=0; if (isset($_POST["trivia"])) $trivia=1; if (!isset($_POST["trivia"])) $trivia=0; if (isset($_POST["wii"])) $wii=1; if (!isset($_POST["wii"])) $wii=0; if (isset($_POST["guitarhero"])) $guitarhero=1; if (!isset($_POST["guitarhero"])) $guitarhero=0; if (isset($_POST["megatouch"])) $megatouch=1; if (!isset($_POST["megatouch"])) $megatouch=0; if (isset($_POST["arcade"])) $arcade=1; if (!isset($_POST["arcade"])) $arcade=0; if (isset($_POST["jukebox"])) $jukebox=1; if (!isset($_POST["jukebox"])) $jukebox=0; if (isset($_POST["dancefloor"])) $dancefloor=1; if (!isset($_POST["dancefloor"])) $dancefloor=0; ### I'VE DONE LOADS OF PERMUTATIONS HERE... HARD SET THE 1/0 VARS AND LEFT THE $estab_id TO BE PICKED UP. SET THE $estab_id AND LEFT THE COLUMN DATA TO BE PICKED UP. ALL NO GOOD. IT _DOES_ WORK IF I HARD SET ALL VARS THOUGH mysql_query("UPDATE thedatabase SET pool_table='$pool', darts='$darts', karoke='$karaoke', trivia='$trivia', wii='$wii', megatouch='$megatouch', guitar_hero='$guitarhero', arcade_games='$arcade', dancefloor='$dancefloor' WHERE establishment_id='22'"); ###WEIRD THING HERE IS IF I ECHO THE VARS AT THIS POINT AND THEN EXIT(); they all show up as intended. header("location:theadminfilething.php"); exit(); THANKS ALL!!!

    Read the article

  • Commitment to Zend Framework - any arguments against?

    - by Pekka
    I am refurbishing a big CMS that I have been working on for quite a number of years now. The product itself is great, but some components, the Database and translation classes for example, need urgent replacing - partly self-made as far back as 2002, grown into a bit of a chaos over time, and might have trouble surviving a security audit. So, I've been looking closely at a number of frameworks (or, more exactly, component Libraries, as I do not intend to change the basic structure of the CMS) and ended up with liking Zend Framework the best. They offer a solid MVC model but don't force you into it, and they offer a lot of professional components that have obviously received a lot of attention (Did you know there are multiple plurals in Russian, and you can't translate them using a simple ($number == 0) or ($number > 1) switch? I didn't, but Zend_Translate can handle it. Just to illustrate the level of thorougness the library seems to have been built with.) I am now literally at the point of no return, starting to replace key components of the system by the Zend-made ones. I'm not really having second thoughts - and I am surely not looking to incite a flame war - but before going onward, I would like to step back for a moment and look whether there is anything speaking against tying a big system closely to Zend Framework. What I like about Zend: As far as I can see, very high quality code Extremely well documented, at least regarding introductions to how things work (Haven't had to use detailed API documentation yet) Backed by a company that has an interest in seeing the framework prosper Well received in the community, has a considerable user base Employs coding standards I like Comes with a full set of unit tests Feels to me like the right choice to make - or at least, one of the right choices - in terms of modern, professional PHP development. I have been thinking about encapsulating and abstracting ZF's functionality into own classes to be able to switch frameworks more easily, but have come to the conclusion that this would not be a good idea because: it would be an unnecessary level of abstraction it could cost performance the big advantage of using a framework - the existence of a developer base that is familiar with its components - would partly be cancelled out therefore, the commitment to ZF would be a deep one. Thus my question: Is there anything substantial speaking against committing to the Zend Framework? Do you have insider knowledge of plans of Zend Inc.'s to go evil in 2011, and make it a closed source library? Is Zend Inc. run by vampires? Are there conceptual flaws in the code base you start to notice when you've transitioned all your projects to it? Is the appearance of quality code an illusion? Does the code look good, but run terribly slow on anything below my quad-core workstation?

    Read the article

  • Web service security not working. Java

    - by Nitesh Panchal
    Hello, I have a ejb module which contains my ejbs as well as web services. I am using Netbeans 6.8 and Glassfish V3 I right clicked on my web service and clicked "edit web service attributes" and then checked "secure service" and then selected keystore of my server. This is my sun-ejb-jar.xml file :- <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE sun-ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 9.0 Servlet 2.5//EN" "http://www.sun.com/software/appserver/dtds/sun-web-app_2_5-0.dtd"> <sun-ejb-jar> <security-role-mapping> <role-name>Admin</role-name> <group-name>Admin</group-name> </security-role-mapping> <security-role-mapping> <role-name>General</role-name> <group-name>General</group-name> </security-role-mapping> <security-role-mapping> <role-name>Member</role-name> <group-name>Member</group-name> </security-role-mapping> <enterprise-beans> <ejb> <ejb-name>MemberBean</ejb-name> <webservice-endpoint> <port-component-name>wsMember</port-component-name> <login-config> <auth-method>BASIC</auth-method> <realm>file</realm> </login-config> </webservice-endpoint> </ejb> </enterprise-beans> </sun-ejb-jar> Here MemberBean is my ejb and wsMember is my webservice. Then i made another project and added web service client and again right clicked on "edit web service attributes" and gave password as test and test. This username and password (test) is in Glassfish server in file realm. But when i try to invoke my webservice i always get SEC5046: Audit: Authentication refused for [test]. SEC1201: Login failed for user: test What am i doing wrong? Am i missing something?

    Read the article

  • Why is IIS Anonymous authentication being used with administrative UNC drive access?

    - by Mark Lindell
    My account is local administrator on my machine. If I try to browse to a non-existent drive letter on my own box using a UNC path name: \mymachine\x$ my account would get locked out. I would also get the following warning (Event ID 100, Type “Warning”) 5 times under the “System” group in Event Viewer on my box: The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: Logon failure: unknown user name or bad password. I would also get the following warning 3 times: The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: The referenced account is currently locked out and may not be logged on to. On the domain controller, Event ID 680 of type “Failure Audit” would appear 4 times under the “Security” group in Event Viewer: Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0 Logon account: myaccount Followed by Event ID 644: User Account Locked Out: Target Account Name: myaccount Target Account ID: OURDOMAIN\myaccount Caller Machine Name: MYMACHINE Caller User Name: STAN$ Caller Domain: OURDOMAIN Caller Logon ID: (0x0,0x3E7) Followed by another 4 errors having Event ID 680. Strangely, every time I tried to browse to the UNC path I would be prompted for a user name and password, the above errors would be written to the log, and my account would be locked out. When I hit “Cancel” in response to the user name/password prompt, the following message box would display: Windows cannot find \mymachine\x$. Check the spelling and try again, or try searching for the item by clicking the Start button and then clicking Search. I checked with others in the group using XP and they only got the above message box when browsing to a “bad” drive letter on their box. No one else was prompted for a user name/password and then locked out. So, every time I tried to browse to the “bad” drive letter, behind the scenes XP was trying to login 8 times using bad credentials (or, at least a bad password as the login was correct), causing my account to get locked out on the 4th try. Interestingly, If I tried browsing to a “good” drive such as “c$” it would work fine. As a test, I tried logging on to my box as a different login and browsing the “bad” UNC path. Strangely, my “ourdomain\myaccount” account was getting locked out – not the one I was logged in as! I was totally confused as to why the credentials for the other login were being passed. After much Googling, I found a link referring to some IIS settings I was vaguely familiar with from the past but could not see how they would affect this issue. It was related to the IIS directory security setting “Anonymous access and authentication control” located under: Control Panel/Administrative Tools/Computer Management/Services and Applications/Internet Information Services/Web Sites/Default Web Site/Properties/Directory Security/Anonymous access and authentication control/Edit/Password I found no indication while scouring the Internet that this property was related to my UNC problem. But, I did notice that this property was set to my domain user name and password. And, my password did age recently but I had not reset the password accordingly for this property. Sure enough, keying in the new password corrected the problem. I was no longer prompted for a user name/password when browsing the UNC path and the account lock-outs ceased. Now, a couple of questions: Why would an IIS setting affect the browsing of a UNC path on a local box? 
Why had I not encountered this problem before? My password has aged several times and I’ve never encountered this problem. And, I can’t remember the last time I updated the “Anonymous access” IIS password it’s been so long. I’ve run the script after a password reset before and never had my account locked-out due to the UNC problem (the script accesses UNC paths as a normal part of its processing). Windows Update did install “Cumulative Security Update for Internet Explorer 7 for Windows XP (KB972260)” on my box on 7/29/2009. I wonder if this is responsible.

    Read the article

  • Does ModSecurity 2.7.1 work with ASP.NET MVC 3?

    - by autonomatt
    I'm trying to get ModSecurity 2.7.1 to work with an ASP.NET MVC 3 website. The installation ran without errors and looking at the event log, ModSecurity is starting up successfully. I am using the modsecurity.conf-recommended file to set the basic rules. The problem I'm having is that whenever I am POSTing some form data, it doesn't get through to the controller action (or model binder). I have SecRuleEngine set to DetectionOnly. I have SecRequestBodyAccess set to On. With these settings, the body of the POST never reaches the controller action. If I set SecRequestBodyAccess to Off it works, so it's definitely something to do with how ModSecurity forwards the body data. The ModSecurity debug shows the following (looks to me as if all passed through): Second phase starting (dcfg 94b750). Input filter: Reading request body. Adding request argument (BODY): name "[0].IsSelected", value "on" Adding request argument (BODY): name "[0].Quantity", value "1" Adding request argument (BODY): name "[0].VariantSku", value "047861" Adding request argument (BODY): name "[1].Quantity", value "0" Adding request argument (BODY): name "[1].VariantSku", value "047862" Input filter: Completed receiving request body (length 115). Starting phase REQUEST_BODY. Recipe: Invoking rule 94c620; [file "*********************"] [line "54"] [id "200001"]. Rule 94c620: SecRule "REQBODY_ERROR" "!@eq 0" "phase:2,auditlog,id:200001,t:none,log,deny,status:400,msg:'Failed to parse request body.',logdata:%{reqbody_error_msg},severity:2" Transformation completed in 0 usec. Executing operator "!eq" with param "0" against REQBODY_ERROR. Operator completed in 0 usec. Rule returned 0. Recipe: Invoking rule 5549c38; [file "*********************"] [line "75"] [id "200002"]. Rule 5549c38: SecRule "MULTIPART_STRICT_ERROR" "!@eq 0" "phase:2,auditlog,id:200002,t:none,log,deny,status:44,msg:'Multipart request body failed strict validation: PE %{REQBODY_PROCESSOR_ERROR}, BQ %{MULTIPART_BOUNDARY_QUOTED}, BW %{MULTIPART_BOUNDARY_WHITESPACE}, DB %{MULTIPART_DATA_BEFORE}, DA %{MULTIPART_DATA_AFTER}, HF %{MULTIPART_HEADER_FOLDING}, LF %{MULTIPART_LF_LINE}, SM %{MULTIPART_MISSING_SEMICOLON}, IQ %{MULTIPART_INVALID_QUOTING}, IP %{MULTIPART_INVALID_PART}, IH %{MULTIPART_INVALID_HEADER_FOLDING}, FL %{MULTIPART_FILE_LIMIT_EXCEEDED}'" Transformation completed in 0 usec. Executing operator "!eq" with param "0" against MULTIPART_STRICT_ERROR. Operator completed in 0 usec. Rule returned 0. Recipe: Invoking rule 554bd70; [file "********************"] [line "80"] [id "200003"]. Rule 554bd70: SecRule "MULTIPART_UNMATCHED_BOUNDARY" "!@eq 0" "phase:2,auditlog,id:200003,t:none,log,deny,status:44,msg:'Multipart parser detected a possible unmatched boundary.'" Transformation completed in 0 usec. Executing operator "!eq" with param "0" against MULTIPART_UNMATCHED_BOUNDARY. Operator completed in 0 usec. Rule returned 0. Recipe: Invoking rule 554cbe0; [file "*********************************"] [line "94"] [id "200004"]. Rule 554cbe0: SecRule "TX:/^MSC_/" "!@streq 0" "phase:2,log,auditlog,id:200004,t:none,deny,msg:'ModSecurity internal error flagged: %{MATCHED_VAR_NAME}'" Rule returned 0. Hook insert_filter: Adding input forwarding filter (r 5541fc0). Hook insert_filter: Adding output filter (r 5541fc0). Initialising logging. Starting phase LOGGING. Recording persistent data took 0 microseconds. Audit log: Ignoring a non-relevant request. I can't see anything unusual in Fiddler. I'm using a ViewModel in the parameters of my action. 
No data is bound if SecRequestBodyAccess is set to On. I'm even logging all the Request.Form.Keys and values via log4net, but not getting any values there either. I'm starting to wonder if ModSecurity actually works with ASP.NET MVC or if there is some conflict with the ModSecurity http Module and the model binder kicking in. Does anyone have any suggestions or can anyone confirm they have ModSecurity working with an ASP.NET MVC website?

    Read the article

  • Diagramming Software for a Developer/Designer

    - by Craig Walker
    For a long time I've been looking for a good diagramming/vector-based drawing program that meets my needs as a developer. I'd like to: Draw database diagrams Draw flow charts Draw object-modeling diagrams (UML being the standard) Draw other free-form diagrams (basically boxes & arrows with the occasional clipart) Draw mockups of user interfaces and web pages EDIT: I want good-looking electronic-format diagrams that I can show to 3rd parties, not just something for my own internal use. EDIT 2: I'm also looking for Windows software, although I'm toying with the idea of switching to Mac, so a really good Mac-only product might get me to switch. Basically I need a good vector graphic program (with decent grouping, connecting lines, and ideally auto-routing). I'd prefer a diagramming tool that can also be used for drawing (for the UI mockups) rather than a drawing tool that can also be used for diagrams. I've tried Visio on several occasions, and every time I've been disappointed. The interface always seems to get in my way at some point. It's pretty close to what I want, and the latest version (I got the trial from MS) seems to be better than previous ones in terms of usability, but I really don't want to plunk down that sort of cash for a mediocre product. I've tried Dia and Inkscape, and while initially promising and with the right price tag, I found both of them to be lacking in several ways (including some recurring bugs). I've toyed with getting Adobe Illustrator, but I've never used it before, and I have a feeling that it wouldn't handle the diagramming aspect very well, and I don't want to buy a copy just to find out it doesn't meet my needs. So far, the product that I've had the most success with is, sadly, OpenOffice Draw. It's free of course (which lowers my expectations and thus improves my view of it) and its usability is pretty good, but in the end I'd like something more suited to diagramming. I'm willing to spend real money (in the $500-$1K range) for a really good piece of software if it does everything I want it to. The front runner is of course Visio but I'm hoping for more. Does anybody have any recommendations? CONCLUSION: @dlamblin had the most informative post, but the part I gained the most from was his/her (and others') mention of OmniGraffle, not Gliffy. I gave Gliffy a try, and it seemed neat for occasional use, but since it's a Flash app (note: not AJAX as dlamblin mentioned) it's still a bit of a pain to use (no keyboard shortcuts for copy/paste was pretty much a deal breaker for me). I also tried SmartDraw, but it had 3-strikes-you're-out against it: The trial period was only 7 days long. It used some nonstandard (and visually jarring) GUI widget toolkit for its UI. At the very least it makes me suspicious (how do I know it will actually work & support the standard Windows features?) It crashed on me early into my trial. OmniGraffle looks like exactly what I want... except that it's Mac-only (so I couldn't give it a try). However, it got good reviews from my Mac-owning coworker, and I hope to try it on a friend's Mac soon. If it's good enough then I might spring for a new MacBook.

    Read the article

  • java.rmi.UnmarshalException: unable to pull client classes by server

    - by andrews
    Hi, I have an RMI client/server set-up on two machines that works fine in a simple situation when the server doesn't require a client-side defned class. However, when I need to use a class defined on the client side I am unable to have the server unmarshall those classes. I suspect this is an issue with my java.rmi.server.codebase property that I pass in as argument to the client app. I followed Sun's RMI Tutorial trail and I think I have followed the steps exactly except that I don't specify a classpath argument when executing client and server because they execute in the directory right above the root package directory (however I tried that too with no effect). The exceptions I get when attempting to execute the different client-side combinations described in detail below are all the same: RmiServer exception: java.rmi.ServerException: RemoteException occurred in server thread; nested exception is: java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is: java.lang.ClassNotFoundException: test.MyTask at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:353) at sun.rmi.transport.Transport$1.run(Transport.java:177) at java.security.AccessController.doPrivileged(Native Method) at sun.rmi.transport.Transport.serviceCall(Transport.java:173) at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:636) at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255) at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233) at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142) at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:178) at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:132) at $Proxy0.execute(Unknown Source) at test.myClient.main(myClient.java:32) The details are: My client/server rmi is set up over a home network behind a router. The router is assigned to a static ip address I will call myhostname. Appropriate port-mapping is set-up in the router that points to the right machines. 
role, machine, os, ip-address: server, venice, linux ubuntu 9.10, 10.0.1.2 client, naples, mac os x leopard, 10.0.1.4 I startup the server side as follows inside /home/andrews/workspace/epsilon/bin: 1 starting registry on the default port 1099: venice% rmiregistry & 2 starting web-server on port 2001 pointing to code base for common interfaces: venice% java webserver/ClassFileServer 2001 /home/andrew/workspace/epsilon/bin 3 starting server app (main class in test/myServer) which registers the server object: venice% java -Djava.rmi.server.codebase="http://myhostname:2001/" -Djava.security.policy=server.policy -Djava.rmi.server.hostname=myhostname test/myServer & Now the client side inside /Users/andrews/Development/Java/workspace/epsilon/bin: 1 start a local web server that can server client-side classes to the server (not sure if this is needed, but I added I tried it, and still no success; I have added port-mapping to the router for 2001 to venice, for 2002 to naples) naples$ java webserver/ClassFileServer 2002 /Users/andrews/Development/Java/workspace/epsilon/bin/ Trying to run the client (note: I don't specify the -cp argument because client executes right above the root package directory): 1 try #1 using an http hostname naples$ java -Djava.rmi.server.codebase=http://10.0.1.4:2002/ -Djava.security.policy=client.policy test.myClient myhostname Note 1: the myhostname argument at the end is passed-in to the client so that it resolves to server's rmi hostname. Note 2: I tried using localhost:2002 instead of 10.0.1.4:2002 too. Note 3: I tried using myhostname:2002 since myhostname is assigned to the router and I have proper port-mapping set-up, this address should resolve to naples and not venice 2 try #2: naples$ java -Djava.rmi.server.codebase=file:/Users/andrews/Development/Java/workspace/epsilon/bin/ -Djava.security.policy=client.policy test.myClient myhostname Note 1: the code base url format is correct, I created a small program to convert current file directory path into a url and used that. using file:///Users... has same effect. Other notes: 1 my server and client policy files correctly specify the path, as I've tested this setup with good and bad paths, and getting a security exception for bad path 2 this setup works if I don't use client-side defined objects, the client connects correctly to the server and the server executes. 3 when I place the client-side class on the server in the server's classpath, all executes fine. All help is appreciated.

    Read the article

  • Log a user in to an ASP.net application using Windows Authentication without using Windows Authentic

    - by Rising Star
    I have an ASP.net application I'm developing authentication for. I am using an existing cookie-based log on system to log users in to the system. The application runs as an anonymous account and then checks the cookie when the user wants to do something restricted. This is working fine. However, there is one caveat: I've been told that for each page that connects to our SQL server, I need to make it so that the user connects using an Active Directory account. because the system I'm using is cookie based, the user isn't logged in to Active Directory. Therefore, I use impersonation to connect to the server as a specific account. However, the powers that be here don't like impersonation; they say that it clutters up the code. I agree, but I've found no way around this. It seems that the only way that a user can be logged in to an ASP.net application is by either connecting with Internet Explorer from a machine where the user is logged in with their Active Directory account or by typing an Active Directory username and password. Neither of these two are workable in my application. I think it would be nice if I could make it so that when a user logs in and receives the cookie (which actually comes from a separate log on application, by the way), there could be some code run which tells the application to perform all network operations as the user's Active Directory account, just as if they had typed an Active Directory username and password. It seems like this ought to be possible somehow, but the solution evades me. How can I make this work? Update To those who have responded so far, I apologize for the confusion I have caused. The responses I've received indicate that you've misunderstood the question, so please allow me to clarify. I have no control over the requirement that users must perform network operations (such as SQL queries) using Active Directory accounts. I've been told several times (online and in meat-space) that this is an unusual requirement and possibly bad practice. I also have no control over the requirement that users must log in using the existing cookie-based log on application. I understand that in an ideal MS ecosystem, I would simply dis-allow anonymous access in my IIS settings and users would log in using Windows Authentication. This is not the case. The current system is that as far as IIS is concerned, the user logs in anonymously (even though they supply credentials which result in the issuance of a cookie) and we must programmatically check the cookie to see if the user has access to any restricted resources. In times past, we have simply used a single SQL account to perform all queries. My direct supervisor (who has many years of experience with this sort of thing) wants to change this. He says that if each user has his own AD account to perform SQL queries, it gives us more of a trail to follow if someone tries to do something wrong. The closest thing I've managed to come up with is using WIF to give the user a claim to a specific Active Directory account, but I still have to use impersonation because even still, the ASP.net process presents anonymous credentials to the SQL server. It boils down to this: Can I log users in with Active Directory accounts in my ASP.net application without having the users manually enter their AD credentials? (Windows Authentication)
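    For what it's worth, the two halves of the complaint can be tackled separately: Kerberos protocol transition (S4U) can produce a WindowsIdentity from just a user principal name – no typed password – and a small IDisposable wrapper keeps the impersonation plumbing out of the page and data-access code. The sketch below is an assumption-laden starting point, not a drop-in answer: it requires the web server's service account to be trusted for constrained delegation to the SQL Server's SPN, and deriving the UPN from the cookie is hypothetical.

    using System;
    using System.Security.Principal;

    // Minimal sketch: impersonate a user from their UPN alone via Kerberos
    // protocol transition (S4U). No password is required, but the hosting
    // account must be trusted for (constrained) delegation and the domain
    // must support protocol transition - check with your AD administrators.
    public sealed class Impersonation : IDisposable
    {
        private readonly WindowsImpersonationContext _context;

        public Impersonation(string userPrincipalName)
        {
            // e.g. "jsmith@ourdomain.local" - looked up from the cookie's user name (assumption).
            var identity = new WindowsIdentity(userPrincipalName);
            _context = identity.Impersonate();
        }

        public void Dispose()
        {
            _context.Undo();
            _context.Dispose();
        }
    }

    // Usage in data-access code: the connection opens under the caller's AD
    // account (with Integrated Security) instead of the app pool identity.
    // using (new Impersonation(upnFromCookie))
    // using (var connection = new SqlConnection("Server=sql1;Database=App;Integrated Security=SSPI"))
    // {
    //     connection.Open();
    //     // ... run the query ...
    // }

    Wrapping the Win32/identity details in one class at least concentrates the "clutter" in a single place, and the per-user connection gives the SQL-side audit trail the requirement is asking for.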

    Read the article

  • Drupal menu with submenus only shows submenu of current page. NOT a CSS problem

    - by jordanstephens
    This is not a CSS problem. The HTML isn't there. I need the menu, with submenus to exist in HTML on EACH page. Right now, the submenu only exists in HTML for the submenu related to the page being currently viewed. Here's an example of what it SHOULD be like. <ul id="menu"> <li>Page1 <ul class="sub"> <li>sub1.1</li> <li>sub1.2</li> <li>sub1.3</li> <li>sub1.4</li> </ul> </li> <li>Page2 <ul class="sub"> <li>sub2.1</li> <li>sub2.2</li> <li>sub2.3</li> <li>sub2.4</li> </ul> </li> <li>Page3 <ul class="sub"> <li>sub3.1</li> <li>sub3.2</li> <li>sub3.3</li> <li>sub3.4</li> </ul> </li> </ul> But here is what is actually happening (say I'm currently viewing Page2): <ul id="menu"> <li>Page1</li> <li>Page2 <ul class="sub"> <li>sub2.1</li> <li>sub2.2</li> <li>sub2.3</li> <li>sub2.4</li> </ul> </li> <li>Page3</li> </ul> Additionally, and maybe this doesn't have anything to do with it, but whichever list item <li> element is relative to the page i am currently on is given these classes expanded active-trail and any other <li> is given collapsed class. The classes aren't really so much of an issue, the problem is just that the content (html) isn't there. Does anyone have any idea what's going on here. I feel like I've been digging in the Drupal Admin menus forever now. I feel like it's got to have a PHP solution in the template file or something, but I don't know Drupal super well at this point. Thank you!

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36  | Next Page >