Search Results

Search found 4217 results on 169 pages for 'zend validate'.


  • How to downgrade a certain package (php) back to karmic

    - by Eugene
    Hi! I've updated from 9.10 to 10.04, but unfortunately the PHP provided with 10.04 is not yet supported by Zend Optimizer. As far as I understand, I need to somehow replace the PHP 5.3 package provided under 10.04 with the older PHP 5.2 package provided under 9.10. However, I am not sure whether this is the right way to downgrade PHP, and if it is, I don't know how to replace the 10.04 package with the 9.10 package. Could you please help me with that?

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of its long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple (a sketch of this kind of fixed-width parsing appears at the end of this article). Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called ncdc_parse.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
Coordinator jobs can take all the same actions as Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml, hive-default.xml (make sure this file contains your metastore connection data), and ncdc_parse.hql. Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run.
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: validate our workflow.xml, copy our working directory to HDFS, submit our job to the Oozie server, and run our workflow. Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but why don't we see any activity on the JobTracker? All we got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job: oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
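
    As promised above, here is a minimal sketch of the kind of fixed-width parsing the NCDC records need. The field offsets below are hypothetical placeholders, not the real NCDC layout; the format documentation defines the exact byte positions of each field.

        // Hypothetical fixed-width parser; the offsets are illustrative only.
        public class NcdcRecordParser {
            public static String[] parse(String line) {
                String stationId = line.substring(0, 6).trim();   // "x bytes" for the station ID
                String dewPoint  = line.substring(24, 30).trim(); // "y bytes" for the dew point
                String temp      = line.substring(30, 36).trim();
                return new String[] { stationId, dewPoint, temp };
            }

            public static void main(String[] args) {
                // A made-up record laid out to match the hypothetical offsets above.
                String sample = "010010" + "------------------" + "  42.1" + "  71.3";
                System.out.println(String.join(", ", parse(sample))); // 010010, 42.1, 71.3
            }
        }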
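
    As an aside, the same submit-and-start dance can also be done programmatically through Oozie's Java client API rather than the CLI. This is a minimal sketch, assuming the Oozie client jar is on the classpath; the URL and paths simply mirror the job.properties above.

        import java.util.Properties;
        import org.apache.oozie.client.OozieClient;
        import org.apache.oozie.client.OozieClientException;

        public class SubmitWeatherWorkflow {
            public static void main(String[] args) throws OozieClientException {
                OozieClient oozie = new OozieClient("http://url.to.oozie.server:port_number/oozie");
                Properties conf = oozie.createConfiguration();
                conf.setProperty(OozieClient.APP_PATH, "hdfs://localhost:8020/user/oracle/weather_ooze");
                conf.setProperty("jobTracker", "localhost:8021");
                conf.setProperty("nameNode", "hdfs://localhost:8020");
                conf.setProperty("queueName", "default");

                String jobId = oozie.submit(conf); // like -submit: the job sits in PREP status
                oozie.start(jobId);                // like -start: redeem the ticket
                System.out.println("Started workflow " + jobId);
            }
        }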

    Read the article

  • HOWTO: Migrate SharePoint site from one farm to another?

    - by Ramiz Uddin
    Hi Everyone, I have a site deployed on a development machine. The site was developed under WSS 3.0 and contains custom Lists, Features, Templates, Styles, etc. What I have to do is create a deployment package (setup) which I can give to my client. I know about stsadm, but I don't have access to the production machine. Is there a way I can package all the dependencies into a single file (installation file) and run it on the server, so that it includes all the dependencies (including site content)? I've tried to experiment with the SharePoint Content Deployment Wizard. It all went well when exporting the site, but the import always fails with the following message (two further attempts produced the same error): [2/2/2010 3:43:25 PM]: Start Time: 2/2/2010 3:43:25 PM. [2/2/2010 3:43:25 PM]: Progress: Initializing Import. [2/2/2010 3:43:42 PM]: FatalError: Could not find WebTemplate #75805 with LCID 1033. at Microsoft.SharePoint.Deployment.ImportRequirementsManager.VerifyWebTemplate(SPRequirementObject reqObj) at Microsoft.SharePoint.Deployment.ImportRequirementsManager.Validate(SPRequirementObject reqObj) at Microsoft.SharePoint.Deployment.ImportRequirementsManager.DeserializeAndValidate() at Microsoft.SharePoint.Deployment.SPImport.VerifyRequirements() at Microsoft.SharePoint.Deployment.SPImport.Run() [2/2/2010 3:43:48 PM]: Progress: Import Completed. [2/2/2010 3:43:48 PM]: Finish Time: 2/2/2010 3:43:48 PM. [2/2/2010 3:43:48 PM]: Completed with 0 warnings. [2/2/2010 3:43:48 PM]: Completed with 1 errors. I actually couldn't find a good reference on how to use it. But this software isn't quite what I'm looking for, which is something that can create a simple deployment package (after that you don't need to do anything). I might not be correct, but after two days of googling I think there is no such utility (freeware) that can create a simple package of a site and install it on another farm without even needing to configure anything before you run the installation package.
You people might have advice which can help me look/think outside the box and get to the solution quickly instead of adding more days working on the problem. Please share only freeware; I can't afford to buy anything. I'm waiting to be surprised with a good share :) Have a good day! Thanks.

    Read the article

  • Simple Observation in Django: How Can I Correctly Modify The `attrs` sent to __new__ of a Django Mod

    - by DGGenuine
    Hello, I'm a strong proponent of the observer pattern, and this is what I'd like to be able to do in my Django models.py: class AModel(Model): __metaclass__ = SomethingMagical @post_save(AnotherModel) @classmethod def observe_another_model_saved(klass, sender, instance, created, **kwargs): pass @pre_init('YetAnotherModel') @classmethod def observe_yet_another_model_initializing(klass, sender, *args, **kwargs): pass @post_delete('DifferentApp.SomeModel') @classmethod def observe_some_model_deleted(klass, sender, **kwargs): pass This would connect a signal with sender = the decorator's argument and receiver = the decorated method. Right now my signal connection code all exists in __init__.py which is okay, but a little unmaintainable. I want this code all in one place, the models.py file. Thanks to helpful feedback from the community I'm very close (I think.) (I'm using a metaclass solution instead of the class decorator solution in the previous question/answer because you can't set attributes on classmethods, which I need.) I am having a strange error I don't understand. At the end of my post are the contents of a models.py that you can pop into a fresh project/application to see the error. Set your database to sqlite and add the application to installed apps. This is the error: Validating models... Unhandled exception in thread started by Traceback (most recent call last): File "/Library/Python/2.6/site-packages//lib/python2.6/site-packages/django/core/management/commands/runserver.py", line 48, in inner_run File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 253, in validate raise CommandError("One or more models did not validate:\n%s" % error_text) django.core.management.base.CommandError: One or more models did not validate: local.myothermodel: 'my_model' has a relation with model MyModel, which has either not been installed or is abstract. I've indicated a few different things you can comment in/out to fix the error. First, if you don't modify the attrs sent to the metaclass's __new__, then the error does not arise. (Note even if you copy the dictionary element by element into a new dictionary, it still fails; only using the exact attrs dictionary works.) Second, if you reference the first model by class rather than by string, the error also doesn't arise regardless of what you do in __new__. I appreciate your help. I'll be githubbing the solution if and when it works. Maybe other people would enjoy a simplified way to use Django signals to observe application happenings. #models.py from django.db import models from django.db.models.base import ModelBase from django.db.models import signals import pdb class UnconnectedMethodWrapper(object): sender = None method = None signal = None def __init__(self, signal, sender, method): self.signal = signal self.sender = sender self.method = method def post_save(sender): return _make_decorator(signals.post_save, sender) def _make_decorator(signal, sender): def decorator(view): return UnconnectedMethodWrapper(signal, sender, view) return decorator class ConnectableModel(ModelBase): """ A meta class for any class that will have static or class methods that need to be connected to signals. """ def __new__(cls, name, bases, attrs): unconnecteds = {} ## NO WORK newattrs = {} for name, attr in attrs.iteritems(): if isinstance(attr, UnconnectedMethodWrapper): unconnecteds[name] = attr newattrs[name] = attr.method #replace the UnconnectedMethodWrapper with the method it wrapped. 
else: newattrs[name] = attr ## NO WORK # newattrs = {} # for name, attr in attrs.iteritems(): # newattrs[name] = attr ## WORKS # newattrs = attrs new = super(ConnectableModel, cls).__new__(cls, name, bases, newattrs) for name, unconnected in unconnecteds.iteritems(): _connect_signal(unconnected.signal, unconnected.sender, getattr(new, name), new._meta.app_label) return new def _connect_signal(signal, sender, receiver, default_app_label): # full implementation also accepts basestring as sender and will look up model accordingly signal.connect(sender=sender, receiver=receiver) class MyModel(models.Model): __metaclass__ = ConnectableModel @post_save('In my application this string matters') @classmethod def observe_it(klass, sender, instance, created, **kwargs): pass @classmethod def normal_class_method(klass): pass class MyOtherModel(models.Model): ## WORKS # my_model = models.ForeignKey(MyModel) ## NO WORK my_model = models.ForeignKey('MyModel')

    Read the article

  • How to set BackGround color to a divider in JSplitPane

    - by Sunil Kumar Sahoo
    I have created a divider in a JSplitPane, but I am unable to set the color of the divider. Please help me with how to set the color of that divider. import javax.swing.*; import java.awt.*; import java.awt.event.*; public class SplitPaneDemo { JFrame frame; JPanel left, right; JSplitPane pane; int lastDividerLocation = -1; public static void main(String[] args) { SplitPaneDemo demo = new SplitPaneDemo(); demo.makeFrame(); demo.frame.addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent e) { System.exit(0); } }); demo.frame.show(); } public JFrame makeFrame() { frame = new JFrame(); // Create a horizontal split pane. pane = new JSplitPane(JSplitPane.HORIZONTAL_SPLIT); left = new JPanel(); left.setBackground(Color.red); pane.setLeftComponent(left); right = new JPanel(); right.setBackground(Color.green); pane.setRightComponent(right); JButton showleft = new JButton("Left"); showleft.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { Container c = frame.getContentPane(); if (pane.isShowing()) { lastDividerLocation = pane.getDividerLocation(); } c.remove(pane); c.remove(left); c.remove(right); c.add(left, BorderLayout.CENTER); c.validate(); c.repaint(); } }); JButton showright = new JButton("Right"); showright.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { Container c = frame.getContentPane(); if (pane.isShowing()) { lastDividerLocation = pane.getDividerLocation(); } c.remove(pane); c.remove(left); c.remove(right); c.add(right, BorderLayout.CENTER); c.validate(); c.repaint(); } }); JButton showboth = new JButton("Both"); showboth.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { Container c = frame.getContentPane(); c.remove(pane); c.remove(left); c.remove(right); pane.setLeftComponent(left); pane.setRightComponent(right); c.add(pane, BorderLayout.CENTER); if (lastDividerLocation >= 0) { pane.setDividerLocation(lastDividerLocation); } c.validate(); c.repaint(); } }); JPanel buttons = new JPanel(); buttons.setLayout(new GridBagLayout()); buttons.add(showleft); buttons.add(showright); buttons.add(showboth); frame.getContentPane().add(buttons, BorderLayout.NORTH); pane.setPreferredSize(new Dimension(400, 300)); frame.getContentPane().add(pane, BorderLayout.CENTER); frame.pack(); pane.setDividerLocation(0.5); return frame; } } Thanks Sunil kumar Sahoo
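
    One common approach (a hedged sketch, not from the original post) is to install a BasicSplitPaneUI whose divider paints its own background, since the divider is created by the UI delegate rather than exposed directly by JSplitPane:

        import java.awt.Color;
        import java.awt.Graphics;
        import javax.swing.JFrame;
        import javax.swing.JSplitPane;
        import javax.swing.plaf.basic.BasicSplitPaneDivider;
        import javax.swing.plaf.basic.BasicSplitPaneUI;

        public class ColoredDividerDemo {
            public static void main(String[] args) {
                JSplitPane pane = new JSplitPane(JSplitPane.HORIZONTAL_SPLIT);
                // Install a UI whose divider fills itself with a chosen color.
                pane.setUI(new BasicSplitPaneUI() {
                    @Override
                    public BasicSplitPaneDivider createDefaultDivider() {
                        return new BasicSplitPaneDivider(this) {
                            @Override
                            public void paint(Graphics g) {
                                g.setColor(Color.BLUE);                    // the divider color
                                g.fillRect(0, 0, getWidth(), getHeight()); // fill the divider area
                                super.paint(g);
                            }
                        };
                    }
                });
                JFrame frame = new JFrame("Colored divider");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.getContentPane().add(pane);
                frame.setSize(400, 300);
                frame.setVisible(true);
            }
        }

    In the SplitPaneDemo above, the pane.setUI(...) call would go right after the JSplitPane is constructed in makeFrame().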

    Read the article

  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it, but I would like your opinion. The first approach I considered was: class Customer : EntityBase<Customer> { public void ChangeEmail(string email) { if(string.IsNullOrWhiteSpace(email)) throw new DomainException(“...”); if(!email.IsEmail()) throw new DomainException(); if(email.Contains(“@mailinator.com”)) throw new DomainException(); } } I actually do not like this validation because even though I am encapsulating the validation logic in the correct entity, this violates the Open/Closed principle (open for extension but closed for modification), and I have found that when this principle is violated, code maintenance becomes a real pain as the application grows in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, and hard to maintain. But the real reason why I do not like this approach is: if the validation rules change, I have to come and edit my domain entity. This has been a really simple example, but in real life the validation could be more complex. So, following the philosophy of Udi Dahan, making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this: class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer> { private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver; public bool IsSatisfiedBy(Customer customer) { return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email); } } But then I realized that in order to follow this approach I had to mutate my entities first in order to pass the value being validated, in this case the email, but mutating them would cause my domain events to be fired, which I wouldn’t like to happen until the new email is valid. So after considering these approaches, I came up with this one, since I am going to implement a CQRS architecture: class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand> { public void IsValid(Customer entity, ChangeEmailCommand command) { if(!command.Email.HasValidDomain()) throw new DomainException(“...”); } } Well, that’s the main idea: the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user, and since the validators are considered injectable objects they could have external dependencies injected if the validation requires it. Now the dilemma: I am happy with a design like this because my validation is encapsulated in individual objects, which brings many advantages: easy unit testing, easy maintenance, domain invariants explicitly expressed using the Ubiquitous Language, easy extension, centralized validation logic, and validators that can be used together to enforce complex domain rules. And even though I know I am placing the validation of my entities outside of them (you could argue a code smell - Anemic Domain), I think the trade-off is acceptable. But there is one thing that I have not figured out how to implement in a clean way: how should I use these components...
Since they will be injected, they won’t fit naturally inside my domain entities, so basically I see two options: pass the validators to each method of my entity, or validate my objects externally (from the command handler). I am not happy with option 1, so I will explain how I would do it with option 2: class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand> { private IEnumerable<IDomainInvariantValidator> validators; // here the validators required for this command would be injected public void Execute(ChangeEmailCommand command) { // and in here I would validate, something like this using (var t = this.unitOfWork.BeginTransaction()) { var customer = this.unitOfWork.Get<Customer>(command.CustomerId); this.validators.ForEach(x => x.IsValid(customer, command)); // here I know the command is valid // the call to ChangeEmail will fire domain events as needed customer.ChangeEmail(command.Email); t.Commit(); } } } Well, this is it. Can you give me your thoughts about this or share your experiences with domain entity validation? EDIT: I think it is not clear from my question, but the real problem is: hiding the domain rules has serious implications for the future maintainability of the application, and domain rules also change often during the life-cycle of the app. Hence implementing them with this in mind would let us extend them easily. Now imagine that in the future a rules engine is implemented; if the rules are encapsulated outside of the domain entities, this change would be easier to implement.
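
    For illustration only, here is a rough Java analogue of the injectable-validator idea sketched above; all names are hypothetical, not from the question:

        import java.util.List;
        import java.util.Set;

        interface DomainInvariantValidator<E, C> {
            void assertValid(E entity, C command); // throws DomainException on violation
        }

        class DomainException extends RuntimeException {
            DomainException(String message) { super(message); }
        }

        class ChangeEmailCommand {
            final String email;
            ChangeEmailCommand(String email) { this.email = email; }
        }

        class Customer {
            void changeEmail(String email) { /* mutate and fire domain events */ }
        }

        class EmailDomainIsAllowedValidator implements DomainInvariantValidator<Customer, ChangeEmailCommand> {
            private final Set<String> blockedDomains; // injected external dependency
            EmailDomainIsAllowedValidator(Set<String> blockedDomains) { this.blockedDomains = blockedDomains; }
            @Override
            public void assertValid(Customer customer, ChangeEmailCommand command) {
                String domain = command.email.substring(command.email.indexOf('@') + 1);
                if (blockedDomains.contains(domain)) {
                    throw new DomainException("Email domain not allowed: " + domain);
                }
            }
        }

        class ChangeEmailCommandHandler {
            private final List<DomainInvariantValidator<Customer, ChangeEmailCommand>> validators;
            ChangeEmailCommandHandler(List<DomainInvariantValidator<Customer, ChangeEmailCommand>> validators) {
                this.validators = validators;
            }
            void execute(Customer customer, ChangeEmailCommand command) {
                validators.forEach(v -> v.assertValid(customer, command)); // check every invariant first
                customer.changeEmail(command.email);                       // only then mutate the entity
            }
        }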

    Read the article

  • Grails validation problems with sets of data: only getting one error message for all errors in a set

    - by Matt
    Hi, I'm trying to validate a domain class that has a number of associated collections. class IebeUser { ... static hasMany = [openUserAnswers:OpenUserAnswer, closedUserAnswers:ClosedUserAnswer] } class OpenUserAnswer { OpenQuestion openQuestion String text static belongsTo = [user:IebeUser] static constraints = { openQuestion(nullable:false) text(blank:false) } } class ClosedUserAnswer { ClosedQuestion closedQuestion ClosedAnswer answer static belongsTo = [user:IebeUser] static constraints = { closedQuestion(nullable:false) answer(nullable:false) } } A closed question has a set of predefined answers and an open question lets the user enter a freeform answer. All is well until I come to validate the object after entry in a form: params: [closedUserAnswers[0].answer.id:, closedUserAnswers[0]:[answer:[id:], answer.id:], password:dfgdfgdf, openUserAnswers[0].text:gdfgdfgdfg, openUserAnswers[0]:[text:gdfgdfgdfg], _isOptedOut:, create:Continue, username:gdfgdfggdf, email:[email protected], closedUserAnswers[1].answer.id:, closedUserAnswers[1]:[answer:[id:], answer.id:], openUserAnswers[1].text:, openUserAnswers[1]:[text:], firstName:dfgdf, lastName:gdfgdfgd, action:save, controller:main] The key bits being: closedUserAnswers[0].answer.id:, closedUserAnswers[0]:[answer:[id:] closedUserAnswers[1].answer.id:, closedUserAnswers[1]:[answer:[id:] openUserAnswers[1].text:, openUserAnswers[1]:[text:] In my tests I have two objects of type ClosedUserAnswer and two of OpenUserAnswer. But when I call validation on IebeUser I only get validation errors for the closedUserAnswers or the openUserAnswers as a whole; I don't get validation errors for each object with a problem, which is what I need. I really need an error per instance. Does anyone know what I'm doing wrong? Even when I call the validate method against each ClosedUserAnswer/OpenUserAnswer I still only get one error per type. Here are my errors. Sorry for all the code, but I thought I'd include as much of it as possible so that it makes sense.
Field error in object 'uk.co.cascaid.iebe.IebeUser' on field 'openUserAnswers.text': rejected value []; codes [uk.co.cascaid.iebe.OpenUserAnswer.text.blank.error.uk.co.cascaid.iebe.IebeUser.openUserAnswers.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.error.openUserAnswers.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.error.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.error,openUserAnswer.text.blank.error.uk.co.cascaid.iebe.IebeUser.openUserAnswers.text,openUserAnswer.text.blank.error.openUserAnswers.text,openUserAnswer.text.blank.error.text,openUserAnswer.text.blank.error,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.uk.co.cascaid.iebe.IebeUser.openUserAnswers.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.openUserAnswers.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank,openUserAnswer.text.blank.uk.co.cascaid.iebe.IebeUser.openUserAnswers.text,openUserAnswer.text.blank.openUserAnswers.text,openUserAnswer.text.blank.text,openUserAnswer.text.blank,blank.uk.co.cascaid.iebe.IebeUser.openUserAnswers.text,blank.openUserAnswers.text,blank.text,blank]; arguments [text,class uk.co.cascaid.iebe.OpenUserAnswer]; default message [Property [{0}] of class [{1}] cannot be blank] Field error in object 'uk.co.cascaid.iebe.IebeUser' on field 'closedUserAnswers.answer': rejected value [null]; codes [uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.error.uk.co.cascaid.iebe.IebeUser.closedUserAnswers.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.error.closedUserAnswers.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.error.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.error,closedUserAnswer.answer.nullable.error.uk.co.cascaid.iebe.IebeUser.closedUserAnswers.answer,closedUserAnswer.answer.nullable.error.closedUserAnswers.answer,closedUserAnswer.answer.nullable.error.answer,closedUserAnswer.answer.nullable.error,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.uk.co.cascaid.iebe.IebeUser.closedUserAnswers.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.closedUserAnswers.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable,closedUserAnswer.answer.nullable.uk.co.cascaid.iebe.IebeUser.closedUserAnswers.answer,closedUserAnswer.answer.nullable.closedUserAnswers.answer,closedUserAnswer.answer.nullable.answer,closedUserAnswer.answer.nullable,nullable.uk.co.cascaid.iebe.IebeUser.closedUserAnswers.answer,nullable.closedUserAnswers.answer,nullable.answer,nullable]; arguments [answer,class uk.co.cascaid.iebe.ClosedUserAnswer]; default message [Property [{0}] of class [{1}] cannot be null]

    Read the article

  • Error using Session in IIS7

    - by flashnik
    After deployment of my website to IIS I'm getting the following error message when trying to access session state: Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive. Please also make sure that System.Web.SessionStateModule or a custom session state module is included in the <configuration>\<system.web>\<httpModules> section in the application configuration. I access it in the Page_Load or PreRender events (I tried both versions). With the VS Dev Server it works without a problem. I tried both InProc and StateServer storage, and both 1 and multiple worker processes. I added an enableSessionState="true" to my page explicitly. Here is part of web.config: <system.web> <globalization culture="ru-RU" uiCulture="ru-RU" /> <compilation debug="true" defaultLanguage="c#"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions.Design, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A" /> <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> </assemblies> </compilation> <pages enableEventValidation="false" enableSessionState="true"> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx" /> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false" /> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="SearchUrlRewriter" type="Synonymizer.SearchUrlRewriter, Synonymizer, Version=1.0.0.0, Culture=neutral" /> <add name="Session" type="System.Web.SessionStateModule" /> </httpModules> <sessionState cookieless="UseCookies" cookieName="My_SessionId" mode="InProc" stateNetworkTimeout="5" /> <customErrors mode="Off" /> </system.web> What else do I need to do to make it work? UPD: I tried to monitor whether IIS accesses the aspnet_client folder with ProcMon and didn't see any access.

    Read the article

  • Validating Petabytes of Data with Regularity and Thoroughness

    - by rickramsey
    by Brian Zents When former Intel CEO Andy Grove said “only the paranoid survive,” he wasn’t necessarily talking about tape storage administrators, but it’s a lesson they’ve learned well. After all, tape storage is the last line of defense to prevent data loss, so tape administrators are extra cautious in making sure their data is secure. Not surprisingly, we are often asked for ways to validate tape media and the files on them. In the past, an administrator could validate the media, but doing so was often tedious or disruptive or both. The debut of the Data Integrity Validation (DIV) and Library Media Validation (LMV) features in the Oracle T10000C drive helped eliminate many of these pains. Also available with the Oracle T10000D drive, these features use hardware-assisted CRC checks that not only ensure the data is written correctly the first time, but also do so much more efficiently. Traditionally, a CRC check takes at least 25 seconds per 4GB file with a 2:1 compression ratio, but the T10000C/D drives can reduce the check to a maximum of nine seconds because the entire check is contained within the drive. No data needs to be sent to a host application. A time savings of at least 64 percent is extremely beneficial over the course of checking an entire 8.5TB T10000D tape. While the DIV and LMV features are better than anything else out there, what storage administrators really need is a way to check petabytes of data with regularity and thoroughness. With the launch of Oracle StorageTek Tape Analytics (STA) 2.0 in April, there is finally a solution that addresses this longstanding need. STA bundles these features into one interface to automate all media validation activities across all Oracle SL3000 and SL8500 tape libraries in an environment. And best of all, the validation process can be associated with the health checks an administrator would be doing already through STA. In fact, STA validates the media based on any of the following policies: Random Selection – Randomly selects media for validation whenever a validation drive in the standalone library or library complex is available. Media Health = Action – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Action.” You can specify from one to five exchanges. Media Health = Evaluate – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Evaluate.” You can specify from one to five exchanges. Media Health = Monitor – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Monitor.” You can specify from one to five exchanges. Extended Period of Non-Use – Selects media that have not had an exchange for a specified number of days. You can specify from 365 to 1,095 days (one to three years). Newly Entered – Selects media that have recently been entered into the library. Bad MIR Detected – Selects media with an exchange resulting in a “Bad MIR Detected” error. A bad media information record (MIR) indicates degraded high-speed access on the media. To avoid disrupting host operations, an administrator designates certain drives for media validation operations. If a host requests a file from media currently being validated, the host’s request takes priority. To ensure that the administrator really knows it is the media that is bad, as opposed to the drive, STA includes drive calibration and qualification features. 
In addition, validation requests can be re-prioritized or cancelled as needed. To ensure that a specific tape isn’t validated too often, STA prevents a tape from being validated twice within 24 hours via one of the policies described above. A tape can be validated more often if the administrator manually initiates the validation. When the validations are complete, STA reports the results. STA does not report simply a “good” or “bad” status; it also reports if media is merely degraded, so the administrator can migrate the data before there is a true failure. From that point, the administrators’ paranoia is relieved, as they have the necessary information to make a sound decision about the health of the tapes in their environment. About the Photograph Photograph taken by Rick Ramsey in Death Valley, California, May 2014 - Brian
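
    For context on those timings: a host-side CRC pass is exactly the kind of work that DIV and LMV move into the drive. A minimal Java sketch of what a host would otherwise have to do, using only the standard library:

        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.zip.CRC32;

        public class HostSideCrc {
            public static void main(String[] args) throws IOException {
                CRC32 crc = new CRC32();
                byte[] buf = new byte[1 << 20]; // 1 MiB read buffer
                try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        crc.update(buf, 0, n); // fold each chunk into the running checksum
                    }
                }
                System.out.printf("CRC32 = %08x%n", crc.getValue());
            }
        }

    Every byte has to travel from the drive to the host just to be checked, which is why keeping the check inside the drive is so much faster.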

    Read the article

  • Is your Credit Card Number valid?

    - by Rekha
    Credit card numbers may look like random, unique 16-digit numbers, but those digits convey more than we might think. The first digit of the card is the Major Industry Identifier: 1 and 2 – Airlines 3 – Travel and Entertainment 4 and 5 – Banking and Financial 6 – Merchandizing and Banking 7 – Petroleum 8 – Telecommunications 9 – National assignment The first 6 digits represent the Issuer Identification Number: Visa – 4xxxxx Master Card – 51xxxx to 55xxxx The 7th and following digits, excluding the last digit, are the person’s account number, which allows a trillion possible combinations if the maximum of 12 digits is used. Many cards only use 9 digits. The final digit is the checksum or check digit. It is used to validate the card number using the Luhn algorithm. How To Validate a Credit Card Number? Take any credit card number, for example 5588 3201 2345 6789. Step 1: Double every other digit from the right: 5*2, 8*2, 3*2, 0*2, 2*2, 4*2, 6*2, 8*2, giving 10, 16, 6, 0, 4, 8, 12, 16. Step 2: Add these new digits to the undoubled digits. All double-digit numbers are added as the sum of their digits, so 16 becomes 1+6 = 7: Undoubled digits: 5 8 2 1 3 5 7 9 Doubled digits: 10 16 6 0 4 8 12 16 Sum: 5+1+0+8+1+6+2+6+1+0+3+4+5+8+7+1+2+9+1+6 = 76 If the final sum is divisible by 10, then the credit card number is valid; if not, the number is invalid or fake. Since 76 is not divisible by 10, the example is a fake number. (via Mint) This article, titled "Is your Credit Card Number valid?", was originally published at Tech Dreams.
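
    The algorithm described above is easy to put into code. Here is a minimal Java version of the Luhn check; running it on the example number confirms the checksum of 76 and reports the number as invalid:

        public class LuhnCheck {
            // Returns true if the card number passes the Luhn checksum.
            public static boolean isValid(String cardNumber) {
                String digits = cardNumber.replaceAll("\\s+", ""); // ignore spaces
                int sum = 0;
                boolean doubleIt = false; // double every second digit from the right
                for (int i = digits.length() - 1; i >= 0; i--) {
                    int d = Character.getNumericValue(digits.charAt(i));
                    if (doubleIt) {
                        d *= 2;
                        if (d > 9) d -= 9; // same as adding the two digits of d
                    }
                    sum += d;
                    doubleIt = !doubleIt;
                }
                return sum % 10 == 0;
            }

            public static void main(String[] args) {
                System.out.println(isValid("5588 3201 2345 6789")); // false: the sum is 76
            }
        }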

    Read the article

  • Full text search with Sphider

    - by Ravi Gupta
    I am searching for a good, lightweight, open-source, full-text search engine for PHP. I came across a number of options like Lucene, Zend Lucene, Solr, etc., but at the same time I also found many people suggesting Sphider for small/medium-sized websites. I looked at the Sphider website a lot but was unable to find out how to use it as a full-text search engine. If anybody who has worked with it could help me figure out whether it supports full-text search or not, that would be great. Edit: Please don't suggest any other alternatives for full-text search.

    Read the article

  • Proactive Support Sessions at OUG London and OUG Ireland

    - by THE
    Oracle Proactive Support Technology is proud to announce that two members of its team will be speaking at the UK and Ireland User Group Conferences this year. Maurice and Greg plan to run the following sessions (may be subject to change): Maurice Bauhahn OUG Ireland BI & EPM and Technology Joint SIG Meeting 20 November 2012 BI&EPM SIG event in Ireland (09:00-17:00) and OUG London EPM & Hyperion Conference 2012 Tuesday 23rd to Wednesday 24th Oct 2012 Profit from Oracle Diagnostic Tools Embedded in EPM Oracle bundles in many of its software suites valuable toolsets to collect logs and settings, slice/dice error messages, track performance, and trace activities across services. Become familiar with several enterprise-level diagnostic tools embedded in Enterprise Performance Management (Enterprise Manager Fusion Middleware Control, Remote Diagnostic Agent, Dynamic Monitoring Service, and Oracle Diagnostic Framework). Expedite resolution of Service Requests as you learn to upload output from these tools to My Oracle Support. Who will benefit from attending the session? Geeks will find this most beneficial, but anyone who raises Oracle technical service requests will learn valuable pointers that may speed resolution. The focus is on the EPM stack, but this session will benefit almost everyone who needs to drill deeper into Oracle software environments. What will delegates learn from the session? Delegates who participate in this session will learn: How to access and run Remote Diagnostic Agent, Enterprise Manager Fusion Middleware Control, Dynamic Monitoring Service, and Oracle Diagnostic Framework. How to exploit the strengths of each tool. How to pass the outputs to My Oracle Support. How to restrict exposure of sensitive information. OUG Ireland BI & EPM and Technology Joint SIG Meeting 20 November 2012 BI&EPM SIG event in Ireland (09:00-17:00) and OUG London EPM & Hyperion Conference 2012 Tuesday 23rd to Wednesday 24th Oct 2012 Using EPM-Specific Troubleshooting Tools EPM developers have created a number of EPM-specific tools to collect logs and configuration files, centralize configuration information, and validate a configured installation (Ziplogs, EPM Registry Editor, [Deployment Report, Registry Cleanup Utility, Reset Configuration Tool, EPMSYS Hostname Check] and Validate [EPM System Diagnostic]). Learn how to use these tools on your own or to expedite Service Request resolution. Who will benefit from attending the session? Anyone who monitors Oracle EPM environments or raises service requests will learn valuable lessons that could speed resolution of those requests. Anyone from novices to experts will benefit from this review of custom troubleshooting EPM tools. What will delegates learn from the session? Learn where to locate and start EPM troubleshooting tools created by EPM developers. Learn how to collect and upload outputs of EPM troubleshooting tools. Adapt to the history of changes in these tools across time and version. Learn how to make critical changes in configurations. Grzegorz Reizer OUG London EPM & Hyperion Conference 2012 Tuesday 23rd to Wednesday 24th Oct 2012 EPM 11.1.2.2: Detailed overview of new features and improvements in Financial Management products.
This presentation is a detailed overview of new features and improvements introduced in Enterprise Performance Management 11.1.2.2 for Financial Management products (Hyperion Financial Management, Hyperion Planning, Financial Close Management). The presentation will cover a number of new product features, from recently introduced configurable dimensionality in HFM to new functionality enhancements in Planning. We'll close the session with an overview of upgrade options from earlier product releases.

    Read the article

  • Starting off with web dev with php

    - by pavan kumar
    I'm currently working with Java / C++. I'm interested in web development and am planning to switch streams. I heard that PHP is a good platform to start with, and also that it does not require that much knowledge of technologies like JSP / Servlets or frameworks like Spring / Struts / Hibernate. I have basic ideas about HTML and JavaScript as well. I have gone through previous posts on SO and found the relevant resources as well: http://net.tutsplus.com/tutorials/php/the-best-way-to-learn-php/ http://www.webhostingtalk.com/archive/index.php/t-1028265.html http://www.killerphp.com/ http://phpforms.net/tutorial/tutorial.html http://www.php5-tutorial.com/ etc. Now, my questions are: I heard of PHP frameworks like CodeIgniter, Zend Framework and Yii. Doesn't learning PHP & MySQL implicitly make us aware of these frameworks? Am I making a good choice in starting with PHP? Is it a good idea to switch streams?

    Read the article

  • Good php editor

    - by Web Developer
    I have looked through other questions on the topic, but most are a bit old. I'm looking for a good editor for developing in PHP on Linux (Ubuntu). Here are my requirements: Basic editor features Free Lightweight Syntax highlighting Code folding (class, function, if/else/while/foreach blocks) Code completion Invalid syntax/error highlighting as you type Auto code indenting Snippet support (pieces of custom or language-specific code that I can insert) Extensibility It would be great if it had the following: Debugging support WYSIWYG Code formatting Framework support (CakePHP/Yii/Zend/Smarty) Testing support Todo Native look and feel (GNOME) Flex/RoR support is welcome but not a requirement MySQL support I have tried the following editors: NetBeans - it's bloated, resource-hogging, and doesn't have a native look and feel. Eclipse is okay, but I can't fold if/while blocks and it's slow. Gedit can be extended and I have tried it, but I still could not fold code or get error highlighting. I currently use Geany, but it doesn't inform me of syntax errors as I type. If you have ways to solve the problems with the above editors, those are also welcome.

    Read the article

  • Website development from scratch v/s web framework [duplicate]

    - by Ali
    This question already has an answer here: What should every programmer know about web development? 1 answer Do people develop websites from scratch when there are no particular requirements, or do they just pick up an existing web framework like Drupal, Joomla, WordPress, etc.? The requirements are almost similar in most cases: if personal, it will be a blog or image gallery; if corporate, it will be information pages that can be updated dynamically, along with a news section. And similarly, there are other requirements which can be fulfilled by WordPress, Joomla or Drupal. So, is it advisable to develop a website from scratch, and why? Update: to explain more, as I got a comment from @Raynos (thanks for the comment and for helping me clarify the question), the question is about: Should web sites be developed and designed fully from scratch? Should they be done using frameworks like Spring, Zend, CakePHP? Should they be done using CMSes like Joomla, WordPress, Drupal (people in the east are using these as frameworks)?

    Read the article

  • Sharing internet to Ubuntu 12.04 VMWare guest

    - by John Cogan
    Got the 12.04 distro and installed it on my Mac in VMWare Fusion (version 4.1.2). The install went fine and Ubuntu seemed to update itself during the install. Rebooted, and now I cannot get Ubuntu to access the internet via my Mac's Ethernet connection (connected to a router). I have tried setting the VMWare network adapter to NAT, Bridged, and Host-only, without success. In Ubuntu I have eth0 set to automatic DHCP. VMWare Tools is installed as well. I know next to nothing about what to do here, and net searches on both the Stackoverflow and Superuser sites are producing no results for me. Could someone please help? I want to eventually have Ubuntu run in the background as a development web server running Zend Framework and CE Server edition, but I need the internet access to run updates etc. on Ubuntu. Any help would be really appreciated. TIA John

    Read the article

  • Understanding the problem when things break in production

    - by bitcycle
    Scenario: You push to production. The push broke multiple things. That same build did not break QA or dev. As a developer, you don't have prod access. There is lots of pressure from above to get things working again. Specifics: PHP/MVC application that is API-driven in Zend. Deployed to a few servers. My question: While investigating, let's say I have a hunch that something is wrong. But I don't know for sure. And, of course, I can't test things in production. If I have a suggested fix based on that hunch, would it be wise to try and apply it and see if it works, before understanding what the problem is?

    Read the article

  • Why would I want to install node.js in my Rails Application?

    - by Crazy JIm
    Okay guys, I'm super confused. I thought node.js was a server-side framework, basically the JS version of Ruby's Rails or PHP's Zend. However, I'm having some difficulty with turbolinks, and it seems the way to fix it is to install node.js. I mean, I don't understand this at all. How can two frameworks work together like this? Also, it's not a gem (that REALLY would have confused me); you have to install node.js onto your local machine by running (in the case of Ubuntu) sudo apt-get install nodejs Firstly, how does this totally separate framework have any bearing on Rails? Secondly, surely this isn't fixing the problem forever? When you specify a gem in your Gemfile, the server knows what external libraries to install. How does the server know to install nodejs?

    Read the article

  • NetBeans 6.9 Released

    - by Duncan Mills
    Great news, the first NetBeans release that has been conducted fully under the stewardship of Oracle has now been released. NetBeans IDE 6.9 introduces the JavaFX Composer, a visual layout tool for building JavaFX GUI applications, similar to the Swing GUI builder for Java SE applications. With the JavaFX Composer, developers can quickly build, visually edit, and debug Rich Internet Applications (RIA) and bind components to various data sources, including Web services. The NetBeans 6.9 release also features OSGi interoperability for NetBeans Platform applications and support for developing OSGi bundles with Maven. With support for OSGi and Swing standards, the NetBeans Platform now supports the standard UI toolkit and the standard module system, providing a unique combination of standards for modular, rich-client development. Additional noteworthy features in this release include support for JavaFX SDK 1.3, PHP Zend framework, and Ruby on Rails 3.0; as well as improvements to the Java Editor, Java Debugger, issue tracking, and more. Head over to NetBeans.org for more details and of course downloads!

    Read the article

  • Doubts about several best practices for rest api + service layer

    - by TheBeefMightBeTough
    I'm going to be starting a project soon that exposes a RESTful API for business intelligence. It may not be limited to a RESTful API, so I plan to delegate requests to a service layer that then coordinates multiple domain objects (each of which has business logic local to the object). The API will likely have many calls, as it is a long-term project. While thinking about the design, I recalled a few best practices: 1) Use command objects at the controller layer (I'm using Spring MVC). 2) Use DTOs at the service layer. 3) Validate in both the controller and service layer, though for different reasons. I have my doubts about these recommendations.

    1) Using command objects adds a lot of extra single-purpose classes (potentially one per request). What exactly is the benefit? Annotation-based validation can be done using this approach, sure. What if I have two requests that take the same parameters, but have different validation requirements? I would have to have two different classes with exactly the same members but different annotations? Bleh.

    2) I have heard that using DTOs is preferable to parameters because it makes for more maintainable code down the road (say, e.g., requirements change and the service parameters need to be altered). I don't quite understand this. Shouldn't an API be more-or-less set in stone? I would understand that in the early phases of a project (or, especially, an entire company) the domain itself will not be well understood, and thus core domain objects may change along with the APIs that manipulate these objects. At this point, however, the number of API methods should be small and their dependents few, so changes to the methods could easily be tolerated from a maintainability standpoint. In a large API with many methods and a substantial domain model, I would think having a DTO for potentially each domain object would become unwieldy. Am I misunderstanding something here?

    3) I see validation in the controller and service layer as redundant in most cases. Why would I validate that parameters are not null and are in general well formed in the controller if the service is going to do exactly the same (and more)? Couldn't I just do all the validation in the service and throw a runtime exception with a list of bad parameters, then catch that in the controller to make the error messages more presentable? Better yet, couldn't I just make the error messages user-friendly in the service and let the exception trickle up to a global handler (ControllerAdvice in Spring, for example)? Is there something wrong with either of these approaches? (I do see a use case for controller validation if the input does not map one-to-one with the service input, but since the controllers are for a REST API and not forms, the API parameters will probably map directly to service parameters.)

    I also have a question about unchecked vs. checked exceptions. Namely, I'm not really sure why I'd ever want to use a checked exception. Every time I have seen them used, they just get wrapped into general exceptions (DomainException, SystemException, ApplicationException, w/e) to reduce the signature length of methods, or devs catch Exception rather than dealing with the App1Exception, App2Exception, Sys1Exception, Sys2Exception. I don't see how either of these practices is very useful. Why not just use unchecked exceptions always and catch the ones you actually do care about? You could just document what unchecked exceptions the method throws.
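
    To make the third point concrete, here is a minimal sketch (assuming Spring MVC, as the question does; the class and method names are made up) of validating in the service, throwing an unchecked exception that carries user-friendly messages, and translating it in a global handler:

        import java.util.ArrayList;
        import java.util.List;
        import org.springframework.http.HttpStatus;
        import org.springframework.http.ResponseEntity;
        import org.springframework.web.bind.annotation.ControllerAdvice;
        import org.springframework.web.bind.annotation.ExceptionHandler;

        // Unchecked exception carrying presentable messages out of the service layer.
        class ValidationException extends RuntimeException {
            final List<String> errors;
            ValidationException(List<String> errors) {
                super(String.join("; ", errors));
                this.errors = errors;
            }
        }

        // The service owns all the validation; the controller stays thin.
        class ReportService {
            Object runReport(String region, Integer year) {
                List<String> errors = new ArrayList<>();
                if (region == null || region.isEmpty()) errors.add("region must be provided");
                if (year == null || year < 2000) errors.add("year must be 2000 or later");
                if (!errors.isEmpty()) throw new ValidationException(errors);
                return new Object(); // ... the real work ...
            }
        }

        // Global handler turns the exception into a presentable response.
        @ControllerAdvice
        class ApiExceptionHandler {
            @ExceptionHandler(ValidationException.class)
            ResponseEntity<List<String>> onValidation(ValidationException ex) {
                return new ResponseEntity<>(ex.errors, HttpStatus.BAD_REQUEST);
            }
        }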

    Read the article

  • USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM.

    Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper).

    Life science customers now have the ability to take advantage of all of the benefits of Oracle's WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best-practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999, and in 2011 it certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost-effectively validate this world-class solution.

    With the Part 11 certification, Oracle's WebCenter now gives regulated life science organizations the ability to manage regulatory content in WebCenter, as well as the ability to take advantage of all of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management, and social networking technology. Here are a few screenshot examples of Part 11 functionality included in the product: E-Sign, E-Sign Render, Metadata History, Audit Trail Report, and Access Reporting.

    Gone are the days when life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet ever-changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality.

    Oracle has been #1 in life sciences because of its ability to develop cost-effective, easy-to-use, scalable solutions that help increase insight and efficiency to drive growth for its customers. Adding a world-class ECM solution to this product portfolio gives life science organizations the chance to retire costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion technology stack, with their other leading enterprise applications.

    USDM provides:
    • Expertise in life science ECM business processes
    • Prebuilt life science configuration in WebCenter
    • Validation Accelerator Packs for WebCenter

    USDM is very proud to support Oracle's expanding commitment to life sciences. For more information please contact: [email protected]

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process, there are a few ways to determine whether the process finished successfully. First, in the <installation directory>\log folder, there are staretlprocess.log and staretl.html files. These files give the output results of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, depending on the logging level set in the Configuration tool, can give extensive information about the ETL process, and it can be used as a validation for process completion.

    To automate the monitoring of these log files, perform the following steps (a sketch of such a check appears below):

    1. Write a custom application to parse through the log file and search for [ERROR]. In most cases, a major [ERROR] could cause the ETL process to fail, so finding this value in the log is worthy of an alert.

    2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (the number could be different based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end. Either way, this would be good cause for an alert.

    3. Check the last line in the log file. It should contain an indication that the ETL run completed successfully. For example, the last line of a log file will say (results could differ based on Reporting Database versions): [INFO] (Message) Finished Writing Report

    4. You could write an Ant script to execute the ETL process with failonerror="true", and from there send results to an external tool to monitor the jobs, send email, or write to a database.

    With each ETL run, the log file appends to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process. That way, only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA.

    Another way to determine whether the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version, this could be in the Stage or Star database; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records entries for the ETL run along with a start and finish time. If the ETL process has failed, the finish date will be null, so this table can be queried at a time when the ETL process is expected to be finished and an alert sent if the value is null. These are just some options; there are additional ways this can be accomplished based around these two areas: log files or the database.
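
    As a concrete starting point for steps 1 through 3, here is a minimal sketch of such a log check in Java. The log path and the expected final step ("Step 39/39") are assumptions to adjust to your installation directory and version:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class EtlLogCheck {
        public static void main(String[] args) throws IOException {
            // Assumed location; substitute your <installation directory>\log path
            List<String> lines = Files.readAllLines(Paths.get("log/staretlprocess.log"));

            // Step 1: a major [ERROR] entry is worth an alert
            boolean hasError = lines.stream().anyMatch(l -> l.contains("[ERROR]"));

            // Step 2: confirm the run reached the final step (39/39 here; version-dependent)
            boolean reachedFinalStep = lines.stream().anyMatch(l -> l.contains("Step 39/39"));

            // Step 3: the last line should indicate a successful finish
            String last = lines.isEmpty() ? "" : lines.get(lines.size() - 1);
            boolean finishedOk = last.contains("[INFO]") && last.contains("Finished Writing Report");

            if (hasError || !reachedFinalStep || !finishedOk) {
                System.out.println("ETL run needs attention: raise an alert here");
            } else {
                System.out.println("ETL run completed successfully");
            }
        }
    }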
    Here is an additional query to gather more information about your ETL run (connect as Staruser):

    SELECT SYSDATE,
           test_script,
           DECODE(loc, 0, PROCESSNAME, TRIM(SUBSTR(PROCESSNAME, loc + 1))) PROCESSNAME,
           duration
    FROM (SELECT (e.endtime - b.starttime) * 1440 duration,
                 TO_CHAR(b.starttime, 'hh24:mi:ss') starttime,
                 TO_CHAR(e.endtime, 'hh24:mi:ss') endtime,
                 b.PROCESSNAME,
                 INSTR(b.PROCESSNAME, ']') loc,
                 b.infotype test_script
          FROM (SELECT processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE
                FROM etl_processinfo
                WHERE processid = (SELECT MAX(PROCESSID) FROM etl_processinfo)
                  AND infotype = 'BEGIN') b
          INNER JOIN
               (SELECT processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE
                FROM etl_processinfo
                WHERE processid = (SELECT MAX(PROCESSID) FROM etl_processinfo)
                  AND infotype = 'END') e
            ON b.processid = e.processid
           AND b.PROCESSNAME = e.PROCESSNAME
          ORDER BY b.starttime)

    Read the article

  • How do I start working as a programmer - what do I need?

    - by giorgo
    I am currently learning Java and PHP, as I have some projects from university that require me to apply both languages: specifically, a Java GUI application connecting to a MySQL database, and a web application that will be implemented in PHP/MySQL. I have started learning the MVC pattern, Struts, and Spring, and I am also learning PHP with Zend. My first question is: how can I find employment as a programmer/software engineer? The reason I ask is that I have sent my CV to many companies, but all of them stated that work experience was required. I really need some guidance on how to improve my career opportunities. At present, I work on my own and haven't worked in collaboration with anyone on a particular project. I'm assuming most people create projects and submit them along with their CVs. My second question is: everyone has to make a start from somewhere, but what if this somewhere doesn't come? What do I need to do to create the circumstances where I can easily progress forward? Thanks

    Read the article

  • Am I getting paid a reasonable wage for web engineering?

    - by sailtheworld
    I've been doing some research, and it looks like most people in my line of work (WEB ENGINEERING/WEB APPLICATION DEVELOPMENT) who get paid hourly make anywhere from $30-80 an hour. With that said, I have SEVEN years of experience with web development, including OOP PHP, MySQL, jQuery, OOP JavaScript, interface design, Ajax, database architecture, etc. I am also very strong with visual design and workflow, and thus I've made some really high-quality interactive interfaces. I also have a lot of experience with Zend Framework, Symfony, WordPress, Drupal, etc., and a really strong portfolio to show for it. Here's the catch: I'm 20 years old and haven't graduated from college yet (I'm doing part-time school and ~30 hours a week of web development). But I've literally been doing web apps since I was 13 years old. So my question is: is $14 an hour a reasonable starting wage for working at a company part time?

    Read the article
