Search Results

Search found 49486 results on 1980 pages for 'remote working'.


  • How to make RavenDB Ids work with ASP.NET MVC Routes

    - by shiju
    By default, RavenDB separates the parts of its document Ids with "/". If we have a category object, its Ids look like "categories/1". This causes problems with ASP.NET MVC's route rules: for a route category/edit/id, the URI becomes category/edit/categories/1. You can solve this problem in two ways.

    Solution 1 - Change the Id Separator

    We can use a different Id separator for RavenDB Ids so that they work with the ASP.NET MVC route rules. The following code specifies that Ids are separated by "-" rather than the default "/":

        documentStore = new DocumentStore { Url = "http://localhost:8080/" };
        documentStore.Initialize();
        documentStore.Conventions.IdentityPartsSeparator = "-";

    The above IdentityPartsSeparator generates Ids like "categories-1".

    Solution 2 - Modify the ASP.NET MVC Route

    Modify the ASP.NET MVC routes in the Global.asax.cs file, as shown in the following code:

        routes.MapRoute(
            "WithParam",                  // Route name
            "{controller}/{action}/{*id}" // URL with parameters
        );

    We just put "*" in front of the id variable, which makes it a catch-all parameter that works with the default Id separator of RavenDB.
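    To make the catch-all route concrete, here is a minimal sketch (the controller and Category class are hypothetical, not from the original post) of the action that receives the full RavenDB Id:

        // Hypothetical controller: with the {*id} catch-all route above,
        // a request to /category/edit/categories/1 binds id = "categories/1".
        public class CategoryController : Controller
        {
            private readonly IDocumentSession session; // assumed to be injected

            public ActionResult Edit(string id)
            {
                var category = session.Load<Category>(id); // loads "categories/1"
                return View(category);
            }
        }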

    Read the article

  • ASP.NET MVC Validation Complete

    - by Ricardo Peres
    OK, so let’s talk about validation. Most people are probably familiar with the out-of-the-box validation attributes that MVC knows about, from the System.ComponentModel.DataAnnotations namespace, such as EnumDataTypeAttribute, RequiredAttribute, StringLengthAttribute, RangeAttribute and RegularExpressionAttribute, plus CompareAttribute from the System.Web.Mvc namespace. All of these validators inherit from ValidationAttribute and perform server-side as well as client-side validation. In order to use them, you must include the JavaScript files MicrosoftMvcValidation.js, or jquery.validate.js plus jquery.validate.unobtrusive.js, depending on whether you want to use Microsoft’s own library or jQuery. No significant difference exists, but jQuery is more extensible.

    You can also create your own attribute by inheriting from ValidationAttribute, but, if you want client-side behavior, you must also implement IClientValidatable (all of the out-of-the-box validation attributes implement it) and supply your own JavaScript validation function that mimics its server-side counterpart. Of course, you must reference the JavaScript file where the validation function is declared. Let’s see an example, validating even numbers. First, the validation attribute:

        [Serializable]
        [AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)]
        public class IsEvenAttribute : ValidationAttribute, IClientValidatable
        {
            protected override ValidationResult IsValid(Object value, ValidationContext validationContext)
            {
                Int32 v = Convert.ToInt32(value);

                if (v % 2 == 0)
                {
                    return (ValidationResult.Success);
                }
                else
                {
                    return (new ValidationResult("Value is not even"));
                }
            }

            #region IClientValidatable Members

            public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata, ControllerContext context)
            {
                yield return (new ModelClientValidationRule() { ValidationType = "iseven", ErrorMessage = "Value is not even" });
            }

            #endregion
        }

    The iseven validation function is declared like this in JavaScript, using jQuery validation:

        jQuery.validator.addMethod('iseven', function (value, element, params)
        {
            return ((parseInt(value) % 2) == 0);
        });

        jQuery.validator.unobtrusive.adapters.add('iseven', [], function (options)
        {
            options.rules['iseven'] = options.params;
            options.messages['iseven'] = options.message;
        });

    Do keep in mind that this is a simple example: we are not using parameters, which may be required for some more advanced scenarios. As a side note, if you implement a custom validator that also requires a JavaScript function, you’ll probably want to ship them together. One way to achieve this is by including the JavaScript file as an embedded resource in the same assembly where the custom attribute is declared.

    You do this by setting the file's Build Action to Embedded Resource inside Visual Studio. Then you have to declare an attribute at assembly level, perhaps in the AssemblyInfo.cs file:

        [assembly: WebResource("SomeNamespace.IsEven.js", "text/javascript")]

    In your views, if you want to include a JavaScript file from an embedded resource, you can use this code:

        public static class UrlExtensions
        {
            private static readonly MethodInfo getResourceUrlMethod = typeof(AssemblyResourceLoader).GetMethod("GetWebResourceUrlInternal", BindingFlags.NonPublic | BindingFlags.Static);

            public static IHtmlString Resource<TType>(this UrlHelper url, String resourceName)
            {
                return (Resource(url, typeof(TType).Assembly.FullName, resourceName));
            }

            public static IHtmlString Resource(this UrlHelper url, String assemblyName, String resourceName)
            {
                String resourceUrl = getResourceUrlMethod.Invoke(null, new Object[] { Assembly.Load(assemblyName), resourceName, false, false, null }).ToString();
                return (new HtmlString(resourceUrl));
            }
        }

    And on the view:

        <script src="<%: this.Url.Resource("SomeAssembly", "SomeNamespace.IsEven.js") %>" type="text/javascript"></script>

    Then there’s the CustomValidationAttribute. It allows externalizing your validation logic to another class, so you have to tell it which type and method to use. The method can be static as well as instance; if it is instance, the class cannot be abstract and must have a public parameterless constructor. The attribute can be applied to a property as well as a class. It does not, however, support client-side validation. Let’s see an example declaration:

        [CustomValidation(typeof(ProductValidator), "OnValidateName")]
        public String Name
        {
            get;
            set;
        }

    The validation method needs this signature:

        public static ValidationResult OnValidateName(String name)
        {
            if ((String.IsNullOrWhiteSpace(name) == false) && (name.Length <= 50))
            {
                return (ValidationResult.Success);
            }
            else
            {
                return (new ValidationResult(String.Format("The name has an invalid value: {0}", name), new String[] { "Name" }));
            }
        }

    Note that it can be either static or instance, and it must return a ValidationResult (or derived) instance. ValidationResult.Success is null, so any non-null value is considered a validation error. The single method argument must match the type of the property to which the attribute is applied or, in case it is applied to a class, the class itself:

        [CustomValidation(typeof(ProductValidator), "OnValidateProduct")]
        public class Product
        {
        }

    The signature must thus be:

        public static ValidationResult OnValidateProduct(Product product)
        {
        }

    Continuing with attribute-based validation, another possibility is RemoteAttribute. This allows specifying a controller and an action method just for performing the validation of a property or set of properties. This works in a client-side AJAX way and it can be very useful. Let’s see an example, starting with the attribute declaration and proceeding to the action method implementation:

        [Remote("Validate", "Validation")]
        public String Username
        {
            get;
            set;
        }

    The controller action method must contain an argument that can be bound to the property:

        public ActionResult Validate(String username)
        {
            return (this.Json(true, JsonRequestBehavior.AllowGet));
        }

    If in your resulting JSON object you include a string instead of the value true, it will be considered an error, and the validation will fail. This string will be displayed as the error message, if you have included it in your view. You can also use the remote validation approach for validating your entire entity, by including all of its properties as additional fields in the attribute and having an action method that receives an entity instead of a single property:

        [Remote("Validate", "Validation", AdditionalFields = "Price")]
        public String Name
        {
            get;
            set;
        }

        public Decimal Price
        {
            get;
            set;
        }

    The action method will then be:

        public ActionResult Validate(Product product)
        {
            return (this.Json("Product is not valid", JsonRequestBehavior.AllowGet));
        }

    Only the property to which the attribute is applied and the additional properties referenced by AdditionalFields will be populated in the entity instance received by the validation method. The same rule previously stated applies: if you return anything other than true, it will be used as the validation error message for the entity. The remote validation is triggered automatically, but you can also call it explicitly. In the next example, I am causing the full entity validation; see the call to serialize():

        function validate()
        {
            var form = $('form');
            var data = form.serialize();
            var url = '<%: this.Url.Action("Validate", "Validation") %>';

            var result = $.ajax
            (
                {
                    type: 'POST',
                    url: url,
                    data: data,
                    async: false
                }
            ).responseText;

            if (result)
            {
                //error
            }
        }

    Finally, by implementing IValidatableObject, you can implement your validation logic on the object itself; that is, you make it self-validatable. This will only work server-side: the ModelState.IsValid property will be set to false on the controller’s action method if the validation is unsuccessful. Let’s see how to implement it:

        public class Product : IValidatableObject
        {
            public String Name
            {
                get;
                set;
            }

            public Decimal Price
            {
                get;
                set;
            }

            #region IValidatableObject Members

            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
            {
                if ((String.IsNullOrWhiteSpace(this.Name) == true) || (this.Name.Length > 50))
                {
                    yield return (new ValidationResult(String.Format("The name has an invalid value: {0}", this.Name), new String[] { "Name" }));
                }

                if ((this.Price <= 0) || (this.Price > 100))
                {
                    yield return (new ValidationResult(String.Format("The price has an invalid value: {0}", this.Price), new String[] { "Price" }));
                }
            }

            #endregion
        }

    The errors returned will be matched against the model properties through the MemberNames property of the ValidationResult class and will be displayed in their proper labels, if present on the view. On the controller action method you can check for model validity by looking at ModelState.IsValid, and you can get the actual error messages and related properties by examining all of the entries in the ModelState dictionary:

        Dictionary<String, String> errors = new Dictionary<String, String>();

        foreach (KeyValuePair<String, ModelState> keyValue in this.ModelState)
        {
            String key = keyValue.Key;
            ModelState modelState = keyValue.Value;

            foreach (ModelError error in modelState.Errors)
            {
                errors[key] = error.ErrorMessage;
            }
        }

    And these are the ways to perform validation in ASP.NET MVC. Don’t forget to use them!
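    As a usage sketch not shown above (the model class and property name are hypothetical), the custom IsEvenAttribute from the start of the article is applied like any built-in validation attribute:

        public class OrderModel
        {
            // Hypothetical property: IsEvenAttribute runs server-side,
            // and the "iseven" jQuery adapter runs client-side.
            [Required]
            [IsEven(ErrorMessage = "Value is not even")]
            public Int32 Quantity
            {
                get;
                set;
            }
        }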

    Read the article

  • How do you measure the value of your software?

    - by Mike
    Hi, One of the principles of agile is that you should measure working software: "Working software is the primary measure of progress" - 12 principles of Agile. The thing is, while I can measure my software in terms of stories done, bugs squashed or the volume of defect reports decreasing, I'm stuck on how to measure the value of my software. If I use Mike Cohn as an example, and his helping SalesForce.com deliver 500% more value to its customers compared to the previous year*, how do I measure that increase? How do I measure where I am right now? Other metrics he uses are the number of features and the number of features per developer. This is something I could work out if my backlog was in good order and the stories were cut up by 'feature', but we're just starting out with Agile, so I need some way of working out what value we deliver now, then use a similar metric in, say, six months to see if we've increased our output. I've heard about measuring the value of software by an uptick in revenue, or an increase in customer satisfaction (how would you measure that, though?), but those increases could be attributed to anything in the company (sales, accounting, support) and not directly to the work my department is doing. So, how do you measure the value of your software, and how did you start? Thanks, Mike *Succeeding With Agile - Mike Cohn

    Read the article

  • Profile of Scott L Newman

    - by Ratman21
    To:      Whom It May Concern
    From:  Scott L Newman
    Date:   4/23/2010
    Re:      Profile

    Who is he, what can he do? Two very good questions. #1. I am a 20+ years experienced Information Technology Professional (hold on, don't hit delete yet!), who is not over the hill (I am on top of it) and still knows how to do (and can still do) that thing called work! #2. A can-do attitude that does not allow problems to sit unfixed. I have a broad range of skills, including:
    - Certified CompTIA A+, Security+ and Network+ Technician
    - 2.5 years (NOC) network experience on a large Cisco-based WAN - UK to Austria
    - 20 years experience in MIS/DP - yes, I can do IBM mainframes and Tandem NonStops too
    - 18 years experience as technical help desk support - panicking users, no problem
    - 18 years experience with PC/server-based systems, intranet and internet systems
    - 10+ years experience with Microsoft Office, Windows XP and data network fundamentals (YES, I do Windows)
    - Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
    - Very experienced in working with customers on problems - again, panicking users, no problem
    - Working experience with remote access (VPN/SecurID) - I didn't just study them, I worked on/with them
    - Skilled in gathering info for and creating documentation for operation procedures (I do not just wait for them to give it to me, I go out and get it. Waiting for info on working applications is, well, dumb)
    - Multiple software languages (hey, I have done some programming)
    And much more experience in "IT" (mortgage, stocks and financial information systems experience, and I have worked "IT" in a hospital). I can multitask, and I have the ability to adapt to change and learn quickly. (I once was put in charge of a system that I had not worked with for over two years. Talk about having to relearn and adapt to changes fast. But I did it.)

    The summarization is that I know what to do, know how to keep things going and how to fix it when it breaks.

    Scott L. Newman
    Confidential

    Read the article

  • jackd fails to start

    - by wickedchicken
    I'm trying to have a setup where JACK interfaces directly with ALSA and PulseAudio communicates with JACK. This setup worked OK (I had to start things manually a few times), but as I came to understand the Ubuntu daemon setup and perfected things, jackd stopped working completely. I'm running 10.10. If I run something through ALSA I get sound, no problem. However, when I run JACK with realtime:

        /usr/bin/jackd -v -R -ch -Z -t2000 -d alsa -P

    I get the following error:

        jackd watchdog: timeout - killing jackd

    Conversely, if I run without realtime:

        /usr/bin/jackd -v -r -ch -Z -t2000 -d alsa -P

    I get:

        ALSA: poll time out, polled for 32032138 usecs
        DRIVER NT: could not run driver cycle

    JACK was working just fine before I made these changes; while I don't have an exact copy of my original configuration, I recall that running with the bare minimum of options worked fine. I've seen some articles saying the problem is with ALSA capture. In fact, I tried enabling capture in alsamixer once and everything worked! On reboot that success was not repeated and I haven't been able to get JACK working since. That shouldn't matter anyway, because specifying -P (playback only) should obviate any capture issues. Short summary: I can't get jackd to work under any circumstances (unless I specify -d dummy). Sound works with other programs through ALSA, but when I run JACK, the daemon opens the card but times out and dies. JACK worked fine before, but I can't figure out what changed (or where to even look). I should mention I am running with CPU speed throttling on, but I'm using HPET to mitigate this (and I've run JACK with no issue before). Thanks!

    Read the article

  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I finally noticed the issue with making changes to the live server I've been working on, now that I have a couple of users on it, because I bring it down so often. I created an EC2 image of my live server and set up a separate instance on EC2, so now I have two EC2 instances, Stage and Production. I set up GitHub, push changes to Stage and test my code there, and when it's all done and working, I push to the production branch, and everything is good. There is a slight wrinkle here: I name my files config_stage.js and config_production.js and set up .gitignore on each server, and in my code I have it read the ENV flags and load the appropriate config (see the sketch below). Is this the correct approach? And my main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB and several other things on the Stage server for testing, and now that it's all working and good, how do I deploy them to Production? Right now, I'm just keeping track of everything I installed and copying configuration files over, which is very tedious, and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes over from my test server to my live server?
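    A minimal sketch of the env-flag config loading described above (the config file names come from the question; the NODE_ENV variable and the default value are my assumptions):

        // config.js - picks the per-server config by environment flag.
        // Assumes NODE_ENV is set to 'stage' or 'production' on each box;
        // defaults to 'stage' if unset.
        var env = process.env.NODE_ENV || 'stage';
        module.exports = require('./config_' + env + '.js');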

    Read the article

  • How to use rhythmbox-client on LAN?

    - by Kaustubh P
    A few days ago I had asked this question, and following one suggestion, I used Rhythmote. It is a web interface for changing songs on a Rhythmbox instance playing on some PC. However, it's not what I had in mind, and I stumbled upon the documentation for rhythmbox-client. I tried a few ways of using it, but was unsuccessful. Let me show you a few of the ways I tried. Rhythmbox is running at address 192.168.1.4; let's call it jukebox.

    Passing the address as a parameter, hoping that I would be able to see and browse through songs on the jukebox:

        rhythmbox-client 192.168.1.4

    But I get this message:

        (rhythmbox-client:8370): Rhythmbox-WARNING **: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        (rhythmbox-client:8370): Rhythmbox-WARNING **: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

    SSH:

        ssh -l jukebox 192.168.1.4 rhythmbox-client --print-playing

    Which spat this at me:

        (rhythmbox-client:9389): Rhythmbox-WARNING **: /bin/dbus-launch terminated abnormally with the following error: Autolaunch error: X11 initialization failed.

    rhythmbox-client as root:

        gksudo rhythmbox-client 192.168.1.4

    A Rhythmbox client comes up, but with no music shown in the library. I am guessing this is running on my own computer. Can anyone tell me how rhythmbox-client is meant to be run, and is it even correct of me to think that I can get a Rhythmbox window showing the songs on the jukebox? PS: There were a few other solutions mentioned, but I want to evaluate each and every one of them. Thanks.
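    One more variant I intend to evaluate (an assumption on my part, not something I have confirmed): the "Autolaunch error: X11 initialization failed" message suggests rhythmbox-client cannot find the session D-Bus on the jukebox, and setting DISPLAY for the remote desktop session may let it do so:

        # Assumes the jukebox desktop session runs on display :0
        # and the remote user is named 'jukebox'.
        ssh jukebox@192.168.1.4 "DISPLAY=:0.0 rhythmbox-client --print-playing"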

    Read the article

  • Asus Sonic Master on Asus N53SV

    - by David Winchester
    I have read that there's a problem getting the subwoofer working on these laptops. I tried this solution: No sound from external subwoofer. But I don't know how to verify that the subwoofer is properly functioning. I use the PulseAudio equalizer and the bass seems to work fine, but when I go to Sound Settings I can't move the 'Subwoofer' bar in my sound card option, so I don't know if everything is alright. If someone has a solution I would like to know, because there isn't much information regarding this. By the way, I'm using Ubuntu 12.04, 64 bits. Thanks beforehand, Dave

    EDIT - Possible Solution

    Well, I will post a solution that worked for me and I think it will help a lot of users. I finally got the subwoofer working. Besides adding the following line to /etc/modprobe.d/alsa-base.conf:

        options snd-hda-intel model=asus-mode4

    I deleted the lines with load-module module-combine and module-combine-sink in /etc/pulse/default.pa (in the home folder there's also a ~/.pulse/default.pa file; I don't know if it has the lines too). To check that all the channels are working, I think this command tells me:

        speaker-test -c6 -l1 -twav

    I use pulseaudio-equalizer and the bass sounds very good when properly adjusted. Also, all the channels seem to work fine and the sound is even better than in Windows (where I don't have an equalizer). I pointed out the module-combine and module-combine-sink problem because one day I turned on my laptop and PulseAudio didn't work, so I deleted the lines with those names (I don't know if they came by default; maybe I added them sometime when I was trying to fix my speakers). After all this, I can now move the Subwoofer bar in the Sound configuration. Anyway, the equalizer does a great job and it improves the sound a lot.

    Read the article

  • Weblogic domain scale up using EM Grid Control 11gR1

    - by dmitry.nefedkin(at)oracle.com
    As you know, a WebLogic domain consists of a set of servers running independently or in a cluster mode, sharing distributed resources. In most environments a WebLogic cluster consists of multiple managed servers running simultaneously and working together to provide increased scalability and reliability. These servers can run on the same machine or be located on different machines. It's a common task to increase a cluster's capacity by adding new machines to the cluster to host the new server instances. You can do it by manually installing the WebLogic binaries on the new host and using the pack/unpack commands to add a managed server to this new host. But with Enterprise Manager Grid Control 11gR1 (EMGC) there is another way - the Fusion Middleware Domain Scale Up procedure. I'm going to show you how it works.

    Here is a picture of my medrec_oradb WebLogic domain, which is registered in EMGC. It contains an admin server and a cluster MedRecCluster with the single managed server MS1. Both admin and managed servers are on the same host oel46-vmware; it's a virtual machine with OEL 4.6 that runs inside our Oracle VM infrastructure. And here are the application deployments; note that a couple of applications are deployed to the cluster.

    First of all I have to prepare a new machine that will host a new managed server of my cluster. I created a new VM with OEL 5.4 using the corresponding Oracle VM template available on the Oracle E-Delivery site for Oracle Linux and Oracle VM, and named it wls1032. The next step is to install the Oracle EM Grid Control 11gR1 Agent on this new host. You can download it from the OTN page and install it manually, or you can use the Agent Installation Deployment procedure available in EMGC (Deployments->Agent Installation->Install Agent). Either way, when your agent is up and running on the new machine, you will see it in the EMGC Console in the Targets->Hosts subtab.

    Now we are ready to scale up our WebLogic domain. Click the Deployments tab in Oracle Enterprise Manager Grid Control, and then click Deployment Procedure. Select the Fusion Middleware Domain Scale Up procedure from the list, and click Schedule Deployment. The first page of the FMW Domain Scale Up Wizard is displayed and you can proceed with the deployment process.

    Select the domain from the list, enter the working directory on the admin server host, and also fill in the WebLogic credentials for the administration server console and the OS credentials for the admin server host. Click the Next button. The next step allows you to configure your domain; to add a new managed server to the cluster you should select the cluster in the tree and click the Add Server button. Select the newly added server in the tree, choose the target host and enter the configuration details of your managed server. You can also add new machine and node manager details. Please note that you cannot change the values in the Domain Location and Fusion Middleware Home fields, so these locations on the target host will be the same as for the admin server host. The working directory on the target host should have enough free space to store the FMW home binaries and domain configuration files. In my experience the working directories should have at least 3 GB of free space. The last thing you should fill in is the OS credentials for the target host. The next step allows you to schedule the execution of the procedure; it is started immediately in my example. The last step is just a review of the configuration for the domain scale up. Click Submit to launch the process.

    You can track the status of the procedure execution by selecting Deployments->Deployment Procedures->Procedure Completion Status in the EMGC Console. As you can see, the procedure consists of many steps, and I'm going to share my experience with the issues that I had at some of them. Please keep in mind that you can always continue the execution from the last successfully completed step by clicking the Retry button.

    The Check OUI Prerequisites step may fail if the target host does not pass the prerequisite checks for WebLogic Server installation, such as the amount of RAM, Linux packages installed, etc.

    The Create FMW Clone Archive step may fail if you do not have enough free space in the working directory on the administration server host.

    The Transfer cloning archive to targets step may fail if the EMGC agents on the admin server host or on the target host are not secured. You should secure the agent by issuing the ./emctl secure agent command from the $AGENT_HOME/bin directory and entering the agent registration password.

    Both the Transfer cloning archive to targets and Apply Clone at target hosts steps may fail if you do not have enough free space in the working directory on the target host.

    The most complicated issue I had was on the Run Inventory Collection step. The step failed and I noticed that the agent on the target server had also failed, with the following error in the $AGENT_HOME/sysman/log/emagent.trc log file:

        2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: Failed to upload file A0000008.xml: Fatal Error.
        Response received: 500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain) is not consistent with the timezone (America/Los_Angeles) reported by other agents.
        2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
        2010-12-28 11:50:35,552 Thread-2838952848 WARN  upload: FxferSend: received fatal error in header from repository: https://oel46-vmware:1159/em/upload
        FATAL_ERROR::500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain) is not consistent with the timezone (America/Los_Angeles) reported by other agents.
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: number of fatal error exceeds the limit 3
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: agent will shutdown now
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR : Signalled to Exit with status 55. Too many fatal upload failures
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
        2010-12-28 11:50:35,552 Thread-3044607680 ERROR main: EMAgent abnormal terminating

    I checked the timezone of my domain target inside the EMGC repository:

        select timezone_region
        from mgmt_targets
        where target_type = 'weblogic_domain'
        and display_name = 'medrec_oradb';

        "TIMEZONE_REGION"
        "America/Los_Angeles"

    Then I checked the timezone of my agents, and indeed, they differed:

        select target_name, timezone_region
        from mgmt_targets
        where type_display_name = 'Agent';

        "TARGET_NAME"               "TIMEZONE_REGION"
        "oel46-vmware:3872"         "America/Los_Angeles"
        "wls1032.imc.fors.ru:3872"  "America/New_York"

    So I had to change the timezone on the wls1032 host and propagate this change to the agent and to the EMGC repository. Here were the steps:
    - Issued the system-config-date command on wls1032.imc.fors.ru and set the timezone to "America/Los_Angeles".
    - Propagated the change to the agent by executing the ./emctl resetTZ agent command from the $AGENT_HOME/bin directory.
    - Connected to the EMGC repository as sysman and executed the following PL/SQL block:

        begin
           mgmt_target.set_agent_tzrgn('wls1032.imc.fors.ru:3872','America/Los_Angeles');
           commit;
        end;

    After that I had to clear the pending uploads on wls1032.imc.fors.ru:

        rm -r $AGENT_HOME/sysman/emd/state/*
        rm -r $AGENT_HOME/sysman/emd/collection/*
        rm -r $AGENT_HOME/sysman/emd/upload/*
        rm $AGENT_HOME/sysman/emd/lastupld.xml
        rm $AGENT_HOME/sysman/emd/agntstmp.txt
        $AGENT_HOME/bin/emctl start agent
        $AGENT_HOME/bin/emctl clearstate agent

    The last part of this solution was to resync the agent in the EMGC Console by clicking the Agent Resynchronization button (please leave the "Unblock agent on successful completion of agent resynchronization" checkbox checked on the next screen).

    After that I issued the ./emctl upload command from $AGENT_HOME/bin on the wls1032 host, and my previous error disappeared, but I caught another one:

        EMD upload error: Failed to upload file A0000004.xml: HTTP error.
        Response received: ERROR-400|Data will be rejected for upload from agent 'https://wls1032.imc.fors.ru:3872/emd/main/', max size limit for direct load exceeded [7544731/5242880]

    So the uploaded XML file size was 7 MB, and the limit on the OMS was 5 MB. To increase the max file size limit to 20 MB I had to connect to the OMS host and execute the following commands from the $OMS_HOME/bin directory:

        ./emctl set property -name em.loader.maxDirectLoadFileSz -value 20971520 -module emoms
        ./emctl stop oms
        ./emctl start oms

    After that I issued ./emctl upload from $AGENT_HOME/bin on wls1032 one more time and it completed successfully. The agent uploaded the configuration information to the EMGC repository and I was able to see the results of my WebLogic domain scale-up in the EMGC Console.

    So, now the WebLogic cluster contains two managed servers located on different hosts. This powerful feature of Enterprise Manager Grid Control is part of the WebLogic Server Management Pack Enterprise Edition.

    Read the article

  • Ubuntu backlight problem with Nvidia graphics

    - by Vladimir
    I have a laptop, a mySN QMG6 / Chiligreen Mobilitas NW, which is a Quanta TW9 barebone with an Intel i3 and an Nvidia GT 335M onboard. On the Ubuntu releases 10.04, 10.10, 11.04 and 11.10 I had a problem changing the screen backlight with both the nouveau and nvidia drivers: the Fn+F4/F5 buttons did not change the brightness. I tried to edit xorg.conf, adding:

        Option "RegistryDwords" "EnableBrightnessControl=1"

    I also tried to add some options to grub:

        acpi_osi="Linux" acpi_backlight=vendor

    Neither worked for me. Today I installed Ubuntu 12.04 beta 2 and... with the nouveau driver my Fn keys work and change the brightness (is it the new 3.0.22 Linux kernel, or a patched nouveau driver? I don't know). This is a big step forward. But when installing the proprietary nvidia driver (295.33), the Fn buttons stop working and I can't change the brightness. I tried the xorg and grub workarounds again, with no result. I also tried to install acpi from apt - no result. Is there anything left to try? I really need the nvidia driver working with the Fn keys, as I would like to have working 3D acceleration.

    P.S. Does the nouveau driver have 3D acceleration like the nvidia drivers? If I need to provide some log data, please write what I should print, as I'm a bit new to Ubuntu.
    P.P.S. I had the same problems with other Linux distros (Mint, Fedora and others).
    P.P.P.S. The other Fn buttons work with both drivers (Mute, Vol Up/Down, WiFi on/off, Bluetooth, Sleep, Start/Pause, Stop, Next/Prev song).

    Read the article

  • Oracle University New Courses (Week 35)

    - by swalker
    Among this month's new offerings from Oracle University, you will find:

    Fusion Middleware
    - Oracle Directory Services 11g: Administration (5 days)
    - Oracle SOA Suite 11g: Essential Concepts (Training on Demand)

    e-Business Suite
    - R12 Oracle HRMS iRecruitment Fundamentals (Self-Study Course)
    - R12 Oracle Payroll Fundamentals: Administration (Self-Study Course)
    - R12 Oracle HRMS System Administration Fundamentals (Self-Study Course)
    - R12 Oracle HRMS Self Service Fundamentals (Self-Study Course)
    - R12 Oracle HRMS Implement and Use Fast Formula (Self-Study Course)
    - R12 HRMS Work Structures Fundamentals (Self-Study Course)
    - R12 HRMS Total Compensation Foundations (Self-Study Course)

    Siebel
    - Siebel 8.1.x Chat and Voice Integration Using CCA (Self-Study Course)
    - Siebel 8.1.x Search using Oracle Secure Enterprise Search (Self-Study Course)
    - Siebel 8.1.x COM Web Services (Self-Study Course)
    - Siebel 8.1.x COM Asset Based Order Management (Self-Study Course)
    - Siebel 8.1.x COM: What is New in Product Configurator (Self-Study Course)
    - Siebel 8.1.x COM Product Configurator Caching & Performance Management (Self-Study Course)
    - Siebel 8.1.x COM PSP Engine Caching and Performance Management (Self-Study Course)
    - Siebel 8.1.x Remote: Administration (Self-Study Course)
    - Siebel 8.1.x Remote: Technical Foundations (Self-Study Course)
    - Siebel Tools: Configuring Chart and Tree Applets (Self-Study Course)

    Sun - Server Administration
    - SPARC SuperCluster Administration and Maintenance Seminar (2 days)
    - OPN Only Sparc T4-Based Servers Installation Boot Camp (1 day)

    Primavera
    - Primavera P6 Application Administration Rel 8.x (2 days)

    Oracle Retail
    - Retail Merchandising System (RMS) Business Overview (Self-Study Course)
    - Retail Invoice Matching (ReIM) Product Overview (Self-Study Course)
    - Retail Invoice Matching (ReIM) Business Introduction (Self-Study Course)
    - Retail Demand Forecasting: RDF Classic Product Overview (Self-Study Course)
    - Retail Demand Forecasting Introduction (Self-Study Course)
    - Retail Data Warehouse (RDW) Overview 13.1 (Self-Study Course)
    - Oracle Retail Point-of-Service (POS) Product Overview (Self-Study Course)
    - Retail Sales Audit (ReSA) Product Overview (Self-Study Course)
    - Retail Price Management (RPM) Product Overview (Self-Study Course)
    - Retail Merchandising System (RMS) Technical Introduction (Self-Study Course)
    - Oracle Retail Integration Bus (RIB) Product Overview (Self-Study Course)

    Oracle Communications
    - Unified Communications Suite Convergence Customization (2 days)
    - OSM Foundations I: Tasks, Processes and Orders

    Contact your local Oracle University team for more information and course dates. Stay connected to Oracle University: LinkedIn OracleMix Twitter Facebook Google+

    Read the article

  • Are You Using Windows Live Mesh?

    - by Ben Griswold
    Most of the time, I’m the guy who authors the show notes for the Herding Code Podcast. The workflow is relatively straightforward: Jon shares the pre-production audio with me, I complete my write-up and then ship the notes back to Jon for publishing with the edited audio. All file sharing is done with shared folders in Windows Live Mesh. The director of my kid’s preschool was looking for a way to access her work computer from her home office. VPN connection? Remote desktop? FTP? Nope. I installed Windows Live Mesh in a matter of minutes, synchronized a number of folders and she was off and running. (The neat thing is she’s running a PC in the office and a Mac at home.) I was using Dropbox before discovering Mesh. Dropbox is still very cool, but I’m in and out of Mesh enough that it’s taken over. Actually, I still have a Dropbox folder - it’s just being synched by Mesh now. If you’re interested in giving Live Mesh a whirl, here are the notable links as found on the product’s site:
    - What you need
    - Create your mesh
    - Sync folders
    - Share folders
    - Use your Live Desktop
    - Connect to a remote computer
    - Use a mobile phone
    Good luck!

    Read the article

  • Integrating NetBeans for Raspberry Pi Java Development

    - by speakjava
    Raspberry Pi IDE Java Development

    The Raspberry Pi is an incredible device for building embedded Java applications but, despite being able to run an IDE on the Pi, it really pushes things to the limit. It's much better to use a PC or laptop to develop the code and then deploy and test on the Pi. What I thought I'd do in this blog entry was to run through the steps necessary to set up NetBeans on a PC for Java code development, with automatic deployment to the Raspberry Pi as part of the build process.

    I will assume that your starting point is a Raspberry Pi with an SD card that has one of the latest Raspbian images on it. This is good because this now includes JDK 7 as part of the distro, so there is no need to download and install a separate JDK. I will also assume that you have installed the JDK and NetBeans on your PC. These can be downloaded here.

    There are numerous approaches you can take to this, including mounting the file system from the Raspberry Pi remotely on your development machine. I tried this and I found that NetBeans got rather upset if the file system disappeared, either through network interruption or the Raspberry Pi being turned off. The following method uses copying over SSH, which will fail more gracefully if the Pi is not responding.

    Step 1: Enable SSH on the Raspberry Pi

    To run the Java applications you create, you will need to start Java on the Raspberry Pi with the appropriate class name, classpath and parameters. For non-JavaFX applications you can either do this from the Raspberry Pi desktop or, if you do not have a monitor connected, through a remote command line. To execute the remote command line you need to enable SSH (a secure shell login over the network) and connect using an application like PuTTY.

    You can enable SSH when you first boot the Raspberry Pi, as the raspi-config program runs automatically. You can also run it at any time afterwards with the command:

        sudo raspi-config

    This will bring up a menu of options. Select '8 Advanced Options' and on the next screen select 'A4 SSH'. Select 'Enable' and the task is complete.

    Step 2: Configure Raspberry Pi Networking

    By default, the Raspbian distribution configures the ethernet connection to use DHCP rather than a static IP address. You can continue to use DHCP if you want, but to avoid potentially having to change settings whenever you reboot the Pi, using a static IP address is simpler. To configure this on the Pi you need to edit the /etc/network/interfaces file. You will need to do this as root using the sudo command, so something like sudo vi /etc/network/interfaces. In this file you will see the line:

        iface eth0 inet dhcp

    This needs to be changed to the following:

        iface eth0 inet static
            address 10.0.0.2
            gateway 10.0.0.254
            netmask 255.255.255.0

    You will need to change the address, gateway and netmask values to appropriate ones for your network.

    Step 3: Create a Public-Private Key Pair On Your Development Machine

    How you do this will depend on which operating system you are using:

    Mac OSX or Linux

    Run the command:

        ssh-keygen -t rsa

    Press ENTER/RETURN to accept the default destination for saving the key. We do not need a passphrase, so simply press ENTER/RETURN for an empty one and once more to confirm. The key will be created in the file .ssh/id_rsa.pub in your home directory. Display the contents of this file using the cat command:

        cat ~/.ssh/id_rsa.pub

    Open a window, SSH to the Raspberry Pi and login. Change directory to .ssh and edit the authorized_keys file (don't worry if the file does not exist). Copy and paste the contents of the id_rsa.pub file into the authorized_keys file and save it.

    Windows

    Since Windows is not a UNIX-derivative operating system, it does not include the necessary key-generating software by default. To generate the key I used puttygen.exe, which is available from the same site that provides the PuTTY application, here. Download this and run it on your Windows machine. Follow the instructions to generate a key. I remove the key comment, but you can leave that if you want.

    Click "Save private key", confirm that you don't want to use a passphrase and select a filename and location for the key.

    Copy the public key from the part of the window marked "Public key for pasting into OpenSSH authorized_keys file". Use PuTTY to connect to the Raspberry Pi and login. Change directory to .ssh and edit the authorized_keys file (don't worry if this does not exist). Paste the key information at the end of this file and save it.

    Logout and then start PuTTY again. This time we need to create a saved session using the private key. Type the IP address of the Raspberry Pi into the "Host Name (or IP address)" field and expand "SSH" under the "Connection" category. Select "Auth" (see the screen shot below). Click the "Browse" button under "Private key file for authentication" and select the file you saved from puttygen. Go back to the "Session" category and enter a short name in the saved sessions field, as shown below. Click "Save" to save the session.

    Step 4: Test The Configuration

    You should now have the ability to use scp (Mac/Linux) or pscp.exe (Windows) to copy files from your development machine to the Raspberry Pi without needing to authenticate by typing in a password (so we can automate the process in NetBeans). It's a good idea to test this using something like:

        scp /tmp/foo [email protected]:/tmp

    on Linux or Mac, or:

        pscp.exe foo pi@raspi:/tmp

    on Windows (note that we use the saved configuration name instead of the IP address or hostname so the public key is picked up). pscp.exe is another tool available from the creators of PuTTY.

    Step 5: Configure the NetBeans Build Script

    Start NetBeans and create a new project (or open an existing one that you want to deploy automatically to the Raspberry Pi). Select the Files tab in the explorer window and expand your project. You will see a build.xml file. Double click this to edit it. This file will be mostly comments. At the end (but within the </project> tag) add the XML for the <target name="-post-jar"> target, shown below:

        <target name="-post-jar">
          <echo level="info" message="Copying dist directory to remote Pi"/>
          <exec executable="scp" dir="${basedir}">
            <arg line="-r"/>
            <arg value="dist"/>
            <arg value="[email protected]:NetBeans/CopyTest"/>
          </exec>
        </target>

    For Windows it will be slightly different:

        <target name="-post-jar">
          <echo level="info" message="Copying dist directory to remote Pi"/>
          <exec executable="C:\pi\putty\pscp.exe" dir="${basedir}">
            <arg line="-r"/>
            <arg value="dist"/>
            <arg value="pi@raspi:NetBeans/CopyTest"/>
          </exec>
        </target>

    You will also need to ensure that pscp.exe is in your PATH (or specify a fully qualified pathname). From now on, when you clean and build the project, the dist directory will automatically be copied to the Raspberry Pi, ready for testing.

    Read the article

  • Git: Fixing a bug affecting two branches

    - by Aram Kocharyan
    I'm basing my Git repo on http://nvie.com/posts/a-successful-git-branching-model/ and was wondering what happens in this situation: Say I'm developing on two feature branches, A and B, and B requires code from A. Node X introduces an error in feature A which affects branch B, but this is not detected at node Y, where features A and B were merged and testing was conducted before branching out again and working on the next iteration. As a result, the bug is found at node Z by the people working on feature B. At this stage it's decided that a bugfix is needed. This fix should be applied to both features, since the people working on feature A also need the bug fixed, as it's part of their feature. Should a bugfix branch be created from the latest feature A node (the one branching from node Y) and then merged with feature A, after which both features are merged into develop again and tested before branching out (see the sketch below)? The problem with this is that it requires both branches to merge to fix the issue. Since feature B doesn't touch code in feature A, is there a way to change the history at node Y by implementing the fix and still allowing the feature B branch to remain unmerged, yet have the fixed code from feature A? Mildly related: Git bug branching convention
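    A minimal sketch of the first option described above (the branch names are hypothetical; this just illustrates the flow being asked about, not a recommendation from the linked branching model):

        # Branch the fix off the latest feature A commit (the one after node Y)
        git checkout -b bugfix/broken-thing feature-A
        # ... implement the fix ...
        git commit -am "Fix bug introduced at node X"

        # Fold the fix back into feature A
        git checkout feature-A
        git merge --no-ff bugfix/broken-thing

        # Feature B needs the same fix, so it merges the bugfix too
        git checkout feature-B
        git merge --no-ff bugfix/broken-thing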

    Read the article

  • Anonymous Access and Sharepoint Web Services

    - by Stacy Vicknair
    A month or so ago I was working on a feature for a project that required a level of anonymity on the SharePoint site in order to function. At the same time I was also working on another feature that required access to the SharePoint search.asmx web service. I found out, the hard way, that the SharePoint web services do not operate in the expected way while the IIS site is under anonymous access. Even though these web services expect requests with certain permissions (in theory), they never attempt to request those credentials when the web service is contacted. As a result, the services return a 401 Unauthorized response (see the sketch below). The fix for my situation was to restrict anonymous access to the area that needed it (in this case the control in question had support for being used in an ASP.NET app that I could throw in a virtual directory). After that I removed anonymous access from IIS for the site itself and the QueryService requests were working once more. Here's a related article with a bit more depth about a similar experience: http://chrisdomino.com/Blog/Post/401-Reasons-Why-SharePoint-Web-Services-Don-t-Work-Anonymously?Length=4 Technorati Tags: Sharepoint,QueryService,WSS,IIS,Anonymous Access
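    To make the failure mode concrete, here is a minimal hypothetical sketch, not from the original post (the proxy class is what Visual Studio generates from a web reference to search.asmx; exact names may differ in your project):

        // Hedged sketch: "QueryService" is the generated ASMX proxy for
        // /_vti_bin/search.asmx. With anonymous access enabled on the IIS
        // site, the server never challenges for these credentials, so the
        // call comes back 401 Unauthorized anyway.
        QueryService service = new QueryService();
        service.Url = "http://server/_vti_bin/search.asmx";
        service.Credentials = System.Net.CredentialCache.DefaultCredentials;
        String status = service.Status(); // throws a WebException (401)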

    Read the article

  • List SQL Server Instances using the Registry

    - by BuckWoody
    I read this interesting article on using PowerShell and the registry, and thought I would modify his information a bit to list the SQL Server instances on a box. The interesting thing about listing instances this way is that you can touch remote machines, find the instances when they are off, and so on. Anyway, here's the scriptlet I used to find the instances on my system:

        $MachineName = '.'
        $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $MachineName)
        $regKey = $reg.OpenSubKey("SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL")
        $regKey.GetValueNames()

    You can read more of his article to find out the reason for the remote registry call and so forth - there are also security implications here for being able to read the registry.

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
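    As a quick usage note (the host name below is just an example, not from the original post), pointing the same scriptlet at a remote machine only requires changing the first line, since OpenRemoteBaseKey accepts a machine name; the Remote Registry service must be running on the target and you need sufficient permissions there:

        # Hypothetical remote host; requires the Remote Registry service
        # and read permissions on the target machine.
        $MachineName = 'SQLSERVER01'
        $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $MachineName)
        $regKey = $reg.OpenSubKey("SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL")
        $regKey.GetValueNames()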

    Read the article

  • BIEE Drilling Down and then Across

    - by Tim Dexter
    Slightly off-topic today, but if you are working with OBIEE in conjunction with BIP it's not that far off. Some of you may know, I now get to play with the whole BI suite, and have been for nearly 2 years. Today I was working with BIEE and wanted to share what I thought was a neat trick. I have to thank Rob Lindsley on our team for the pointers to get it working. The problem I had was that I had set up a drill-down hierarchy that took the user down a couple of levels to the bottom, project number, level. I needed the user to then be able to click the project number to navigate across to another, more detailed report on that project. By default there is no link; you are at the bottom of a hierarchical drill! There is nothing you can do in the data model (that I'm aware of), but you can use a neat trick to get BIEE to allow you to navigate from the bottom rung of the hierarchy: Add the bottom-level column to an Answers report. Go into the column properties and set the navigation target. The trick is to then set the current column properties as the system-wide default for that column. You can then actually delete the column from your report. Now as you drill down the hierarchy and reach what was the bottom, you will still have a link for the user to punch over to the detail report, sweeeet! The other benefit is that whenever you add the column to a report the link will be available to the detail report, unless you want to override it of course.

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in a handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me flexibility of Hive's query language when I want it, but let's me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. 
Coordinator jobs can take all the same actions of Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the hive parameters Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-defaults.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run. 
Making the Workflow Work

We've got a workflow defined and have collected all the components we'll need to run it. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

nameNode=hdfs://localhost:8020
jobTracker=localhost:8021
queueName=default
weatherRoot=weather_ooze
mapreduce.jobtracker.kerberos.principal=foo
dfs.namenode.kerberos.principal=foo
oozie.libpath=${nameNode}/user/oozie/share/lib
oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
outputDir=weather-ooze

While some pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed, and write the output to the output directory.

We're finally ready to submit our job! After all that work, we only need to do a few more things:

- Validate our workflow.xml
- Copy our working directory to HDFS
- Submit our job to the Oozie server
- Run our workflow

Let's do them in order. First, validate the workflow:

oozie validate workflow.xml

Next, copy the working directory up to HDFS:

hadoop fs -put working_dir /user/oracle/working_dir

Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument:

oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

We've submitted the job, but we don't see any activity on the JobTracker. All we got was this funny bit of output:

14-20120525161321-oozie-oracle

This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job:

oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run". This will prep and run the workflow immediately.

Takeaway

So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
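As a final sketch, the coordinator jobs mentioned at the start are what turn this from a manually triggered workflow into a hands-off pipeline. A minimal coordinator definition would simply re-run the workflow on a schedule; the name, frequency, and date window below are all assumptions of mine, not from the article:

<!-- Hypothetical coordinator: runs the WeatherMan workflow once a day.
     The app name, dates, and frequency are illustrative assumptions. -->
<coordinator-app name="WeatherManCoord" frequency="${coord:days(1)}"
                 start="2012-06-01T00:00Z" end="2013-06-01T00:00Z"
                 timezone="UTC" xmlns="uri:oozie:coordinator:0.1">
  <action>
    <workflow>
      <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
    </workflow>
  </action>
</coordinator-app>

Submitting this works much like the workflow case, with the coordinator XML's HDFS path supplied as oozie.coord.application.path in the properties file; either way, oozie job -info <job-id> is the quickest way to watch a job move out of PREP status.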

    Read the article

  • How do you deal with translating theory into practice?

    - by Mr. Shickadance
Hello all! Being a computer scientist in a research field, I am often tasked with working alongside professionals outside of the software domain (think math people, electrical engineers, etc.) and translating their theories and ideas into real-world implementations. I often find it difficult when they present a theoretical problem which appears to be somewhat disconnected from reality. I am not saying that the theory is bogus, only that it is difficult to translate into real-world situations.

For example, recently I have been working with software-defined radios. We are exploring many different areas, but often the math specialists in my group will present a problem which is heavily grounded in theory (signal processing, physics, whatever). I often struggle to draw direct parallels between the theory and the real-world implementation that I need to develop. Say we are working on an energy detector: the theory person in my group would say "you need to measure the noise variance with no signal present." This leads me to think, "how the hell do I isolate noise from a signal in reality?" There are many examples, but I hope you see where I am going.

So, my question is: how does one deal with the implementation of theoretical concepts when the theory seems detached from reality? Or at least when the connections are not so clear. Or perhaps when the person with the 'theory' is unaware of real-world restrictions? Note: I found this to be a hard question to ask - hopefully you are following me. If you have suggestions on how I could improve it, by all means let me know! Thanks for looking!

EDIT: To be a bit more clear, I understand that in situations like this I must learn that specific domain myself to an extent (i.e. signal processing), but I am more concerned with cases where the theoretical concepts do not appear to be as grounded in practice as one would like.

    Read the article

  • List SQL Server Instances using the Registry

    - by BuckWoody
I read this interesting article on using PowerShell and the registry, and thought I would modify his information a bit to list the SQL Server instances on a box. The interesting thing about listing instances this way is that you can touch remote machines, find the instances even when their services are stopped, and so on. Anyway, here's the scriptlet I used to find the instances on my system:

$MachineName = '.'
$reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $MachineName)
$regKey = $reg.OpenSubKey("SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL")
$regKey.GetValueNames()

You can read more of his article to find out the reason for the remote registry call and so forth – there are also security implications here for being able to read the registry. Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
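If you want to take advantage of the remote-machine angle mentioned above, a small extension of the same scriptlet can sweep a list of servers. This is just a sketch under assumptions: the server names below are placeholders, and the Remote Registry service must be running (and permitted) on each target:

# Hypothetical extension: list SQL Server instances on several machines.
# SERVER01/SERVER02 are placeholder names; remote registry access must be
# enabled, and your account needs rights to read each remote hive.
$MachineNames = @('SERVER01', 'SERVER02')
foreach ($MachineName in $MachineNames) {
    $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $MachineName)
    $regKey = $reg.OpenSubKey("SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL")
    if ($regKey -ne $null) {
        foreach ($instance in $regKey.GetValueNames()) {
            # Emit machine\instance pairs
            Write-Output ("{0}\{1}" -f $MachineName, $instance)
        }
    }
}

The same disclaimer above applies doubly here: try it against a test machine before pointing it at anything you care about.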

    Read the article

  • SQL SERVER – 2011 – Multi-Monitor SSMS Windows

    - by pinaldave
I have a dual-screen arrangement at my home system. I love it because it’s very convenient. When I am working with SQL Server 2008 R2 or any earlier version, I want to use both of the monitors, so I open two separate instances of SQL Server Management Studio and work across them. I have no complaints with my system at all; I am totally fine with it. However, sometimes I face small issues, like when I just want a small piece of code open in a separate window but I do not want that window to take over the whole of another monitor. But then again, I am already used to this current system. Recently, when I was working with SQL Server 2011 ‘Denali’ CTP1, I dragged one of the windows by accident, and suddenly it magically appeared out of its ‘shell’ of SSMS and showed up on a separate monitor. I played around a bit and figured out that SSMS now offers multi-monitor (or multi-screen) support from a single SSMS instance. We can now drag any window out and back in, and resize it to any size. Fantastic! If you are a multi-monitor user, I am sure you will like this feature. This leads me to ask you a question: do you use a multi-monitor system while working with SQL Server? Leave a quick comment. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Project Manager that wants to lock in time estimate with a signed contract

    - by sunpech
At a previous employment, a project manager (PM) wasn't satisfied with the delivery time of the code on a project I was on. I was told by my project lead that the PM was considering having me sign a contract to lock in the time estimates I gave for tasks and delivery dates. The situation on the project was that we were working with new technologies, a new codebase, new coding standards, and very prone-to-change requirements. I was learning new things and applying them as best I could to requirements that kept on changing. The requirements grew throughout the iterations by 2-3 times, with my estimate-to-complete growing by roughly 5-8 times. The only things that didn't change were the estimates and delivery dates. Yes, I did end up missing most deadlines. And I was working with some very new technologies that no one else on the entire development team could really help out with, because they weren't familiar with them. At least not easily. It seemed to me then that the PM wanted his numbers to add up -- and thus wanted me to sign a contract to "ensure" that I would always deliver working code on time. I suppose with a signed contract the PM could use it against me if I couldn't deliver on time. I believe what happened next was that other project managers and/or project leads defended me and didn't let this happen. My question is: should this raise a red flag about the manager? Is it common practice for a manager to lock in the time estimates of a software developer with a signed contract? Or, in this case, to try to? Please note, I was a full-time employee, not an independent consultant. Update: I want to add that I did give new estimates weekly, but it seems the original estimates and delivery dates were what the PM was fixated on.

    Read the article

  • Can't adjust backlight on an Nvidia 335m GT

    - by Vladimir
I have a laptop, mySN QMG6 / Chiligreen Mobilitas NW, which is a Quanta TW9 barebone with an Intel i3 and an Nvidia 335M GT onboard. On Ubuntu distros 10.04, 10.10, 11.04 and 11.10 I had problems changing the screen backlight with both the nouveau and nvidia drivers. The Fn+F4/F5 buttons did not change my brightness. I tried to edit xorg.conf, adding Option "RegistryDwords" "EnableBrightnessControl=1". I also tried to add some lines to grub: acpi_osi="Linux" acpi_backlight=vendor. Neither worked for me. Today I installed Ubuntu 12.04 beta2 and... with the nouveau driver my Fn key works and changes the brightness (is it the new 3.2.0-22 Linux kernel, or a patched nouveau driver? I don't know). This is a big step forward. But when installing the proprietary nvidia driver (295.33), the Fn button stops working and I can't change brightness. I also tried the workaround with xorg and grub, with no result. I tried to install acpi from apt - no result. Is there anything left to try? I really need that nvidia driver working with the Fn keys, as I would like to have working 3D acceleration. P.S. Does the nouveau driver have 3D acceleration like the nvidia drivers? If there is a need to provide some log data, please write what I should print, as I'm a bit new to Ubuntu. P.P.S. I had the same problems with other Linux distros (Mint, Fedora and others). P.P.P.S. Other Fn buttons work with both drivers (Mute, Vol up/down, WiFi on/off, Bluetooth, Sleep, Start/Pause, Stop, Next/Prev song). Some new thoughts... could CONFIG_BACKLIGHT_GENERIC=m be an issue? I found this by running: grep BACKLIGHT /boot/config-3.2.0-22-generic-pae. The full grep output can be viewed here: http://pastebin.com/sMRd2Z4k

    Read the article

  • A Kingdom To Conquer: Character Sketches

    - by George Clingerman
Still not 100% sold on my title, so it remains a working title for now, but here's a series of character sketches I've done for a turn-based strategy game I'm playing at making. I've been sketching these on various pieces of paper throughout the last two weeks and just finished the last of them today (my plan was for 16 different types of units and, well, now I have them, so I consider that done!). Pretty rough sketches for now, but I'm pretty happy with the art style overall. I was wrestling for quite a while with just HOW I wanted the game to look, and then I finally stumbled across Art Baltazar and I was like, THAT'S IT! There are a few characters I need to re-do a bit; I feel they're a bit TOO much like some of the characters that inspired them, but I'm happy that the ideas are finally sketched out. I've also been playing a bit in Inkscape, working on making these guys digital. A pretty new experience for me since I'm not used to working with vector images, but I think I'll get the hang of it. Here's the Knight, all vectorized. Now if I could just start making some progress on the actual game itself…

    Read the article

  • CodePlex Daily Summary for Tuesday, July 16, 2013

CodePlex Daily Summary for Tuesday, July 16, 2013

Popular Releases

OsmSharp: OsmSharp v3.1.4840.2: Release notes: - Fixed issue: http://osmsharp.codeplex.com/workitem/1246 'Optimized route has zero TotalDistance' - Fixed lazy loading strategy for datasource routing. - Fixed precision bug in View. - Added unittests for instruction generation. - Added daily build nuget packages: Daily build packages at http://5.9.56.3:8080/guestAuth/app/nuget/v1/FeedService.svc/ - Fixed default direction on roundabout: roundabouts are always oneway. - Take average left+right turn. - Fixed UTF8 encoding issu...

PMU Connection Tester: PMU Connection Tester v4.4.0: This is the current release build of the PMU Connection Tester, version 4.4.0. This version of the connection tester was released with openPDC 1.5 SP1 and openPDC 2.0 BETA. This application requires that .NET 4.0 already be installed on your system. Note this is the last release of the PMU Connection Tester that will be built on .NET 4.0 using the TVA Code Library and the Time-series Framework. Future releases of the PMU Connection Tester will be built on .NET 4.5 (or later) using the Grid Sol...

WatchersNET.SiteMap: WatchersNET.SiteMap 01.03.08: changes: Localized Tab Name is now used

StackWalker - Walking the callstack: StackWalker - 2013-07-15: This project describes and implements the (documented) way to walk a callstack for any thread (own, other and remote). It has an abstraction layer, so the calling app does not need to know the internals. This release has some bugfixes for VC5/6.

GoAgent GUI: GoAgent GUI 1.1.0 ???: ???????,??????????????,?????????????? ???????? ????: 1.1.0? ??? ?????????? ????????? ??:??cn/hk????,?????????????????????。Hosts??????google_hk。

Wsus Package Publisher: Release v1.2.1307.15: Fix a bug where WPP crashes if 'ShowPendingUpdates' is started with wrong credentials. Fix a bug where WPP crashes if ArrivalDateAfter and ArrivalDateBefore are equal in the ComputerView. Add a filter in the ComputerView. (Thanks to NorbertFe for this feature request) Add an option so that, when right-clicking on a computer, you can ask to display the current logon user of the remote computer. Add an option in settings to choose if WPP pings remote computers using IPv4, IPv6 or IPv6 and, if fail, IP...

Lab Of Things: vBeta1: Initial release of LoT

VidCoder: 1.4.23: New in 1.4.23: Added French translation. Fixed non-x264 video encoders not sticking in video tab. New in 1.4: Updated HandBrake core to 0.9.9. Blu-ray subtitle (PGS) support. Additional framerates: 30, 50, 59.94, 60. Additional sample rates: 8, 11.025, 12 and 16 kHz. Additional higher bitrates for audio. Same as Source Constant Framerate. 24-bit FLAC encoding. Added Windows Phone 8 and Apple TV 3 presets. Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...

Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.96: Fix for issue #19957: EXE should output the name of the file(s) being minified. Discussion #449181: throw a Sev-2 warning when trailing commas are detected on an Array literal. Perfectly legal to do so, but the behavior ends up working differently on different browsers, so throw a cross-browser warning. Add a few more known global names for improved ES6 compatibility. Update the Nuget package to version 2.5 and automatically add the AjaxMin.targets to your project when you update the package...

Outlook 2013 Add-In: Categories and Colors: This new version has a major change in the drawing of the list items: - Using owner-drawn code to format the appointments using GDI (some flickering may occur, but it looks a little bit better IMHO, with separate sections). - Added category color support (if more than one category, only one color will be shown). Here, the colors Outlook uses are slightly different from the ones available in System.Drawing, so I did a first-approach matching. ;-) - Added appointment status support (to show fr...

TypePipe: 1.15.2.0 (.NET 4.5): This is build 1.15.2.0 of the TypePipe for .NET 4.5. Find the complete release notes for the build here: Release Notes.

re-linq: 1.15.2.0 (.NET 4.5): This is build 1.15.2.0 of re-linq for .NET 4.5. Find the complete release notes for the build here: Release Notes. To use re-linq with .NET 3.5, use a 1.13.x build.

Columbus Remote Desktop: 2.0 Sapphire: Added configuration settings. Added update notifications. Added ability to disable GPU acceleration. Fixed connection bugs.

DataDevelop: Beta 0.6.5: Hotfix: bug in Python Table.ImportAll method. Updated External Libraries. Fixes in Excel Exportation. Modify ConnectionString refreshes the Properties Window correctly.

User Group Labs: User Group Data: 01.00.00: This release has the following updates and new features: Initial release with a minimal feature set. Easy to use (just add to the social group details page). Edit common user group properties. System Requirements: DNN v07.00.02 or newer; .Net Framework v4.0 or newer.

CarrotCake, an ASP.Net WebForms CMS: Binaries and PDFs - Zip Archive (v. 4.3 20130713): Product documentation and additional templates for this version are included in the zip archive, or if you want individual files, visit the http://www.carrotware.com website. Templates, in addition to those found in the download, can be downloaded individually from the website as well. If you are coming from earlier versions, make a precautionary backup of your existing website files and database. When installing the update, the database update engine will create the new schema items (if you...

Dalmatian Build Script: Dalmatian Build 0.1.3.0: - Minor bug fixes - Added Choose<T> and ChooseYesNo to Console object

Pushover.NET: Pushover.NET - Stable Release 10 July 2013: This is the first stable release of Pushover.NET. It includes 14 overloads of the SendNotification method, giving you total flexibility in sending Pushover notifications from your client. The assembly is built to .NET 2.x so it can be called from .NET 2.x, 3.x and 4.x projects. Also available is the Test Harness. This is a small GUI that you can use to test calls to Pushover.NET's main DLL. It's almost fully functional--the sound effects haven't been fully configured, so no matter what you pick ...

MCEBuddy 2.x: MCEBuddy 2.3.14: 2.3.14 BETA is available through the Early Access Program. Click here https://mcebuddy2x.codeplex.com/discussions/439439 for details and to get access to the Early Access Program to download the latest releases. Changelog for 2.3.14 (32bit and 64bit): NEW FEATURES: 1. ENHANCEMENTS: 2. Improved eMail notifications 3. Improved metrics details 4. Support for larger history (INI) file (about 45,000 sections, each section can have about 1500 entries) BUG FIXES: 5. Fix for extracting Movie release year from...

Azure Depot: Flask: Flask Version 01

New Projects

(??:wwqgtxx-goagent)???goagent/??wallproxy???????????????: =*greatagent*(??:wwqgtxx-goagent)= = [downloads downloads??] = =[WhyMove ???????]= ==???goagent/??wallproxy???????????????,?????goagent?wallproxy??????????,??

Azure Anonymous SAS URL Generator: The Azure Anonymous SAS URL Generator allows you to generate anonymous Shared Access Signature URLs for individual Blobs stored within a given storage container.

BizTalk ESB Toolkit Enterprise Library machine.config Toggler: This tool provides an instant on/off switch for the enterpriseLibrary.ConfigurationSource changes that the BizTalk ESB Toolkit makes to machine.config.

Common .NET Utilities: Just some common utilities I've developed over the years that are shared among my various projects.

DIY Online System: To create a Philippine Standard Online Payroll System.

Dynamics Ax CipherLib: The Dynamics Ax CipherLib is a simple X.509 certificate-based cipher implementation that allows you to encrypt/decrypt S/MIME or XML messages with Dynamics Ax 4.0,

GpsBox: We wanted to create a device which sends its current position, so that any user is able to look up where it is.

Ingenious Framework: Ingenious Framework is an easy-to-use, lightweight framework that hides semantic web technologies from you as a developer, but harnesses their benefits.

Lab Of Things: Lab of Things is a flexible platform for experimental research that uses connected devices in homes.

Orchard Deployment: Deployment and replication of content between one or more Orchard CMS instances. Move content individually, using workflows or custom subscriptions.

PluginManagement.Net: .NET plugin loader framework using MEF (the Managed Extensibility Framework). It has a generic loader which loads plugins that implement the generic interface IPlugin.

QlikView Management API in vb.net: For more info see http://community.qlikview.com/docs/DOC-3647. A conversion from C# to VB.NET.

Recover Deleted Emails Microsoft Exchange Server: Get an effective way to recover deleted emails from Exchange Server and browse them with MS Outlook or MS Exchange Server, depending upon the user's requirement.

Shindo's Race System: Special Edition: Shindo's Race System: Special Edition

SP Solution Builder: Simple summary of project will be added later.

testfailure0716: test

TFS Build Custom Activities: TFS Build Custom extensions offer you a list of TFS build activities and a workflow to use them.

Upida.net: If you use ASP.NET MVC and NHibernate, then Upida is for you. Upida favors knockout.js. You can download example code, implemented with knockout.js.

    Read the article
