Search Results

Search found 9154 results on 367 pages for 'replication jobs job'.


  • Cisco 7206vxr cpu reducing

    - by naimson
    I have a 7206VXR (NPE-G2). At a rate of 140 kpps I hit 80% CPU, so I am looking for ways to reduce that. I could turn off NetFlow (though I don't want to, since monitoring is highly important for me), but that would only give me 10-20%. At the moment, with 84 kpps, I am at 58%. show processes cpu sorted gives me this:

      PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
      109 163534600 537236763 304 35.38% 32.83% 16.85% 0 IP Input
      67 829396 52280 15864 0.15% 0.01% 0.00% 0 Per-minute Jobs
      68 5542736 3053476 1815 0.15% 0.18% 0.16% 0 Per-Second Jobs
      51 635852 1116315 569 0.07% 0.03% 0.02% 0 Net Background
      329 120396 4607274 26 0.07% 0.00% 0.00% 0 EIGRP-IPv4 Hello
      105 50508 95032488 0 0.07% 0.05% 0.05% 0 IPAM Manager
      6 4068580 476916 8531 0.00% 0.07% 0.05% 0 Check heaps
      7 7768 3634 2137 0.00% 0.00% 0.00% 0 Pool Manager
      8 0 1 0 0.00% 0.00% 0.00% 0 DiscardQ Backgro
      10 8 708 11 0.00% 0.00% 0.00% 0 WATCH_AFS
      5 0 1 0 0.00% 0.00% 0.00% 0 RO Notify Timers
      12 0 2 0 0.00% 0.00% 0.00% 0 ATM VC Auto Crea
      9 0 2 0 0.00% 0.00% 0.00% 0 Timers
      11 0 2 0 0.00% 0.00% 0.00% 0 ATM AutoVC Perio
      13 296 610532 0 0.00% 0.00% 0.00% 0 IPC Event Notifi
      16 0 1 0 0.00% 0.00% 0.00% 0 IPC Zone Manager
      17 3584 2980311 1 0.00% 0.00% 0.00% 0 IPC Periodic Tim
      4 0 1 0 0.00% 0.00% 0.00% 0 EDDRI_MAIN
      19 0 1 0 0.00% 0.00% 0.00% 0 IPC Process leve
      20 0 1 0 0.00% 0.00% 0.00% 0 IPC Seat Manager
      21 96 174453 0 0.00% 0.00% 0.00% 0 IPC Check Queue
      14 4 50890 0 0.00% 0.00% 0.00% 0 IPC Dynamic Cach
      3 0 1 0 0.00% 0.00% 0.00% 0 cpf_process_tpQ
      24 756 305371 2 0.00% 0.00% 0.00% 0 IPC Keep Alive M
      25 2340 610561 3 0.00% 0.00% 0.00% 0 IPC Loadometer
      22 0 1 0 0.00% 0.00% 0.00% 0 IPC Seat RX Cont
      15 0 1 0 0.00% 0.00% 0.00% 0 IPC Session Serv
      18 1620 2980310 0 0.00% 0.00% 0.00% 0 IPC Deferred Por
      29 0 1 0 0.00% 0.00% 0.00% 0 Exception contro

    sh run (grepped): http://pastie.org/5483194

    Hardware: c7200p-adventerprisek9-mz.151-4.M1.bin, Cisco 7206VXR (NPE-G2) processor (revision A) with 917504K/65536K bytes of memory, Processor board ID 2xxxxxxx, MPC7448 CPU at 1666Mhz, Implementation 0, Rev 2.2, 6 slot VXR midplane, Version 2.1

    Read the article

  • Configuring correct port for Oozie (invoking PIG script) in Cloudera Hue

    - by user2985324
    I am new to the CDH4 Oozie workflow editor. While trying to invoke a Pig script from the Oozie workflow editor, I am getting the following error:

      HadoopAccessorException: E0900: Jobtracker [mymachine:8032] not allowed, not in Oozies whitelist

    It looks like Oozie is submitting the job to the YARN port (8032). I want it to submit to port 8021 (the MR jobtracker). Can someone help me identify where to set the job tracker URL or port so that Oozie picks up the correct one (using Hue or Cloudera Manager)? Previously I tried the following, but none of them helped:

    1. Modified the workflow.xml file at /user/hue/oozie/workspaces/../workflow.xml. However, it gets overwritten when I submit the job from the workflow editor.

    2. In Cloudera Manager -- Oozie -- Configuration -- Oozie Server (advanced) -- Oozie Server Configuration Safety Valve for oozie-site.xml, I set the following properties and restarted the Oozie service:

       <property>
         <name>oozie.service.HadoopAccessorService.nameNode.whitelist</name>
         <value>mymachine:8020</value>
       </property>
       <property>
         <name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name>
         <value>mymachine:8021</value>
       </property>

    3. Tried to override the 'jobTracker' property while configuring the Pig task. This appears as follows in the workflow file, but it doesn't take effect (or doesn't override) and still uses port 8032:

       <global>
         <configuration>
           <property>
             <name>jobTracker</name>
             <value>mymachine:8021</value>
           </property>
         </configuration>
       </global>

    I am using CDH4. Thanks for looking into my question.

    Read the article

  • Quartz.NET trigger not firing

    - by billy_bob_the
    I am using Quartz.NET in my ASP.NET web application. I put the following code in a button click handler to make sure that it executes (for testing purposes):

      Quartz.ISchedulerFactory factory = new Quartz.Impl.StdSchedulerFactory();
      Quartz.IScheduler scheduler = factory.GetScheduler();

      Quartz.JobDetail job = new Quartz.JobDetail("job", null, typeof(BackupJob));
      Quartz.Trigger trigger = Quartz.TriggerUtils.MakeDailyTrigger(8, 30); // I edit this each time before compilation (for testing purposes)
      trigger.StartTimeUtc = Quartz.TriggerUtils.GetEvenSecondDate(DateTime.UtcNow);
      trigger.Name = "trigger";

      scheduler.ScheduleJob(job, trigger);
      scheduler.Start();

    Here's BackupJob:

      public class BackupJob : IJob
      {
          public BackupJob() { }

          public void Execute(JobExecutionContext context)
          {
              NSG.BackupJobStart();
          }
      }

    My question: why is BackupJobStart() not firing? I've used similar code before and it worked fine.

    EDIT: @Andy White, I would have it in Application_Start in Global.asax. That doesn't work, which is why I moved it to a button click handler to narrow down the problem.

    Read the article

  • Using db4o with multiple application instances under medium trust

    - by Anders Fjeldstad
    I recently stumbled over the object database engine db4o, which I think looks really interesting. I would like to use it in an ASP.NET MVC application that will be deployed to a shared hosting environment under medium trust. Because of the trust level, I'm restricted to using db4o in embedded/in-process mode. That in itself should be no problem, but the hosting provider also transparently runs each web application in multiple (load-balanced) server instances with shared storage, which I would say is normally a very nice feature for a $10/month hoster. However, since an instance of a db4o server with write access (whether in-process or networked) locks the underlying database file, having multiple instances of the application using the same file won't work (or at least I can't see how it would). So the question is: is it possible to use db4o in this specific environment? I have considered letting each application have its own database which is synchronized with a master database using replication (dRS), but that approach will most likely end up with very frequent bi-directional replication (read master changes at the beginning of each request, write to master after each change), which I don't think will be very efficient.

    Summary of the web application/environment characteristics:

    - Read-intensive (but not entirely read-only)
    - Some delay (a few seconds) is acceptable between the time a change is made and the time when the change shows up in all the application instances' data
    - Must run in medium trust
    - No guarantee that the load-balancer uses "sticky sessions"

    All suggestions are much appreciated!

    Read the article

  • How do you prove your cleverness and programming skills?

    - by Sebi
    Lately, there have been a lot of questions related to career planning, how to decide which languages to learn, how to learn new languages, and so on. But I was wondering about one thing: how will you prove later that you really learned something? OK, you can mention it in your application for a job or in the interview, but just stating that you learned e.g. C++ at home will not, I think, be very successful. So some suggested creating something (a homepage, an application, whatever) to prove that you can also use these skills in practice. But I'm still not sure whether this provides much benefit, because it would require a very special project to show all your skills, and I don't think you can easily invent such a project (one that would also be useful). Others suggested solving, for example, the Project Euler questions, but I'm still not sure how that helps with career planning. Are you going to mention at your job interview that you solved all those questions in the company's favourite programming language? :D I can't imagine it. The reason I'm asking is that I have some spare time and would really like to learn some new programming languages (C++ and/or Python, if it matters), and I'm looking for a smart way to do it while making sure it will be useful for my future career. (There are 3-4 companies I'd like to apply to, and I know all of them mainly use C++/Python...)

    Read the article

  • SED - Regular Expression over multiple lines

    - by herrherr
    Hi there, I've been stuck on this for several hours now and have cycled through a wealth of different tools to get the job done, without success. It would be fantastic if someone could help me out with this. Here is the problem: I have a very large CSV file (400 MB+) that is not formatted correctly. Right now it looks something like this:

      Alan Smithee ist ein Anagramm von „The [...] „Alan Smythee“, und „Adam Smithee“."
      ,Alan Smithee
      Die Aussagenlogik ist der Bereich der Logik, der sich mit [...] ihrer Teilaussagen bestimmen.
      ,Aussagenlogik

    As you can probably see, the fragments ",Alan Smithee" and ",Aussagenlogik" should actually be on the same line as the foregoing sentence. Then it would look something like this:

      Alan Smithee ist ein Anagramm von „The Smitheeeee [...] „Alan Smythee“, und „Adam Smithee“.,Alan Smithee
      Die Aussagenlogik ist der Bereich der Logik, der sich mit [...] ihrer Teilaussagen bestimmen.,Aussagenlogik

    Please note that the end of the sentence may or may not contain quotes; in the end they should be replaced too. Here is what I came up with so far:

      sed -n '1h;1!H;${;g;s/\."?.*,//g;p;}' out.csv > out1.csv

    This should actually get the job done of matching the expression over multiple lines. Unfortunately it doesn't :) The expression looks for the dot at the end of the sentence and the optional quotes, plus a newline character that I'm trying to match with .*. Help much appreciated, and it doesn't really matter which tool gets the job done (awk, perl, sed, tr, etc.).

    Thanks, Chris
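
    Since any tool is acceptable, here is a minimal sketch of the line-joining idea in Python (hypothetical and untested against the real file; it assumes the stray fragments are exactly the lines that begin with a comma, and UTF-8 input):

      # fix_csv.py - glue ",Title" fragments back onto the preceding sentence.
      import sys

      def join_fragments(src):
          prev = None
          for line in src:
              if line.startswith(",") and prev is not None:
                  # strip the newline, trailing spaces and an optional straight quote
                  # left over from the bad break, then glue the ",Title" fragment on
                  prev = prev.rstrip().rstrip('"') + line
              else:
                  if prev is not None:
                      yield prev
                  prev = line
          if prev is not None:
              yield prev

      if __name__ == "__main__":
          with open(sys.argv[1], encoding="utf-8") as src:
              sys.stdout.writelines(join_fragments(src))

    Run as: python fix_csv.py out.csv > out1.csv. Unlike the sed hold-space approach, it only ever keeps one line in memory, which matters for a 400 MB file.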

    Read the article

  • Automatically stashing

    - by Readonly
    The section "Last links in the chain: Stashing and the reflog" in http://ftp.newartisans.com/pub/git.from.bottom.up.pdf recommends stashing often to take snapshots of your work in progress. The author goes as far as recommending that you can use a cron job to stash your work regularly, without having to do a stash manually.

      The beauty of stash is that it lets you apply unobtrusive version control to your working process itself: namely, the various stages of your working tree from day to day. You can even use stash on a regular basis if you like, with something like the following snapshot script:

        $ cat <<EOF > /usr/local/bin/git-snapshot
        #!/bin/sh
        git stash && git stash apply
        EOF
        $ chmod +x $_
        $ git snapshot

      There’s no reason you couldn’t run this from a cron job every hour, along with running the reflog expire command every week or month.

    The problem with this approach is:

    1. If there are no changes to your working copy, the "git stash apply" will cause your last stash to be applied over your working copy.
    2. There could be race conditions between when the cron job executes and the user working on the working copy. For example, "git stash" runs, then the user opens the file, then the script's "git stash apply" is executed.

    Does anybody have suggestions for making this automatic stashing work more reliably?
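
    One possible tweak for problem 1, as a hypothetical sketch rather than anything from the book (it assumes git is on the PATH and that the snapshot runs inside the repository): skip the snapshot when the working tree is clean, so "git stash apply" is never run against an unchanged tree.

      # git_snapshot.py - hypothetical replacement for the git-snapshot script above;
      # only snapshots when the working tree is actually dirty.
      import subprocess

      def snapshot(repo="."):
          status = subprocess.run(["git", "-C", repo, "status", "--porcelain"],
                                  capture_output=True, text=True, check=True)
          if not status.stdout.strip():
              return False                      # clean tree: nothing to snapshot
          subprocess.run(["git", "-C", repo, "stash"], check=True)
          subprocess.run(["git", "-C", repo, "stash", "apply"], check=True)
          return True

      if __name__ == "__main__":
          snapshot()

    It does nothing about the race condition in problem 2, which would still need some form of locking or a quiet-period check.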

    Read the article

  • Setting article properties for a publication using RMO in C# .NET

    - by Pavan Kumar
    I am using transactional replication with push subscription. I am developing a UI for replication using RMO in C#.NET between different instances of the same database within the same machine, holding similar schema and structure. I am using a single-subscriber, multiple-publisher topology. During creation of the publication I want to set a few article properties, such as "Keep the existing object unchanged" and "Allow schema changes at subscriber" to false, and "Copy foreign key constraints" and "Copy check constraints" to true. How do I set these article properties using RMO in C#.NET? I am using Visual Studio 2008 SP1. I also want to know how to select all the objects, including tables, views and stored procedures, for publishing in one stretch. I could do it for one table, but I want to select all the tables at once. This is the code snippet I used for selecting a single table for publishing:

      TransArticle ta = new TransArticle();
      ta.Name = "Article_1";
      ta.PublicationName = "TransReplication_DB2";
      ta.DatabaseName = "DB2";
      ta.SourceObjectName = "person";
      ta.SourceObjectOwner = "dbo";
      ta.ConnectionContext = conn;
      ta.Create();

    Read the article

  • Usage of putty in command line from Hudson

    - by kij
    Hi, I'm trying to use PuTTY on the command line from a Hudson job. The command is the following one:

      putty -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt

    where 'command.txt' is a shell script to execute on the server through SSH. If I launch this command from the Windows command prompt, it works: the shell script is executed on the server machine. If I launch a build of the Hudson job configured with this batch command, it doesn't work. The build is running... and running... and running... without doing anything, and I have to stop it manually. So my question is: is it possible to launch an external program (i.e. PuTTY) from a Hudson job?

    PS: I tried the SSH plugin, but it is not really a good plugin (pre/post build only, fail status of the launched commands not caught by Hudson, etc.).

    Thanks in advance for your help. Best regards, kij

    EDIT: These are the build logs:

      [workspace] $ cmd /c call C:\WINDOWS\TEMP\hudson7429256014041663539.bat
      C:\Hudson\jobs\Artifact deployer\workspace>putty -ssh -2 -P 22 USER@SERV_ADD -pw PASS -m com.txt
      Le build a été annulé
      Finished: ABORTED

    And the Hudson.err.log file at the same time (after a stop):

      3 juin 2010 18:27:28 hudson.model.Run run
      INFO: Artifact deployer #6 aborted
      java.lang.InterruptedException
          at java.lang.ProcessImpl.waitFor(Native Method)
          at hudson.Proc$LocalProc.join(Proc.java:179)
          at hudson.Launcher$ProcStarter.join(Launcher.java:278)
          at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:83)
          at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
          at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
          at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601)
          at hudson.model.Build$RunnerImpl.build(Build.java:174)
          at hudson.model.Build$RunnerImpl.doRun(Build.java:138)
          at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:416)
          at hudson.model.Run.run(Run.java:1241)
          at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
          at hudson.model.ResourceController.execute(ResourceController.java:88)
          at hudson.model.Executor.run(Executor.java:124)

    My shell script only writes "hello" to a "hello.txt" file on the server, and nothing is done.

    Read the article

  • crawl websites out of java web application without using bin/nutch

    - by Marcel
    Hi :) I am trying to use Nutch (1.1) without bin/nutch from my (Java) Mojarra 2.0.2 webapp. I have searched Google for examples, but there are none showing how to do this. I get an exception and the job fails (I think it is something to do with Hadoop). Here is my code:

      public void run() throws Exception {
          final String[] args = new String[] {
              String.format("%s%s%s%s", JSFUtils.getWebAppRoot(), "nutch", File.separator, DIRECTORY_URLS),
              "-dir", String.format("%s%s%s%s", JSFUtils.getWebAppRoot(), "nutch", File.separator, DIRECTORY_CRAWL),
              "-threads", this.preferences.get("threads"),
              "-depth", this.preferences.get("depth"),
              "-topN", this.preferences.get("topN"),
              "-solr", this.preferences.get("solr")
          };
          Crawl.main(args);
      }

    And a part of the logging:

      10/05/17 10:42:54 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
      10/05/17 10:42:54 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
      10/05/17 10:42:54 INFO mapred.FileInputFormat: Total input paths to process : 1
      10/05/17 10:42:54 INFO mapred.JobClient: Running job: job_local_0001
      10/05/17 10:42:54 INFO mapred.FileInputFormat: Total input paths to process : 1
      10/05/17 10:42:55 INFO mapred.MapTask: numReduceTasks: 1
      10/05/17 10:42:55 INFO mapred.MapTask: io.sort.mb = 100
      java.io.IOException: Job failed!
          at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232)
          at org.apache.nutch.crawl.Injector.inject(Injector.java:211)
          at org.apache.nutch.crawl.Crawl.main(Crawl.java:124)
          at lan.localhost.process.NutchCrawling.run(NutchCrawling.java:108)
          at lan.localhost.main.Index.indexing(Index.java:71)
          at lan.localhost.bean.FeedingBean.actionStart(FeedingBean.java:25)
      ....

    Can someone help me, or tell me how I can crawl from a Java application? I have increased Xms to 256m and Xmx to 768m, but nothing changed.

    Best regards, Marcel

    Read the article

  • Idiomatic Scala way to deal with base vs derived class field names?

    - by Gregor Scheidt
    Consider the following base and derived classes in Scala:

      abstract class Base( val x : String )

      final class Derived( x : String ) extends Base( "Base's " + x ) {
        override def toString = x
      }

    Here, the identifier 'x' of the Derived class parameter overrides the field of the Base class, so invoking toString like this:

      println( new Derived( "string" ).toString )

    returns the Derived value and gives the result "string". So a reference to the 'x' parameter prompts the compiler to automatically generate a field on Derived, which is served up in the call to toString. This is very convenient usually, but leads to a replication of the field (I'm now storing the field on both Base and Derived), which may be undesirable. To avoid this replication, I can rename the Derived class parameter from 'x' to something else, like '_x':

      abstract class Base( val x : String )

      final class Derived( _x : String ) extends Base( "Base's " + _x ) {
        override def toString = x
      }

    Now a call to toString returns "Base's string", which is what I want. Unfortunately, the code now looks somewhat ugly, and using named parameters to initialize the class also becomes less elegant:

      new Derived( _x = "string" )

    There is also a risk of forgetting to give the derived classes' initialization parameters different names and inadvertently referring to the wrong field (undesirable since the Base class might actually hold a different value). Is there a better way?

    Edit: To clarify, I really only want the Base values; the Derived ones just seem necessary for initializing the Base ones. The example only references them to illustrate the ensuing issues. It might be nice to have a way to suppress automatic field generation if the derived class would otherwise end up hiding a base class field.

    Read the article

  • SAP related testing

    - by mgj
    Dear all, one notion that has been prevalent, mostly as a rumour, among many aspiring programmers is that the testing phase of the SDLC (Software Development Life Cycle) is not as challenging and interesting as development, and that a tester's job becomes monotonous after a while because a person does the same thing repeatedly, over and over again. That's why many prefer to look for a developer's job rather than a tester's. Don't testers have room of their own to grow in software companies? Please feel free to express your views for or against this. How true is it? Could you please give examples of instances (they need not be practical; even theoretical ones would suffice) that actually contradict this statement with respect to a tester's career, specifically in the SAP domain? Examples from other domains are also welcome. This question is not meant to hurt the feelings of anyone who is in the testing domain. It's just that, in my case for example, I want to know what challenges a tester could also face in real-life situations: something that would make their job interesting and fun-filled too. I myself am pro-testing and interested in pursuing testing as a profession in a software company; I'm just curious to know more about it. Thanks :)

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent, and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation. Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, with the following method, to serialize a dictionary describing all the data points to a file:

      def json_serialize(obj, filename, use_jsonpickle=True):
          f = open(filename, 'w')
          if use_jsonpickle:
              import jsonpickle
              json_obj = jsonpickle.encode(obj)
              f.write(json_obj)
          else:
              simplejson.dump(obj, f, indent=1)
          f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the exact same object, and found that it generates a file that's about 7.8 megabytes in size and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me. Is there a way to use JSON to get similar or better speed-ups? If not, do you have other ideas on structuring this? (Is the right solution simply to "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.
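
    A quick way to see how much of the 20 seconds is jsonpickle overhead rather than JSON parsing itself is to benchmark the plain json module against pickle's binary protocol on data of the same shape. A rough, hypothetical sketch (field names and sizes are invented to mimic the description above, not the real data):

      # serialize_bench.py - rough timing sketch; not the original data or code.
      import json, pickle, random, time

      points = {"point_%d" % i: {"lat": random.random(), "lon": random.random(),
                                 "tags": ["a", "b", "c"]}
                for i in range(54000)}

      def timed(label, fn):
          start = time.time()
          result = fn()
          print("%-12s %.2f s" % (label, time.time() - start))
          return result

      js = timed("json dump", lambda: json.dumps(points))
      timed("json load", lambda: json.loads(js))

      pk = timed("pickle dump", lambda: pickle.dumps(points, pickle.HIGHEST_PROTOCOL))
      timed("pickle load", lambda: pickle.loads(pk))

    If plain json.loads is already fast on this shape of data, the slowdown is in jsonpickle's object reconstruction rather than in parsing, and dumping the points as plain dicts (as above) keeps the file human readable while avoiding that cost.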

    Read the article

  • Asynchronous crawling F#

    - by jlezard
    When crawling web pages I need to be careful not to make too many requests to the same domain; for example, I want to put 1 s between requests. From what I understand, it is the time between requests that is important. So to speed things up I want to use async workflows in F#: the idea is to make requests at 1-second intervals, but to avoid blocking while waiting for the response.

      let getHtmlPrimitiveAsyncTimer (uri : System.Uri) (timer:int) =
          async {
              let req = (WebRequest.Create(uri)) :?> HttpWebRequest
              req.UserAgent <- "Mozilla"
              try
                  Thread.Sleep(timer)
                  let! resp = (req.AsyncGetResponse())
                  Console.WriteLine(uri.AbsoluteUri + " got response")
                  use stream = resp.GetResponseStream()
                  use reader = new StreamReader(stream)
                  let html = reader.ReadToEnd()
                  return html
              with
              | _ as ex -> return "Bad Link"
          }

    Then I do something like:

      let uri1 = System.Uri "http://rue89.com"
      let timer = 1000
      let jobs = [| for i in 1..10 -> getHtmlPrimitiveAsyncTimer uri1 timer |]
      jobs
      |> Array.mapi (fun i job ->
          Console.WriteLine("Starting job " + string i)
          Async.StartAsTask(job).Result)

    Is this alright? I am very unsure about two things: does the Thread.Sleep call actually delay the request, and is using StartAsTask a problem? I am a beginner (as you may have noticed) in F# (and in coding in general, actually), and everything involving threads scares me :) Thanks!!

    Read the article

  • deepcopy and python - tips to avoid using it?

    - by blackkettle
    Hi, I have a very simple Python routine that involves cycling through a list of roughly 20,000 latitude/longitude coordinates and calculating the distance of each point to a reference point.

      def compute_nearest_points( lat, lon, nPoints=5 ):
          """Find the nearest N points, given the input coordinates."""
          points = session.query(PointIndex).all()
          oldNearest = []
          newNearest = []
          for n in xrange(nPoints):
              oldNearest.append(PointDistance(None,None,None,99999.0,99999.0))
              newNearest.append(obj2)  # This is almost certainly an inappropriate use of deepcopy
                                       # but how SHOULD I be doing this?!?!
          for point in points:
              distance = compute_spherical_law_of_cosines( lat, lon, point.avg_lat, point.avg_lon )
              k = 0
              for p in oldNearest:
                  if distance < p.distance:
                      newNearest[k] = PointDistance(
                          point.point, point.kana, point.english, point.avg_lat, point.avg_lon, distance=distance )
                      break
                  else:
                      newNearest[k] = deepcopy(oldNearest[k])
                  k += 1
              for j in range(k,nPoints-1):
                  newNearest[j+1] = deepcopy(oldNearest[j])
              oldNearest = deepcopy(newNearest)
          # We're done, now print the result
          for point in oldNearest:
              print point.station, point.english, point.distance
          return

    I initially wrote this in C, using the exact same approach, and it works fine there; it is basically instantaneous for nPoints <= 100. So I decided to port it to Python because I wanted to use SQLAlchemy to do some other stuff. I first ported it without the deepcopy statements that now pepper the method, and this caused the results to be 'odd', or partially incorrect, because some of the points were just getting copied as references (I guess? I think?), but it was still pretty nearly as fast as the C version. Now, with the deepcopy calls added, the routine does its job correctly, but it has incurred an extreme performance penalty and now takes several seconds to do the same job. This seems like a pretty common job, but I'm clearly not doing it the Pythonic way. How should I be doing this so that I still get the correct results but don't have to include deepcopy everywhere?
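
    For what it's worth, the usual Pythonic shape for "keep the N best" needs no copying at all: score each point and let heapq pick the smallest. A self-contained sketch (plain (lat, lon) tuples instead of the ORM objects above, and the distance formula written out; an illustration, not the original code):

      # nearest_points.py - keep the N nearest points without any copy bookkeeping.
      import heapq, math

      def spherical_law_of_cosines(lat1, lon1, lat2, lon2, radius_km=6371.0):
          # great-circle distance in km; inputs in degrees
          lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
          cos_angle = (math.sin(lat1) * math.sin(lat2) +
                       math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
          return radius_km * math.acos(max(-1.0, min(1.0, cos_angle)))

      def nearest_points(lat, lon, points, n_points=5):
          # points: iterable of (lat, lon) pairs; nsmallest never mutates its input
          return heapq.nsmallest(n_points, points,
                                 key=lambda p: spherical_law_of_cosines(lat, lon, p[0], p[1]))

      print(nearest_points(35.0, 139.0, [(35.1, 139.2), (34.0, 135.0), (36.5, 140.0)], 2))

    The original slowdown comes from deepcopy rebuilding every held object on every iteration; since the "nearest so far" list only ever needs to hold references, no copies are required in the first place.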

    Read the article

  • ASP.NET MVC static-asset aides/practices

    - by shannon
    I want to keep assets that are only used by one view in a view-specific folder, so my Search.aspx properly finds images/*.jpg and helps me maintain my convention:

      ~/Areas/Candidate/Views/Job/Search.aspx -> ~/Assets/Candidate/Job/Search/images/*.jpg

    Perhaps with the ability to easily reference controller- or area-common assets manually or automatically:

      ~/Assets/Candidate/Job/images/*.jpg
      ~/Assets/Candidate/images/*.jpg

    If you wonder why I'm doing this, then speak up; I'm probably missing something. But here's why: I don't want stale static assets sitting in my ASP.NET MVC projects, which I expect to be an automatic outcome of the ~/Assets/Images folder. That is, as a shared asset loses its last reference count, who knows to delete it, especially with it being so difficult to trace content-link validity in MVC projects?

    How do you, personally, do this? I can imagine, for example:

    - Implementing HtmlHelper extension methods for URL generation.
    - Extending ViewPage and ViewMasterPage with URL-generation methods.
    - Implementing an inbound request filter to search related folders for static assets.

    And are there good libraries out there for this? For example, something that also automatically appends timestamps for .JS and .CSS files, writes the / tags for me, and maybe even allows me to inject includes in the head section from outside head code?

    Read the article

  • ASP.NET MVC: How to create a usable UrlHelper instance?

    - by Marek
    I am using Quartz.NET to schedule regular events within an ASP.NET MVC application. The scheduled job should call a service-layer script that requires a UrlHelper instance (for creating URLs based on the correct routes, via urlHelper.Action(..), contained in emails that will be sent by the service). I do not want to hardcode the links into the emails; they should be resolved using the UrlHelper. The job:

      public class EvaluateRequestsJob : Quartz.IJob
      {
          public void Execute(JobExecutionContext context)
          {
              // where to get a usable urlHelper instance?
              ServiceFactory.GetRequestService(urlHelper).RunEvaluation();
          }
      }

    Please note that this is not run within the MVC pipeline. There is no current request being served; the code is run by the Quartz scheduler at defined times. How do I get a UrlHelper instance usable in the indicated place? If it is not possible to construct a UrlHelper, the other option I see is to make the job "self-call" a controller action by doing an HTTP request; while executing the action I will of course have a UrlHelper instance available, but this seems a little bit hacky to me.

    Read the article

  • AS3 using PrintJob to print a MovieClip

    - by Chris Waugh
    Hello, I am currently trying to create a function which will allow me to pass in a MovieClip and print it. Here is the simplified version of the function:

      function printMovieClip(clip:MovieClip) {
          var printJob:PrintJob = new PrintJob();
          var numPages:int = 0;
          var printY:int = 0;
          var printHeight:Number;

          if ( printJob.start() ) {
              /* Resize movie clip to fit within page width */
              if (clip.width > printJob.pageWidth) {
                  clip.width = printJob.pageWidth;
                  clip.scaleY = clip.scaleX;
              }

              numPages = Math.ceil(clip.height / printJob.pageHeight);

              /* Add pages to print job */
              for (var i:int = 0; i < numPages; i++) {
                  printJob.addPage(clip, new Rectangle(0, printY, printJob.pageWidth, printJob.pageHeight));
                  printY += printJob.pageHeight;
              }

              /* Send print job to printer */
              printJob.send();

              /* Delete job from memory */
              printJob = null;
          }
      }

      printMovieClip( testMC );

    Unfortunately this is not working as expected, i.e. printing the full width of the MovieClip and doing page breaks on the length. Any help with this would be greatly appreciated. Many thanks, Chris

    Read the article

  • Quartz.Net scheduler works locally but not on remote host

    - by Glinkot
    Hi. I have a timed Quartz.NET job working fine on my dev machine, but once deployed to a remote server it does not trigger. I believe the job is scheduled OK, because if I post back it tells me the job already exists (I normally check for postback, however). The email code definitely works, as the Button1_Click event sends emails successfully. I understand I have full or medium trust on the remote server. My host says they don't apply any restrictions that they know of which would affect it. Are there any other things I need to do to get it running?

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Web;
      using System.Web.UI;
      using System.Web.UI.WebControls;
      using Quartz;
      using Quartz.Impl;
      using Quartz.Core;
      using Aspose.Network.Mail;
      using Aspose.Network;
      using Aspose.Network.Mime;
      using System.Text;

      namespace QuartzTestASP
      {
          public partial class _Default : System.Web.UI.Page
          {
              protected void Page_Load(object sender, EventArgs e)
              {
                  if (!Page.IsPostBack)
                  {
                      ISchedulerFactory schedFact = new StdSchedulerFactory();
                      IScheduler sched = schedFact.GetScheduler();
                      JobDetail jobDetail = new JobDetail("testJob2", null, typeof(testJob));
                      //Trigger trigger = TriggerUtils.MakeMinutelyTrigger(1, 3);
                      Trigger trigger = TriggerUtils.MakeSecondlyTrigger(10, 5);
                      trigger.StartTimeUtc = DateTime.UtcNow;
                      trigger.Name = "TriggertheTest";
                      sched.Start();
                      sched.ScheduleJob(jobDetail, trigger);
                  }
              }

              protected void Button1_Click1(object sender, EventArgs e)
              {
                  myutil.sendEmail();
              }
          }

          class testJob : IStatefulJob
          {
              public testJob() { }

              public void Execute(JobExecutionContext context)
              {
                  myutil.sendEmail();
              }
          }

          public static class myutil
          {
              public static void sendEmail()
              {
                  // tested code lives here and works fine when called from elsewhere
              }
          }
      }

    Read the article

  • Algorithm for optimally choosing actions to perform a task

    - by Jules
    There are two data types: tasks and actions. An action costs a certain amount of time to complete and has a set of tasks as dependencies. A task has a set of candidate actions, and our job is to choose one of them. So:

      class Task {
          Set<Action> choices;
      }

      class Action {
          float time;
          Set<Task> dependencies;
      }

    For example, the primary task could be "Get a house". The possible actions for this task: "Buy a house" or "Build a house". The action "Build a house" costs 10 hours and has the dependencies "Get bricks" and "Get cement", etcetera. The total time is the sum of the times of all the actions required to perform. We want to choose actions such that the total time is minimal.

    Note that the dependencies can be diamond shaped. For example, "Get bricks" could require "Get a car" (to transport the bricks) and "Get cement" would also require a car. Even if you do both "Get bricks" and "Get cement", you only have to count the time it takes to get a car once.

    Note also that the dependencies can be circular. For example "Money" - "Job" - "Car" - "Money". This is no problem for us: we simply select all of "Money", "Job" and "Car", and the total time is simply the sum of the times of these 3 things.

    Mathematical description: let actions be the set of chosen actions.

      valid(task) = ∃ action ∈ task.choices. (action ∈ actions ∧ ∀ task ∈ action.dependencies. valid(task))
      time = Σ { action.time | action ∈ actions }

      minimize time subject to valid(primaryTask)
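
    To make the definitions above concrete, here is a small Python sketch of the cost side only (names and numbers are invented for the house example; it evaluates one given choice of actions, it is not the optimiser being asked for). Shared dependencies are counted once and cycles are handled by the visited set:

      # plan_cost.py - evaluate the total time of one choice of actions.
      class Action:
          def __init__(self, name, time, dependencies=()):
              self.name, self.time, self.dependencies = name, time, list(dependencies)

      class Task:
          def __init__(self, name, choices=()):
              self.name, self.choices = name, list(choices)

      def plan_cost(primary_task, chosen):
          """chosen maps each reachable Task to one of its Actions; returns total time."""
          counted, total, stack = set(), 0.0, [primary_task]
          while stack:
              task = stack.pop()
              action = chosen[task]          # "valid" means every reached task has a choice
              if action.name in counted:
                  continue                   # diamonds and cycles: count each action once
              counted.add(action.name)
              total += action.time
              stack.extend(action.dependencies)
          return total

      # the house example: bricks and cement share the same car dependency
      money  = Task("money")
      car    = Task("car")
      money.choices = [Action("earn money", 5.0)]
      car.choices   = [Action("buy car",    2.0, [money])]
      bricks = Task("bricks", [Action("get bricks", 1.0, [car])])
      cement = Task("cement", [Action("get cement", 1.0, [car])])
      house  = Task("house",  [Action("build house", 10.0, [bricks, cement])])

      chosen = {t: t.choices[0] for t in (house, bricks, cement, car, money)}
      print(plan_cost(house, chosen))        # 19.0 - the car's cost appears only once

    Minimising this cost over all valid choices is the hard part; with shared dependencies it resembles a covering problem rather than a simple shortest path, but an explicit cost function like this can at least be plugged into a brute-force or branch-and-bound search over the per-task choices.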

    Read the article

  • Need help running Python app as service in Ubuntu with Upstart

    - by GreeenGuru
    I have written a logging application in Python that is meant to start at boot, but I've been unable to start the app with Ubuntu's Upstart init daemon. When run from the terminal with sudo /usr/local/greeenlog/main.pyw, the application works perfectly. Here is what I've tried for the Upstart job:

      /etc/init/greeenlog.conf

      # greeenlog

      description "I log stuff."

      start on startup
      stop on shutdown

      script
          exec /usr/local/greeenlog/main.pyw
      end script

    My application starts one child thread, in case that is important. I've tried the job with the expect fork stanza without any change in the results. I've also tried this with sudo and without the script statements (just a lone exec statement). In all cases, after boot, running status greeenlog returns greeenlog stop/waiting, and running start greeenlog returns:

      start: Rejected send message, 1 matched rules; type="method_call", sender=":1.61" (uid=1000 pid=2496 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))

    Can anyone see what I'm doing wrong? I appreciate any help you can give. Thanks.

    Read the article

  • Cobol web development/hosting resources

    - by felixm
    Hello, I'm employed at a fairly big company here in Germany, and I got the job of creating its main website, which will feature:

    - Static content: information and presentations
    - An employee area (around 6000 employees) featuring various things, from calendars and job descriptions to some sort of groups
    - Too many other dynamic things to list here

    I have decided to use COBOL for the job. It may be very underrated, but it is a very powerful language, especially for business apps and, as my co-workers say, for web (2.0) development too. I also need to use COBOL because all of the company's backend and transaction systems are programmed in it (some small parts were programmed in LISP too, I don't know exactly why). I have also received an API that makes it possible to use COBOL with MySQL easily. This is a big project and it will probably take more than 2 months to program.

    What do I have to expect when building a huge web app in COBOL? Are there web frameworks available for COBOL, some sort of MVC? Are there any good resources for practical web development with COBOL?

    Thanks in advance

    Read the article

  • Atomically maintaining a counter using Sub-sonic ActiveRecord

    - by cantabilesoftware
    I'm trying to figure out the correct way to atomically increment a counter in one table and use that incremented value as a pseudo, display-only ID for a record in another. What I have is a companies table and a jobs table. I want each company to have its own set of job numbers. I do have an auto-increment job_id, but those numbers are shared across all companies; the job numbers should generally increment without gaps for each company. That is:

      companies(company_id, next_job_number)
      jobs(company_id, job_id, job_number)

    Currently I'm doing this (as a method on the partial job class):

      public void SaveJob()
      {
          using (var scope = new System.Transactions.TransactionScope())
          {
              if (job_id == 0)
              {
                  _db.Update<company>()
                     .SetExpression("next_job_number").EqualTo("next_job_number+1")
                     .Where<company>(x => x.company_id == company_id)
                     .Execute();

                  company c = _db.companies.SingleOrDefault(x => x.company_id == company_id);
                  job_number = c.next_job_number;
              }

              // Save the job
              this.Save();
              scope.Complete();
          }
      }

    It seems to work, but I'm not sure if there are pitfalls here? It just feels wrong, but I'm not sure how else to do it. Any advice appreciated.
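
    Stripped of the ORM, the pattern being attempted is "increment the per-company counter and read it back inside one transaction". A hypothetical illustration in Python with SQLite (not SubSonic or SQL Server, whose locking details differ, so treat it purely as a sketch of the shape):

      # job_numbers.py - per-company job numbers via increment-then-read in one transaction.
      import sqlite3

      con = sqlite3.connect(":memory:", isolation_level=None)   # explicit transactions
      con.execute("CREATE TABLE companies (company_id INTEGER PRIMARY KEY, next_job_number INTEGER)")
      con.execute("CREATE TABLE jobs (job_id INTEGER PRIMARY KEY, company_id INTEGER, job_number INTEGER)")
      con.execute("INSERT INTO companies VALUES (1, 0)")

      def save_job(company_id):
          con.execute("BEGIN IMMEDIATE")      # take the write lock before touching the counter
          con.execute("UPDATE companies SET next_job_number = next_job_number + 1 "
                      "WHERE company_id = ?", (company_id,))
          (number,) = con.execute("SELECT next_job_number FROM companies "
                                  "WHERE company_id = ?", (company_id,)).fetchone()
          con.execute("INSERT INTO jobs (company_id, job_number) VALUES (?, ?)",
                      (company_id, number))
          con.execute("COMMIT")
          return number

      print(save_job(1), save_job(1))         # 1 2 - consecutive numbers for company 1

    The key point is that the UPDATE and the SELECT run under the same write lock, so two concurrent saves cannot both read the same counter value; the TransactionScope in the snippet above is aiming at the same guarantee.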

    Read the article

  • Why would this Lua optimization hack help?

    - by Ian Boyd
    I'm looking over a document that describes various techniques to improve the performance of Lua script code, and I'm shocked that such tricks would be required. (Although I'm quoting Lua, I've seen similar hacks in JavaScript.) Why would this optimization be required? For instance, the code

      for i = 1, 1000000 do
          local x = math.sin(i)
      end

    runs 30% slower than this one:

      local sin = math.sin
      for i = 1, 1000000 do
          local x = sin(i)
      end

    They're re-declaring the sin function locally. Why would this be helpful? It's the job of the compiler to do that anyway. Why is the programmer having to do the compiler's job? I've seen similar things in JavaScript, so obviously there must be a very good reason why the interpreting compiler isn't doing its job. What is it? I see it repeatedly in the Lua environment I'm fiddling in; people redeclaring variables as local:

      local strfind = strfind
      local strlen = strlen
      local gsub = gsub
      local pairs = pairs
      local ipairs = ipairs
      local type = type
      local tinsert = tinsert
      local tremove = tremove
      local unpack = unpack
      local max = max
      local min = min
      local floor = floor
      local ceil = ceil
      local loadstring = loadstring
      local tostring = tostring
      local setmetatable = setmetatable
      local getmetatable = getmetatable
      local format = format
      local sin = math.sin

    What is going on here that people have to do the work of the compiler? Is the compiler confused about how to find format? Why is this an issue that a programmer has to deal with? Why would this not have been taken care of in 1993? I also seem to have hit a logical paradox:

    1. Optimization should not be done without profiling
    2. Lua has no ability to be profiled
    3. Lua should not be optimized
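
    The usual explanation, which applies to Lua, JavaScript and Python alike, is that a global like math.sin lives in a table (or dictionary) that can be rebound at any moment, even from inside the loop, so the compiler cannot hoist the lookup without changing the language's semantics; caching it in a local is the programmer asserting that it will not change. The same effect is easy to reproduce in Python, shown here purely as a side illustration (not from the document being quoted):

      # local_lookup.py - caching a global/attribute lookup in a local speeds up a tight loop.
      import math, timeit

      def use_global(n=1000000):
          for i in range(n):
              x = math.sin(i)          # global 'math' + attribute 'sin' looked up every pass

      def use_local(n=1000000):
          sin = math.sin               # looked up once, then reused as a fast local
          for i in range(n):
              x = sin(i)

      print("global each pass:", timeit.timeit(use_global, number=1))
      print("cached local:    ", timeit.timeit(use_local, number=1))

    The size of the gap varies by interpreter and version, but the mechanism, repeated dynamic lookups versus a single cached one, is the same.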

    Read the article

