Search Results

Search found 8818 results on 353 pages for 'rake task'.

Page 124/353

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a Data Flow task, I can slip a Row Count into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the row count was 0. This works well for me, but I have no corresponding strategy for SQL tasks expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a count(*) aggregate, test whether that returns a number > 0, and if so run the query again without the count. That's ugly because I have the same query in two places that I need to keep in sync. Is there a better way?
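
    [Editor's note] A hedged sketch of one way around the two-query problem (table and column names are assumptions): fold the existence check into the single-row query itself with an aggregate, so it always returns exactly one row and the variable assignment cannot fail:

        -- always returns exactly one row; RowsFound = 0 means "no match"
        SELECT COUNT(*)     AS RowsFound,
               MAX(SomeCol) AS SomeCol    -- NULL when no rows matched
        FROM   dbo.SomeTable
        WHERE  KeyCol = ?

    Downstream, a precedence constraint on the RowsFound variable replaces the branch-on-failure, and the query text lives in one place only.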

  • Installer for an ASP.NET web application

    - by Thurein
    I am trying to implement an installer that I will hand to the end user as a product. It needs to perform the following tasks:

    1. Check for and install .NET 3.5
    2. Check for and install SQL Server 2008 (Standard Edition)
    3. Create the databases
    4. Create a virtual directory and deploy the published resources
    5. Deploy the SSIS package for the data warehousing and run the SSAS package

    Right now I am using WiX to deal with some of these tasks, and it works for me, but I am curious about other options and better ways to do this (if there are any). My main intention is to distribute my product (an ASP.NET web application) to end users for a trial, so that an end user with limited IT knowledge could install it and use the web application within a group of users. At the end of the trial period the user could ask for an activation key for further usage. Thanks and regards, Thurein
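
    [Editor's note] As an illustration of the WiX route already in use, a minimal sketch of step 1 (an assumption on my part: WiX 3 with WixNetFxExtension referenced; this only detects .NET 3.5, the actual framework installer would be chained by a bootstrapper):

        <!-- sketch: block installation when .NET 3.5 is missing -->
        <PropertyRef Id="NETFRAMEWORK35" />
        <Condition Message="This application requires the .NET Framework 3.5.">
          <![CDATA[Installed OR NETFRAMEWORK35]]>
        </Condition>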

  • How do I tell NAnt to only call csc when there are .cs files to compile?

    - by rob_g
    In my NAnt script I have a compile target that calls csc. Currently it fails because no inputs are specified:

        <target name="compile">
            <csc target="library" output="${umbraco.bin.dir}\Mammoth.${project::get-name()}.dll">
                <sources>
                    <include name="project/*.cs" />
                </sources>
                <references>
                </references>
            </csc>
        </target>

    How do I tell NAnt not to execute the csc task if there are no .cs files? I read about the 'if' attribute but am unsure what expression to use with it, as ${file::exists('*.cs')} does not work. The build script is a template for Umbraco (a CMS) projects and may or may not ever have .cs source files in the project. Ideally, developers should not need to remember to modify the NAnt script to include the compile task when .cs files are added to the project (or exclude it when all .cs files are removed).
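
    [Editor's note] One possible shape for the check, offered as a hedged sketch (assumption: a NAnt release that ships the fileset functions, such as fileset::has-files from the 0.86 line): name the sources as a fileset and gate the target's if attribute on it:

        <fileset id="compile.sources">
            <include name="project/*.cs" />
        </fileset>

        <target name="compile" if="${fileset::has-files('compile.sources')}">
            <csc target="library" output="${umbraco.bin.dir}\Mammoth.${project::get-name()}.dll">
                <sources refid="compile.sources" />
            </csc>
        </target>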

  • Rails runner command not saving to cache

    - by mark
    Hi, I'm having a bit of a problem with a cron task, generated by the Rails whenever plugin, that should store remote data in the Rails cache for display. What I have is this in schedule.rb:

        set :path, '/var/www/apps/tuexplore/current'
        every 1.hour do
          runner "Weather.cache_remote", :environment => :production
        end

    which calls this model:

        class Weather
          def self.cache_remote
            Rails.cache.write('weather_data', Net::HTTP.get_response(URI.parse(WEATHER_URL)).body)
          end
        end

    Running whenever returns this:

        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deploy/.gem/ruby/1.8/bin
        0 * * * * /var/www/apps/tuexplore/current/script/runner -e production "Weather.cache_remote"

    This doesn't work. Calling the Weather model method from a controller works fine, but I need to schedule it hourly. The cron task causes a "Cache write: weather_data" entry to appear in the production log, but the data isn't stored or output into the page. Additional information: I can log into the production console, run Weather.cache_remote, and then read the data from the Rails cache. I'd really appreciate it if someone could point out the error of my ways. If further explanation is needed, please ask. Thanks in advance for any pointers.
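
    [Editor's note] A hedged aside, an assumption rather than something stated in the post: Rails 2.x defaults to an in-process :memory_store, in which case a value written by the script/runner process is invisible to the app-server processes. A cross-process store would look like:

        # config/environments/production.rb -- hypothetical fix sketch:
        # a file-backed store that cron jobs and app servers can share
        config.cache_store = :file_store, "/var/www/apps/tuexplore/shared/cache"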

  • Is it possible to refer to metadata of the target from within the target implementation in MSBuild?

    - by mark
    Dear ladies and sirs. My MSBuild targets file contains the following section:

        <ItemGroup>
          <Targets Include="T1">
            <Project>A\B.sln</Project>
            <DependsOnTargets>The targets T1 depends on</DependsOnTargets>
          </Targets>
          <Targets Include="T2">
            <Project>C\D.csproj</Project>
            <DependsOnTargets>The targets T2 depends on</DependsOnTargets>
          </Targets>
          ...
        </ItemGroup>

        <Target Name="T1" DependsOnTargets="The targets T1 depends on">
          <MSBuild Projects="A\B.sln" Properties="Configuration=$(Configuration)" />
        </Target>
        <Target Name="T2" DependsOnTargets="The targets T2 depends on">
          <MSBuild Projects="C\D.csproj" Properties="Configuration=$(Configuration)" />
        </Target>

    As you can see, A\B.sln appears twice: as the Project metadata of T1 in the ItemGroup section, and in the Target itself, passed to the MSBuild task. I am wondering whether I can remove the second instance and replace it with a reference to the Project metadata of the Targets item whose name matches the executing target. Exactly the same question applies to the %(Targets.DependsOnTargets) metadata: it is mentioned twice, much like the %(Targets.Project) metadata. Thanks.

    EDIT: I should probably describe the constraints which must be satisfied by the solution. I want to be able to build individual projects with ease: today I can simply execute msbuild file.proj /t:T1 to build the T1 target, and I wish to keep this ability. I wish to emphasize that some projects depend on others, so the DependsOnTargets attribute is really necessary for them.
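
    [Editor's note] A sketch of the item-batching idea for the Project half (hedged: this covers only the MSBuild task's Projects attribute; the DependsOnTargets attribute is evaluated before task batching runs, so it is not obviously removable the same way):

        <Target Name="T1" DependsOnTargets="The targets T1 depends on">
          <!-- batches over the Targets items, keeping only the one whose Identity is T1 -->
          <MSBuild Projects="%(Targets.Project)"
                   Condition="'%(Targets.Identity)' == 'T1'"
                   Properties="Configuration=$(Configuration)" />
        </Target>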

  • When to drop an IT job

    - by Nippysaurus
    In my career I have had two programming jobs. Both were in the field I am most familiar with (C# / MSSQL), but I quit both for the same reasons: unmanageable code and a bad (loose) company structure. The jobs had something else in common: small companies (at one I was the only developer). Currently I am in the following position:

    - I am given written instructions which are almost impossible to follow (somewhat of a fool's errand).
    - We are given short time constraints, but seldom asked how long work will take; when we are asked, the estimate is always too long and needs to be shorter (and when the work ends up taking longer than they need it to, it's always our fault).
    - There is no time for proper documentation, but we get blamed for not documenting (see the previous point).
    - Management is constantly screwing me around, saying I'm underperforming on a given task (which is not true) and switching me to a task which is much more confusing.

    So I must ask my fellow developers: how bad does a job need to be before you would consider jumping ship? And what should one look out for when considering taking a job? In future I will be asking about documented procedures, release control, bug management, and adoption of new technologies.

    EDIT: Let me add some more fuel to the fire... I have been in my current job for just over a year, and the work I am doing almost never uses any of the knowledge I have gained from the other work I have done here. Everything is a giant learning curve. Because of this, about 30% of my time goes to learning what is going on with this new product (whose owner / original developer has left the company), 30% to trying to find the relevant documentation that makes the whole thing make sense, 30% to actually finding where to make the change, and 10% to actually making the change.

  • How to write a flexible modular program with good interaction possibilities between modules?

    - by PeterK
    I went through answers on similar topics here on SO but couldn't find a satisfying answer. Since I know this is a rather large topic, I will try to be more specific. I want to write a program which processes files. The processing is nontrivial, so the best way is to split the different phases into standalone modules which would then be used as necessary (since sometimes I will only be interested in the output of module A, sometimes I would need the output of five other modules, etc.). The thing is that I need the modules to cooperate, because the output of one might be the input of another. And I need it to be FAST. Moreover, I want to avoid doing certain processing more than once (if module A creates some data which then needs to be processed by modules B and C, I don't want to run module A twice to create the input for B and C). The information the modules need to share would mostly be blocks of binary data and/or offsets into the processed files. The task of the main program would be quite simple: just parse arguments and run the required modules (and perhaps produce some output, or should that be the task of the modules?). I don't need the modules to be loaded at runtime. It's perfectly fine to have libs with a .h file and recompile the program every time there is a new module or some module is updated. The idea of modules is here mainly for code readability and maintenance, and to be able to have more people working on different modules without needing some predefined interface or whatever (on the other hand, some "guidelines" on how to write the modules would probably be required, I know that). We can assume that the file processing is a read-only operation; the original file is not changed. Could someone point me in a good direction on how to do this in C++? Any advice is welcome (links, tutorials, PDF books...).
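
    [Editor's note] A minimal sketch of one possible shape for this, with all names hypothetical: statically linked modules sharing results through a common context (a "blackboard"), so nothing is computed twice:

        #include <map>
        #include <string>
        #include <vector>

        // Shared blackboard: each module's output, keyed by module name,
        // is computed at most once and reused by later modules.
        struct Context {
            std::map<std::string, std::vector<char> > outputs;
        };

        class Module {
        public:
            virtual ~Module() {}
            virtual std::string name() const = 0;
            // Reads the file data and/or other modules' outputs from ctx,
            // then stores its own result under name().
            virtual void run(Context& ctx) = 0;
        };

        // Run a module only if its output is not already present.
        inline void runOnce(Module& m, Context& ctx) {
            if (ctx.outputs.find(m.name()) == ctx.outputs.end())
                m.run(ctx);
        }

    The main program would parse arguments, build the list of requested modules, and call runOnce for each; a module needing another's output calls runOnce on its dependency first.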

  • Setting an acquired location into a TextView: how to maintain it?

    - by Mark
    Hi, I have built an app for the Motorola Droid which should automatically update a server with the phone's location. After the user performs a particular task on the main activity screen, an alarm is set to update the user's location periodically, using a service. The alarm is explicitly stopped when the user completes another task. The thing is, I have set up a location manager within the main activity's onCreate() method which is supposed to place the first acquired lat/long into two TextView fields. Even though the manifest is set up for acquiring coarse and fine coordinates, and I'm using requestLocationUpdates(String provider, long minTime, float minDistance, LocationListener listener) with minTime and minDistance set to zero, I'm not seeing the coordinates come up on the screen. Consequently, I'm not recording any locations on the server. When I seed the TextViews with sample coordinates, they are recorded fine on the server. I am not at a computer that can run the IDE, so I don't currently have the code, but I am desperate for some help on this. One other thing: the main activity screen calls a photography app before the user manually clicks "send data". I suspect I may need to override the main activity's onResume() method to do this location acquisition. Please help, thanks. Mark.
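
    [Editor's note] For reference, a bare sketch of the acquisition wiring being described (hedged: field names are hypothetical; assumes the coarse/fine permissions mentioned in the post and TextViews already assigned from findViewById in onCreate):

        // inside the main Activity; latView/lonView are hypothetical TextViews
        private void startLocationUpdates() {
            LocationManager lm = (LocationManager) getSystemService(LOCATION_SERVICE);
            LocationListener listener = new LocationListener() {
                public void onLocationChanged(Location loc) {
                    latView.setText(String.valueOf(loc.getLatitude()));
                    lonView.setText(String.valueOf(loc.getLongitude()));
                }
                public void onProviderDisabled(String provider) {}
                public void onProviderEnabled(String provider) {}
                public void onStatusChanged(String provider, int status, Bundle extras) {}
            };
            lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, listener);
        }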

  • Spring Security ACL: NotFoundException from JDBCMutableAclService.createAcl

    - by user340202
    Hello, I've been working on this task for too long to abandon the idea of using Spring Security to achieve it, but I wish that the community will provide some support that will help reduce the regret I have for choosing Spring Security. Enough ranting; let's get to the point. I'm trying to create an ACL by using JDBCMutableAclService.createAcl as follows:

        public void addPermission(IWFArtifact securedObject, Sid recipient,
                Permission permission, Class clazz) {
            ObjectIdentity oid = new ObjectIdentityImpl(clazz.getCanonicalName(),
                    securedObject.getId());
            this.addPermission(oid, recipient, permission);
        }

        @Override
        @Transactional(propagation = Propagation.REQUIRED,
                isolation = Isolation.READ_UNCOMMITTED, readOnly = false)
        public void addPermission(ObjectIdentity oid, Sid recipient, Permission permission) {
            SpringSecurityUtils.assureThreadLocalAuthSet();
            MutableAcl acl;
            try {
                acl = this.mutableAclService.createAcl(oid);
            } catch (AlreadyExistsException e) {
                acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            }
            // try {
            //     acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            // } catch (NotFoundException nfe) {
            //     acl = this.mutableAclService.createAcl(oid);
            // }
            acl.insertAce(acl.getEntries().length, permission, recipient, true);
            this.mutableAclService.updateAcl(acl);
        }

    The call throws a NotFoundException from the line:

        // Retrieve the ACL via superclass (ensures cache registration, proper retrieval etc)
        Acl acl = readAclById(objectIdentity);

    I believe this is caused by something related to @Transactional, which is why I have tested with many TransactionDefinition attributes. I have also doubted the annotation and tried declarative transaction definition, but still with no luck. One important point: I took the statement used to insert the oid into the database earlier in the method and ran it directly against the database, and it worked; it then threw a unique-constraint exception at me when the method tried to insert the same oid. I'm using Spring Security 2.0.8 and IceFaces 1.8 (which doesn't support Spring 3.0 but definitely supports 2.0.x, especially when I keep calling SpringSecurityUtils.assureThreadLocalAuthSet()). My app server is Tomcat 6.0 and my DB server is MySQL 6.0. I wish to get a reply soon, because I need to get this task out of my way.

  • One bidirectional TCP socket or two unidirectional ones? (Linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Will it be faster with 1 TCP socket (for send and recv) or with 2 unidirectional TCP sockets? The test for this task is very like the NetPIPE network benchmark: measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received 3 times at least (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong, maybe). The benefit of 2 unidirectional connections comes from the Linux source, http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847 :

        3847 /*
        3848  * TCP receive function for the ESTABLISHED state.
        3849  *
        3850  * It is split into a fast path and a slow path. The fast path is
        3851  * disabled when:
        ...
        3859  * - Data is sent in both directions. Fast path only supports pure senders
        3860  *   or pure receivers (this means either the sequence number or the ack
        3861  *   value must stay constant)
        ...
        3863  *
        3864  * When these conditions are not satisfied it drops into a standard
        3865  * receive procedure patterned after RFC793 to handle all cases.
        3866  * The first three cases are guaranteed by proper pred_flags setting,
        3867  * the rest is checked inline. Fast processing is turned on in
        3868  * tcp_data_queue when everything is OK.
        */

    All the other conditions for disabling the fast path are false, so only a non-unidirectional socket stops the kernel from taking the fast path in receive.
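
    [Editor's note] For concreteness, a sketch of the ping-pong measurement loop being described (hedged: read_full is a hypothetical helper that loops read() until size bytes arrive; with two unidirectional sockets tx_fd and rx_fd differ, with one bidirectional socket they are the same descriptor):

        #include <unistd.h>

        /* hypothetical: loop until exactly n bytes have been read */
        ssize_t read_full(int fd, char *buf, size_t n);

        void pingpong(int tx_fd, int rx_fd, char *buf) {
            for (size_t size = 2; size <= 8192; size *= 2) {   /* 2^1 .. 2^13 */
                for (int i = 0; i < 3; i++) {
                    write(tx_fd, buf, size);        /* send block */
                    read_full(rx_fd, buf, size);    /* wait for the echo */
                }
                /* timestamp around the inner loop to derive latency/bandwidth */
            }
        }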

  • I am not able to kill a child process using TerminateProcess

    - by user1681210
    I have a problem killing a child process using TerminateProcess. I call the function and the process is still there (in the Task Manager). This piece of code is called many times, launching the same program.exe many times, and those processes stay in the Task Manager, which I think is not good. Sorry, I am quite new to C++; I will really appreciate any help. Thanks a lot! The code is the following:

        STARTUPINFO childProcStartupInfo;
        memset(&childProcStartupInfo, 0, sizeof(childProcStartupInfo));
        childProcStartupInfo.cb = sizeof(childProcStartupInfo);
        childProcStartupInfo.hStdInput = hFromParent;   // stdin
        childProcStartupInfo.hStdOutput = hToParent;    // stdout
        childProcStartupInfo.hStdError = hToParentDup;  // stderr
        childProcStartupInfo.dwFlags = STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW;
        childProcStartupInfo.wShowWindow = SW_HIDE;

        PROCESS_INFORMATION childProcInfo;  /* for CreateProcess call */
        bOk = CreateProcess(
            NULL,                  // filename
            pCmdLine,              // full command line for child
            NULL,                  // process security descriptor
            NULL,                  // thread security descriptor
            TRUE,                  // inherit handles? Also use if STARTF_USESTDHANDLES
            0,                     // creation flags
            NULL,                  // inherited environment address
            NULL,                  // startup dir; NULL = start in current
            &childProcStartupInfo, // pointer to startup info (input)
            &childProcInfo);       // pointer to process info (output)

        CloseHandle(hFromParent);
        CloseHandle(hToParent);
        CloseHandle(hToParentDup);
        CloseHandle(childProcInfo.hThread);
        CloseHandle(childProcInfo.hProcess);
        TerminateProcess(childProcInfo.hProcess, 0);  // this is not working, the process

    thanks
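
    [Editor's note] A hedged observation worth illustrating (an editor's reading of the listing, not text from the post): CloseHandle(childProcInfo.hProcess) runs before TerminateProcess, so TerminateProcess receives an already-closed handle. The usual order is:

        // terminate first, then release the handles
        TerminateProcess(childProcInfo.hProcess, 0);
        CloseHandle(childProcInfo.hThread);
        CloseHandle(childProcInfo.hProcess);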

  • Alternative to using c:out to prevent XSS

    - by lynxforest
    I'm working on preventing cross-site scripting (XSS) in a Java, Spring-based web application. I have already implemented a servlet filter similar to this example, http://greatwebguy.com/programming/java/simple-cross-site-scripting-xss-servlet-filter/ , which sanitizes all the input into the application. As an extra security measure I would like to also sanitize all output of the application in all JSPs. I have done some research into how this could be done and found two complementary options. One of them is the use of Spring's defaultHtmlEscape attribute. This was very easy to implement (a few lines in web.xml), and it works great when your output goes through one of Spring's tags (i.e. the message or form tags). The other option I found is to not use EL expressions such as ${...} directly, and instead use <c:out value="${...}" />. That second approach works perfectly; however, due to the size of the application I am working on (200+ JSP files), it is a very cumbersome task to replace all inappropriate uses of EL expressions with the c:out tag. It would also be cumbersome in the future to make sure all developers stick to this convention of using the c:out tag (not to mention how much more unreadable the code would be). Is there an alternative way to escape the output of EL expressions that would require fewer code modifications? Thank you in advance.
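
    [Editor's note] One more escaping form worth noting alongside c:out (a sketch; standard JSTL, though it still has to be applied expression by expression, and user.comment here is a hypothetical bean property):

        <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>

        <%-- equivalent to <c:out value="${user.comment}"/> --%>
        ${fn:escapeXml(user.comment)}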

  • How can I send rich emails using the user's mail client?

    - by Brann
    I need my .NET program to send rich emails (usually containing table data, around 20 columns x 10 rows) using the user's mail infrastructure, allowing him to review/edit the mail before sending it, and storing the mail in his 'Sent Items' folder. mailto: seems the obvious choice, but unfortunately it supports neither attachments nor HTML bodies. It seems some clients support some extra features (e.g. Outlook 97 used to support an &Attach tag, but this is not the case for more recent versions). I could use mailto and try to format the text body to look nice (using tabs, etc.), but this isn't really elegant and wouldn't support huge data. Using automation seems a very huge task, as I would need to automate dozens of clients (4 or 5 versions of Outlook, Lotus Notes, Thunderbird, etc.). This would be a huge task and it's not really my core business. I could send emails through code and write my own mail form to let the user edit the mail, but this would have a lot of drawbacks:

    - the user would need to manually configure the mail server settings
    - he wouldn't have access to his contact directory
    - the mail wouldn't be stored in his Sent Items folder

    This seems a quite common issue, but I haven't found any satisfying solution yet. Does someone know of a library supporting this (i.e. containing automation logic for most mainstream email clients)? Or an alternative to mailto?
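
    [Editor's note] For scale, a sketch of what the automation route looks like for just one client (hedged: assumes a reference to the Outlook interop assembly; htmlTable and filePath are hypothetical variables):

        using Microsoft.Office.Interop.Outlook;

        // create a mail item the user can review, edit, and send themselves;
        // Outlook then files it in Sent Items under the user's own account
        var app = new Application();
        var mail = (MailItem)app.CreateItem(OlItemType.olMailItem);
        mail.HTMLBody = htmlTable;        // rich HTML body with the table data
        mail.Attachments.Add(filePath);   // attachment support that mailto: lacks
        mail.Display(false);              // non-modal compose window for review

    Each other client (Lotus Notes, Thunderbird, ...) would need its own equivalent, which is exactly the objection raised in the post.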

  • Learn Prolog Now! DCG Practice Example

    - by Timothy
    I have been progressing through Learn Prolog Now! as self-study and am now learning about Definite Clause Grammars. I am having some difficulty with one of the Practical Session's tasks. The task reads:

        The formal language a^n b^2m c^2m d^n consists of all strings of the following form: an unbroken block of as followed by an unbroken block of bs followed by an unbroken block of cs followed by an unbroken block of ds, such that the a and d blocks are exactly the same length, and the b and c blocks are also exactly the same length and furthermore consist of an even number of bs and cs respectively. For example, ε, abbccd, and aaabbbbccccddd all belong to a^n b^2m c^2m d^n. Write a DCG that generates this language.

    I am able to write rules that generate a^n d^n, b^2m c^2m, and even a^n b^2m and c^2m d^n, but I can't seem to join all these rules into a^n b^2m c^2m d^n. The following are my rules that can generate a^n d^n and b^2m c^2m:

        s1 --> [].
        s1 --> a, s1, d.
        a --> [a].
        d --> [d].

        s2 --> [].
        s2 --> b, b, s2, c, c.
        b --> [b].
        c --> [c].

    Is a^n b^2m c^2m d^n really a CFG, and is it possible to write a DCG using only what was taught in the lesson (no additional arguments or code, etc.)? If so, can anyone offer me some guidance on how I can join these so that I can solve the given task?
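
    [Editor's note] One observation, offered as a hedged sketch rather than the book's answer: the language is context-free because it is just a^n (b^2m c^2m) d^n, with the b/c block nested inside the a/d block, so the two grammars compose by making s2 the base case of s1:

        % sketch: the b/c block sits exactly where s1's empty base case was
        s --> s2.           % n = 0 (m may also be 0, since s2 --> [])
        s --> a, s, d.      % wrap one a ... d pair around the middle

        s2 --> [].
        s2 --> b, b, s2, c, c.

        a --> [a].
        b --> [b].
        c --> [c].
        d --> [d].

    For instance, s derives abbccd as a, s2, d with one application of the s2 recursion.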

  • High precision event timer

    - by rahul jv
    #include "target.h" #include "xcp.h" #include "LocatedVars.h" #include "osek.h" /** * This task is activated every 10ms. */ long OSTICKDURATION; TASK( Task10ms ) { void XCP_FN_TYPE Xcp_CmdProcessor( void ); uint32 startTime = GetQueryPerformanceCounter(); /* Trigger DAQ for the 10ms XCP raster. */ if( XCPEVENT_DAQ_OVERLOAD & Xcp_DoDaqForEvent_10msRstr() ) { ++numDaqOverload10ms; } /* Update those variables which are modified every 10ms. */ counter16 += slope16; /* Trigger STIM for the 10ms XCP raster. */ if( enableBypass10ms ) { if( XCPEVENT_MISSING_DTO & Xcp_DoStimForEvent_10msRstr() ) { ++numMissingDto10ms; } } duration10ms = (uint32)( ( GetQueryPerformanceCounter() - startTime ) / STOPWATCH_TICKS_PER_US ); } What would be the easiest (and/or best) way to synchronise to some accurate clock to call a function at a specific time interval, with little jitter during normal circumstances, from C++? I am working on WINDOWS operating system now. The above code is for RTAS OSEK but I want to call a function at a specific time interval for windows operating system. Could anyone assist me in c++ language ??

  • YQL + PHP: how to make a Facebook login

    - by Jonathan
    Hi! I was reading some material about the YQL API that Yahoo! provides. I am not sure, but it appears to be a collection of many third-party APIs unified into one common language, right? What I don't get is how to perform a Facebook login through it so I can get the user's profile data. My project is to add a Facebook (and other social networks) login form, because the website won't have its own login; people will have to use a social network to link in. I thought YQL would help me out with this task, so I wouldn't have to develop separate functions for each one of the networks. Reading http://developer.yahoo.com/yql/guide/yql-code-examples.html#sdk_yql , I understood how to perform a Yahoo login so I can access some private data, but I couldn't find how I could do it with Facebook and others. So my questions: can YQL help me with this? Can you give me a simple example of a Facebook session using it within PHP? Are there alternatives to aid me in this task? Thanks, Jonathan

  • How do I tweak columns in a Flat File Destination in SSIS?

    - by theog
    I have an OLE DB source and a Flat File destination in the Data Flow of my SSIS project. The goal is simply to pump data into a text file, and it does that. Where I'm having problems is with the formatting. I need to be able to rtrim() a couple of columns to remove trailing spaces, and I have a couple more that need their leading zeros preserved. The current process loses all the leading zeros. The rtrim() can be done by simple truncation and ignoring the truncation errors, but that's very inelegant and error-prone. I'd like to find a better way, like actually applying the rtrim() function where needed. Exploring similar SSIS questions and answers on SO, the usual advice seems to be "use a Script Task", but that's usually just thrown out there with no details, and it's not at all an intuitive thing to set up. I don't see how to use scripting to do what I need. Do I use a Script Task in the Control Flow, or a Script Component in the Data Flow? Can I do rtrim() and pad strings where needed in a script? Does anybody have an example of doing this or similar things? Many thanks in advance.
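
    [Editor's note] To make the script option concrete, a hedged sketch with hypothetical column names: in the Data Flow this is a Script Component configured as a Transformation, not a Control Flow Script Task (C# shown, which assumes SSIS 2008; SSIS 2005 scripts are VB.NET). A Derived Column with the expression RTRIM([MyColumn]) is the script-free alternative for the trimming half.

        // Script Component (Transformation) between the OLE DB source and the
        // Flat File destination; runs once per row
        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            Row.CustomerName = Row.CustomerName.TrimEnd();      // the rtrim()
            Row.AccountCode  = Row.AccountCode.PadLeft(8, '0'); // restore leading zeros
        }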

  • Running Different Modules on Tomcat/Server

    - by umesh awasthi
    Hi all, I have started working on my own project idea, but at the start I am stuck on the structure decision; lack of knowledge may be the primary factor. I am wondering how to make 2 different modules cooperate on the server. Here is an explanation of what I am trying to ask. As per my design I need 2 modules for my application:

    1. Back-end handling, where I can do all the content handling as well as other admin tasks: a console capable of handling everything from creating and importing content onwards.
    2. User end, which is only an interface for the end user to use the application.

    To visualize it, it's kind of an e-commerce application: one module is back-office management and the other is the user end of the web shop. These two modules will be very much interlinked, but I don't want them to mix, so I want to develop them independently; on the other hand, the admin module is the core and will also serve the user interface. My question is: how can I develop the two modules independently while still having them cooperate to accomplish the task as a whole? Really sorry in advance if I am not making any sense. Thanks in advance.

  • MSBuild: automate collecting of db migration scripts?

    - by P Dub
    Summary of the environment:

    - ASP.NET web application (source stored in svn)
    - SQL Server database (database schema, tables/sprocs, stored in svn)
    - DB version is synced with the web application assembly version (stored in table 'CurrentVersion')
    - CI Hudson server that checks out the web app from the repo and runs a custom MSBuild file to publish/package the app

    My MSBuild script updates the assembly version of the web app (Major.Minor.Revision.Build) on each build. The 'Revision' is set to the currently checked-out svn revision and the 'Build' to the Hudson build number (incremented on each automated build). This way I can match the app to a specific trunk revision and also get other build stats from the Hudson build number. I'd like to automate the collecting of migration scripts (updated sprocs etc.) to add to the zip package. I figure that by comparing the svn revision of the database that has yet to be deployed to with the revision being deployed, I can find which db files have changed in the trunk since the last deployment to that database/environment. This could easily be achieved by manually calling the svn diff -r REVNO:REVNO command to list changed .sql files, which would then have to be added to the package by hand. It would be great if this could be automated. Firstly, I imagine I'll have to write a custom task to check the version of the db that has yet to be deployed to. After that I'm quite unsure. Does anyone have any suggestions on how this could be achieved through an MSBuild task, either existing or custom? Finally, I'll have to auto-generate a script, added to the package, that updates the database version table so as to be in sync with the application.
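
    [Editor's note] A possible starting point for the svn-diff step, as a sketch under assumptions (svn is on the PATH, $(DeployedRev) has already been discovered by the custom task, and the MSBuild version in use supports capturing console output from Exec):

        <!-- list .sql files changed since the last deployment (property names hypothetical) -->
        <Exec Command="svn diff --summarize -r $(DeployedRev):$(BuildRev) db/"
              ConsoleToMSBuild="true">
          <Output TaskParameter="ConsoleOutput" PropertyName="ChangedDbFiles" />
        </Exec>

    The captured list would then be parsed into an item group and handed to the packaging step.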

  • Find the order among tasks in a company by using Prolog

    - by Cem
    First of all, I wish a happy new year to everyone. I have searched a lot and worked on this, but I could not solve the question. I am quite new to Prolog and I must do this homework. In my homework, the question is like this:

        Write a Prolog program that determines a valid order for the tasks to be carried out in a company. The Prolog program will consist of a set of "before" predicates which denote the order between task pairs. Here is an example:

            before(a,b).
            before(a,e).
            before(d,c).
            before(b,c).
            before(c,e).

        Here, task a should be carried out before tasks b and e, d before c, and so on. Hence a valid ordering of the tasks would be [a, b, d, c, e]. The order predicate in your program will be queried as follows:

            ?- order([a,b,c,d,e],X).
            X = [a, b, d, c, e] ;
            X = [a, d, b, c, e] ;
            X = [d, a, b, c, e] ;
            false.

        Hint: Try to generate different orders for the tasks (permutation) and then check if the order is consistent with the "before" relationships given. Even if you can only generate a single valid order, you will get reasonable partial credit.
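
    [Editor's note] One way to realize the hint, offered as a hedged sketch (permutation/2 is assumed to be available as a built-in, as in SWI-Prolog; a real submission may need its own definition):

        % generate a permutation, then reject it if any before/2 pair is inverted
        order(Tasks, Order) :-
            permutation(Tasks, Order),
            \+ ( before(A, B),             % a pair that must run A-then-B ...
                 append(_, [B|After], Order),
                 member(A, After) ).       % ... must not appear as B-then-A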

  • heroku logs --ps run showing nothing

    - by Zarne Dravitzki
    I have two running apps on Heroku: staging and production. They are near-identical environments (staging has extra configs, i.e. RailsFootnotes and the Bullet gem). When I run heroku logs --ps run --app jl-staging it returns logs like:

        2012-08-30T01:30:42+00:00 heroku[run.1]: Starting process with command `bundle exec rake jewellover:warn_users`

    This log line is from a task set to run with the free Heroku Scheduler. Everything works perfectly, but when I do the same with heroku logs --ps run --app jl-production there are no results: no heroku[run.1] process logs at all. Both environments have the same scheduled tasks, albeit at different times, but nonetheless both run their scheduled tasks at the specified times. Is there something I'm missing about heroku[run.1] processes in a production environment? Does Heroku only keep the --ps logs for a certain amount of time? It seems to show less activity than the normal logs; maybe it shows only 24 hours' worth of logs rather than the last 100 lines. I need to log and debug the [run.1] process in the production environment, specifically the jewellover:warn_users task. Any ideas?

  • Using parameterized function calls in SELECT statements. SQL Server

    - by geekzlla
    I have taken over some code from a previous developer and have come across this SQL statement, which calls several SQL functions. As you can see, the function calls in the SELECT list pass a parameter to the function. How does the SQL statement know what value to replace the variable with? For the sample below, how does the query engine know what to replace nDeptID with when it calls fn_SelDeptName_DeptID(nDeptID)? nDeptID IS a column in table Note.

    The SELECT statement:

        SELECT nCustomerID AS [Customer ID],
               nJobID AS [Job ID],
               dbo.fn_SelDeptName_DeptID(nDeptID) AS Department,
               nJobTaskID AS JobTaskID,
               dbo.fn_SelDeptTaskDesc_OpenTask(nJobID, nJobTaskID) AS Task,
               nStandardNoteID AS StandardNoteID,
               dbo.fn_SelNoteTypeDesc(nNoteID) AS [Note Type],
               dbo.fn_SelGPAStandardNote(nStandardNoteID) AS [Standard Note],
               nEntryDate AS [Entry Date],
               nUserName AS [Added By],
               nType AS Type,
               nNote AS Note
        FROM Note
        WHERE nJobID = 844261
        ORDER BY nJobID, Task, [Entry Date]

    The function fn_SelDeptName_DeptID:

        ALTER FUNCTION [dbo].[fn_SelDeptName_DeptID] (@iDeptID int)
        RETURNS varchar(25)
        -- Used by DataCollection for Job Tracking
        -- if the Department isn't found, return an empty string
        BEGIN
            -- Return the Department name for the given DeptID.
            DECLARE @strDeptName varchar(25)
            IF @iDeptID = 0
                SET @strDeptName = ''
            ELSE
            BEGIN
                SET @strDeptName = (SELECT dName FROM Department WHERE dDeptID = @iDeptID)
                IF (@strDeptName IS NULL)
                    SET @strDeptName = ''
            END
            RETURN @strDeptName
        END

    Thanks in advance.
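
    [Editor's note] To illustrate the per-row binding being asked about (an editor's sketch, not from the post): nDeptID is read from each row of Note as the row is produced, so the scalar function call behaves roughly like a correlated subquery:

        -- roughly equivalent per-row evaluation (sketch; omits the
        -- function's empty-string handling for 0 / missing departments)
        SELECT (SELECT dName FROM Department WHERE dDeptID = Note.nDeptID) AS Department
        FROM Note
        WHERE nJobID = 844261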

  • How to hold a queue of messages and have a group of worker threads without polling?

    - by Mark
    I have a workflow that I want to look something like this:

                                                 / Worker 1 \
        =Request Channel= - [Holding Queue|||] -   Worker 2   - =Response Channel=
                                                 \ Worker 3 /

    That is:

    - Requests come in and enter a FIFO queue
    - Identical workers then pick up tasks from the queue
    - At any given time, any worker may work on only one task
    - When a worker is free and the holding queue is non-empty, the worker should immediately pick up another task
    - When a task is complete, the worker places the result on the Response Channel

    I know there are QueueChannels in Spring Integration, but these channels require polling (which seems suboptimal). In particular, if a worker can be busy, I'd like the worker to be busy. Also, I've considered avoiding the queue altogether and simply letting tasks round-robin to all workers, but it's preferable to have a single waiting line, as some tasks may be accomplished faster than others. Furthermore, I'd like insight into how many jobs remain (which I can get from the queue) and the ability to cancel all or particular jobs. How can I implement this message-queuing/work-distribution pattern while avoiding polling?

    Edit: It appears I'm looking for the Message Dispatcher pattern. How can I implement this using Spring/Spring Integration?
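
    [Editor's note] For contrast with the polling QueueChannel, a hedged plain-JDK sketch of the same shape (not Spring Integration itself): a fixed pool of three workers blocking on one FIFO queue, with no polling loop:

        import java.util.concurrent.*;

        public class HoldingQueueSketch {
            public static void main(String[] args) {
                // one FIFO holding queue feeding three identical workers; idle
                // workers block on the queue, so a free worker picks up the
                // next task immediately
                BlockingQueue<Runnable> holdingQueue = new LinkedBlockingQueue<Runnable>();
                ExecutorService workers =
                        new ThreadPoolExecutor(3, 3, 0L, TimeUnit.MILLISECONDS, holdingQueue);

                Future<?> job = workers.submit(new Runnable() {
                    public void run() {
                        // handle one request; place the result on the response channel here
                    }
                });

                System.out.println("jobs waiting: " + holdingQueue.size()); // queue insight
                job.cancel(true);   // cancel one job; workers.shutdownNow() cancels all
                workers.shutdown();
            }
        }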

  • Execute a block of database queries

    - by Nightmare
    I have the following task to complete: in my program I have a block of database queries, or "questions". I want to execute these questions and wait for the results of all of them, or catch an error if any question fails! My question object looks like this (simplified):

        public class DbQuestion(String sql)
        {
            [...]
        }

        [...]
        // The answer is just a holder for custom data...
        public void SetAnswer(DbAnswer answer)
        {
            // Store the answer in the question and fire an event to the listeners
            this.OnAnswered(EventArgs.Empty);
        }

        [...]
        public void SetError()
        {
            // Signal an error in this query!
            this.OnError(EventArgs.Empty);
        }

    So every question fired at the database has a listener that waits for the parsed result. Now I want to fire some questions asynchronously at the database (max. 5 or so) and raise an event with the data from all questions, or an error if even one question throws one! What is the best way (or a good way) to accomplish this task? Can I really execute more than one question in parallel, and stop all my work when one question throws an error? I think I need some inspiration on this... Just a note: I'm working with the .NET Framework 2.0.
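
    [Editor's note] A hedged sketch of one .NET 2.0-era shape for this (no Task Parallel Library yet; Execute and the surrounding class are hypothetical): queue each question on the thread pool, count completions with Interlocked, and signal when all are done:

        using System.Collections.Generic;
        using System.Threading;

        // run all questions in parallel, then report either success or the errors
        static void RunBlock(IList<DbQuestion> questions)
        {
            int pending = questions.Count;
            List<Exception> errors = new List<Exception>();
            using (ManualResetEvent allDone = new ManualResetEvent(false))
            {
                foreach (DbQuestion q in questions)
                {
                    DbQuestion question = q;        // capture per iteration
                    ThreadPool.QueueUserWorkItem(delegate
                    {
                        try { Execute(question); }  // hypothetical: runs the query, calls SetAnswer
                        catch (Exception ex) { lock (errors) errors.Add(ex); }
                        if (Interlocked.Decrement(ref pending) == 0) allDone.Set();
                    });
                }
                allDone.WaitOne();                  // block until every question answered or failed
            }
            // fire the aggregate "done" or "error" event here, based on errors.Count
        }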
