Search Results

Search found 8299 results on 332 pages for 'job hierarchy'.


  • python os.mkfifo() for Windows

    - by user302099
    Hello. Short version (if you can answer the short version, it does the job for me; the rest is mainly for the benefit of other people with a similar task): In Python on Windows, I want to create two file objects, attached to the same "file" (it doesn't have to be an actual file on the hard drive), one for reading and one for writing, such that if the reading end tries to read, it will never get EOF (it will just block until something is written). I think that on Linux os.mkfifo() would do the job, but on Windows it doesn't exist. What can be done? (I must use file objects.)

    Some extra details: I have a Python module (not written by me) that plays a certain game through stdin and stdout (using raw_input() and print). I also have a Windows executable playing the same game, likewise through stdin and stdout. I want to make them play one against the other, and log all their communication.

    Here's the code I can write (the get_fifo() function is not implemented, because that's the part I don't know how to do on Windows):

        class Pusher(Thread):
            def __init__(self, source, dest, p1, name):
                Thread.__init__(self)
                self.source = source
                self.dest = dest
                self.name = name
                self.p1 = p1

            def run(self):
                while (self.p1.poll() is None) and \
                      (not self.source.closed) and (not self.dest.closed):
                    line = self.source.readline()
                    logging.info('%s: %s' % (self.name, line[:-1]))
                    self.dest.write(line)
                    self.dest.flush()

        exe_to_pythonmodule_reader, exe_to_pythonmodule_writer = get_fifo()
        pythonmodule_to_exe_reader, pythonmodule_to_exe_writer = get_fifo()

        p1 = subprocess.Popen(exe, shell=False,
                              stdin=subprocess.PIPE, stdout=subprocess.PIPE)

        old_stdin = sys.stdin
        old_stdout = sys.stdout
        sys.stdin = exe_to_pythonmodule_reader
        sys.stdout = pythonmodule_to_exe_writer

        push1 = Pusher(p1.stdout, exe_to_pythonmodule_writer, p1, '1')
        push2 = Pusher(pythonmodule_to_exe_reader, p1.stdin, p1, '2')
        push1.start()
        push2.start()

        ret = pythonmodule.play()

        sys.stdin = old_stdin
        sys.stdout = old_stdout
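
    A minimal sketch of one way to fill in get_fifo() on Windows, assuming an anonymous OS-level pipe is acceptable: os.pipe() exists on Windows, the read end blocks until data arrives, and EOF only appears once the write end is closed.

        import os

        def get_fifo():
            # Sketch: an anonymous pipe wrapped in file objects.
            # Reads block until the writer produces data; EOF is seen
            # only after the write end is closed.
            read_fd, write_fd = os.pipe()
            return os.fdopen(read_fd, 'r'), os.fdopen(write_fd, 'w')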


  • Why would paperclip not assign an ID to my uploaded photos?

    - by Trip
    I just deployed to a cluster server, and my delayed_job recipe was overwritten in the process. I solved that; delayed_job is up and running, but it can't find the ID of images that are uploaded. The images are saved correctly:

        Processing PhotosController#create (for 173.161.167.41 at 2010-06-01 05:09:14) [POST]
          Parameters: {"Filename"=>"1.jpg", "gallery_id"=>"1298", "action"=>"create", "amp"=>nil,
            "authenticity_token"=>"qmbnpwFY8a5E3YtS/4fMWF/Z8evCE4hMxqKVJw0I7Ek=", "Upload"=>"Submit Query",
            "controller"=>"photos", "organization_id"=>"470",
            "_hq_channel_session"=>"BAh7CSIYdXNlcl9jcmVkZW50aWFsc19pZGkHIhV1c2VyX2NyZWRlbnRpYWxzIgGAOGRlZDc0NGJlOWU3NTNlNDFlYmVlMDdjMzIzYjA1ZjQxNGE5ZDY4YjNmYjFmNjNkMDQ2OWY2ZDQyOTljZDhiMDFlNmRkMDljNThmMzBmOWJhMTIwNDhkMDI5MTMxYmU5MDczYjIxZmI4YmQxMDVlMTBmNjZmOWFhODE1ZTBjMGM6EF9jc3JmX3Rva2VuIjFxbWJucHdGWThhNUUzWXRTLzRmTVdGL1o4ZXZDRTRoTXhxS1ZKdzBJN0VrPToPc2Vzc2lvbl9pZCIlMjAwMDQ3ZDQ3ZWUyZTgzODIxYzdjOGI3OTdmZGJiMDM=--ac6aa580262938bf5a4d6b9a740722b680eb5d48",
            "Filedata"=>#}
        [paperclip] Saving attachments.
        [paperclip] saving /data/HQ_Channel/releases/20100530153454/public/system/photos/9253/original/1.jpg
        [paperclip] Saving attachments.
        [paperclip] Saving attachments.
        Completed in 127ms (View: 2, DB: 91) | 200 OK [http://invent.hqchannel.com/organizations/470/media/galleries/1298/photos?_hq_channel_session=BAh7CSIYdXNlcl9jcmVkZW50aWFsc19pZGkHIhV1c2VyX2NyZWRlbnRpYWxzIgGAOGRlZDc0NGJlOWU3NTNlNDFlYmVlMDdjMzIzYjA1ZjQxNGE5ZDY4YjNmYjFmNjNkMDQ2OWY2ZDQyOTljZDhiMDFlNmRkMDljNThmMzBmOWJhMTIwNDhkMDI5MTMxYmU5MDczYjIxZmI4YmQxMDVlMTBmNjZmOWFhODE1ZTBjMGM6EF9jc3JmX3Rva2VuIjFxbWJucHdGWThhNUUzWXRTLzRmTVdGL1o4ZXZDRTRoTXhxS1ZKdzBJN0VrPToPc2Vzc2lvbl9pZCIlMjAwMDQ3ZDQ3ZWUyZTgzODIxYzdjOGI3OTdmZGJiMDM%3D--ac6aa580262938bf5a4d6b9a740722b680eb5d48&authenticity_token=qmbnpwFY8a5E3YtS%2F4fMWF%2FZ8evCE4hMxqKVJw0I7Ek%3D]

    And then delayed_job keeps spinning around in circles on this one:

        2010-06-01T05:09:02-0700: * [Worker(delayed_job host:ip-10-251-197-159 pid:19994)] acquired lock on PhotoJob
        2010-06-01T05:09:02-0700: * [JOB] delayed_job host:ip-10-251-197-159 pid:19994 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9247 - 0 failed attempts
        2010-06-01T05:09:02-0700: * [Worker(delayed_job host:ip-10-251-197-159 pid:19994)] acquired lock on PhotoJob
        2010-06-01T05:09:02-0700: * [JOB] delayed_job host:ip-10-251-197-159 pid:19994 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9245 - 0 failed attempts
        2010-06-01T05:09:02-0700: * [Worker(delayed_job host:ip-10-251-197-159 pid:19994)] acquired lock on PhotoJob

    So what I get is that the photos are not being assigned IDs by paperclip. Anyone know where I could poke and pry from here?

    UPDATE: I created a clone application on a single server, and there are no problems there. The images on the cluster do show up (occasionally). If I keep clicking on the folders that lead to photos, half the time it will return a 404 with the photo not found, and the other half it will present the photo. So the problem has got to be with how ActiveRecord interacts across multiple servers.


  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas about how to go about it, but was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine that a user is searching for a job. The job areas would be as follows:

        1: Scotland
        2: --- West Central
        3: ------ Glasgow
        4: ------ Etc
        5: --- North East
        6: ------ Ayrshire
        7: ------ Etc

    A user can search specific (i.e. Glasgow) or in a larger area (i.e. Scotland). The two approaches I am considering are:

    1. Keep a note of children in the database for each record (i.e. category 1 would have 2, 3, 4 in its children field) and query against that record with a SELECT * FROM Jobs WHERE Category IN Areas.childrenField.
    2. Use a recursive function to find all results which have a relation to the selected area.

    The problems I see with these are:

    1. Holding this data in the db will mean having to keep track of all changes to the structure.
    2. Recursion is slow and inefficient.

    Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
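
    If the area table is small enough to load once, the recursive expansion in approach 2 doesn't have to touch the database per level: build a parent-to-children map in memory, collect the descendant ids, and issue a single IN (...) query. A sketch of that expansion (Python for brevity; the logic ports directly to C#, and the names are hypothetical):

        from collections import defaultdict

        def descendant_ids(area_rows, root_id):
            # area_rows: iterable of (id, parent_id) pairs from the Areas table
            children = defaultdict(list)
            for area_id, parent_id in area_rows:
                children[parent_id].append(area_id)
            found, stack = [], [root_id]
            while stack:                     # iterative walk, no recursion depth limit
                current = stack.pop()
                found.append(current)
                stack.extend(children[current])
            return found

        # ids = descendant_ids(rows, 1)  ->  feed into
        # SELECT * FROM Jobs WHERE Category IN (1, 2, 3, ...)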


  • Should I pass a SqlDataReader by reference or not when passing it out to multiple threads?

    - by deroby
    Hi all, being new to C# I've run into this conundrum when passing a SqlDataReader around between different threads. Without going into too much detail, the idea is to have a main thread fetching data from the database (a large recordset) and then have a helper task run through it record by record, doing some work based on the contents. There is no feedback to the recordset; it simply wades through until no records are left. This works fine, but given the nature of the job at hand, it should be possible to spread the work over different threads (CPUs) to maximize throughput (the order of execution is of no significance).

    The question then becomes: when I pass this recordset as a SqlDataReader, do I have to use ref or not? It boils down to this: if I pass the object around without specifying ref, won't it create new copies in memory and have records processed n times? Or don't I risk having the record position moved forward while not all fields have been fully read yet? The latter seems more like a data-racing issue and is probably covered by the lock()ing mechanism (or not?).

    My initial take on the problem was that it doesn't really hurt to pass the variable using ref, yet as a colleague put it: "you only need ref when you're doing something wrong" =) Additionally, using ref prevents me from applying a using() construction, which isn't very nice either. I thus created a "basic" project that takes the same approach but without the ref notation. Tests so far show that it works flawlessly on a Core2Duo (2 CPUs) using any number of threads, yet I'm still a bit wary... What do you experts think about this? Use ref or not? You can find the test project here, as it seems I can't upload it to this question directly?!?

    ps: it's just a test project and I'm new to C#, so please be gentle on me when breaking down the code =P
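
    Two facts frame the answer: C# passes reference types by value-of-reference, so omitting ref never copies the reader - every thread sees the same object; and a single shared reader is not safe to advance from several threads at once, which is the real race here. The usual shape is to let one thread own the reader and hand fully-read rows to workers over a thread-safe queue. A sketch of that pattern (Python used for illustration; the structure is language-neutral and the names are hypothetical):

        import queue
        import threading

        rows = queue.Queue(maxsize=100)
        DONE = object()                      # sentinel telling workers to stop

        def producer(reader, n_workers):
            # One thread owns the reader; each record is copied out
            # completely before anyone else can touch it.
            for record in reader:
                rows.put(record)
            for _ in range(n_workers):
                rows.put(DONE)

        def worker(process):
            # process: hypothetical per-record callable
            while True:
                record = rows.get()
                if record is DONE:
                    break
                process(record)

        # wire-up, roughly:
        # threading.Thread(target=producer, args=(some_reader, 4)).start()
        # for _ in range(4): threading.Thread(target=worker, args=(print,)).start()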


  • Using parameterized function calls in SELECT statements (SQL Server)

    - by geekzlla
    I have taken over some code from a previous developer and have come across this SQL statement, which calls several SQL functions. As you can see, the function calls in the select list pass a parameter to the function. How does the SQL statement know what value to replace the variable with? For the sample below, how does the query engine know what to replace nDeptID with when it calls fn_SelDeptName_DeptID(nDeptID)? nDeptID IS a column in table Note.

    SELECT statement:

        SELECT nCustomerID AS [Customer ID],
               nJobID AS [Job ID],
               dbo.fn_SelDeptName_DeptID(nDeptID) AS Department,
               nJobTaskID AS JobTaskID,
               dbo.fn_SelDeptTaskDesc_OpenTask(nJobID, nJobTaskID) AS Task,
               nStandardNoteID AS StandardNoteID,
               dbo.fn_SelNoteTypeDesc(nNoteID) AS [Note Type],
               dbo.fn_SelGPAStandardNote(nStandardNoteID) AS [Standard Note],
               nEntryDate AS [Entry Date],
               nUserName AS [Added By],
               nType AS Type,
               nNote AS Note
        FROM Note
        WHERE nJobID = 844261
        ORDER BY nJobID, Task, [Entry Date]

    Function fn_SelDeptName_DeptID:

        ALTER FUNCTION [dbo].[fn_SelDeptName_DeptID] (@iDeptID int)
        RETURNS varchar(25)
        -- Used by DataCollection for Job Tracking
        -- if the Department isn't found, return an empty string
        BEGIN
            -- Return the Department name for the given DeptID.
            DECLARE @strDeptName varchar(25)
            IF @iDeptID = 0
                SET @strDeptName = ''
            ELSE
            BEGIN
                SET @strDeptName = (SELECT dName FROM Department WHERE dDeptID = @iDeptID)
                IF (@strDeptName IS NULL)
                    SET @strDeptName = ''
            END
            RETURN @strDeptName
        END

    Thanks in advance.
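
    Briefly: nDeptID in the select list isn't a free variable, it's a column reference, so for each row the engine produces it calls the scalar function with that row's nDeptID value. As a rough analogy (hypothetical names; Python used purely for illustration):

        # one function call per output row, fed by that row's column value
        departments = [fn_sel_dept_name_dept_id(row["nDeptID"])
                       for row in note_rows
                       if row["nJobID"] == 844261]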


  • how to make the parser print a help message rather than an error and exit

    - by fluter
    Hi, I am using argparse to handle command-line args. I want the parser to print the help message if no args are specified, but right now it prints an error and then exits. My code is:

        def main():
            print "in abing/start/main"
            parser = argparse.ArgumentParser(prog="abing")  # , usage="%(prog)s <command> [args] [--help]")
            parser.add_argument("-v", "--verbose", action="store_true", default=False,
                                help="show verbose output")
            subparsers = parser.add_subparsers(title="commands")

            bkr_subparser = subparsers.add_parser("beaker", help="beaker inspection")
            bkr_subparser.set_defaults(command=beaker_command)
            bkr_subparser.add_argument("-m", "--max", action="store", default=3, type=int,
                                       help="max resubmit count")
            bkr_subparser.add_argument("-g", "--grain", action="store", default="J",
                                       choices=["J", "RS", "R", "T", "job", "recipeset", "recipe", "task"],
                                       type=str, help="resubmit selection granularity")
            bkr_subparser.add_argument("job_ids", nargs=1, action="store",
                                       help="list of job id to be monitored")

            et_subparser = subparsers.add_parser("errata", help="errata inspection")
            et_subparser.set_defaults(command=errata_command)
            et_subparser.add_argument("-w", "--workflows", action="store_true",
                                      help="generate workflows for the erratum")
            et_subparser.add_argument("-r", "--run", action="store_true",
                                      help="generate workflows, and run for the erratum")
            et_subparser.add_argument("-s", "--start-monitor", action="store_true",
                                      help="start monitor the errata system")
            et_subparser.add_argument("-d", "--daemon", action="store_true",
                                      help="run monitor into daemon mode")
            et_subparser.add_argument("erratum", action="store", nargs=1, metavar="ERRATUM",
                                      help="erratum id")

            if len(sys.argv) == 1:
                parser.print_help()
                return

            args = parser.parse_args()
            args.command(args)
            return

    How can I do that? Thanks.
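
    If the goal is help on every bad invocation (including a subcommand called with missing arguments), not just an empty command line, one option is to subclass ArgumentParser and override the error() hook that argparse calls on any parse failure. A minimal sketch:

        import argparse
        import sys

        class HelpfulParser(argparse.ArgumentParser):
            def error(self, message):
                # argparse calls error() on any parse failure; show the
                # full help instead of just the terse usage line
                sys.stderr.write("error: %s\n\n" % message)
                self.print_help(sys.stderr)
                sys.exit(2)

        parser = HelpfulParser(prog="abing")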


  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical and they're all using RAID. To date, I've therefore been doing backups by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade on one of the boxes); long-term, the process isn't sustainable. Each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving out stuff which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is, how should I be doing this properly? The requirements are:

    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup is what tar was created for - create a tar file once each month, add to it incrementally, and rsync the results to the remote box. But others probably have better suggestions.
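
    One widely used answer to exactly this shape of problem is snapshot-style incrementals with rsync --link-dest (the approach that tools such as rsnapshot and rdiff-backup package up): every day looks like a full tree on the remote side, but unchanged files are hard links into the previous day's snapshot, so the daily cost is only what changed. A minimal cron-able sketch, with placeholder host and paths, assuming yesterday's snapshot already exists:

        import datetime
        import subprocess

        DEST = "backup@offsite.example.com:/backups/thisbox"   # placeholder

        def snapshot():
            today = datetime.date.today()
            yesterday = today - datetime.timedelta(days=1)
            subprocess.check_call([
                "rsync", "-a", "--delete",
                # unchanged files become hard links into the previous snapshot
                "--link-dest=../" + yesterday.isoformat(),
                "/etc", "/var/backups/mysql", "/home",
                "%s/%s/" % (DEST, today.isoformat()),
            ])

        if __name__ == "__main__":
            snapshot()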


  • Loading views dynamically

    - by Dan
    Case 1: I created a view-based sample application and tried to execute the code below. When I press the "Job List" button, it should load another view with a "Back Btn" on it. In the test method, if I use [self.navigationController pushViewController:jbc animated:YES]; nothing gets loaded, but if I use [self presentModalViewController:jbc animated:YES]; it loads the other view, with "Back Btn" on it.

    Case 2: I created another navigation-based application and used [self.navigationController pushViewController:jbc animated:YES]; it worked as I wanted.

    Can someone please explain why it was not working in Case 1? Does it have something to do with the type of project that is selected?

        @interface MWViewController : UIViewController {
        }
        - (void)test;
        @end

        @interface JobViewCtrl : UIViewController {
        }
        @end

        @implementation MWViewController

        - (void)viewDidLoad {
            UIButton* btn = [UIButton buttonWithType:UIButtonTypeRoundedRect];
            btn.frame = CGRectMake(80, 170, 150, 35);
            [btn setTitle:@"Job List!" forState:UIControlStateNormal];
            [btn addTarget:self action:@selector(test)
                forControlEvents:UIControlEventTouchUpInside];
            [self.view addSubview:btn];
            [super viewDidLoad];
        }

        - (void)test {
            JobViewCtrl* jbc = [[JobViewCtrl alloc] init];
            [self.navigationController pushViewController:jbc animated:YES];
            //[self presentModalViewController:jbc animated:YES];
            [jbc release];
        }

        - (void)dealloc {
            [super dealloc];
        }

        @end

        @implementation JobViewCtrl

        - (void)loadView {
            self.view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
            self.view.backgroundColor = [UIColor grayColor];
            UIButton* btn = [UIButton buttonWithType:UIButtonTypeRoundedRect];
            btn.frame = CGRectMake(80, 170, 150, 35);
            [btn setTitle:@"Back Btn!" forState:UIControlStateNormal];
            [self.view addSubview:btn];
        }

        @end


  • Are we in demand?

    - by dotnetdev
    I was made redundant at the end of November. This wasn't because I lacked the required skills (although I'm a youngster and, in career terms, a junior dev - though I knew a lot more than was called for in my job). Anyway, I was laid off due to the whole recession/credit crunch thing going on. I worked for a small company, money got tight, and I had to go. I haven't made a thread about this before, but I have seen threads about others being laid off and experiencing a similar fate. This leads me to the question: What is the job market like for developers? Are we in demand?

    I ask this question on a global level, but I live in London, UK (in case anyone comes across this thread from the same area). I am a .NET dev, but my secondary skillset is Flex (ActionScript too) and Java, which my personal portfolio is made with. I hope to be strong enough in this to do it commercially, with a few more months of practise. Then I will have more jobs applicable to me. Unfortunately, I use agencies and sites like Jobserve/Monster.com, but no new jobs are ever posted on there - so when you've applied to all the relevant jobs, then what? What's more, a lot of companies are putting a freeze on recruitment. Thanks


  • How can I keep curl output out of mail from my cronjob?

    - by Russell C.
    I have written a Perl script, run as a daily crontab job, that uploads files to Amazon S3 via curl. I want the output of the cron job emailed to me, which works fine, but I don't want that email to include the messages related to the curl upload (only those my script is outputting). Here are the curl-related messages I'm seeing in the daily email right now:

          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
          0  230M    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
          0  230M    0     0    0  544k      0  1519k  0:02:35 --:--:--  0:02:35 1807k
          0  230M    0     0    0 1744k      0  1286k  0:03:03  0:00:01  0:03:02 1342k
          1  230M    0     0    1 2880k      0  1219k  0:03:13  0:00:02  0:03:11 1250k
          1  230M    0     0    1 4016k      0  1198k  0:03:17  0:00:03  0:03:14 1218k
          2  230M    0     0    2 5168k      0  1186k  0:03:19  0:00:04  0:03:15 1202k
          2  230M    0     0    2 6336k      0  1181k  0:03:19  0:00:05  0:03:14 1157k
          3  230M    0     0    3 7488k      0  1177k  0:03:20  0:00:06  0:03:14 1147k
          3  230M    0     0    3 8592k      0  1167k  0:03:22  0:00:07  0:03:15 1142k
          4  230M    0     0    4 9744k      0  1166k  0:03:22  0:00:08  0:03:14 1145k
          4  230M    0     0    4 10.6M      0  1163k  0:03:23  0:00:09  0:03:14 1142k
          5  230M    0     0    5 11.7M      0  1161k  0:03:23  0:00:10  0:03:13 1140k
          5  230M    0     0    5 12.8M      0  1158k  0:03:23  0:00:11  0:03:12 1133k
          6  230M    0     0    6 13.9M      0  1155k  0:03:24  0:00:12  0:03:12 1138k
          6  230M    0     0    6 15.0M      0  1155k  0:03:24  0:00:13  0:03:11 1138k
          7  230M    0     0    7 16.1M      0  1152k  0:03:25  0:00:14  0:03:11 1131k
          7  230M    0     0    7 17.2M      0  1152k  0:03:25  0:00:15  0:03:10 1132k
          7  230M    0     0    7 18.4M      0  1152k  0:03:24  0:00:16  0:03:08 1140k

    I am using a simple Perl system() call to invoke curl. Does anyone know what command-line argument I can supply to curl to turn off the reporting of upload progress?
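
    curl's -s/--silent switch suppresses exactly this progress meter, and adding -S/--show-error keeps genuine errors visible on stderr, so the fix is to add those flags to the command the script builds. A sketch of the invocation (shown from Python for illustration; the Perl system() call takes the same flags, and the file and URL here are placeholders):

        import subprocess

        # -s silences the progress meter; -S still reports real errors
        subprocess.check_call([
            "curl", "-s", "-S",
            "--upload-file", "backup.tar.gz",
            "https://s3.amazonaws.com/example-bucket/backup.tar.gz",
        ])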


  • How to focus a control in an MDIParent when all child windows have closed?

    - by Tim Gradwell
    I have a control in my parent MDI form which I want to be given the focus when all MDI children have closed. I've tried hooking the child form's FormClosed event and setting the focus from there, but when I test it, my control is not left with focus after I close the MDI child. Can anyone tell me what I'm missing?

    In the sample below, "first focus" gets hit and does its job correctly (if I comment out the first focus line, my tree does not get focus on startup, so it must be doing its job, right?). Unfortunately, even though "second focus" gets hit, my tree does not end up with the focus when I close the child window.

    Sample:

        using System;
        using System.Windows.Forms;

        namespace mdiFocus
        {
            class ParentForm : Form
            {
                public ParentForm()
                {
                    IsMdiContainer = true;
                    tree = new TreeView();
                    tree.Nodes.Add("SomeNode");
                    tree.Dock = DockStyle.Left;
                    Controls.Add(tree);
                }

                protected override void OnShown(EventArgs e)
                {
                    Form child = new Form();
                    child.MdiParent = this;
                    child.Show();
                    child.FormClosed += new FormClosedEventHandler(child_FormClosed);
                    tree.Focus(); // first focus works ok
                }

                void child_FormClosed(object sender, FormClosedEventArgs e)
                {
                    tree.Focus(); // second focus doesn't seem to work, even though it is hit :(
                }

                TreeView tree;
            }

            static class Program
            {
                [STAThread]
                static void Main()
                {
                    Application.Run(new ParentForm());
                }
            }
        }


  • shell script segment to avoid overwriting files

    - by johndashen
    I have a perl script (or any executable) E which will take a file foo.xml and write a file foo.txt. I use a Beowulf cluster to run E for a large number of XML files, but I'd like to write a simple job server script in shell (bash) which doesn't overwrite existing txt files. I'm currently doing something like:

        #!/bin/sh
        PATTERN="[A-Z]*0[1-2][a-j]"; # this matches foo in all cases
        todo=`ls *.xml | grep $PATTERN`;
        isdone=`ls *.foo | grep $PATTERN`;
        whatsleft=todo - isdone; # what's the unix magic?
        # and then call the job server;
        jobserve E "$whatsleft";

    and then I don't know how to get the difference between $todo and $isdone. I'd prefer using sort/uniq to something like a for loop with grep inside, but I'm not sure how to do it (pipes? temporary files?). As a bonus question, is there a way to do lookahead search in bash grep?
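
    The usual unix magic for a set difference on sorted lists is comm -23 list1 list2 (bash process substitution, <(...), avoids temporary files), and for the bonus question: grep -P enables Perl-compatible regexes including lookahead, where grep is built with PCRE support. If the job server ever migrates to Python, the same difference is a couple of lines - a sketch assuming, per the prose, that the outputs are .txt files:

        import glob

        # basenames that have an .xml input but no .txt output yet
        todo = {name[:-len(".xml")] for name in glob.glob("*.xml")}
        isdone = {name[:-len(".txt")] for name in glob.glob("*.txt")}
        whatsleft = sorted(todo - isdone)
        print("\n".join(whatsleft))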


  • What would happen if the same file is read and appended to at the same time (Python)?

    - by Shane
    I'm writing a script using two separate threads, one doing a file-reading operation and the other doing appending; both threads run fairly frequently. My question is: if one thread happens to read the file while the other is in the middle of appending a string such as "This is a test", what will happen?

    I know that if you are appending a smaller-than-buffer string, then no matter how frequently you read the file from other threads, you will never see an incomplete line such as "This i" in what you read. I mean the OS will either: append "This is a test", then read from the file; or: read from the file, then append "This is a test" - and this would never happen: append "This i", read from the file, append "s a test".

    But if "This is a test" is big enough (assuming it's a bigger-than-buffer string), the OS can't do the append in one operation, so the job is divided in two: first append "This i" to the file, then append "s a test". In that kind of situation, if I happen to read the file in the middle of the whole appending operation, would I get this: append "This i", read from the file, append "s a test" - which means I might read a file that includes an incomplete string?
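
    Nothing in Python promises atomicity for a large append racing a read, so the safe answer is to serialize the two operations yourself rather than reason about buffer sizes. A minimal sketch with a shared threading.Lock (the file name is a placeholder):

        import threading

        file_lock = threading.Lock()

        def append_text(text):
            with file_lock:                  # writers and readers share one lock
                with open("shared.txt", "a") as f:
                    f.write(text)

        def read_text():
            with file_lock:                  # never observes a half-written append
                with open("shared.txt") as f:
                    return f.read()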


  • Why do you program? Why do you do what you do? [closed]

    - by Pirate for Profit
    To me, writing a new program is like a puzzle. Before you write any code for a large system, you have to carefully craft each piece in your mind and imagine how all the pieces will fit together. If you don't, your solution may end up being undefined. What I mean is, I often don't know what I'm doing, so I'll come to this site and beg for a code snippet, and then somehow try to hack it into my projects.

    I started writing GW-BASIC when I was around 8 years old. It progressed from there: I went to a California university and did some Python and C++, but really didn't learn anything (college = highsk00l++). I've mostly been self-taught. It took a while to break bad habits, and I'd say only in recent years would I consider myself understanding of design patterns and all that stuff (no, but honestly, procedural dudes, I would not want to design and maintain a large system procedurally - yous crazy).

    And despite my username, money has NOT been a big motivator. I've gone from job to job; I can usually get the work done perfectly and very quickly, and any delays on my part are understandable (well, about as understandable as it gets in the industry). But I ain't gonna work for peanuts, because I got mouths to feed.

    Why do you program? Why do you do what you do?


  • Coding the R-ight way - avoiding the for loop

    - by mropa
    I am going through one of my .R files, and by cleaning it up a little bit I am trying to get more familiar with writing the code the r-ight way. As a beginner, one of my favorite starting points is to get rid of the for() loops and transform the expression into a functional programming form. So here is the scenario: I am assembling a bunch of data.frames into a list for later usage:

        dataList <- list(dataA, dataB, dataC, dataD, dataE)

    Now I'd like to take a look at each data.frame's column names and substitute certain character strings, e.g. substitute each "foo" and "bar" with "baz". At the moment I am getting the job done with a for() loop, which looks a bit awkward:

        colnames(dataList[[1]])
        [1] "foo"  "code" "lp15" "bar"  "lh15"

        colnames(dataList[[2]])
        [1] "a"    "code" "lp50" "ls50" "foo"

        matchVec <- c("foo", "bar")
        for (i in seq(dataList)) {
            for (j in seq(matchVec)) {
                colnames(dataList[[i]])[grep(pattern=matchVec[j],
                                             x=colnames(dataList[[i]]))] <- c("baz")
            }
        }

    Since I am working with a list here, I thought about the lapply function. My attempts at handling the job with lapply all seem alright, but only at first sight. If I write:

        f <- function(i, xList) {
            gsub(pattern=c("foo"), replacement=c("baz"), x=colnames(xList[[i]]))
        }
        lapply(seq(dataList), f, xList=dataList)

    the last line prints out almost what I am looking for. However, if I take another look at the actual names of the data.frames in dataList:

        lapply(dataList, colnames)

    I see that no changes have been made to the initial character strings. So how can I rewrite the for() loop and transform it into a functional programming form? And how do I substitute both strings, "foo" and "bar", in an efficient way, given that the gsub() function takes as its pattern argument only a character vector of length one?
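
    Two observations close most of the gap: gsub's pattern is a regular expression, so a single alternation "foo|bar" substitutes both strings in one pass, and lapply never modifies its input - it returns a new list that has to be assigned back (the f above also only returns the new name vectors without attaching them to the frames). The same transform-and-rebind idiom, illustrated in Python with hypothetical data since that is the shape of the fix:

        import re

        def rename(columns):
            # alternation handles both target strings in one pass
            return [re.sub(r"foo|bar", "baz", name) for name in columns]

        tables = {"dataA": ["foo", "code", "lp15", "bar", "lh15"],
                  "dataB": ["a", "code", "lp50", "ls50", "foo"]}

        # the transform returns new objects; rebind the result,
        # don't expect the originals to change in place
        tables = {key: rename(cols) for key, cols in tables.items()}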


  • First Time Working With Others?

    - by cam
    I've been at my very first programming job for about 8 months now, and I've learned an incredible amount so far. Unfortunately, I'm the sole developer for a small startup company, working on internal applications. For the first time ever, though, I'll be handing some of my projects off to someone else when I leave this job. I've documented all my projects thoroughly (at least I think so), but I still feel nervous about someone else reading my code. For example, I've always done this sort of thing:

        for (int i = 0; i < blah.length; i++)
        {
            // Do stuff
        }

    Should I name 'i' something descriptive? It's only a temporary variable that will only exist within that loop, and it seems pretty obvious what the loop does with 'i'. This is just one example. Another is that I name variables inconsistently... I don't really conform to a naming standard besides starting all private members with an underscore. Are there any resources that could show me how to make it easier for the next developer? Are there standards for this type of thing?


  • Should I be worried if I don't get any internships by 3rd year's end?

    - by karamba
    I am in a mediocre college in some corner of India, and I'm about to complete my 3rd year of C.S. in a month and a half. I have no idea how to go about "finding an internship", as everyone seems to put it. Looking at online advice, I find that a primary way is to "use your contacts". I am sad to say that I don't have many friends (those I have, I am trying to get help from for all it's worth), and my family can't help me, as they have no idea about the software industry. My college has no official facility for aiding students in this, and the few faculty members who had contacts in "whatever" part of the industry have favoured students they have come to know personally. (Though I hear that the "internships" they got involve stocking equipment in some small companies.... still, it's something?)

    I'm getting nervous. I am considering just spending the coming summer refining my skills in C++ and beginning to learn MySQL and C#, both of which I have zero experience in. Maybe work on my own project... like a library management system. Relative to those in my college, I think I am among the best programmers there, but that isn't saying much, as a lot of students can barely write basic code. I have experience teaching myself C++ and DirectX 9, having created a Tetris clone, some basic 3D apps (bouncing balls), and a basic console-based, text-file-database-using library management system (which I plan to improve this summer).

    Is it alright if I spend my summer this way? Will I be able to get a job later on? I know I have to improve my social skills to get anywhere in life, and I will try, but say I am stuck like this till 4th year's end... will such self-study and online learning help me land a decent job? Perhaps, after I have learned a bit more, joining some open source project?


  • Change timer interval in Windows service

    - by AyKarsi
    I have a timer job inside a Windows service, whose interval should be increased when errors occur. My problem is that I can't get the timer's Change method to actually change the interval: "DoSomething" is always called at the initial interval. This is probably something simple. Code follows:

        protected override void OnStart(string[] args)
        {
            //job = new CronJob();
            timerDelegate = new TimerCallback(DoSomething);
            seconds = secondsDefault;
            stateTimer = new Timer(timerDelegate, null, 0, seconds * 1000);
        }

        public void DoSomething(object stateObject)
        {
            AutoResetEvent autoEvent = (AutoResetEvent)stateObject;

            if (!Busker.BitCoinData.Helpers.BitCoinHelper.BitCoinsServiceIsUp())
            {
                secondsDefault += secondsIncrementError;
                if (seconds >= secondesMaximum)
                    seconds = secondesMaximum;
                Loggy.AddError("BitcoinService not available. Incrementing timer to " + secondsDefault + " s", null);
                stateTimer.Change(seconds * 100, seconds * 100);
                return;
            }
            else if (seconds > secondsDefault)
            {
                // reset the timer interval if the bitcoin service is back up...
                seconds = secondsDefault;
                Loggy.Add("BitcoinService timer increment has been reset to " + secondsDefault + " s");
            }

            // do the actual processing here
        }


  • Changing/Adding controls to the Windows Open/Save common dialog

    - by ajp
    Is there a way of changing/adding to the windows Open/Save common dialog to add extra functionality? At work we have an area on a server with hundreds of 'jobfolders'- just ordinary windows folders created/managed automatically by the database application to house information about a job (emails/scanned faxes/Word docs/Spreadsheets/Photos etc) The folders are named by the job Number. I would like to expand the standard open/save dialog with a combobox which searches for jobfolders based on tags from the database, so that whatever my users are doing they can easily find their way to the correct jobfolder to find/save their work Connecting to the database and providing the functionality to search is no problem, but is there a way to add a combobox control (ideally with a keypress/keydown event) to the dialog? Or Create my own dialog and have it called/used in place of the standard one? i.e. from ANY app my dialog would be called allowing easy access to the jobfolders. If they are in outlook they can find a jobfolder quickly, if there are using Notepad they can still find the folder easily. This would mean a new unified way of finding jobfolders from any app. Ideally someone would know a way using VB/VB.net/C# but I'm guessing, if its possible, its probably going to be C++.


  • Is there a better way to detab (expand tabs) using Perl?

    - by Uri
    I wanted to detab my source files. (Please, no flames about WHY I wanted to detab my sources. That's not the point :-) I couldn't find a utility to do it. Eclipse didn't do it for me, so I implemented my own. I couldn't fit it into a one-liner (-e) program, so I came up with the following, which did the job just fine:

        while( <> ) {
            while( /\t/ ) {
                s/^(([^\t]{4})*)\t/$1    /;
                s/^((([^\t]{4})*)[^\t]{1})\t/$1   /;
                s/^((([^\t]{4})*)[^\t]{2})\t/$1  /;
                s/^((([^\t]{4})*)[^\t]{3})\t/$1 /;
            }
            print;
        }

    However, it makes me wonder whether Perl - the champion language of text processing - is the right tool. The code doesn't seem very elegant, and if I had to detab source that assumes tab = 8 spaces, it would look even worse. Specifically, because I can think of a deterministic state machine with only 4 states to do the job, I have a feeling that a more elegant solution exists. Am I missing a Perl idiom? In the spirit of TIMTOWTDI, I'm curious about the other ways to do it. u.
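
    For the record, Perl ships this in its standard library: the Text::Tabs module exports an expand() function driven by the $tabstop variable, which is the idiomatic detab. For comparison, Python bakes the whole 4-state machine into a single string method - a sketch, with a placeholder filename:

        # str.expandtabs(4) replaces each tab with enough spaces to
        # reach the next multiple-of-4 column, exactly like the
        # four substitutions in the question
        with open("source.txt") as src:
            for line in src:
                print(line.expandtabs(4), end="")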


  • can upstart expect/respawn be used on processes that fork more than twice?

    - by johnjamesmiller
    I am using upstart to start/stop/automatically restart daemons. One of the daemons forks 4 times. The upstart cookbook states that it only supports forking twice. Is there a workaround?

    How it fails: if I try to use expect daemon or expect fork, upstart uses the pid of the second fork. When I try to stop the job, nobody responds to upstart's SIGKILL signal and it hangs until you exhaust the pid space and loop back around. It gets worse if you add respawn: upstart thinks the job died and immediately starts another one.

    Bug acknowledged by upstream: a bug has been entered for upstart. The solutions presented are to stick with the old sysvinit, rewrite your daemon, or wait for a rewrite of upstart. RHEL is close to 2 years behind the latest upstart package, so by the time the rewrite is released and we get the update, the wait will probably be 4 years. The daemon is written by a subcontractor of a subcontractor of a contractor, so it will not be fixed any time soon either.


  • Fake ISAPI Handler to serve static files with extensions that are rewritten by a url rewriter

    - by developerit
    Introduction

    I often map the html extension to the asp.net dll in order to use a url rewriter with .html extensions. Recently, in the new version of www.nouvelair.ca, we renamed all urls to end with .html. This worked great, but failed when we used FCK Editor: static html files would not get served, because we had mapped the html extension to the .NET Framework. What can we do to use the .html extension with our rewriter but still get IIS's normal behavior for static html files?

    Analysis

    I thought this could be resolved with a simple HTTP handler. We would map urls of static files in our rewriter to this handler, which would read the static file and serve it, just as IIS would do.

    Implementation

    This is how I coded the class. Note that this may not be bullet proof. I only tested it once, and I am sure that the logic behind IIS is more complicated than this. If you find errors or think of possible improvements, let me know.

        Imports System.Web
        Imports System.Web.Services

        ' Author: Nicolas Brassard
        ' For: Solutions Nitriques inc. http://www.nitriques.com
        ' Date Created: April 18, 2009
        ' Last Modified: April 18, 2009
        ' License: CPOL (http://www.codeproject.com/info/cpol10.aspx)
        ' Files: ISAPIDotNetHandler.ashx
        '        ISAPIDotNetHandler.ashx.vb
        ' Class: ISAPIDotNetHandler
        ' Description: Fake ISAPI handler to serve static files.
        '   Useful when you want to serve a static file that has a rewritten extension.
        '   Example: I often map the html extension to the asp.net dll in order to use
        '   a url rewriter with .html. If you still want to serve static html files,
        '   add a rewriter rule to redirect html files to this handler.
        Public Class ISAPIDotNetHandler
            Implements System.Web.IHttpHandler

            Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
                ' Since we are doing the job IIS normally does with html files,
                ' we set the content type to match html.
                ' You may want to customize this with your own logic, if you want
                ' to serve txt or xml or any other text file
                context.Response.ContentType = "text/html"

                ' We begin a try here. Any error that occurs will result in a
                ' 404 Page Not Found error. We replicate the behavior of IIS
                ' when it doesn't find the corresponding file.
                Try
                    ' Declare a local variable containing the value of the query string
                    Dim uri As String = context.Request("fileUri")

                    ' If the value in the query string is null,
                    ' throw an error to generate a 404
                    If String.IsNullOrEmpty(uri) Then
                        Throw New ApplicationException("No fileUri")
                    End If

                    ' If the value in the query string doesn't end with .html, block the access.
                    ' Without this check there is a HUGE security hole, since it could permit
                    ' full read access to .aspx, .config, etc.
                    If Not uri.ToLower.EndsWith(".html") Then
                        ' throw an error to generate a 404
                        Throw New ApplicationException("Extension not allowed")
                    End If

                    ' Map the file on the server.
                    ' If the file doesn't exist on the server, it will throw
                    ' an exception and generate a 404.
                    Dim fullPath As String = context.Server.MapPath(uri)

                    ' Read the actual file
                    Dim stream As IO.StreamReader = FileIO.FileSystem.OpenTextFileReader(fullPath)

                    ' Write the file into the response
                    context.Response.Output.Write(stream.ReadToEnd)

                    ' Close and Dispose the stream
                    stream.Close()
                    stream.Dispose()
                    stream = Nothing
                Catch ex As Exception
                    ' Set the Status Code of the response
                    context.Response.StatusCode = 404 'Page not found

                    ' For testing and debugging only!
                    ' This may cause a security leak
                    ' context.Response.Output.Write(ex.Message)
                Finally
                    ' In all cases, flush and end the response
                    context.Response.Flush()
                    context.Response.End()
                End Try
            End Sub

            ' Automatically generated by Visual Studio
            ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable
                Get
                    Return False
                End Get
            End Property
        End Class

    Conclusion

    As you can see, with our static files mapped to this handler using a query string (ex.: /ISAPIDotNetHandler.ashx?fileUri=index.html), you will have the same behavior as if you had asked for the uri /index.html. Finally, test this only in IIS with the html extension mapped to aspnet_isapi.dll. Url rewriting will work in Cassini (the internal web server shipped with Visual Studio), but it's not the same as with IIS, since EVERY request is handled by .NET.

    Versions: first release


  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically, making it much easier than in previous versions to take advantage of compression that's built into the Web server. IIS 7 also supports dynamic compression, which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing, so it works with any kind of Web application framework.

    While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS -> ASP.NET hierarchy. Let's take a look at each of the two approaches available:

    Static Compression: compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk, and serving the compressed alias whenever static content is requested and it hasn't changed. The overhead for this is minimal and it should be aggressively enabled.

    Dynamic Compression: works against application-generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such, dynamic compression has a much bigger impact than static caching.

    How Compression is configured

    Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline, all the way from ApplicationHost.config down to the local web.config file. The following is the default setting in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5, with a couple of small adjustments (added json output and enabled dynamic compression):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
              <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
              <dynamicTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/x-javascript" enabled="true" />
                <add mimeType="application/json" enabled="true" />
                <add mimeType="*/*" enabled="false" />
              </dynamicTypes>
              <staticTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/x-javascript" enabled="true" />
                <add mimeType="application/atom+xml" enabled="true" />
                <add mimeType="application/xaml+xml" enabled="true" />
                <add mimeType="*/*" enabled="false" />
              </staticTypes>
            </httpCompression>
            <urlCompression doStaticCompression="true" doDynamicCompression="true" />
          </system.webServer>
        </configuration>

    You can find documentation on the httpCompression and urlCompression keys here, respectively:

        http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx
        http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

    The httpCompression Element – What and How to compress

    Basically, httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime types, which looks at the returned Content-Type headers in HTTP responses. For example, I added the application/json mime type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

    The urlCompression Element – Enables and Disables Compression

    The urlCompression element is a quick way to turn compression on and off. By default static compression is enabled server-wide, and dynamic compression is disabled server-wide. This might be a bit confusing, because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute of the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The doCompression attributes are the final determining factor for whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false", but doStaticCompression="true"!

    Static Compression is enabled by Default, Dynamic Compression is not

    Because static compression is very efficient in IIS 7, it's enabled by default server-wide and there is probably no reason to ever change that setting. Dynamic compression, however, since it's more resource-intensive, is turned off by default. If you want to enable dynamic compression, there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn't work. Setting:

        <urlCompression doDynamicCompression="true" />

    in ApplicationHost.config appears to have no effect, and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice, because you're not likely to want dynamic compression in every application on a server; rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in ApplicationHost.config doesn't work (or, more likely, is overridden and disabled somewhere lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!!

    How Static Compression works

    Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently - such as .js, .css and static HTML content - it's fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server's hard disk, and then reads the content from this location if already-compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient - IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at:

        %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\

    If you look into the subfolders, you'll find the pre-compressed files; IIS serves them directly to the client until the underlying files are changed. As I mentioned before, static compression is on by default and there's very little reason to turn that functionality off, as it is efficient and just works out of the box. The one tweak you might want to make is to set the compression level to maximum. Since IIS compresses content only infrequently, it makes sense to apply maximum compression. You can do this with the staticCompressionLevel setting on the scheme element:

        <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    Other than that, the default settings are probably just fine.

    Dynamic Compression – not so fast!

    By default dynamic compression is disabled, and that's actually quite sensible - you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content, as it would generate a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is, you can play around with compression and see what impact it has on your server's performance.

    There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically, the httpCompression key has a couple of CPU-related attributes that can help minimize the impact of dynamic compression on a busy server:

        dynamicCompressionDisableCpuUsage
        dynamicCompressionEnableCpuUsage

    By default these are set to 90 and 50, which means that when the CPU hits 90%, compression will be disabled until CPU utilization drops back down to 50%. Again, this is quite sensible, as it uses CPU power for compression when it is available and backs off when the threshold has been hit. It's a good way to spend some of the extra CPU power on your big servers when utilization is low. These settings are something you will likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there's lots of power to spare. I'm not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don't trust settings - do some load testing, or monitor your server in a live environment, to see what values make sense for your environment.

    Finally, for dynamic compression I tend to add one mime type for JSON data, since a lot of my applications send large chunks of JSON over the wire. You can do that with the application/json content type:

        <add mimeType="application/json" enabled="true" />

    What about Deflate Compression?

    The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough, you can change the compression settings to:

        <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    to get deflate-style compression. The deflate algorithm produces slightly more compact output, so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than Deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away: changing the scheme in ApplicationHost.config didn't show up on the site immediately, and it required a full IISReset before I saw the change over to deflate-compressed content. Content was slightly more compressed with deflate - I'm not sure if that's worth the slightly less common compression type, but the option at least is available.

    IIS 7 finally makes GZip Easy

    In summary, IIS 7 finally makes GZip easy, even if the configuration settings are a bit obtuse and the documentation is seriously lacking. But once you know the basic settings I've described here, and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer, as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky, as it does add some overhead to servers, so it will probably require some tweaking to get the right balance of CPU load vs. compression ratios. Large sites like Amazon, Yahoo, NewEgg etc. all compress their content.

    © Rick Strahl, West Wind Technologies, 2005-2011

