Search Results

Search found 3070 results on 123 pages for 'cron jobs'.

  • What motivates people to learn a new programming language?

    - by szabgab
    There are plenty of questions asking which programming language you should learn, but I have not yet found an answer to the question of what really motivates people to learn a specific new language. There are people who think they should learn a new language every year for educational purposes. How do they decide which languages to learn? Then I guess there are people who learn a new language because those around them said it is a fun language and they can build nice things with it. Of course, people will learn a new language if the current job requires it, or if the language seems to have the potential to earn money (e.g. there are plenty of jobs in Java, or Objective-C can be used to write apps for the iPhone). So why are you learning a new language, or why have you learned the languages you know?

    Read the article

  • What do you expect from Flash in the near future?

    - by algro
    The recent article by Steve Jobs (link) made me think about the future of Flash. I'm learning ActionScript 3.0 in my studies, but is it still the right decision to go with it? I was pretty sure that I would be able to build AS3 applications for iPhones/iPads in the near future. It seems to me that if I stay with Flash, the market will be polarized between Apple and Adobe, and you will either do the work twice to serve both clienteles or lose half of them. Which decision would you take as a designer if you were still at university and intended to become a freelancer?

    Read the article

  • Unable to bind in ASP.NET grid template column

    - by OneSmartGuy
    I am having trouble accessing the data field. I receive the error: "Databinding methods such as Eval(), XPath(), and Bind() can only be used in the context of a databound control." I can get the value using <%# getOpenJobs((string)Eval("ParentPart")) %>, but I need to use it in an if statement to display a certain picture when the condition passes. Is there a better way to do this, or am I just missing something simple?

        <telerik:GridTemplateColumn UniqueName="hasOpenJobs" HeaderText="">
          <ItemTemplate>
            <% if (getOpenJobs((string)Eval("ParentPart")) > 1) { %>
              <img src="../images/job-icon.gif" alt="Open Jobs" />
            <% } %>
          </ItemTemplate>
        </telerik:GridTemplateColumn>

    Read the article

  • Reading log files from web application

    - by Egorinsk
    Hi! I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume that the log files can be really large, like hundreds of megabytes. I have some ideas:

    - Write a shell script that runs via sudo and tails the last 512 KB of each log into a separate file the application can read. That's ineffective, because it forks a new process and the data gets read twice.
    - Add www-data to the adm group (which can read logs). That's insecure.
    - Start a PHP process via cron every minute to read the logs. That's not very good, because it rules out real-time monitoring. Also, the script runs even when I'm not reading logs and consumes CPU time (the server is in the cloud, and I'll have to pay for it).
    - Create a hard link to each log file with lowered permissions. I guess that won't work, because logrotate can recreate the log files, changing their inode numbers.
    - Start a separate nginx/Apache server under a privileged user that may read the logs.

    Does anyone have a better solution?
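
    As an aside on the first idea: reading only the tail of a large file is a seek plus one small read, so the cost is bounded by 512 KB per log regardless of file size. A minimal sketch in Python (the cache path is made up for illustration; the real script would run from cron via sudo):

        import os

        CHUNK = 512 * 1024  # only ever read the last 512 KB

        def tail_bytes(path, max_bytes=CHUNK):
            """Return up to the last max_bytes of a file without reading all of it."""
            with open(path, 'rb') as f:
                f.seek(0, os.SEEK_END)
                size = f.tell()
                f.seek(max(0, size - max_bytes))
                return f.read()

        if __name__ == '__main__':
            # hypothetical destination directory readable by www-data
            with open('/var/cache/logmon/syslog.tail', 'wb') as out:
                out.write(tail_bytes('/var/log/syslog'))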

    Read the article

  • Hadoop MapReduce job never finishes

    - by rohanbk
    I am running a Hadoop MapReduce job using a Python mapper and reducer script with Hadoop Streaming. Both my map and reduce phases run until they reach 100%, but the job doesn't end. I know that when things go sour Hadoop will terminate the job, but in this case both stages reach 100% and just never finish. Has anyone else encountered anything similar? Also, how do I debug my program to figure out where things are going wrong? If I use a smaller input file and just run

        $ cat input_file | mapper.py | sort | reduce.py >> output_file

    everything works perfectly fine. However, when I use Hadoop, things don't work out.
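
    One cheap way to see where a streaming job is stuck is to emit progress markers from the scripts themselves: Hadoop Streaming interprets stderr lines of the form reporter:counter:group,counter,amount as counter updates (visible in the job tracker UI), and any other stderr output ends up in the per-task logs. A sketch of an instrumented mapper; the tab-separated key/value format is an assumption about the data:

        #!/usr/bin/env python
        import sys

        def emit_counter(group, counter, amount=1):
            # Hadoop Streaming parses stderr lines of this exact form as counters
            sys.stderr.write('reporter:counter:%s,%s,%d\n' % (group, counter, amount))

        for line in sys.stdin:
            key, _, value = line.rstrip('\n').partition('\t')
            emit_counter('debug', 'map_lines_seen')
            sys.stdout.write('%s\t%s\n' % (key, value))

    A counter that keeps climbing while the job sits at 100% suggests the hang is in job commit/cleanup rather than in your scripts.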

    Read the article

  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large re-organization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure like:

        /var/www/sites/vhost1, vhost2, vhost3
        .../wordpress/blogX
        .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

        /var/www/data/vhost1, vhost2, vhost3
        .../wordpress/blogX/uploads
        .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup; once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs come straight from SVN, and updates are done by switching branches or "svn up". So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, and c) myself 6 months from now? Obviously I can make a wiki page or spreadsheet and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less technical people. Ideally it would be awesome if this visual representation could then be expanded with more technical details.
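
    One low-maintenance option for the visual part is to generate the diagram from a script, so it can be re-run whenever the layout changes and extended with more technical detail later. A sketch in Python that emits a Graphviz DOT file (the hosts and apps listed are placeholders; render with "dot -Tpng webserver.dot -o webserver.png"):

        # describe the vhost -> application -> data layout, then emit DOT
        layout = {
            'vhost1': ['wordpress/blog1', 'mediawiki/wiki1'],
            'vhost2': ['wordpress/blog2'],
            'vhost3': ['static html'],
        }

        lines = ['digraph webserver {', '  rankdir=LR;',
                 '  apache [shape=box, label="Apache\\n/etc/httpd/conf.d/vhosts.d"];']
        for vhost, apps in layout.items():
            lines.append('  "%s" [shape=box];' % vhost)
            lines.append('  apache -> "%s";' % vhost)
            for app in apps:
                lines.append('  "%s" -> "%s";' % (vhost, app))
                lines.append('  "%s" -> "/var/www/data/...";' % app)
        lines.append('}')

        with open('webserver.dot', 'w') as f:
            f.write('\n'.join(lines))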

    Read the article

  • Who deleted my SQL table rows?

    - by vikasde
    I have an SQL table from which data is being deleted, and nobody knows how. I added a trigger and know the time, but no jobs are running that would delete the data. I also added a trigger that fires whenever rows are deleted from this table; it inserts the deleted rows and the SYSTEM_USER into a log table, but that doesn't help much. Is there anything better I can do to find out who and what is deleting the data? Would it be possible to get the server id or something? Thanks for any advice. Sorry: I am using SQL Server 2000.

    Read the article

  • How do I control the output file names and content of a Hadoop Streaming job?

    - by Eran Kampf
    Is there a way to control the output filenames of a Hadoop Streaming job? Specifically, I would like my job's output files to be organized by the key the reducer outputs: each file would contain values for only one key, and its name would be the key. Update: I just found the answer. Using a Java class that derives from MultipleOutputFormat as the job's output format allows control of the output file names: http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/mapred/lib/MultipleOutputFormat.html I haven't seen any samples for this out there... Can anyone point to a Hadoop Streaming sample that makes use of a custom output format Java class?

    Read the article

  • SharePoint job queued, but never starts.

    - by Eric Uniacke
    I'm attempting to retract a solution from SharePoint. I've started the job via the Central Admin site as well as from stsadm. The job is queued; however, it stays in a pending state for days. The SharePoint services (Admin and Timer) are started. Any suggestions as to why the retraction jobs are queued but never started? I'm really at a loss for what to try next, so anything will be helpful. Thank you!

    Read the article

  • Cause for MySQL crash

    - by user1322092
    A cron job automatically restarted my MySQL database. What is the cause of the crash, or can you suggest how to resolve or monitor it? I would really appreciate your input.

        120715 14:38:58  mysqld started
        120715 14:38:58  InnoDB: Started; log sequence number 0 411137570
        120715 14:38:58 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
        120715 15:14:21 [Note] /usr/libexec/mysqld: Normal shutdown
        120715 15:14:23  InnoDB: Starting shutdown...
        120715 15:14:25  InnoDB: Shutdown completed; log sequence number 0 411166467
        120715 15:14:25 [Note] /usr/libexec/mysqld: Shutdown complete
        120715 08:14:25  mysqld ended
        120715 08:14:26  mysqld started
        120715  8:14:26  InnoDB: Started; log sequence number 0 411166467
        120715  8:14:26 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
        121212 09:15:32  mysqld started
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        121212  9:15:58  InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        121212  9:17:28  InnoDB: Started; log sequence number 0 554145193
        121212  9:17:57 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
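
    The pasted log itself shows the pattern worth watching for: the 121212 start performs crash recovery because the instance started at 120715 8:14 never logged a shutdown (an abrupt kill, e.g. power loss or the OOM killer, is a common cause). A small Python sketch that scans the error log for exactly this pattern (the log path is an assumption):

        # flag restarts in the MySQL error log that were not preceded by a
        # clean shutdown; matches the 5.0-era log lines shown above
        clean = True
        with open('/var/log/mysqld.log') as f:
            for line in f:
                if 'mysqld ended' in line or 'Shutdown complete' in line:
                    clean = True
                elif 'mysqld started' in line:
                    if not clean:
                        print('unclean restart: %s' % line.strip())
                    clean = False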

    Read the article

  • Max. # of printers allowed for Google Cloud Print

    - by user1858673
    I intend to write an online shopping program in PHP. When a buyer completes an order I want to print the receipt to his/her printer using Google Cloud Print. For that I will need the buyer to share his/her printer with my Google account. My questions are:

    1. Is there an upper bound on the number of printers a Google account is allowed to print to?
    2. Is there a daily upper bound on the number of print jobs a Google account is allowed?

    Thanks for reading, and thanks in advance for the answers.

    Read the article

  • Why is "start in" needed for Windows scheduled tasks?

    - by GomoX
    We develop a web application that can be deployed on Windows or Linux. The Linux implementation uses cron, and the Windows one uses scheduled tasks, to run a single PHP script that processes all scheduled tasks for our system. The task is scheduled using schtasks during the install process. This has always worked under both W2003 and W2008. A week ago a customer reported that scheduled tasks were not running; he is running Windows 2008. We checked over and over, and finally solved the issue by entering the folder that contains the .vbs script as the "start in" folder for the scheduled task. That said, there is no way to set the "start in..." value from schtasks without using an XML definition of the task. XML definitions don't work on Windows 2003, so I would have to add Windows version detection to the installer, additional testing, etc. (I'd like to avoid this if at all possible). The only atypical thing I noticed about the install is that the system is installed in D:\ as opposed to the default C:\Program Files (x86)\, but I don't see how this would matter; all the paths in the scripts are absolute. Can anyone suggest a reasonable solution?

    Read the article

  • How do I make use of multiple cores in Large SQL Server Queries?

    - by Jonathan Beerhalter
    I have two SQL Servers: one for production and one as an archive. Every night a SQL job runs and copies the day's production data over to the archive. As we've grown, this process takes longer and longer. When I watch the utilization on the archive server while the archival process runs, I see that it only ever uses a single core, and since this box has eight cores, that is a huge waste of resources. The job runs at 3 AM, so it is free to take any and all resources it can find. What I need to figure out is how to structure SQL Server jobs so they can take advantage of multiple cores, but I can't find any literature on tackling this problem. We're running SQL Server 2005, but I could certainly push for an upgrade if 2008 takes care of this problem.

    Read the article

  • PHP - How to retrieve session in PHP

    - by Klaus Jasper
    I created a table that contains id, names, and jobs, and a page that shows only the names; beside each name there is a Job button, and a session contains the id. This is my code:

        $query = mysql_query("SELECT * FROM table");
        while ($fetch = mysql_fetch_array($query)) {
            $name = $fetch['names'];
            $id = $fetch['id'];
            echo '<br />';
            echo $name;
            $_SESSION['name'] = $id;
            echo "<button>Job</button>";
        }

    I want the user, when they click the Job button, to be redirected to a page that shows the job for that session. How can I do it?

    Read the article

  • EC2 spot instance for daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with Amazon AWS, so I hope someone can explain in simple terms or refer me to a good guide on how to achieve the following. I have a system running on EC2 and Amazon RDS, taking data in and saving it to the DB. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process will take approximately an hour to run, and it needs to run on a high-memory instance. From what I've read so far, I guess the best way to do it is to have a high-memory spot instance run every day, set up to execute the script on startup and shut down when done. Is that the right way to do it? If so, how do I do it? How do I tell the spot instance to run every day? Through a cron job on the other server, or is there a better way? How do I set it up to run the script on startup? Through cloud-init? Any help would be appreciated. One last thing: the job is not very time-sensitive, as long as it runs every day. Thanks!
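
    A rough sketch of the cron-driven approach, using the boto library from the existing EC2 server. Everything below (AMI, bid price, instance type, script path) is a placeholder: the AMI is assumed to already contain the report script, and the user-data shuts the machine down when done, which terminates a spot instance.

        import boto.ec2

        conn = boto.ec2.connect_to_region('us-east-1')

        user_data = """#!/bin/bash
        /opt/reports/run_daily.sh
        shutdown -h now
        """

        conn.request_spot_instances(
            price='0.35',               # maximum bid, in dollars
            image_id='ami-12345678',    # AMI with the processing code baked in
            instance_type='m2.xlarge',  # high-memory instance class
            key_name='report-key',
            user_data=user_data,
        )

    A crontab entry like "0 23 * * * python /opt/reports/request_spot.py" on the existing server would then fire the request once a day.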

    Read the article

  • Freelancer - client agreement. What things are worth writing explicitly?

    - by Dzida
    Hi guys, this is not a technical question, though I think it is quite important for software developers who work as freelancers. The general advice is to make a paper contract before starting a new job. When I started my work as a freelancer I had no clue what such a contract should look like. Now I have some ideas, but I believe many of the freelancers gathered on SO can add some interesting advice for less experienced colleagues (like me). So the question is: what clauses should freelancers put in agreements with their clients to make their projects less stressful and better secured? What are your experiences here? PS: Please write your country at the bottom of your post; I guess some of this might be country-specific, as there are probably differences in the form of agreements depending on the country where business is done.

    Read the article

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks whether a successful HTTP response (code 200) is returned on port 80. The test fails due to a timeout (no response after 30 seconds). Here's what we've already checked, without success:

    - Since we run our web server behind a load balancer, I've set up the Pingdom test against both the load balancer's public DNS and the web server's public DNS, in order to find out if there's a problem with the AWS load balancer. Both tests return the same result.
    - We set up Munin on our web server. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only checks every 5 minutes).
    - I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
    - I have checked /etc/cron.weekly and /etc/crontab for suspicious entries.
    - I have searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):

          touch -t 201209020000 start
          touch -t 201209020015 end
          find / -newer start -and ! -newer end

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!

    Read the article

  • Why would the Apache parent process restart silently?

    - by miracle
    I run Apache 2.2.9 with MPM prefork on Debian Lenny. Following http://httpd.apache.org/docs/2.2/mod/prefork.html, I would expect that there is one parent process, running as root and listening as configured, which starts child processes as defined by the Min/Max/etc. directives. I expect the children to be restarted as per MaxRequestsPerChild, but the parent process to stay put with one process id until I restart it manually. Out of a little paranoia, I started monitoring listening ports including process ids. I have a cron job every 20 minutes to run netstat -ap | grep LISTEN and diff the output. Sometimes (about once per day) I see a series of this:

        8c8
        < tcp6  0  0 [::]:www    [::]:*  LISTEN  6194/apache2
        ---
        > tcp6  0  0 [::]:www    [::]:*  LISTEN  6607/apache2
        10c10
        < tcp6  0  0 [::]:https  [::]:*  LISTEN  6194/apache2
        ---
        > tcp6  0  0 [::]:https  [::]:*  LISTEN  6607/apache2

    Over a period of an hour or three, the parent would change its pid at least once every 20 minutes, without any explanation in the log files or any other hint that anything is going wrong. This is not what I expected. What am I missing?

    Read the article

  • SQL Server (TSQL) - Is it possible to EXEC statements in parallel?

    - by Investor5555
    SQL Server 2008 R2. Here is a simplified example:

        EXECUTE sp_executesql N'PRINT ''1st '' + convert(varchar, getdate(), 126)
        WAITFOR DELAY ''000:00:10'''
        EXECUTE sp_executesql N'PRINT ''2nd '' + convert(varchar, getdate(), 126)'

    The first statement will print the date and delay 10 seconds before proceeding. The second statement should print immediately. The way T-SQL works, the 2nd statement won't be evaluated until the first completes. If I copy and paste it into a new query window, it will execute immediately. The issue is that I have other, more complex things going on, with variables that need to be passed to both procedures. What I am trying to do is:

    - Get a record
    - Lock it for a period of time
    - While it is locked, execute some other statements against this record and the table itself

    Perhaps there is a way to dynamically create a couple of jobs? Anyway, I am looking for a simple way to do this without having to manually PRINT statements and copy/paste to another session. Is there a way to EXEC without waiting / in parallel?

    Read the article

  • Table naming convention?

    - by MattSlay
    In our manufacturing shop, each employee hits the time clock every time they change jobs or machines (work centers) during their work day. Each record created in the time clock app has foreign keys that link the record to the Employee, the Job, and the Machine they are about to operate. I'm trying to determine the best name for this table. If I were tempted to call it ClockRecords or TimeClockRecords, why wouldn't I also consider naming it JobTimeRecords, or MachineTimeRecords? Any ideas on a good name?

    Read the article

  • Python as your main language. Possible?

    - by Deinumite
    I am currently attending college, and the languages that I will 'know' by graduation are C++ and Java. That being said, I am also in the process of teaching myself Python. I know that every programming language has its own pros and cons, but would it be possible to become a Python developer straight out of school? I always have more 'fun' programming in Python than I do in C++ or Java, and I am also in love with Python's documentation. I know C++ will always be on top in terms of speed, but what would be the benefit of memorizing every Javadoc versus focusing on Python instead? Are there good jobs to be had with Python? Edit: also, would it be beneficial for me to look at C# as well? Microsoft is really throwing its support behind it, so that could be a decent career path too.

    Read the article

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs their own instance of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total), and the application doesn't use many resources, so even a small VPS would be overkill (and more expensive). Adding a new customer is currently quite a bit of work:

    - Create a user that is allowed to ssh
    - Create a new MySQL database and user
    - Create a virtual host for the application
    - Log in with the new user and do a git checkout of the application (in the right location)
    - Create tables in the new database, and add some init data
    - Add some cron jobs
    - Create a first user that can log in
    - Add the new instance to capistrano

    What would be the best way to automate these tasks? Are there applications that (given proper configuration) can do this? Ideally it should be usable by a salesperson, so something web-based. I could write a (bash) script that does most of these tasks (a rough sketch follows below), and then maybe add a small web-based wrapper where someone could provide the domain and default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means you need a list of all existing customers/instances.
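
    For the script route, Fabric (a Python SSH automation library) is one way to keep the steps in code without a full configuration-management system. A rough sketch covering some of the steps above (every name, path, and the vhost template are assumptions, and password handling is omitted):

        # fabfile.py -- run as: fab new_customer:acme,acme.example.com
        from fabric.api import env, run, sudo

        env.hosts = ['apps.example.com']

        def new_customer(name, domain):
            # ssh user
            sudo('adduser --disabled-password --gecos "" %s' % name)
            # MySQL database and user (credentials handling omitted)
            run('mysql -u root -p -e "CREATE DATABASE %s; '
                'GRANT ALL ON %s.* TO \'%s\'@\'localhost\';"' % (name, name, name))
            # virtual host from a template, then enable it
            sudo('sed s/DOMAIN/%s/ /etc/apache2/vhost.template '
                 '> /etc/apache2/sites-available/%s' % (domain, name))
            sudo('a2ensite %s && /etc/init.d/apache2 reload' % name)
            # application checkout as the new user
            sudo('git clone git://example.com/app.git /home/%s/app' % name, user=name)
            # schema and init data
            run('mysql -u root -p %s < /home/%s/app/schema.sql' % (name, name))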

    Read the article

  • Commands implicitly threaded in Makefiles?

    - by apple92
    Hi, I have a "super" makefile which launches two "sub" makefiles:

        libwebcam:
            @echo -e "\nInvoking libwebcam make."
            $(MAKE) -C $(TOPDIR)/libwebcam

        uvcdynctrl:
            @echo -e "\nInvoking uvcdynctrl make."
            $(MAKE) -C $(TOPDIR)/uvcdynctrl

    uvcdynctrl uses libwebcam... I noticed that those two builds are launched as separate threads by make! Thus sometimes the lib is not available when uvcdynctrl starts being built, and I get errors. By default, make should not run commands in parallel, since that is only enabled via -j (number of jobs), and according to the make manual there is no parallelism by default. I run this on Ubuntu. Has anyone faced the same issue? Apple92

    Read the article

  • Best practice: How to persist simple data without a database in django?

    - by Infinity
    I'm building a website that doesn't require a database, because a REST API "is the database" (except you don't want to be putting site-specific things in there, since the API is used mostly by mobile clients). However, there are a few things that would normally go in a database, for example the "jobs" page: you have a master list view and a detail view for each job (e.g. example.com/careers/ and example.com/careers/77/), and it should be easy to add new job entries (not necessarily via a CMS, but that would be awesome). I could just hardcode this stuff in templates, but that's not DRY: you have to update the master template and the detail template every time. What do you guys think? Maybe a YAML file? Or any better ideas? Thx
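
    If a YAML file is enough, here is a sketch of how the Django side could look (the file layout, field names, and template paths are all made up here; yaml.safe_load comes from PyYAML):

        # views.py
        import yaml
        from django.http import Http404
        from django.shortcuts import render

        JOBS_FILE = 'careers.yaml'  # a list of {id, title, description} entries

        def _load_jobs():
            with open(JOBS_FILE) as f:
                return yaml.safe_load(f) or []

        def job_list(request):
            # master view: example.com/careers/
            return render(request, 'careers/list.html', {'jobs': _load_jobs()})

        def job_detail(request, job_id):
            # detail view: example.com/careers/77/
            for job in _load_jobs():
                if job['id'] == int(job_id):
                    return render(request, 'careers/detail.html', {'job': job})
            raise Http404

    Adding a job is then just appending an entry to careers.yaml, which also plays nicely with version control.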

    Read the article

  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server running Ubuntu Server and Pure-FTPd. So far all is well, but I would like to know what I should be monitoring so that I can spot potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output by the "ps" command and compare over time to spot things like memory leaks, but I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot I don't get a message like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night. How often should it be run? What should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of blocked IPs did not seem useful. Many thanks
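
    To make the auth.log grep more digestible, a nightly summary of failed logins per source IP can be cron'd; a sketch in Python, assuming the stock Debian/Ubuntu sshd "Failed password ... from <ip>" log format:

        import re
        from collections import Counter

        # count failed ssh logins per source IP from auth.log
        pattern = re.compile(r'Failed password .* from (\S+)')

        counts = Counter()
        with open('/var/log/auth.log') as f:
            for line in f:
                m = pattern.search(line)
                if m:
                    counts[m.group(1)] += 1

        # top 10 offenders, highest first
        for ip, n in counts.most_common(10):
            print('%5d  %s' % (n, ip))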

    Read the article
