Search Results

Search found 3615 results on 145 pages for 'cron daily'.

Page 116/145

  • Export SharePoint 2007 Custom List as RSS File

    - by matt
    Here's our scenario:
    1. We've created a SharePoint 2007 calendar on our intranet site.
    2. We want to run a daily job to export a subset of the events to an RSS file.
    3. Another job will move the RSS file to our public web site.
    We have some funny restrictions where we can't simply publish the RSS feed to the public; we have to go this export route. I'm not clear on how to accomplish step 2. Ideally, we wouldn't have to write a lot of custom code to accomplish this. Thanks.
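    A minimal sketch of step 2, assuming a small console application run daily by Windows Task Scheduler on the SharePoint server (it needs the SharePoint object model, so it has to run there); the site URL, list name, filter, and output path are all hypothetical:

        // Sketch: dump upcoming calendar items to an RSS 2.0 file.
        using System;
        using System.Xml;
        using Microsoft.SharePoint;

        class ExportCalendarRss
        {
            static void Main()
            {
                using (SPSite site = new SPSite("http://intranet"))          // hypothetical URL
                using (SPWeb web = site.OpenWeb())
                using (XmlWriter w = XmlWriter.Create(@"C:\exports\events.rss"))
                {
                    SPList calendar = web.Lists["Events"];                   // hypothetical list name
                    w.WriteStartElement("rss");
                    w.WriteAttributeString("version", "2.0");
                    w.WriteStartElement("channel");
                    w.WriteElementString("title", "Public events");
                    foreach (SPListItem item in calendar.Items)
                    {
                        DateTime start = (DateTime)item["EventDate"];
                        if (start < DateTime.Today) continue;                // the "subset" filter
                        w.WriteStartElement("item");
                        w.WriteElementString("title", item.Title);
                        w.WriteElementString("pubDate", start.ToUniversalTime().ToString("r"));
                        w.WriteEndElement(); // item
                    }
                    w.WriteEndElement(); // channel
                    w.WriteEndElement(); // rss
                }
            }
        }

    The same loop could live in a SharePoint timer job instead, but a scheduled console app keeps the custom code to a minimum.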

    Read the article

  • SQL one-table aggregation

    - by Lostdrifter
    OK, for the last few days I have been attempting to find a method to pull a very important set of information from a table that contains what I call daily counts. The table is set up as follows:

        person | company | prod1 | prod2 | prod3 | gen_date

    Each company has more than one person, and each person can have a different combination of products that they have purchased. What I have been trying to figure out is a SQL statement that will list the number of people that have bought a particular product, per company. So, an output similar to this:

        Comp ABC | 13 Prod1 |  3 Prod2 | 5 Prod3
        Comp DEF |  2 Prod1 | 15 Prod2 | 0 Prod3
        Comp HIJ |  0 Prod1 |  0 Prod2 | 7 Prod3

    Currently, if a person did not select a product, the value stored is NULL. The best I have right now is 3 different statements that can produce this information if run on their own:

        SELECT Count(person) AS purchases, company
        FROM Sales
        WHERE prod1 = '1' AND gen_date = '3/24/2010'
        GROUP BY company
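    Since COUNT over a column skips NULLs, all three counts can come from one statement with a single GROUP BY; a sketch against the table above:

        SELECT company,
               COUNT(prod1) AS prod1_buyers,
               COUNT(prod2) AS prod2_buyers,
               COUNT(prod3) AS prod3_buyers
        FROM   Sales
        WHERE  gen_date = '3/24/2010'
        GROUP  BY company;

    If the columns hold flag values rather than NULL/non-NULL, SUM(CASE WHEN prod1 = '1' THEN 1 ELSE 0 END) per product does the same job.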

    Read the article

  • Why is Harvest being purchased at all?

    - by Mike Caron
    Does your work environment use Harvest SCM? I've used it now at two different locations and find it appalling. In one situation I wrote a conversion script so I could use CVS locally and then import my changes into the Harvest system nightly while I was sleeping. The company was fanatical about using Harvest, despite 80% of the programmers crying out for something different. It was needlessly complicated, slow and heavy. It is now a job requirement for me that Harvest is not in use where I work. Has anyone else used Harvest before? What's your experience? As bad as mine? Did you employ other, different workarounds? Why is this product still purchased today?

    Read the article

  • Calculating intraday candlesticks by time intervals

    - by Sam
    This may be an over-asked question, but my mind draws a blank at this moment. I know what a candlestick chart is and how to draw one daily. But how do you draw it intraday, over requested time periods? I have this server, written in Java, that gives me trade depth (each trade done since the start of the day). It's just a stream of raw data: price, shares, timestamp. How does one go about calculating candlestick data from that? Let's say they want a 5 min candlestick, or a 1 min candlestick. Or is there a library that will do that for me if I feed it the data? Any help is appreciated!
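    A minimal Java sketch of the aggregation, assuming trades arrive as (price, shares, timestamp): truncate each timestamp to the start of its interval, then fold the trade into that bucket's open/high/low/close:

        import java.util.SortedMap;
        import java.util.TreeMap;

        class Candle {
            double open, high, low, close;
            long volume;
            boolean empty = true;

            void add(double price, long shares) {
                if (empty) { open = price; high = price; low = price; empty = false; }
                if (price > high) high = price;
                if (price < low)  low = price;
                close = price;          // last trade seen wins
                volume += shares;
            }
        }

        class CandleSeries {
            private final long intervalMs;    // e.g. 5 * 60 * 1000 for 5 min
            private final SortedMap<Long, Candle> candles = new TreeMap<Long, Candle>();

            CandleSeries(long intervalMs) { this.intervalMs = intervalMs; }

            void onTrade(double price, long shares, long timestampMs) {
                long bucket = timestampMs - (timestampMs % intervalMs);  // interval start
                Candle c = candles.get(bucket);
                if (c == null) { c = new Candle(); candles.put(bucket, c); }
                c.add(price, shares);
            }
        }

    Feed every trade of the day through onTrade() and each map entry is one candlestick; intervals with no trades simply have no entry.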

    Read the article

  • How bad is opening and closing a SQL connection several times? What is the exact effect?

    - by Eren
    For example, I need to fill lots of DataTables with SqlDataAdapter's Fill() method:

        DataAdapter1.Fill(DataTable1);
        DataAdapter2.Fill(DataTable2);
        DataAdapter3.Fill(DataTable3);
        DataAdapter4.Fill(DataTable4);
        DataAdapter5.Fill(DataTable5);
        ....

    Even though all the data adapter objects use the same SqlConnection, each Fill() call will open and close the connection unless the connection is already open before the call. What I want to know is how unnecessarily opening and closing SqlConnections affects the performance of the application. How much would it need to scale before the bad effects show (100,000s of concurrent users?)? In a mid-size website (50,000 users daily), is it worth bothering to find all the Fill() calls, group them together in the code, and open the connection before any Fill() call and close it afterwards?
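    Fill() is documented to leave a connection open if it was already open when the call was made, so the usual pattern is to open once around the batch; a sketch, assuming the shared connection is named conn:

        try
        {
            conn.Open();                    // open once for the whole batch
            DataAdapter1.Fill(DataTable1);  // each Fill() now reuses it
            DataAdapter2.Fill(DataTable2);
            DataAdapter3.Fill(DataTable3);
        }
        finally
        {
            conn.Close();
        }

    Note that with ADO.NET connection pooling on (the default), Close() only returns the connection to the pool rather than tearing down the socket, so the per-Fill() overhead is a pool check, not a fresh login: measurable under load, but not catastrophic at 50,000 users a day.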

    Read the article

  • How can I automatically synchronize a directory tree on multiple machines?

    - by Blacklight Shining
    I have two Mac laptops and a Debian server, each with a directory that I would like to keep in sync between the three. The solution should meet the following criteria (in rough order of importance):

    - It must not use any third-party service (e.g. Dropbox, SugarSync, Google whatever). This does not include installing additional software (as long as it's free).
    - It must not require me to use specific directories or change my way of storing things. (Dropbox does this IIRC)
    - It must work in all directions (changes made on /any/ machine should be pushed to the others)
    - All data sent must be encrypted (I have ssh keypairs set up already)
    - It must work even when not all machines are available (changes should be pushed to a machine when it comes back online)
    - It must work even when the /directories/ on some machines are not available (they may be stored on disk images which will not always be mounted). This can be solved for Macs by using launchd to automatically launch and kill (or in some way change the behavior of) whatever daemon is used for syncing when the images are mounted and unmounted (see the launchd sketch below).
    - It must be immediate (using an event-based system, not a periodic one like cron)
    - It must be flexible (if more machines are added, I should be able to incorporate them easily)

    I also have some preferences that I would like to be fulfilled, but do not have to be:

    - It should notify me somehow if there are conflicts or other errors.
    - It should recognize symbolic and hard links and create corresponding ones.
    - It should allow me to create a list of exceptions (subdirectories which will not be synced at all).
    - It should not require me to set up port forwarding or otherwise reconfigure a network. This can be solved by using an ssh tunnel with reverse port forwarding.

    If you have a solution that meets some, but not all, of the criteria, please contribute it in the comments as it might be useful in some way, and it might be possible to meet some of the criteria separately.

    What I tried, and why it didn't work:

    - rsync and lsyncd do not support bidirectional synchronization
    - csync2 is designed for server clusters and does not appear to work with machines with dynamic IPs
    - DRBD (suggested by amotzg) involves installing a kernel module and does not appear to work on systems running OS X
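    For the mount/unmount piece on the Macs, a minimal launchd job sketch (the label and script path are hypothetical): StartOnMount makes launchd run the program whenever any volume is mounted, and the script can then check whether the relevant disk image is present and start or signal the sync daemon accordingly:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>local.syncdir.onmount</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/local/bin/sync-if-mounted.sh</string>
            </array>
            <key>StartOnMount</key>
            <true/>
        </dict>
        </plist>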

    Read the article

  • Table Design in MySQL

    - by RIDDHI
    Hi everyone, I need to create one table. Description: the table holds schedules that recur daily, weekly, or monthly, with columns like: sno, startdate, enddate, day, scheduletype. Take weekly data as an example: from my point of view, I give each day from Sunday to Saturday an id (1-7). That creates lots of possible combinations, like (1,2), (1,3), ..., (1,2,3), and so on; those are just the two- and three-day cases, but a schedule could combine up to all 7 days in one. So how can I store these combinations in a MySQL database? If anyone has an idea, please get back to me. Thanks in advance!!!! Riddhi
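    One common way to store a set of weekdays in a single column is a bit mask: assign Sunday=1, Monday=2, Tuesday=4, ..., Saturday=64, and OR together the chosen days. A sketch (table and column names are illustrative):

        CREATE TABLE schedule (
            sno          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            startdate    DATE NOT NULL,
            enddate      DATE NOT NULL,
            scheduletype ENUM('daily','weekly','monthly') NOT NULL,
            days_mask    TINYINT UNSIGNED NOT NULL DEFAULT 0  -- Sun=1 ... Sat=64
        );

        -- A Monday+Wednesday weekly schedule: 2 | 8 = 10
        INSERT INTO schedule (startdate, enddate, scheduletype, days_mask)
        VALUES ('2010-05-01', '2010-12-31', 'weekly', 10);

        -- All schedules that include Wednesday:
        SELECT * FROM schedule WHERE days_mask & 8;

    This keeps every one of the 127 possible day combinations in one small integer; the alternative is a normalized child table with one row per (schedule, day), which is easier to join on but more rows to manage.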

    Read the article

  • Multiple webservice calls

    - by Mujtaba Hassan
    I have a webservice (ASP.NET) deployed on a webfarm. A client application consumes it on a daily basis. The problem is that some of its calls are duplicated (with a difference of milliseconds). For example, I have a function Foo(string a, string b). The client app calls this webmethod as Foo('test1','test2') once, but my log shows that it is being called twice, or sometimes 3 or 4 times, randomly. Is there anything wrong with the webfarm or the code? Note that the webmethod has simple, straightforward insert and update statements.

    Read the article

  • Store time of the day in SQL

    - by nute
    How would you store a time or time range in SQL? It won't be a datetime, because it will just be, let's say, 4:30 PM (not January 3rd, 4:30 PM). These would be weekly or daily meetings. The queries I need are of course for display, but will later include complex queries such as avoiding conflicts in a schedule. I'd rather pick the best datatype for that now. I'm using MS SQL Server Express 2005. Thanks! Nathan
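    SQL Server 2005 has no TIME type (it arrived in 2008), so one common approach is to store minutes since midnight in a smallint; display formatting and overlap checks then become plain integer arithmetic. A sketch with illustrative names:

        CREATE TABLE meeting (
            meeting_id    INT IDENTITY PRIMARY KEY,
            start_minutes SMALLINT NOT NULL,  -- 4:30 PM = 16*60 + 30 = 990
            end_minutes   SMALLINT NOT NULL
        );

        -- Meetings that conflict with a proposed 990-1050 (4:30-5:30 PM) slot:
        SELECT meeting_id
        FROM   meeting
        WHERE  start_minutes < 1050
          AND  end_minutes   > 990;

    A datetime with a fixed dummy date works too, but the integer form makes the conflict predicate above trivial.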

    Read the article

  • What's the fastest way to scrape a lot of pages in php?

    - by Yegor
    I have a data aggregator that relies on scraping several sites and indexing their information in a way that is searchable by the user. I need to be able to scrape a vast number of pages daily, and I have run into problems using simple curl requests, which are fairly slow when executed in rapid sequence for a long time (the scraper runs 24/7, basically). Running a multi curl request in a simple while loop is fairly slow. I sped it up by doing individual curl requests in a background process, which works faster, but sooner or later the slower requests start piling up, which ends up crashing the server. Are there more efficient ways of scraping data? Perhaps command-line curl?
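    A sketch of a bounded-concurrency curl_multi loop: keep a fixed window of transfers in flight (20 here, an arbitrary number) so throughput stays high without the unbounded pile-up of background processes. The URL list is hypothetical:

        <?php
        $urls = array(/* ...pages to scrape... */);
        $maxParallel = 20;

        $mh = curl_multi_init();
        $active = 0;

        function add_url($mh, $url, &$active) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 30);       // cap slow hosts
            curl_multi_add_handle($mh, $ch);
            $active++;
        }

        while ($active < $maxParallel && $urls) {
            add_url($mh, array_shift($urls), $active);
        }

        do {
            curl_multi_exec($mh, $running);
            curl_multi_select($mh, 1.0);                 // block until something is ready
            while ($done = curl_multi_info_read($mh)) {
                $ch = $done['handle'];
                $html = curl_multi_getcontent($ch);      // hand off to the indexer here
                curl_multi_remove_handle($mh, $ch);
                curl_close($ch);
                $active--;
                if ($urls) add_url($mh, array_shift($urls), $active);  // refill the window
            }
        } while ($running > 0 || $active > 0);

        curl_multi_close($mh);

    Reusing one process with capped concurrency usually fixes both failure modes; if PHP itself becomes the bottleneck, the same pattern exists on the command line via xargs -P feeding curl.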

    Read the article

  • Server side alerts for client side app

    - by jpwagner
    Here's the scenario: users interact with an Adobe Flex web page to configure reports based on some data stored server-side. They configure their view and have THAT view emailed to them daily. I've got the report builder; the part I'm trying to figure out is how to render the report server-side and send it out as email (native Flex functionality? convert to HTML? take a screenshot? assume something is running client-side?...). Please help me with some ideas. Thanks!

    Read the article

  • Should I stick only to AWS RDS Automated Backup or DB Snapshots?

    - by James Wise
    I am using AWS RDS for MySQL. When it comes to backup, I understand that Amazon provides two types: automated backups and database (DB) snapshots. The difference is explained here: http://aws.amazon.com/rds/faqs/#23. However, I am still confused about whether I should stick to automated backups only, or use both automated backups and manual DB snapshots. What do you think, guys? What's your own setup? I have heard from others that automated backups are not reliable: some databases were unrecoverable after the DB instance crashed, and DB snapshots were what rescued them. If I take daily DB snapshots on a schedule similar to the automated backups, I am going to pay quite a lot. I hope someone can enlighten me or advise me on the right setup. Thanks. James

    Read the article

  • Unable to ssh to a Linux VM after a day

    - by jogabonito
    I have a machine running 4 VMs on it. There is one Fedora VM which is causing me some trouble. The IPs of the VMs are something like 10.100.100.*. I have a Windows PC which is on the same network, with the IP 10.100.25.77. When I reboot the Fedora VM, I am able to ping it from my Windows PC as well as use PuTTY to ssh to it. The next day, I can't ping it or ssh to it from my Windows PC. However, I can ping and ssh to the other VMs on the machine, and if I ssh to one of the other VMs, I can ping and ssh to the Fedora VM from there. If I restart it, things get back to normal and I can access it without any issues. The IP of the VM doesn't change after rebooting, and it is statically assigned. I would like to know what is causing this and how to get it fixed. As a last resort, I am thinking of running a cron job to restart the VM every night; it is not a critical server, but it is generally used occasionally in the daytime.
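    The Windows PC and the VMs are on different subnets (10.100.25.* vs 10.100.100.*), so that traffic crosses a router, and this reachable-after-boot, dead-a-day-later pattern often points at a stale ARP or neighbor-cache entry along that path rather than at the VM itself; machines on the same segment keep working because their caches stay fresh. Purely as a diagnostic sketch: next time it fails, watch whether the pings even arrive, and as a stopgap a cron job on the VM can keep the caches warm (the gateway IP is hypothetical):

        # on the Fedora VM while a ping from Windows is failing:
        tcpdump -n icmp

        # stopgap in the VM's root crontab: refresh its entries every 5 minutes
        */5 * * * * ping -c 1 10.100.100.1 > /dev/null 2>&1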

    Read the article

  • How to discard time intervals with Time Series / XYPlots using JFreeChart?

    - by Alex Arnon
    Hi All, I am building a set of chart displays, one of which is for a month display of daily trading - that is, one point of data per day (closing). Since there is no trade during weekends and holidays, I need to discard these data points. Not only that, but data points should still appear adjacent to each other, regardless of any gaps in time. This can be seen in any such chart e.g. in the 3 month graph for Nasdaq on Yahoo Finance - see how weekends are skipped. My question is: how should one correctly implement this in JFreeChart? Thanks in advance!
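    JFreeChart ships a timeline class for exactly this: SegmentedTimeline hides chosen time segments, and the remaining data points are drawn adjacently. A minimal sketch for skipping weekends (plot is assumed to be your existing XYPlot; holidays can be registered as extra exceptions):

        import org.jfree.chart.axis.DateAxis;
        import org.jfree.chart.axis.SegmentedTimeline;

        DateAxis axis = (DateAxis) plot.getDomainAxis();
        SegmentedTimeline timeline =
                SegmentedTimeline.newMondayThroughFridayTimeline();
        // timeline.addException(holidayDate);  // a java.util.Date to skip
        axis.setTimeline(timeline);

    For intraday variants there is also SegmentedTimeline.newFifteenMinuteTimeline(), which additionally hides non-trading hours.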

    Read the article

  • Rkhunter reports file properties have changed

    - by CountMurphy
    I am running a fully updated LTS copy of Ubuntu Server. Today I ran rkhunter (as I do from time to time). This is the output I got:

        Warning: The file properties have changed:
        [15:52:25] File: /bin/ps
        [15:52:25] Current hash: f22991ec93ae966c856d367f42fc3d8a484bd827
        [15:52:25] Stored hash : 1892268bf195ac118076b1b0f53e7a637eb6fbb3
        [15:52:25] Current inode: 142902 Stored inode: 130894
        [15:52:25] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:25] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:33] File: /usr/bin/ldd
        [15:52:33] Current hash: f1e2ca5aa3a28994e2cebb64c993a72b7d97b28c
        [15:52:33] Stored hash : 295d9cedb121a5e431a39a6d201ecd7ce5640497
        [15:52:33] Current inode: 2236210 Stored inode: 2234359
        [15:52:33] Current size: 5280 Stored size: 5279
        [15:52:33] Current file modification time: 1331165514 (07-Mar-2012 16:11:54)
        [15:52:33] Stored file modification time : 1295653965 (21-Jan-2011 15:52:45)

        Warning: The file properties have changed:
        [15:52:37] File: /usr/bin/pgrep
        [15:52:37] Current hash: 3eada9a96760f3e2c9111cfe32901d1432813c1d
        [15:52:37] Stored hash : ce265d0db9964b173fe5036f703a9b8d66e55df3
        [15:52:37] Current inode: 2229646 Stored inode: 2224867
        [15:52:37] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:37] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:41] File: /usr/bin/top
        [15:52:41] Current hash: 6be13737d8b0950cea2f1ae3a46d4af713dbe971
        [15:52:41] Stored hash : c7b495ecef3982eeb6f08a511861b1a1ae8775e6
        [15:52:41] Current inode: 2229629 Stored inode: 2224862
        [15:52:41] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:41] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:53] File: /usr/sbin/cron
        [15:52:53] Current hash: e783ca973f970aa8a4bf5edc670e690b33914c3d
        [15:52:53] Stored hash : 4718257a8060736b9058aed025c992f02a74a5a7
        [15:52:53] Current inode: 2224719 Stored inode: 2228839
        [15:52:54] Current file modification time: 1330965568 (05-Mar-2012 08:39:28)

    There were also a few others I left out. Has my server been rooted? I am running fail2ban and I do monitor failed ssh logins; nothing has come up. Could someone compare these hashes to their copy of Ubuntu Server (LTS)? Please tell me these are false positives...

    Edit: is there something else like rkhunter I can run for a second scan?
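    If those modification dates line up with package updates (ps, top and pgrep come from procps, ldd from libc-bin, cron from cron, all of which receive routine security updates), the flagged files can be verified against the package database and rkhunter's baseline refreshed afterwards. A sketch; chkrootkit is the usual choice for a second-opinion scan:

        # verify the flagged binaries against their packages' checksums
        sudo apt-get install debsums
        sudo debsums -c procps libc-bin cron

        # if (and only if) everything checks out, refresh rkhunter's baseline
        sudo rkhunter --propupd

        # second scanner for a second opinion
        sudo apt-get install chkrootkit
        sudo chkrootkit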

    Read the article

  • bash testing a group of directories for existence

    - by Jim Jones
    I have documents stored in a file system which includes "daily" directories, e.g. 20050610. In a bash script I want to list the files in a month's worth of these directories, so I'm running a find command: find <path>/200506* -type f >> jun2005.lst. I would like to check that this set of directories is not a null set before executing the find command. However, if I use if [ -d 200506* ] I get a "too many arguments" error. How can I get around this?
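    The error happens because the glob expands to every matching directory while test -d accepts only one argument. A sketch of the guard using compgen, a bash builtin whose -G option exits non-zero when a glob matches nothing:

        #!/bin/bash
        path=/some/path   # hypothetical; substitute your real <path>

        if compgen -G "$path/200506*" > /dev/null; then
            find "$path"/200506* -type f >> jun2005.lst
        else
            echo "no directories match 200506*" >&2
        fi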

    Read the article

  • postfix received and delivered have the same values (?)

    - by thinkingbig
    I have configured my first server (Debian with ISPConfig). Generally, I want to send bulk e-mails to our users, so I configured and turned on postfix... but... after 1 hour of sending emails I have logs like this:

        Grand Totals
          messages
            21886   received
            21883   delivered
                0   forwarded
                0   deferred
              234   bounced
                0   rejected (0%)
                0   reject warnings
                0   held
                0   discarded (0%)
           30805k   bytes received
           31280k   bytes delivered
                3   senders
                3   sending hosts/domains
            12588   recipients
                3   recipient hosts/domains

        Per-Hour Traffic Summary
            time        received  delivered  deferred  bounced  rejected
            ----------------------------------------------------------------
            0000-0100          0          0         0        0         0
            0100-0200          0          0         0        0         0
            0200-0300          0          0         0        0         0
            0300-0400          0          0         0        0         0
            0400-0500          0          0         0        0         0
            0500-0600          0          0         0        0         0
            0600-0700          0          0         0        0         0
            0700-0800          0          0         0        0         0
            0800-0900          0          0         0        0         0
            0900-1000          0          0         0        0         0
            1000-1100          0          0         0        0         0
            1100-1200          0          0         0        0         0
            1200-1300          0          0         0        0         0
            1300-1400          0          0         0        0         0
            1400-1500          0          0         0        0         0
            1500-1600      15311      15306         0      168         0
            1600-1700       6575       6577         0       66         0
            1700-1800          0          0         0        0         0
            1800-1900          0          0         0        0         0
            1900-2000          0          0         0        0         0
            2000-2100          0          0         0        0         0
            2100-2200          0          0         0        0         0
            2200-2300          0          0         0        0         0
            2300-2400          0          0         0        0         0

        Host/Domain Summary: Message Delivery
          sent cnt   bytes   defers  avg dly  max dly  host/domain
          21521     30353k        0    3.4 m   15.5 m  wp.pl
            355       919k        0   54.9 s   13.0 m  mysenderdomainexample.pl
              7       8477        0    1.7 s    1.9 s  prokonto.pl

        Host/Domain Summary: Messages Received
          msg cnt    bytes   host/domain
          21879     30786k   mysenderdomainexample.pl
              5      16196   mx4.wp.pl
              1       3200   mx3.wp.pl

        Senders by message count
          21783   [email protected]
             96   [email protected]
              6   from=<

    So, my questions are:

    1) Why do received and delivered have (approximately) the same values?
    2) How can I check whether an email has been delivered?
    3) How do I change the default "root" and "www-data" sender (From / Return-Path) to another user? I have changed this in the script, but postfix ignores the scripted values and sends every mail from root (we have PHP send crons in /etc/crontab).
    4) Why have approximately 100% of received mails been addressed to my sending host?

    Waiting for your response. Regards, TB
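    For question 2, postfix logs one line per recipient with the delivery result; a sketch (the address is illustrative, and on Debian the log is typically /var/log/mail.log):

        # status=sent means the next hop accepted the message;
        # status=bounced / status=deferred show failures and retries
        grep 'status=' /var/log/mail.log | grep 'to=<someone@example.com>'

    That per-recipient accounting is also why received and delivered track each other here: every message the server accepts from the local sender is counted once as received and again as delivered when it is handed to the remote host.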

    Read the article

  • download and process a file by ftp at set intervals, with error handling, rescheduling and status messages

    - by compound eye
    I want to download a data file from a remote ftp server to my machine at regular intervals. Once the file is downloaded, I want to call another script which will process the file. My development machine is Mac OS X; the eventual deployment environment is Linux. What would be the stock-standard way to automate this? I know I can use cron to schedule curl to download the file, and to run a script that processes it, at regular intervals; and I know I could write a slightly more complex script or an application that would do this and add error handling, rescheduling and status emails. But one of my requirements for this project is to write as little custom code as possible; instead, I should try to use standard, tried and true existing tools, and if I do have to write code, to write the most straightforward code possible. The reason for this is that the code will potentially be installed on a large number of machines, all of which will need to be tweaked, customised and maintained by different people long after I am gone from the project, so the intention is to use well documented, well supported tools as much as possible. This seems such a common task that there must be tools and scripts all over the internet, written by people who have carefully considered everything that could possibly go wrong when you need to download and process a file from a remote server at regular intervals, with error handling, rescheduling and status messages. Is that what Expect is for? What would you recommend? (The system will be downloading weather prediction data every six hours, so that the system can prepare in the event of bad weather warnings.)
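    A sketch using only stock tools: cron supplies the schedule and the status mail (cron e-mails any output of a job to MAILTO), and curl handles transfer-level retries itself. Host, paths and addresses are hypothetical:

        # crontab: every six hours; any output gets mailed to MAILTO
        MAILTO=ops@example.com
        0 */6 * * * /usr/local/bin/fetch-forecast.sh

        #!/bin/sh
        # fetch-forecast.sh: download, then process only on success
        set -e                                    # abort on the first failure
        curl --silent --show-error \
             --retry 5 --retry-delay 60 \
             -o /var/data/forecast.tmp \
             ftp://ftp.example.com/forecast.dat
        mv /var/data/forecast.tmp /var/data/forecast.dat   # atomic hand-off
        /usr/local/bin/process-forecast /var/data/forecast.dat

    With set -e and no output on success, a silent run means all is well, and any failure produces a status email with the error text, with no custom retry logic beyond curl's own.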

    Read the article

  • Is there a macro or a way to conditionally copy rows from one or more worksheets to another in Excel 2007

    - by marison
    I'm pulling a list of data from two or more Excel files into one, subject to a specific condition. For example:

        File 1
        Date       Project   ID           Engineer
        8/2/2008   XYZ       T0908-5555   JS
        9/4/2008   ABC       T0908-6666   DF
        9/5/2008   ZZZ       T0908-7777   TS
        9/4/2008   ABC       T0908-1111   DF
        9/5/2008   POR       T0908-7777   MS
        9/4/2008   ABC       T0908-2222   DD

        File 2
        Date       Project   ID           Engineer
        8/2/2008   ABC       T1908-5555   JS
        9/4/2008   XYZ       T1908-6666   DF
        9/5/2008   ABC       T1908-7777   TS
        9/4/2008   ZZZ       T1908-1111   DF
        9/5/2008   POR       T1908-7777   MS
        9/4/2008   ABC       T1908-2222   DD

    I want the data from both File 1 and File 2 in a new Excel file, with only those rows whose Project is "ABC". The paths of File 1 and File 2 will change on a daily basis. Kindly help.
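    A minimal VBA sketch (paths and sheet names are hypothetical): open each source workbook, copy rows whose Project column equals "ABC" into a master sheet, and keep the paths in one place so they can change daily:

        Sub PullABC()
            ' Sketch: copy rows with Project = "ABC" from each listed workbook.
            Dim paths As Variant, p As Variant
            Dim src As Workbook, ws As Worksheet, dest As Worksheet
            Dim r As Long, outRow As Long

            paths = Array("C:\data\file1.xlsx", "C:\data\file2.xlsx") ' hypothetical
            Set dest = ThisWorkbook.Sheets("Master")
            outRow = 2                                   ' row 1 holds the headers

            For Each p In paths
                Set src = Workbooks.Open(CStr(p), ReadOnly:=True)
                Set ws = src.Sheets(1)
                For r = 2 To ws.Cells(ws.Rows.Count, 1).End(xlUp).Row
                    If ws.Cells(r, 2).Value = "ABC" Then ' column B = Project
                        ws.Rows(r).Copy dest.Rows(outRow)
                        outRow = outRow + 1
                    End If
                Next r
                src.Close SaveChanges:=False
            Next p
        End Sub

    Instead of hard-coding the Array(...), the paths could be read from cells on a settings sheet, so the daily change needs no code edit.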

    Read the article

  • Need help writing a recurring task scheduler.

    - by Sisiutl
    I need to write a tool that will run a recurring task on a user configurable schedule. I'll write it in C# 3.5 and it will run on XP, Windows 7, or Windows Server 2008. The tasks take about 20 minutes to complete. The users will probably want to set up several configurations: e.g, daily, weekly, and monthly cycles. Using Task Scheduler is not an option. The user will schedule recurrences through an interface similar to Outlook's recurring appointment dialog. Once they set up the schedule they will start it up and it should sit in the system tray and kick off its tasks at the appointed times, then send mail to indicate it has finished. What is the best way to write this so that it doesn't eat up resources, lock up the host, or otherwise misbehave?
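    A minimal C# sketch of the core loop, with illustrative names rather than a full implementation: keep a "next due" time per schedule, sleep on one thread until the earliest one, run the task off-thread, then compute the next occurrence:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;

        enum Recurrence { Daily, Weekly, Monthly }

        class ScheduledTask
        {
            public Recurrence Recurrence;
            public DateTime NextDue;
            public Action Work;

            public void Advance()   // compute the next occurrence
            {
                switch (Recurrence)
                {
                    case Recurrence.Daily:   NextDue = NextDue.AddDays(1);   break;
                    case Recurrence.Weekly:  NextDue = NextDue.AddDays(7);   break;
                    case Recurrence.Monthly: NextDue = NextDue.AddMonths(1); break;
                }
            }
        }

        class Scheduler
        {
            readonly List<ScheduledTask> tasks = new List<ScheduledTask>();

            public void Run()
            {
                while (true)
                {
                    ScheduledTask next = tasks.OrderBy(t => t.NextDue).First();
                    TimeSpan wait = next.NextDue - DateTime.Now;
                    if (wait > TimeSpan.Zero)
                        Thread.Sleep(wait);   // truly idle between runs: no polling
                    ThreadPool.QueueUserWorkItem(_ => next.Work()); // 20-min job off this thread
                    next.Advance();
                }
            }
        }

    Sleeping until the next due time (rather than waking on a short timer to poll) is what keeps it from eating resources, and queuing the work to the thread pool keeps the tray UI responsive during the 20-minute runs; the mail-on-completion step would go at the end of Work.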

    Read the article

  • Scalable (half-million files) version control system

    - by hashable
    We use SVN for our source-code revision control and are experimenting with using it for non-source-code files. We are working with a large set (300-500k) of short (1-4kB) text files that will be updated on a regular basis and need version control. We tried using SVN in flat-file mode, and it struggled to handle the first commit (500k files checked in), taking about 36 hours. On a daily basis, we need the system to be able to handle 10k modified files per commit transaction in a short time (<5 min). My questions:

    1. Is SVN the right solution for my purpose? The initial speed seems too slow for practical use.
    2. If yes, is there a particular svn server implementation that is fast? (We are currently using the gnu/linux default svn server and command line client.)
    3. If no, what are the best f/oss/commercial alternatives?

    Thanks

    Read the article

  • Cannot destroy ZFS snapshot: dataset already exists

    - by Morven
    I have a server (T5220, though I doubt it matters) running Solaris 10 8/07, and I have a ZFS pool, "mysql", on internal disk. Within it I have a filesystem "mysql/data/4.1.12", which I snapshot hourly with a script from cron. I have one snapshot, created as one of those hourly snaps, that will not destroy. I have renamed it out of sequence to be "mysql/data/4.1.12@wibble" so that my script will not try and fail to destroy it, but it was originally within the sequence, though I doubt that matters. It renames successfully. The snapshot can be successfully navigated and read from through the .zfs/snapshots directory. It has no clones based on it. Trying to destroy it does this:

        (265) root@web-mysql4:/# zfs destroy mysql/data/4.1.12@wibble
        cannot destroy 'mysql/data/4.1.12@wibble': dataset already exists
        (266) root@web-mysql4:/#

    which is apparently nonsensical: of course it already exists, that's the point! Anyone seen anything like this before? Web searches show nothing obviously similar. I can provide patches installed if necessary.
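    The usual cause of this confusing message is a clone somewhere in the pool whose origin is the snapshot: zfs refuses the destroy because a dependent dataset exists, but names neither the dependent nor the reason. A diagnostic sketch (the clone name is hypothetical):

        # any clone built from the stuck snapshot reports it as its origin
        zfs get -r origin mysql | grep wibble

        # if one shows up, remove the dependency first, then retry
        zfs destroy mysql/data/someclone
        zfs destroy mysql/data/4.1.12@wibble

    If the clone's data must be kept, zfs promote it instead of destroying it; promotion moves the origin snapshot under the clone's name, after which it can be destroyed there.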

    Read the article

  • Any active Bold for Delphi users ?

    - by Roland Bengtsson
    What are you using as a persistence framework when programming in Delphi? As an application grows, it soon becomes really complicated to handle the model in SQL. Bold is a persistence framework for Delphi Win32 that really deserves more attention. I use it daily, and using OCL instead of SQL to get data from the database saves a lot of time and debugging. When the model is changed, Bold translates this to an SQL script and changes the database. EDIT: For those that are interested in Bold for Delphi, I have spent this evening creating a site on Google about it. I'm not a guru in HTML, so the design is maybe not so exciting, but I want comments and reactions about the site. You can leave comments in this thread or at the bottom of the subpage. And the address is... http://sites.google.com/site/boldfordelphi/

    Read the article

  • Unique number generation with JavaServer Faces

    - by Buddhika Ariyaratne
    I am developing an application for a medical channelling centre where multiple users reserve bookings for doctors, using JSF and JPA. A sequence number is unique to the doctor, date and session. I tried to get a unique sequence number by counting the previous bookings and adding one, but if two requests come at the same time, two bookings get the same number, which breaks the functionality. How can I get a unique number in this case? Can I use an application-wide bean to generate it? (I thought it was not practical to get the unique number from a database sequence, as there are several doctors and sessions, and each day's booking numbers have to start afresh.)
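    A database-backed sketch that stays correct under concurrent requests (and survives multiple application instances, unlike an application-wide bean): keep one counter row per (doctor, date, session) and increment it under a pessimistic lock, so simultaneous bookings serialize on that row. Entity and field names are hypothetical:

        import javax.persistence.EntityManager;
        import javax.persistence.LockModeType;

        // Sketch: SequenceCounter is a hypothetical entity keyed by
        // (doctor, date, session) with a numeric lastNumber field.
        public class BookingNumbers {

            // Must be called inside an active transaction.
            public int nextBookingNumber(EntityManager em, Long counterId) {
                // PESSIMISTIC_WRITE maps to SELECT ... FOR UPDATE on most databases,
                // so a concurrent request blocks here instead of reading the same value.
                SequenceCounter c = em.find(SequenceCounter.class, counterId,
                                            LockModeType.PESSIMISTIC_WRITE);
                c.setLastNumber(c.getLastNumber() + 1);
                return c.getLastNumber();
            }
        }

    A belt-and-braces variant adds a unique constraint on (doctor, date, session, number) so that, even if a bug slips through, the second duplicate insert fails instead of silently double-booking.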

    Read the article

  • tftpd starts randomly

    - by Mutant
    A few days ago my Little Snitch filter started popping up alerts for tftpd. I'd never seen this before, so I immediately started freaking out, thinking my Mac had been compromised. I can't find anything unusual on the system, and the process usually dies before I can trace it (Little Snitch never allowed the connection, just left the popup up). I finally caught it once, and found this:

        [10:32]: sudo lsof -nlP | fgrep tftp
        Password:
        tftpd   1924 18446744  cwd   DIR   1,3        1326          2  /
        tftpd   1924 18446744  txt   REG   1,3       29856  163979456  /usr/libexec/tftpd
        tftpd   1924 18446744  txt   REG   1,3      600576  163686622  /usr/lib/dyld
        tftpd   1924 18446744  txt   REG   1,3   303300608  189014898  /private/var/db/dyld/dyld_shared_cache_x86_64
        tftpd   1924 18446744  0u    IPv4  0x34a76100fcbb06e3  0t0  UDP *:55818
        tftpd   1924 18446744  2u    IPv4  0x34a76100f1113c53  0t0  UDP *:69

        [10:32]: ps ax | fgrep 1924
         1924   ??  S      0:00.00 /usr/libexec/tftpd -i /private/tftpboot
         1949 s000  S+     0:00.00 fgrep 1924

    For the life of me I can't figure out what is starting this. Nothing in cron, LaunchDaemons, etc. Google searches haven't yielded much either. The connection IP is different each time. So my question is: has anyone seen anything like this before?
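    On OS X, tftpd is an inetd-style on-demand job: launchd holds UDP port 69 and spawns /usr/libexec/tftpd only when something on the network sends a packet to it, which is why the process appears "randomly" and dies again, and why the remote IP differs each time. A sketch of how to check and disable it (this is the stock plist path):

        # is the tftp job loaded at all?
        sudo launchctl list | grep tftp

        # inspect the stock on-demand definition
        cat /System/Library/LaunchDaemons/tftp.plist

        # unload it persistently so nothing can wake it up
        sudo launchctl unload -w /System/Library/LaunchDaemons/tftp.plist

    The plist ships disabled by default, so if it is loaded, something (an installer, a config tool) enabled it; the -w flag writes the disabled state back so it stays off across reboots.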

    Read the article
