Search Results

Search found 19555 results on 783 pages for 'job performance'.


  • SQL Live Monitor

    - by TiborKaraszi
    I just found this one out there and wanted to share it. It connects to an instance and shows you a bunch of figures. Nothing you can't extract yourself with SQL queries, but sometimes it is just nice to have one tool that is very easy to use. Here's what it looks like when connecting to an instance with no load on it: As you can see, there are some hyperlinked pages as well, and there are also some interesting options (like logging to CSV or for PAL analysis) under the "Option" button. One more thing...(read more)

    Read the article

  • Static vs. dynamic memory allocation - lots of constant objects, only small part of them used at runtime

    - by k29
    Here are two options:

    Option 1:

        enum QuizCategory {
            CATEGORY_1(new MyCollection<Question>()
                .add(Question.QUESTION_A)
                .add(Question.QUESTION_B)
                .add...),
            CATEGORY_2(new MyCollection<Question>()
                .add(Question.QUESTION_B)
                .add(Question.QUESTION_C)
                .add...),
            ...;

            public MyCollection<Question> collection;

            private QuizCategory(MyCollection<Question> collection) {
                this.collection = collection;
            }

            public Question getRandom() {
                return collection.getRandomQuestion();
            }
        }

    Option 2:

        enum QuizCategory2 {
            CATEGORY_1 {
                @Override
                protected MyCollection<Question> populateWithQuestions() {
                    return new MyCollection<Question>()
                        .add(Question.QUESTION_A)
                        .add(Question.QUESTION_B)
                        .add...;
                }
            },
            CATEGORY_2 {
                @Override
                protected MyCollection<Question> populateWithQuestions() {
                    return new MyCollection<Question>()
                        .add(Question.QUESTION_B)
                        .add(Question.QUESTION_C)
                        .add...;
                }
            };

            public Question getRandom() {
                MyCollection<Question> collection = populateWithQuestions();
                return collection.getRandomQuestion();
            }

            protected abstract MyCollection<Question> populateWithQuestions();
        }

    There will be around 1000 categories, each containing 10-300 questions (100 on average). At runtime typically only 10 categories and 30 questions will be used. Each question is itself an enum constant (with its fields and methods). I'm trying to decide between those two options in the mobile application context. I haven't done any measurements, since I have yet to write the questions, and I would like to gather more information before committing to one option or the other. As far as I understand: (a) Option 1 will perform better, since there will be no need to populate the collection and then garbage-collect the questions; (b) Option 1 will require extra memory: 1000 categories x 100 questions x 4 bytes per reference = 400 KB, which is not significant. So I'm leaning toward Option 1, but I wondered whether I'm correct in my assumptions and not missing something important. Perhaps someone has faced a similar dilemma? Or perhaps it doesn't actually matter that much?
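    A third variant worth weighing (my addition, not part of the original question) combines the two: keep Option 2's per-constant populateWithQuestions() but cache the result on first use, so the roughly 10 categories actually touched at runtime each allocate once, and the other ~990 never allocate at all. A minimal sketch, assuming the MyCollection and Question types from the question:

        enum QuizCategory3 {
            CATEGORY_1 {
                @Override
                protected MyCollection<Question> populateWithQuestions() {
                    return new MyCollection<Question>()
                        .add(Question.QUESTION_A)
                        .add(Question.QUESTION_B);
                }
            };

            // Built on first access and cached; categories never used at
            // runtime never allocate their question lists.
            private MyCollection<Question> collection;

            public synchronized Question getRandom() {
                if (collection == null) {
                    collection = populateWithQuestions();
                }
                return collection.getRandomQuestion();
            }

            protected abstract MyCollection<Question> populateWithQuestions();
        }

    This avoids both costs weighed above: the collections are not rebuilt and garbage-collected on every call (the concern with Option 2), and the references for the ~990 unused categories are never allocated (the memory cost of Option 1).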

    Read the article

  • Ubuntu 12.10 is slow and some programs go to a non-responding state

    - by user99631
    Ubuntu 12.10 is very slow and I get a lot of non-responding applications. I was using Skype; whenever I open it, it goes into a non-responding state, then back to normal after a while. Even in the Software Centre, the system process eats the CPU. I don't know if Compiz is the problem, but issuing the command compiz --replace restores the applications from the non-responding state. CPU: Intel Celeron D 3.4 GHz. RAM: 1 GB. VGA: Intel G45. Please help.

    Read the article

  • Why use Fragments?

    - by ahmed_khan_89
    I have read the documentation and some other questions' threads about this topic and I don't really feel convinced; I don't clearly see the limits of use of this technique. Fragments are now seen as a best practice: every Activity should basically be a support for one or more Fragments, and should not call a layout directly. Fragments are created in order to: (1) allow the Activity to use many fragments, to switch between them, and to reuse these units == but the Fragment is totally dependent on the Context of an Activity, so if I need something generic that I can reuse and handle in many Activities, I can create my own custom layouts or Views, and I won't care about the additional layer of complexity that fragments would add; (2) handle different resolutions better == OK for tablets/phones, in the case where we can show two (or more) fragments in the same Activity on tablets, and one at a time on phones (a sketch of this case follows below). But why would I always use fragments?; (3) handle callbacks to navigate between Fragments (i.e. if the user is logged in I show one fragment, otherwise I show another) == just look at how many bugs the Facebook SDK login has because of this to see how problematic it really is; (4) "considering that an Android application is based on Activities, adding other life cycles inside the Activity is a better way to design an application; the modules, the scenarios, the data management and the connectivity would be better designed that way" == this is the answer of someone who is used to seeing the Android SDK and Android Framework with a Fragments mindset. I don't think it's wrong, but I am not sure it will give good results, and it is really abstract. So why would I complicate my life, coding more, by using them always? Otherwise, why is it a best practice if it's just a tool for some cases? What are these cases?
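    For point (2) above, the usual tablet/phone pattern looks roughly like the following (a minimal sketch; DetailFragment, DetailActivity and the detail_container layout id are hypothetical names, not from the question). The same Activity checks at runtime whether the inflated layout has a second pane:

        // In the Activity's onCreate(), after setContentView(...),
        // assuming a FragmentActivity from the support library.
        // A hypothetical res/layout-large/main.xml defines detail_container;
        // the phone layout does not.
        if (findViewById(R.id.detail_container) != null) {
            // Tablet: show the detail fragment beside the list in one Activity.
            getSupportFragmentManager().beginTransaction()
                .replace(R.id.detail_container, new DetailFragment())
                .commit();
        } else {
            // Phone: fall back to a separate full-screen Activity.
            startActivity(new Intent(this, DetailActivity.class));
        }

    Whether that flexibility justifies the extra lifecycle complexity on projects that never need two panes is exactly what the question is asking.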

    Read the article

  • Does low latency code sometimes have to be "ugly"?

    - by user997112
    (This is mainly aimed at those who have specific knowledge of low-latency systems, to avoid people just answering with unsubstantiated opinions.) Do you feel there is a trade-off between writing "nice" object-oriented code and writing very fast low-latency code? For instance, avoiding virtual functions in C++ and the overhead of polymorphism, and rewriting code so that it looks nasty but is very fast? It stands to reason: who cares if it looks ugly (so long as it's maintainable) if you need speed? I would be interested to hear from people who have worked in such areas.
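    To make the virtual-function point concrete, here is one common flavour of that trade-off (a minimal illustrative sketch, not taken from any particular system): replacing runtime polymorphism with compile-time dispatch via the curiously recurring template pattern, so the per-event call can be inlined instead of going through the vtable.

        #include <cstdio>

        // Runtime polymorphism: every on_quote() call is an indirect call
        // through the vtable and usually cannot be inlined.
        struct HandlerBase {
            virtual ~HandlerBase() = default;
            virtual void on_quote(double px) = 0;
        };

        // Compile-time polymorphism (CRTP): the target is known at compile
        // time, so the call can be inlined into the hot loop.
        template <typename Derived>
        struct Handler {
            void on_quote(double px) {
                static_cast<Derived*>(this)->do_on_quote(px);
            }
        };

        struct PrintHandler : Handler<PrintHandler> {
            void do_on_quote(double px) { std::printf("quote %f\n", px); }
        };

        template <typename H>
        void feed(H& handler) {
            for (int i = 0; i < 3; ++i) {
                handler.on_quote(100.0 + i); // direct, inlinable call
            }
        }

        int main() {
            PrintHandler h;
            feed(h);
            return 0;
        }

    The CRTP version is arguably the "uglier" of the two, which is precisely the tension the question describes.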

    Read the article

  • Rails problem with Delayed_Job and Active Record

    - by Michael Waxman
    I'm using delayed_job to grab a Twitter user's data from the API, but it's not saving it in the model for some reason! Please help! (Code below.)

        class BandJob < Struct.new(:band_id, :band_username) # parameters
          def perform
            require 'json'
            require 'open-uri'
            band = Band.find_by_id(band_id)
            t = JSON.parse(open("http://twitter.com/users/show/#{band_username}.json").read)
            band.screen_name = t['screen_name']
            band.profile_background_image = t['profile_background_image_url']
            band.url = 'http://' + band_username + '.com'
            band.save!
          end
        end

    To clarify, I'm actually not getting any errors; it's just not saving. Here's what my log looks like:

        * [JOB] acquiring lock on BandJob
        Delayed::Job Update (3.1ms)  UPDATE "delayed_jobs" SET locked_at = '2009-11-09 18:59:45', locked_by = 'host:dhcp128036151228.central.yale.edu pid:2864' WHERE (id = 10442 and (locked_at is null or locked_at < '2009-11-09 14:59:45') and (run_at <= '2009-11-09 18:59:45'))
        Band Load (1.5ms)  SELECT * FROM "bands" WHERE ("bands"."id" = 34) LIMIT 1
        Band Update (0.6ms)  UPDATE "bands" SET "updated_at" = '2009-11-09 18:59:45', "profile_background_image" = 'http://a3.twimg.com/profile_background_images/38193417/fbtile4.jpg', "url" = 'http://Coldplay.com', "screen_name" = 'coldplay' WHERE "id" = 34
        Delayed::Job Destroy (0.5ms)  DELETE FROM "delayed_jobs" WHERE "id" = 10442
        * [JOB] BandJob completed after 0.5448
        1 jobs processed at 1.8011 j/s, 0 failed ...

    Thanks!
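    One thing worth noting: the log above does show a Band Update statement running, so the worker appears to be writing the row. A minimal debugging sketch (the logger calls are illustrative additions, not from the original code) is to make the job loudly report validation failures and the record state it sees:

        def perform
          require 'json'
          require 'open-uri'
          band = Band.find_by_id(band_id)
          raise "No band with id #{band_id}" if band.nil?
          t = JSON.parse(open("http://twitter.com/users/show/#{band_username}.json").read)
          band.screen_name = t['screen_name']
          band.profile_background_image = t['profile_background_image_url']
          band.url = "http://#{band_username}.com"
          if band.save
            Rails.logger.info "BandJob saved band #{band.id}: #{band.attributes.inspect}"
          else
            # save! would raise, but logging the errors makes silent
            # validation failures visible in the job log
            Rails.logger.error "BandJob failed: #{band.errors.full_messages.join(', ')}"
          end
        end

    If that logs a successful save, the next things to check would be whether the worker and the console are pointed at the same database (the same RAILS_ENV) and whether those columns actually exist on the bands table.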

    Read the article

  • Huge performance difference between two web servers, odd behavior seen using process monitor

    - by Francis Gagnon
    We have two ColdFusion servers that have a huge performance difference running the exact same code on the exact same input data. The code in question instantiates a large number of CFCs (ColdFusion Components, which are similar to objects in OOP languages). I compared the two servers by running Process Monitor and then calling the problematic code on both machines. I learned two things. First, ColdFusion opens CFC files every time it instantiates an object. Both servers do this, so it cannot be the cause of the performance difference. Second, the fast server opens the CFC files directly, while the server with the performance problem seems to navigate its way through the path until it reaches the desired CFC file. It does this for every file, even the ones it has previously loaded, and because the code instantiates so many CFCs it becomes very slow. See below the partial Procmon traces that show this behavior. It can take over 60 seconds for the slow server to do what the fast one does in 2 seconds. Can anyone tell me what causes this behavior? Is it a ColdFusion setting? Since ColdFusion runs on top of Java, is it a Java setting? Is it an OS option? The fast server is running Windows XP and I think the slow server is running Windows Server 2003. Bonus question: ColdFusion doesn't seem to perform any READ FILE operations on any of the CFC or CFM files. How can this be?

    Sample of the fast server opening CFC files:

        11:25:14.5588975 jrun.exe QueryOpen C:\CF\wwwroot\APP\com\HtmlUtils.cfc
        11:25:14.5592758 jrun.exe CreateFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
        11:25:14.5595024 jrun.exe QueryBasicInformationFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
        11:25:14.5595940 jrun.exe CloseFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
        11:25:14.5599628 jrun.exe CreateFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
        11:25:14.5601600 jrun.exe QueryBasicInformationFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
        11:25:14.5602463 jrun.exe CloseFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc

    Equivalent sample of the slow server opening CFC files:

        11:15:08.1249230 jrun.exe CreateFile D:\
        11:15:08.1250100 jrun.exe QueryDirectory D:\org
        11:15:08.1252852 jrun.exe CloseFile D:\
        11:15:08.1259670 jrun.exe CreateFile D:\org
        11:15:08.1260319 jrun.exe QueryDirectory D:\org\cli
        11:15:08.1260769 jrun.exe CloseFile D:\org
        11:15:08.1269451 jrun.exe CreateFile D:\org\cli
        11:15:08.1270613 jrun.exe QueryDirectory D:\org\cli\cpn
        11:15:08.1271140 jrun.exe CloseFile D:\org\cli
        11:15:08.1279312 jrun.exe CreateFile D:\org\cli\cpn
        11:15:08.1280086 jrun.exe QueryDirectory D:\org\cli\cpn\APP
        11:15:08.1280789 jrun.exe CloseFile D:\org\cli\cpn
        11:15:08.1291034 jrun.exe CreateFile D:\org\cli\cpn\APP
        11:15:08.1291709 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com
        11:15:08.1292224 jrun.exe CloseFile D:\org\cli\cpn\APP
        11:15:08.1300568 jrun.exe CreateFile D:\org\cli\cpn\APP\com
        11:15:08.1301321 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com\HtmlUtils.cfc
        11:15:08.1301843 jrun.exe CloseFile D:\org\cli\cpn\APP\com
        11:15:08.1312049 jrun.exe CreateFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
        11:15:08.1314409 jrun.exe QueryBasicInformationFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
        11:15:08.1314633 jrun.exe CloseFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
        11:15:08.1315881 jrun.exe CreateFile D:\
        11:15:08.1316379 jrun.exe QueryDirectory D:\org
        11:15:08.1316926 jrun.exe CloseFile D:\
        11:15:08.1330951 jrun.exe CreateFile D:\org
        11:15:08.1338656 jrun.exe QueryDirectory D:\org\cli
        11:15:08.1339118 jrun.exe CloseFile D:\org
        11:15:08.1526468 jrun.exe CreateFile D:\org\cli
        11:15:08.1527295 jrun.exe QueryDirectory D:\org\cli\cpn
        11:15:08.1527989 jrun.exe CloseFile D:\org\cli
        11:15:08.1531977 jrun.exe CreateFile D:\org\cli\cpn
        11:15:08.1532589 jrun.exe QueryDirectory D:\org\cli\cpn\APP
        11:15:08.1533575 jrun.exe CloseFile D:\org\cli\cpn
        11:15:08.1538457 jrun.exe CreateFile D:\org\cli\cpn\APP
        11:15:08.1539083 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com
        11:15:08.1539553 jrun.exe CloseFile D:\org\cli\cpn\APP
        11:15:08.1544126 jrun.exe CreateFile D:\org\cli\cpn\APP\com
        11:15:08.1544980 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com\HtmlUtils.cfc
        11:15:08.1545482 jrun.exe CloseFile D:\org\cli\cpn\APP\com
        11:15:08.1551034 jrun.exe CreateFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
        11:15:08.1552878 jrun.exe QueryBasicInformationFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
        11:15:08.1553044 jrun.exe CloseFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc

    Thanks

    Read the article

  • Expected IOPS for log writing on PS6000X SAN?

    - by dssz
    The customer is experiencing poor Sybase ASE 15 performance on a PS6000X SAN with 16 x 450 GB 10K drives in RAID-50. The server is a Dell R710 running Windows Server 2003 R2 64-bit on ESX 4.0.0 build 256968. I've used sqlio to benchmark the sequential write performance of 4 KB blocks on the drive:

        sqlio -kW -t1 -s600 -dE -o1 -fsequential -b4 -BH -LS sqliotestfile.dat

    The result is 1900 IOPS. However, when Sybase is running a sustained workload of small inserts, SAN HQ shows a consistent 590 IOPS (and 100% 4K write activity). It also shows that the write latency increases to 1.2 ms from <1 ms. Monitoring and tests in Sybase demonstrate that the performance problem is IO-related, and in particular there is a lot of wait time writing to the log. The SAN indicates that write caching is enabled. What IOPS should the SAN be capable of for 4K sequential write activity? Also, with write caching enabled, shouldn't the controller be batching up the 4K writes into something more efficient? Any tips on Sybase on ESX would also be appreciated.
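    As a back-of-envelope sanity check (my own arithmetic, not vendor figures): a 10K spindle is commonly rated at roughly 140 random IOPS, so 16 drives give about 16 x 140 = 2,240 raw IOPS; dividing by the RAID-5 write penalty of 4 that RAID-50 imposes on small writes gives about 560 front-end write IOPS, which is strikingly close to the observed 590. If that reading is right, the log writes are effectively being serviced at random-write cost by the spindles rather than being absorbed and coalesced by the controller cache, which is exactly the question the last paragraph raises.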

    Read the article

  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry list of reading material to help me straighten this out so I can go back to .NET programming.

    Current storage system: We're running RAID-5 + hotspare (8 x 500 GB spindles) on a PERC 6/i in a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1 x 2 TB + 1 x 800 GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.

    Our applications: We have an SBS server as well as a minor (2 x 50 GB, but growing at 10 GB/month) database server... The application that lives on the database VM is CPU- and I/O-intense; it's a database-churning exercise mixed in with a lot of computation on the data (fixing that performance is what I'm supposed to be working on)...

    Performance issue: When I do a backup, restore, or worse (copy a backup from one VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD situation would be quite good since you have PCI-X bandwidth, but the system-wide slowdown is killing productivity.

    Questions: What should I do to make an intelligent decision about NAS vs RAID vs SAN vs DASD? Are there sweet spots/ugly spots in the storage setup? Can you use an SSD PCI-X card in ESX for the tempdb? Good/bad idea? Is there any way to "share" some image in a copy-on-write fashion? Most of the "backup-copy-restore" is to "put a clean image on the dev boxes"; if I could have them "share" the master image, the "big copy" (2 x 50 GB) would only need to be done once per week instead of once per dev per week. [Runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box.]

    Read the article

  • Building a PC, advice on SSD/Hybrid Hard Drives

    - by Jamie Hartnoll
    I am looking at building a new PC. It's mainly for office (graphics-heavy) use and programming, and I'm looking for good performance when opening and closing programs and files, as well as a fast boot. I plan to have three primary hard drives: (1) Windows 7; (2) programs (Photoshop etc.); (3) current files. (There'll also be a large-capacity backup drive, but this will be the Seagate drive I already have.) So, my question is: looking at standard "old-fashioned" hard drives and SSDs, there's obviously a massive price difference. I have been looking at drives like this: http://www.ebuyer.com/268693-corsair-120gb-force-3-ssd-cssd-f120gb3-bk-cssd-f120gb3-bk and this: http://www.ebuyer.com/321969-momentus-xt-750gb-sata-2-5in-7200rpm-hybrid-8gb-ssd-in-st750lx003 Having no experience with either, I don't know which is the most efficient thing to go for. Clearly the SSD will have better performance, but if, for example, I had an SSD for Windows (say about 100 GB), that would clearly give me the boot speed I want, and then I guess my real questions are: (1) If I were to buy one more SSD, would it give the greatest improvement over standard performance if used to store programs, or currently used files? (2) Given that the OS is on an SSD, should I skip the three-drive setup and instead partition that hybrid drive to store programs and currently used files? Obviously, option two is cheaper and option one could cause me storage issues, but that's when I can dump files I am not currently using onto another drive. Anyway, I am open to suggestions... so what do you suggest?!

    Read the article

  • Would an array of SSD drives be able to successfully substitute for system memory?

    - by Florin Mircea
    I watched a few videos trying to answer this. This video (youtube.com/watch?v=eULFf6F5Ri8) shows a bunch of guys stacking 24 SSDs and reaching a peak of around 2 GB/s read/write. That's under the limit of the worst DDR3 in this list (memorybenchmark.net/write_ddr3_amd.html), which shows DDR3 memory write performance varying from 2.78 to 6.55 GB per second, but that video is over 3 years old. This video (youtube.com/watch?v=27GmBzQWwP0) shows a more optimistic situation, but for PCI-E SSD drives: 5 drives peaking at around 4 GB/s. And another video shows that stacking up more than 3 SSDs doesn't realistically offer substantial added performance. This, plus the fact that in all benchmarks the drives do quite poorly when dealing with small files (5 KB file read/write averaging from 10 MB/s to around 30-40 MB/s), as opposed to how native memory handles such files, seems to indicate a definite NO to this question. Also, the write life cycle is indeed limited and the drives might wear out quickly, as kindly pointed out by paddy. However, I wanted to get more opinions on this. Would it be possible to at least obtain current memory performance with SSDs in RAID 0? And if so, in what circumstances? I am assuming this configuration would be used with a Windows OS whose memory pagefile is resident on that stack of SSDs, thus making it very fast to work with.

    Read the article

  • WAMP running extremely slow on Windows 7

    - by JavaCake
    After two days of a tough fight trying to figure out what the problem is with my Windows 7 32-bit machine at work, I have nearly given up. The issue is that pages load extremely slowly, both when accessed locally (127.0.0.1) and from another computer on the intranet. First, to explain the system:

    WAMP version: Apache 2.2.22 - MySQL 5.5.24 - PHP 5.4.3
    XDebug 2.1.2
    XDC 1.5
    PhpMyAdmin 3.4.10.1
    SQLBuddy 1.3.3
    webGrind 1.0
    DocumentRoot: located on a network drive
    MySQL: InnoDB
    Pages: PHP, MySQL, AJAX etc.

    So basically, here are the changes I have made in order to get greater performance:

    Changed C:\windows\system32\drivers\etc\hosts:
        127.0.0.1 localhost
        127.0.0.1 127.0.0.1
    Modified my.ini:
        innodb_flush_log_at_trx_commit = 2
    Modified httpd.ini:
        EnableMMAP on
        EnableSendfile on
    Modified php.ini:
        realpath_cache_size = 4m

    How I measure the performance is the overall load time of the page. I run it locally on my Mac OS X machine as well (MAMP), where the front-page load time is typically 0.06 seconds, but on the Windows 7 machine it is 6-10 seconds. I have verified the load time with the developer tools in Chrome as well. Furthermore, the result is identical in XAMPP.

    Read the article

  • Getting a job in the games industry as a developer, just knowing a game engine

    - by numerical25
    I recently enrolled in a community college for game development. But I am skeptical about the curriculum. I have no experience in the gaming industry, so I wouldn't be able to tell whether it's a good investment or not. So I am asking you. I don't want to get too much into the details of all the classes I am taking, so I will try to be brief. By the time I graduate, I should have an understanding of how a game engine works. I will be working with the Unreal Engine to develop a multiplayer game from scratch; in the process of my final project, I will learn how to work within the Unreal Engine, learn Python and learn how to use its API to connect to a remote server and build game mechanics. Overall I will also receive an associate's degree in game development. I learn C++ but not C. The director said he was trying to implement C in the program as well. What I notice is that I will not learn how to build a 3D game engine from scratch. They do not teach any artificial intelligence (AI). I will not learn how to work with the graphics card using a graphics API such as DirectX or OpenGL. I know building a game engine from scratch is a little complex, but at the same time the track requires me to take some advanced mathematics courses, such as calculus and geometry 1 and 2. I also have to take a physics class. I just think that's a little much for just learning how to use the Unreal Engine without actually building one or learning the anatomy of a game engine. Is this good enough to possibly land me a job in the industry? If I left anything out or was not detailed enough, please feel free to ask more questions. Edit: I do learn data structures and algorithms.

    Read the article

  • Hopping from a C++ to a Perl/Unix job

    - by rocknroll
    Hi all, I have been a C++/Linux developer till now and I am adept in this stack. Of late I have been getting opportunities that require Perl/Unix expertise (with knowledge of C++ and shell scripting). Organizations are showing interest even though I don't have much scripting experience to boast of. The role is more of a support/maintenance project involving SQL as well. I am now in a fix over whether to forgo these offers or not. I don't know the dynamics of an IT organization, and thus on one hand I fear that my C++ experience will be nullified, while on the positive side I would get to work on a new technology stack, which would only add to my skill set. I am sure most of you at some point have encountered such dilemmas and taken some decision. I want you to share your perspectives on a scenario where a person is required to change his/her technology stack when changing jobs. What are the merits and demerits of either choice? Also, I know that C++ isn't going anywhere in the near future, but what about Perl? I have no clue as to what the future holds for a Perl developer, or whether there are enough opportunities for Perl developers. I am asking this question here because most of my fellow programmers face this career-choice dilemma. Thanks.

    Read the article

  • Perl cron job stays running

    - by Dylan
    I'm currently using a cron job to run a Perl script that tells my Arduino to cycle my aquaponics system, and all is well, except the Perl script doesn't die as intended. Here is my cron job:

        */15 * * * * /home/dburke/scripts/hal/bin/main.pl cycle

    And below is my Perl script:

        #!/usr/bin/perl -w
        # Sample Perl script to transmit number
        # to Arduino then listen for the Arduino
        # to echo it back

        use strict;
        use Device::SerialPort;
        use Switch;
        use Time::HiRes qw ( alarm );

        $|++;

        # Set up the serial port
        # 19200, 81N on the USB ftdi driver
        my $device = '/dev/arduino0';
        # Tomoc has to use a different tty for testing
        #$device = '/dev/ttyS0';

        my $port = new Device::SerialPort ($device)
            or die('Unable to open connection to device');
        $port->databits(8);
        $port->baudrate(19200);
        $port->parity("none");
        $port->stopbits(1);

        my $lastChoice = ' ';
        my $pid = fork();
        my $signalOut;
        my $args = shift(@ARGV);

        # Parent must wait for child to exit before exiting itself on CTRL+C
        $SIG{'INT'} = sub {
            waitpid($pid, 0) if $pid != 0;
            exit(0);
        };

        # What child process should do
        if ($pid == 0) {
            # Poll to see if any data is coming in
            print "\nListening...\n\n";
            while (1) {
                my $incmsg = $port->lookfor(9);
                # If we get data, then print it
                if ($incmsg) {
                    print "\nFrom arduino: " . $incmsg . "\n\n";
                }
            }
        }
        # What parent process should do
        else {
            if ($args eq "cycle") {
                my $stop = 0;
                sleep(1);
                $SIG{ALRM} = sub {
                    print "Expecting plant bed to be full; please check.\n";
                    $signalOut = $port->write('2');    # Signal to set pin 3 low
                    print "Sent cmd: 2\n";
                    $stop = 1;
                };
                $signalOut = $port->write('1');    # Signal to arduino to set pin 3 High
                print "Sent cmd: 1\n";
                print "Waiting for plant bed to fill...\n";
                alarm(420);
                while ($stop == 0) {
                    sleep(2);
                }
                die "Done.";
            }
            else {
                sleep(1);
                my $choice = ' ';
                print "Please pick an option you'd like to use:\n";
                while (1) {
                    print " [1] Cycle [2] Relay OFF [3] Relay ON [4] Config [$lastChoice]: ";
                    chomp($choice = <STDIN>);
                    switch ($choice) {
                        case /1/ {
                            $SIG{ALRM} = sub {
                                print "Expecting plant bed to be full; please check.\n";
                                $signalOut = $port->write('2');    # Signal to set pin 3 low
                                print "Sent cmd: 2\n";
                            };
                            $signalOut = $port->write('1');    # Signal to arduino to set pin 3 High
                            print "Sent cmd: 1\n";
                            print "Waiting for plant bed to fill...\n";
                            alarm(420);
                            $lastChoice = $choice;
                        }
                        case /2/ {
                            $signalOut = $port->write('2');    # Signal to set pin 3 low
                            print "Sent cmd: 2";
                            $lastChoice = $choice;
                        }
                        case /3/ {
                            $signalOut = $port->write('1');    # Signal to arduino to set pin 3 High
                            print "Sent cmd: 1";
                            $lastChoice = $choice;
                        }
                        case /4/ {
                            print "There is no configuration available yet. Please stab the developer.";
                        }
                        else {
                            print "Please select a valid option.\n\n";
                        }
                    }
                }
            }
        }

    Why wouldn't it die from the statement die "Done.";? It runs fine from the command line and also interprets the 'cycle' argument fine. When it runs from cron it also runs fine; however, the process never dies, and while each process doesn't continue to cycle the system, it does seem to be looping in some way, because it drives my system load up very quickly. If you'd like more information, just ask.
    EDIT: I have changed the code to:

        #!/usr/bin/perl -w
        # Sample Perl script to transmit number
        # to Arduino then listen for the Arduino
        # to echo it back

        use strict;
        use Device::SerialPort;
        use Switch;
        use Time::HiRes qw ( alarm );

        $|++;

        # Set up the serial port
        # 19200, 81N on the USB ftdi driver
        my $device = '/dev/arduino0';
        # Tomoc has to use a different tty for testing
        #$device = '/dev/ttyS0';

        my $port = new Device::SerialPort ($device)
            or die('Unable to open connection to device');
        $port->databits(8);
        $port->baudrate(19200);
        $port->parity("none");
        $port->stopbits(1);

        my $lastChoice = ' ';
        my $signalOut;
        my $args = shift(@ARGV);

        # Parent must wait for child to exit before exiting itself on CTRL+C
        if ($args eq "cycle") {
            open (LOG, '>>log.txt');
            print LOG "Cycle started.\n";
            my $stop = 0;
            sleep(2);
            $SIG{ALRM} = sub {
                print "Expecting plant bed to be full; please check.\n";
                $signalOut = $port->write('2');    # Signal to set pin 3 low
                print "Sent cmd: 2\n";
                $stop = 1;
            };
            $signalOut = $port->write('1');    # Signal to arduino to set pin 3 High
            print "Sent cmd: 1\n";
            print "Waiting for plant bed to fill...\n";
            print LOG "Alarm is being set.\n";
            alarm(420);
            print LOG "Alarm is set.\n";
            while ($stop == 0) {
                print LOG "In while-sleep loop.\n";
                sleep(2);
            }
            print LOG "The loop has been escaped.\n";
            die "Done.";
            print LOG "No one should ever see this.";
        }
        else {
            my $pid = fork();
            $SIG{'INT'} = sub {
                waitpid($pid, 0) if $pid != 0;
                exit(0);
            };
            # What child process should do
            if ($pid == 0) {
                # Poll to see if any data is coming in
                print "\nListening...\n\n";
                while (1) {
                    my $incmsg = $port->lookfor(9);
                    # If we get data, then print it
                    if ($incmsg) {
                        print "\nFrom arduino: " . $incmsg . "\n\n";
                    }
                }
            }
            # What parent process should do
            else {
                sleep(1);
                my $choice = ' ';
                print "Please pick an option you'd like to use:\n";
                while (1) {
                    print " [1] Cycle [2] Relay OFF [3] Relay ON [4] Config [$lastChoice]: ";
                    chomp($choice = <STDIN>);
                    switch ($choice) {
                        case /1/ {
                            $SIG{ALRM} = sub {
                                print "Expecting plant bed to be full; please check.\n";
                                $signalOut = $port->write('2');    # Signal to set pin 3 low
                                print "Sent cmd: 2\n";
                            };
                            $signalOut = $port->write('1');    # Signal to arduino to set pin 3 High
                            print "Sent cmd: 1\n";
                            print "Waiting for plant bed to fill...\n";
                            alarm(420);
                            $lastChoice = $choice;
                        }
                        case /2/ {
                            $signalOut = $port->write('2');    # Signal to set pin 3 low
                            print "Sent cmd: 2";
                            $lastChoice = $choice;
                        }
                        case /3/ {
                            $signalOut = $port->write('1');    # Signal to arduino to set pin 3 High
                            print "Sent cmd: 1";
                            $lastChoice = $choice;
                        }
                        case /4/ {
                            print "There is no configuration available yet. Please stab the developer.";
                        }
                        else {
                            print "Please select a valid option.\n\n";
                        }
                    }
                }
            }
        }
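    A possible explanation (my reading of the first version, not confirmed in the thread): in the original script the fork() happens before the cycle branch, so die "Done."; only terminates the parent. The forked child is still spinning in its while (1) listening loop, and since $port->lookfor returns immediately when there is no data, that loop busy-waits, which matches the rising system load. A minimal sketch of shutting the child down before the parent exits (using the $pid already returned by fork()):

        # In the parent's cycle branch, before exiting:
        kill 'TERM', $pid;    # ask the listener child to exit
        waitpid($pid, 0);     # reap it so no zombie is left behind
        die "Done.";

    The edited version above sidesteps this for cron by only forking in interactive mode, which is why moving the fork() was a sensible change.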

    Read the article

  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:

    1. An invoice arrives, in the form of an email attachment (an Excel spreadsheet).
    2. Monkey opens the invoice and copy-pastes the relevant parts of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550.
    3. Monkey sends this new spreadsheet to the boss (email if lucky, printer otherwise), who sets the retail price.
    4. Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing.
    5. Monkey fires up HyperTerminal, types in "AT", disconnects.
    6. Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time.

    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wishlist (a sketch of the Excel step follows below):

    - Monkey asks Thunderbird (a mail server, perhaps?) for the attachment.
    - Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something.
    - Monkey parses the output and does the complex calculations. // TODO: find a way to get the boss-generated prices with minimal manual labor involved
    - Monkey connects to the database and inserts the data.
    - Monkey spams customers.

    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?
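    For the "dump the spreadsheet to CSV and do the calculations" steps, a minimal sketch in Python (one plausible language choice; the openpyxl library, the file names, the three-column layout, and the *550 multiplier are all assumptions based on the description above):

        import csv
        from openpyxl import load_workbook

        # Load the invoice attachment after it has been saved to disk
        # (the file name here is hypothetical).
        wb = load_workbook("invoice.xlsx", data_only=True)
        ws = wb.active

        rows = []
        for col_a, col_b, col_c in ws.iter_rows(min_row=2, max_col=3, values_only=True):
            if col_a is None:
                continue  # skip blank rows
            # The "extremely complex calculation" from step 2: =B2*550
            suggested_price = col_b * 550
            rows.append([col_a, col_b, col_c, suggested_price])

        # Dump to CSV for the boss to review (or for a later DB-import step).
        with open("invoice_out.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["col_a", "col_b", "col_c", "suggested_price"])
            writer.writerows(rows)

    If the boss edited the price column of that CSV and mailed it back, it would be far easier to parse than a reply spreadsheet, which would take care of the TODO in the wishlist.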

    Read the article

  • Multi-user job/task tracking/queue software

    - by Bmsgaffer86
    Background: I test and repair electronic products with a team. There are many 'jobs' going through my lab at any point in time. It is getting difficult to track what's coming in and going out, because I don't do every test or repair myself. Target: a user can enter a job when they drop it off in my lab, and it will appear on the master list or queue. It needs to have priorities and due dates that can be adjusted by users. Ideally this would be web-based and open source, but I am flexible. Dream: a large monitor displaying a list of jobs in the master queue with details. This is very optional, though, and would be a best-case scenario. I have done MANY hours of Googling and I am not sure if I have been using the right terminology, but I have not found anything that is simple enough to stand alone yet complex enough to be multi-user. I am mildly proficient in VB, and have the drive to piece together anything that I have to. I am open to ANY help or suggestions.

    Read the article

  • Open source alternative to Autosys?

    - by oninea
    As an alternative to Autosys, what is the best open source job scheduler? This question is a bit subjective, but I'm looking for something that is widely used in production environments, has a good community, and has enterprise-grade features.

    Read the article

  • Should I start making connections even if I'm not ready for a job yet?

    - by James
    The first job is always the hardest to get, and I'm no exception. I'm 23 years old and I have no college degree, but I plan on going to college this year if all goes well (CS, of course). I'm self-studying Java right now. I know most of the topics related to the language besides the more advanced ones, and I'm beginning to look at open source projects. I would like to find a job (at least a part-time job) after a year or two, when I'll have gained more experience and learned more about Java technologies and other technologies that interest me. Finding a job will be a bit difficult because most people (or a lot of them at least) my age already have two years or more of experience, so I will be somewhat disadvantaged. Should I start building connections and joining websites such as LinkedIn? I never bothered to look into it because I'm not much of a social-network person. If I contribute to open source projects and create personal projects for two years, could I apply for jobs that require 1-2 years of experience? Does this experience count?

    Read the article

  • SSL Slow in IE 8.0.7600.16385IC

    - by discovery.jerrya
    I'm having a performance problem on my company's web site: a specific version of IE 8 is slow to load a page over HTTPS. Here's what I know.

    Server: virtual machine running on VMware ESX; Windows Server 2003 Enterprise Edition SP2; Tomcat 6.0.16
    Client: Windows XP and Windows 7; Internet Explorer 8.0.7600.16385IC

    - The page loads/refreshes in under 1 second using HTTP.
    - The page loads/refreshes in 15-16 seconds using HTTPS in this version of IE.
    - The problem is reproduced on multiple client machines with the same IE version.
    - The problem is reproduced on multiple client machines with different Windows versions (XP and 7).
    - There is no performance problem using Chrome, Firefox, Opera, or Safari from the same machine.
    - There is no performance problem using other versions of IE 8 on other machines.
    - The slow load causes virtually no CPU, memory, or I/O spike on the server or the client machine.
    - There is no performance problem on other sites using HTTPS from the same client machine.

    The pages in question use JavaScript and innerHTML to replace the contents of div elements to create a collapsible menu, and an iframe to display some content. A couple of the div elements contain images. If I remove the iframe and the JavaScript, the performance issues go away. However, rewriting the entire site to make these changes would be very time-consuming. We're in the process of replacing the whole site, but it may be 2-3 months before we do so, and we really cannot live with this slowdown that long. I've already looked at several IE tuning options, such as disabling add-ons, running IE-rereg, and resetting IE, with no luck. Does anyone have any suggestions?

    Read the article

  • rake jobs:work works fine, but there's a problem with script/delayed_job start

    - by krunal shah
    I am calling a function with LoadData.send_later(:test). LoadData is my class and test is my method. It works fine while I am running rake jobs:work. But when I run script/delayed_job start (or run), delayed_job.log shows an error like:

        TEastern Daylight Time: *** Starting job worker delayed_job host:KShah pid:5968
        TEastern Daylight Time: * [Worker(delayed_job host:KShah pid:5968)] acquired lock on LoadData.load_test_data_with_delayed_job
        Could not load object for job: uninitialized constant LoadData
        TEastern Daylight Time: * [JOB] delayed_job host:KShah pid:5968 completed after 0.0310
        TEastern Daylight Time: 1 jobs processed at 10.6383 j/s, 0 failed ...

    Any solution??
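    This error pattern usually means the daemonized worker never loads the class before it deserializes the queued job (the rake worker boots the full Rails environment, so it doesn't hit the problem). A minimal sketch of one common fix, assuming LoadData lives in lib/load_data.rb (both the path and the initializer file name are assumptions):

        # config/initializers/delayed_job.rb
        # Force-load the class so the constant is defined when the
        # daemonized worker deserializes the queued job.
        require 'load_data'

    If LoadData lives somewhere else, the require path would need to match; the point is just that the constant has to exist in the worker process before the job is picked up.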

    Read the article

  • Get-WMIObject fails when run AsJob

    - by codepoke
    It's a simple question, but it's stumped me.

        $cred = Get-Credential
        $jobs = @()
        $jobs += Get-WmiObject `
            -Authentication 6 `
            -ComputerName 'serverName' `
            -Query 'Select * From IISWebServerSetting' `
            -Namespace 'root/microsoftiisv2' `
            -EnableAllPrivileges `
            -Credential $cred `
            -Impersonation 4 `
            -AsJob

        $joblist = Wait-Job -Job $jobs -Timeout 60

        foreach ($job in $jobs) {
            if ($job.State -eq "Completed") {
                $app = Receive-Job -Job $job
                $app
            }
            else {
                ("Job not completed: " + $job.Name + "@" + $job.State + ". Reason:" + $job.ChildJobs[0].JobStateInfo.Reason)
                Remove-Job -Job $job -Force
            }
        }

    The query succeeds when run directly and fails when run with -AsJob. Reason: System.UnauthorizedAccessException: Access is denied. I've jiggered with -Impersonation, -Credential, -Authority, and -EnableAllPrivileges to no useful effect. It appears I'm overlooking something fundamental. Why is my PowerShell prompt allowed to connect to the remote server, but my child process denied?
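    One workaround worth trying (a sketch, not a confirmed fix): run the WMI call inside Start-Job with -Credential, so the entire child process, rather than just the WMI connection, runs under the supplied account:

        $cred = Get-Credential
        $job = Start-Job -Credential $cred -ScriptBlock {
            # The child process itself runs as $cred here, so the WMI call
            # no longer has to pass credentials across the in-process
            # -AsJob boundary.
            Get-WmiObject -Authentication PacketPrivacy `
                -ComputerName 'serverName' `
                -Namespace 'root/microsoftiisv2' `
                -Query 'Select * From IISWebServerSetting'
        }
        Wait-Job -Job $job -Timeout 60 | Out-Null
        Receive-Job -Job $job

    If this succeeds where -AsJob fails, that would point at how the WMI job infrastructure impersonates the caller rather than at the remote server's permissions. (PacketPrivacy is the named equivalent of -Authentication 6 in the original.)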

    Read the article

  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle, as the dependencies for the experimental nvidia-glx package are still somewhat broken; I got it to work without forcing the installation of any packages and without modifying the packages. However, after the upgrade, Compiz performance has been abysmal. I am using the desktop wall plugin, and switching viewports is really slow: it takes a few seconds for each switch. In addition to this, every effect that Compiz does, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed changes relative to the number of windows on that virtual screen: empty screens switch at almost normal speed, single browser windows work almost decently, but just 4 rxvt terminals slow the switches down to a crawl. My Compiz configuration should be pretty basic, and Xorg is likewise configured without anything special; the only "custom" configuration is forcing the driver name to be "nvidia". I've fiddled around with nvidia-settings and compizconfig, trying different VSync settings, but none of those helped. My graphics card is an NVIDIA NVS 3100M (GT218) at PCI:1:0:0 (GPU-0). This is a laptop GPU from the GeForce 200 series. Graphics card performance should naturally be no problem.

    Read the article
