Search Results

Search found 37604 results on 1505 pages for 'build script'.

  • How to reliably run a batch job every 5 seconds?

    - by Benjamin
    I'm building an application where the sending of all notifications (email, SMS, fax) will be asynchronous. The application writes the notifications to the database, and a batch job reads these notifications and sends them with the appropriate transport. I first looked at ways to run cron more often than once a minute, and realized this was a bad idea. The batch scripts are written in PHP, and I guess that writing a proper daemon would be quite an overhead (though I'm open to any suggestion, as PHP can run indefinitely as well). What I have in mind is a solution that would:
    - Run the PHP script every 5 seconds
    - Check that the previous run has finished, or abort (never 2 concurrent batches running)
    - Kill the script if alive for more than x minutes (a safeguard in case it hangs)
    - Start with the system (if a reboot occurs)
    Any idea how to do this?
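
    A minimal sketch of one way to satisfy all four constraints with standard tools, assuming a cron-started bash wrapper: flock(1) guarantees a single instance, timeout(1) enforces the runtime cap, and a @reboot entry covers restarts. Paths and the 10-minute limit are placeholders.

        #!/bin/bash
        # Launched once a minute from cron; @reboot covers the start-on-boot case:
        #   * * * * * /usr/local/bin/notify-wrapper.sh
        #   @reboot   /usr/local/bin/notify-wrapper.sh
        # flock ensures no two wrappers (and thus no two batches) overlap.
        exec 200>/var/lock/notify-batch.lock
        flock -n 200 || exit 0            # previous instance still alive: abort
        for i in $(seq 1 12); do          # 12 runs x ~5 s fill one cron minute
            # timeout kills a hung batch after 10 minutes
            timeout 600 php /path/to/batch.php
            sleep 5
        done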

    Read the article

  • How to make an iOS plugin for Unity3d

    - by DannoEterno
    I've spent the last 2 days reading articles and books to understand how to make an iOS plugin for Unity. Basically I just need a demo to understand how it works. So far I've tried the following process (with really poor luck). I started a new project in Unity and wrote a simple script:

        using UnityEngine;
        using System.Collections;
        using System;
        using System.Runtime.InteropServices;

        public class CallPlugin : MonoBehaviour
        {
            [DllImport ("__Internal")]
            private static extern int test();

            void Start () {
                Debug.Log(test());
            }
        }

    Then I created a project in Xcode with this simple code:

        extern "C" {
            int test() {
                int che = 5;
                return che;
            }
        }

    Then I tried: putting the .mm and .h in Assets/Plugins/iOS = nothing; building the Unity project and then adding the .h and .mm to the Xcode project = nothing. In Unity I always get the EntryPointNotFoundException, so Unity sees the file but is unable to reach the method. The problem is... how?! :) Maybe I missed something or did something wrong? Thanks a lot for every help that you can give me :)
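
    One detail worth knowing, since it produces exactly this exception: [DllImport("__Internal")] can only resolve once the player runs as an actual iOS build, where Unity compiles the files from Assets/Plugins/iOS into the binary; in the editor (or a desktop build) the symbol does not exist and every call throws EntryPointNotFoundException. A hedged sketch that stubs the call outside iOS builds (the define name varies across Unity versions - older ones use UNITY_IPHONE):

        using UnityEngine;
        using System.Runtime.InteropServices;

        public class CallPlugin : MonoBehaviour
        {
        #if UNITY_IOS && !UNITY_EDITOR
            [DllImport("__Internal")]
            private static extern int test();
        #else
            // Editor/desktop stub; -1 is an arbitrary placeholder value
            private static int test() { return -1; }
        #endif

            void Start() {
                Debug.Log(test());
            }
        }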

    Read the article

  • Run a VirtualBox VM in a second X server with graphics support

    - by Scindix
    I'm starting a VirtualBox VM (Windows 7) in a second X server (Ubuntu 14.04), using the following xinit script (/path/.vboximage):

        optirun VBoxManage startvm <vm name> &
        exec tinywm

    I noticed that while running VirtualBox normally under GNOME (Unity to be precise ;-) ) I get full graphics support. But when I run it on a second X server there seem to be some problems: Windows Aero doesn't work, for example, and Chrome WebGL demos run with poorer performance. I'm not a big Windows expert, so I don't know how I could check which graphics card is being used. But it is very obvious that something changes when running the VM in the extra X server. Also, when I replace tinywm with compiz I get the Unity frame around the VM, which likewise seems to have no graphics acceleration (no transparency effects). So it seems that the second X server doesn't have graphics acceleration at all. I have an NVidia 525m and an Intel HD3000, both capable of advanced graphics. I'm starting the above script with:

        startx /path/.vboximage -- :1

    How could I fix that?

    Read the article

  • APC serving old code intermittently running with Lighttpd and PHP Fast CGI

    - by APZ
    I recently started facing this problem that APC serves old code when we upload an HTML template file to fix/change something on our websites. We run APC with stat=0 and want to keep it that way, because we seldom make changes to templates. Every time we upload a template we make sure to flush the APC cache, and we execute this script (only part of the script shown here) to clear the cache:

        apc_clear_cache();
        apc_clear_cache('user');
        apc_clear_cache('system');
        apc_clear_cache('opcode');

    We use lighttpd and PHP FastCGI, and FastCGI has "max-procs" = 2, "PHP_FCGI_CHILDREN" = "5". Even after flushing APC once the upload is complete, it serves the old template intermittently. Any help would be appreciated.
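
    One pattern that fits the intermittent behaviour: if the flush script runs through a different SAPI (e.g. from the CLI), it clears that SAPI's own APC instance, not the caches inside the FastCGI processes - and with max-procs = 2 there are two independent opcode caches, of which a single web request can only flush the one that happens to serve it. A hedged sketch of a flush endpoint requested once per upload through the web server (the token is a placeholder; with two masters you would need to hit it until both have answered, or simply restart the FastCGI processes after upload):

        <?php
        // flush_apc.php - request via lighttpd so it runs inside the
        // FastCGI SAPI that actually holds the cache; a CLI run would
        // clear a separate, unrelated APC instance.
        if (!isset($_GET['token']) || $_GET['token'] !== 'CHANGE_ME') {
            header('HTTP/1.0 403 Forbidden');
            exit;
        }
        apc_clear_cache();        // opcode cache
        apc_clear_cache('user');  // user cache
        echo "flushed by " . getmypid() . "\n";  // shows which child answered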

    Read the article

  • Are there legitimate reasons for returning exception objects instead of throwing them?

    - by stakx
    This question is intended to apply to any OO programming language that supports exception handling; I am using C# for illustrative purposes only. Exceptions are usually intended to be raised when a problem arises that the code cannot immediately handle, and then to be caught in a catch clause in a different location (usually an outer stack frame).

    Q: Are there any legitimate situations where exceptions are not thrown and caught, but simply returned from a method and then passed around as error objects?

    This question came up for me because .NET 4's System.IObserver<T>.OnError method suggests just that: exceptions being passed around as error objects. Let's look at another scenario, validation. Let's say I am following conventional wisdom, and that I am therefore distinguishing between an error object type IValidationError and a separate exception type ValidationException that is used to report unexpected errors:

        partial interface IValidationError { }

        abstract partial class ValidationException : System.Exception
        {
            public abstract IValidationError[] ValidationErrors { get; }
        }

    (The System.ComponentModel.DataAnnotations namespace does something quite similar.) These types could be employed as follows:

        partial interface IFoo { }  // an immutable type

        partial interface IFooBuilder  // mutable counterpart to prepare instances of above type
        {
            bool IsValid(out IValidationError[] validationErrors);  // true if no validation error occurs
            IFoo Build();  // throws ValidationException if !IsValid(…)
        }

    Now I am wondering, could I not simplify the above to this:

        partial class ValidationError : System.Exception { }  // = IValidationError + ValidationException

        partial interface IFoo { }  // (unchanged)

        partial interface IFooBuilder
        {
            bool IsValid(out ValidationError[] validationErrors);
            IFoo Build();  // may throw ValidationError or sth. like AggregateException<ValidationError>
        }

    Q: What are the advantages and disadvantages of these two differing approaches?
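
    For reference, the IObserver<T> usage that motivated the question looks roughly like this sketch (the observer class and its reactions are invented for illustration): the producer never lets the exception propagate across the boundary; it hands it to the consumer as an argument, and the consumer decides whether to log, aggregate, or rethrow it.

        using System;

        class ConsoleObserver : IObserver<int>
        {
            public void OnNext(int value) { Console.WriteLine(value); }

            // The exception arrives as a plain argument - an error object.
            // This consumer chooses to log it; another might rethrow.
            public void OnError(Exception error) { Console.Error.WriteLine(error.Message); }

            public void OnCompleted() { Console.WriteLine("done"); }
        }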

    Read the article

  • What do I need to learn to decide on rename/recompile source package names because of company rebranding?

    - by Roberto Linares
    My company is currently going through a rebranding process, and the old brand names have been used in the sources' package names. These names are only visible to the developers who maintain the code, so nobody from project management is really interested in changing them, especially considering that it would imply recompiling several old components. What factors do I need to consider when deciding on a change like that? I don't know if I should worry about legal issues or not, and if so, how to address this with project management.

    More background details: I have all the sources and dependencies, but since the company rebranding, other development areas have adopted some of the code that needs package name-changing, so I cannot take the decision by myself without making everyone else's code crash with my core components, and I cannot change other areas' code without the permission of those areas' users. So yes, my concern is more political than technical. I am going to try to coordinate the involved IT areas to make the change anyway, since it seems to be the best approach. Unfortunately, in my company there's no continuous-integration build server, so we build our code manually on demand, and to get something to production I have to justify the change (even just the package name change) to QA with a user requirement and some other bureaucratic documentation. That's why I was hesitant about the change in the first place.

    Read the article

  • Nothing is written in php5-fpm.log

    - by jaypabs
    I have two servers, one running Ubuntu 12.04 and one running Ubuntu 14.04. On the new Ubuntu 14.04 server I enabled the php-fpm log file in /etc/php5/fpm/php-fpm.conf, which reads as follows:

        error_log = /var/log/php5-fpm.log

    I noticed that most of the log entries I see on Ubuntu 12.04 are not written on 14.04. For example, if I restart php5-fpm on Ubuntu 12.04, a restart log entry is written; this does not happen on 14.04. Other entries I miss on 14.04 are the following:

        [23-Aug-2014 16:23:03] NOTICE: [pool web42] child 118098 exited with code 0 after 12983.480191 seconds from start
        [23-Aug-2014 16:23:03] NOTICE: [pool web42] child 147653 started
        [23-Aug-2014 17:27:31] WARNING: [pool web8] child 76743, script '/var/www/mysite.com/web/wp-comments-post.php' (request: "POST /wp-comments-post.php") executing too slow (12.923022 sec), logging

    I really want to have this kind of log so I know how long a slow script has been executing. Does anyone know if there are other settings in Ubuntu 14.04 that I need to change in addition to /etc/php5/fpm/php-fpm.conf?
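
    Two settings besides error_log plausibly account for the missing lines; a sketch of both (paths and the 10s threshold are examples - compare against the 12.04 configs to confirm). The NOTICE child start/exit lines depend on the global log_level, and the "executing too slow" WARNINGs are only emitted when a pool has slow-logging enabled:

        ; /etc/php5/fpm/php-fpm.conf (global section)
        log_level = notice                 ; 'warning' or higher would hide the child start/exit NOTICEs

        ; /etc/php5/fpm/pool.d/www.conf (per pool)
        slowlog = /var/log/php5-fpm.slow.log
        request_slowlog_timeout = 10s      ; writes the WARNING plus a trace when exceeded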

    Read the article

  • How to factorize code in Unreal Kismet (i.e. "Material Function"s for Kismet)

    - by Georges Dupéron
    In the Unreal Development Kit, when using the Material Editor, one can factor out frequently-used groups of nodes by creating a Material Function (Content Browser -> right-click -> New Material Function, IIRC). When defining the behaviour of some actor in Kismet, one can easily have a dozen nodes involved. If I have many actors that share the same behaviour, then I'll copy-paste these nodes and change the variables so they point to the other actors. This leads to inconsistencies (a modification in the behaviour of an actor isn't propagated to the copy-pasted nodes), complexity (you end up with hundreds of nodes), and generally useless effort. My question is: can I create a "Kismet function", just like a Material Function? Note: I'd rather avoid using UnrealScript. I don't even know where to type UnrealScript, don't know where the documentation is, and more generally don't have enough time to invest in learning UnrealScript. This "Kismet function" feature must be usable by graphics artists (with little programming knowledge). If a (simple) script suffices to add this feature to the Kismet editor, so that one can create several "functions" without using UnrealScript, then fine, but I don't really want to have to write a script each time I want to factor out a few nodes. Thanks for any information!

    Read the article

  • Windows Java-based apps not working

    - by DariusVE
    After updating the Java JRE to 7u25, many Java-based applications no longer work as usual. I then upgraded Java to 7u45, and the apps still don't work. Minecraft's start screen doesn't show; I have to press the TAB key to select the Run button and press ENTER to run the game. NetBeans IDE is running, but nothing shows on the screen. Eclipse and JDownloader are working fine. I cannot run the Java Control Panel; it only shows the Java icon in the taskbar.

    My system:

        OS: Windows 7 Ultimate SP1 64Bits
        Java: java version "1.7.0_45"
        Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
        Java HotSpot(TM) Client VM (build 24.45-b08, mixed mode, sharing)

    Read the article

  • /var/tmp and file lifetime

    - by clorz
    Hi everyone. I've got a cronjob that runs every minute and checks for the existence of a certain file. If there's no such file, the job silently ends. If there is a file, then another script is started. That script removes the file when done. Its execution time can take up to 20 minutes. My questions are:
    - Are there any flaws in this scheme?
    - Is it ok to store such a file in /var/tmp? Can I be sure that nothing will attempt to remove it?
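
    On the second question: /var/tmp (like /tmp) is fair game for cleanup tools such as tmpwatch/tmpreaper on many distributions, so a flag file there can vanish; a directory owned by the job (e.g. under /var/run or /var/lib) avoids that. On the first, the main flaw is overlap: the worker runs for up to 20 minutes while cron fires every minute, so a new flag file appearing mid-run can start a second worker. A minimal sketch of the per-minute check with a lock added (paths are placeholders):

        #!/bin/bash
        # Runs every minute from cron; exits silently when no flag file exists.
        FLAG=/var/lib/myjob/trigger
        [ -e "$FLAG" ] || exit 0

        # If a previous (up to 20-minute) run is still going, skip this round
        # instead of starting a second copy.
        exec 200>/var/lib/myjob/lock
        flock -n 200 || exit 0

        /usr/local/bin/worker.sh   # removes "$FLAG" itself when done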

    Read the article

  • How to implement behavior in a component-based game architecture?

    - by ghostonline
    I am starting to implement player and enemy AI in a game, but I am confused about how best to implement this in a component-based game architecture. Say I have the following player character, which can be stationary, running, and swinging a sword. The player can transition to the swing-sword state from both the stationary and running states, but then the swing must be completed before the player can resume standing or running around. During the swing, the player cannot walk around. As I see it, I have two implementation approaches (a sketch follows below):
    1. Create a single AI component containing all player logic (either decoupled from the actual component or embedded as a PlayerAIComponent). I can easily see how to enforce the state restrictions without creating coupling between the individual components making up the player entity. However, the AI component cannot be broken up. If I have, for example, an enemy that can only stand and walk around, or one that only walks around and occasionally swings a sword, I have to create new AI components.
    2. Break the behavior up into components, each identifying a specific state. I then get a StandComponent, WalkComponent and SwingComponent. To enforce the transition rules, I have to couple the components: SwingComponent must disable StandComponent and WalkComponent for the duration of the swing. When I have an enemy that only stands around, swinging a sword occasionally, I have to make sure SwingComponent only disables WalkComponent if it is present. Although this allows for better mixing and matching of components, it can lead to a maintainability nightmare: each time a dependency is added, the existing components must be updated to play nicely with the new requirements the dependency places on the character.
    The ideal situation would be that a designer can build new enemies/players by dragging components into a container, without having to touch a single line of engine or script code. Although I am not sure script coding can be avoided, I want to keep it as simple as possible. Summing it all up: should I lob all AI logic into one component, or break each logic state into separate components to create entity variants more easily?
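
    For the second approach, one way to keep SwingComponent ignorant of its siblings is to route transitions through a small shared gate owned by the entity, so components only declare which state they want; all names below are invented for illustration:

        // Each state component asks the gate to enter a state instead of
        // disabling sibling components directly.
        enum ActorState { Stand, Walk, Swing }

        class StateGate
        {
            public ActorState Current { get; private set; }

            public StateGate() { Current = ActorState.Stand; }

            // The single rule from the example: a swing must run to completion.
            public bool TryEnter(ActorState next)
            {
                if (Current == ActorState.Swing && next != ActorState.Swing)
                    return false;              // locked until EndSwing()
                Current = next;
                return true;
            }

            public void EndSwing() { Current = ActorState.Stand; }
        }

    A WalkComponent then simply skips its update whenever TryEnter(ActorState.Walk) fails, and an entity without a SwingComponent never locks the gate, so the stand-only enemy needs no special casing.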

    Read the article

  • Secure copy uucp style

    - by Alexander Janssen
    I often have the case that I have to make a lot of hops to the remote host, just because there is no direct routing between my client and the remote host. When I need to copy files from a remote host two or more hops away, I always have to:

        client$ ssh host1
        host1$ ssh host2
        host2$ scp host3:/myfile .
        host2$ exit
        host1$ scp host2:myfile .
        host1$ exit
        client$ scp host1:myfile .

    Back when uucp was still being used, this would be as simple as:

        uucp host1!host2!host3 /myfile .

    I know that there's uucp over ssh, but unfortunately I don't have the proper privileges on those machines to set it up. Also, I'm not sure I really want to fiddle around with customers' machines. Does anyone know of a method for doing this task without the need to set up a lot of tunnels or deploy new software to remote hosts? Maybe some kind of recursive script which clones itself to all the remote hosts, doing the hard work for me? Assume that authentication takes place with public keys and that all hosts do SSH agent forwarding.

    Edit: I'm not looking for a way to automatically forward my interactive session to the next-hop host. I want a solution to copy files bangpath-style using scp via multiple hops, without the need to install uucp on any of those machines. I don't have the (legal) rights or the privileges to make permanent changes to the ssh config. Also, I'm sharing this username and hosts with a lot of other people. I'm willing to hack up my own script, but I wanted to know if anyone knows something which already does it. Minimum-invasive changes to hosts on the bangpath, simple invocation from the client.

    Edit 2: To give you an impression of how it's properly been done in interactive sessions, have a look at the GXPC clustershell. This is basically a Python script which spawns itself over to all remote hosts that have connectivity and where your ssh key is installed. The great thing about it is that you can tell it "I can reach HostC via HostB via HostA," and it just works. I want to have this for scp.
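
    A client-side-only sketch that matches these constraints (nothing installed on the hops, no permanent config changes, works with agent forwarding): per-invocation ssh options. ProxyJump needs OpenSSH 7.3 or newer on the client; the -W form works on older clients, one hop per option.

        # One command from the client; host1 and host2 only relay the stream:
        scp -o 'ProxyJump=host1,host2' host3:/myfile .

        # Older OpenSSH (5.4+): stdio forwarding via -W for a single hop
        scp -o 'ProxyCommand=ssh host1 -W %h:%p' host2:myfile .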

    Read the article

  • Is there a common programming term for the problems of adding features to an already-featureful program?

    - by Jeremy Friesner
    I'm looking for a commonly used programming term to describe a software-engineering phenomenon, which (for lack of a better way to describe it) I'll illustrate first with a couple of examples-by-analogy.

    Scenario 1: We want to build/extend a subway system on the outskirts of a small town in Wyoming. There are the usual subway problems to solve, of course (hiring the right construction company, choosing the best route, buying the subway cars), but other than that it's pretty straightforward to implement the system because there aren't a huge number of constraints to satisfy.

    Scenario 2: Same as above, except now we need to build/extend the subway system in downtown Los Angeles. Here we face all of the problems we did in case (1), but also additional problems -- most of the applicable space is already in use, and has a vocal constituency which will protest loudly if we inconvenience them by repurposing, redesigning, or otherwise modifying the infrastructure that they rely on. Because of this, extensions to the system happen either very slowly and expensively, or they don't happen at all.

    I sometimes see a similar pattern with software development -- adding a new feature to a small/simple program is straightforward, but as the program grows, adding further new features becomes more and more difficult, if only because it is difficult to integrate the new feature without adversely affecting any of the large number of existing use-cases or user constituencies. (Even with a robust, adaptable program design, you run into the problem of the user interface becoming so elaborate that the program becomes difficult to learn or use.) Is there a term for this phenomenon?

    Read the article

  • libkeybinder error on Arch Linux

    - by Nihar Sarangi
    I am trying to install a package that depends on python-keybinder and hence on libkeybinder. When I run makepkg for libkeybinder, it starts and after some time I get the following error:

        checking for XkbQueryExtension... no
        configure: error: Could not find XKB
        ==> ERROR: A failure occurred in build().
            Aborting...
        ==> ERROR: Makepkg was unable to build libkeybinder.

    I tried checking the PKGBUILD but wasn't able to find anything. :( Is there a workaround?
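
    The failing check is configure trying to link XkbQueryExtension, which is an Xlib function (declared in X11/XKBlib.h and provided by libX11), so the usual cause is missing X11 development files or no toolchain for the link test. A few hedged checks, assuming standard Arch package names:

        # XkbQueryExtension comes from libX11; Arch ships the headers in the same package
        pacman -Qi libx11 || sudo pacman -S libx11
        ls /usr/include/X11/XKBlib.h

        # configure's link test also needs a working compiler
        sudo pacman -S --needed base-devel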

    Read the article

  • Depending on a fixed version of a library and ignoring its updates

    - by Moataz Elmasry
    I was talking to a technical boss yesterday. It's about a project in C++ that depends on OpenCV; he wanted to check a specific OpenCV version into the svn and keep using that version, ignoring any updates, which I disagreed with. We had a heated discussion about it.

    His arguments:
    - Everything has to be delivered in one package, and we can't ask the client to install external libraries.
    - We depend on a fixed version so that new updates of OpenCV won't break our code. We can't guarantee that within a version update, e.g. from 3.2.buildx to 3.2.buildy, the function signatures won't change.

    My arguments:
    - True, everything has to be delivered to the client as one package, but that's what build scripts are for: they download the external libraries and create a bundle (see the sketch below).
    - Within updates of the same version, 3.2.buildx to 3.2.buildy, it's impossible that a signature changes, unless it is a really crappy framework, which isn't the case with OpenCV.
    - We deprive ourselves of new updates and features of that library. If there's a bug in the version we took, then even if there's a bug fix later, we won't be able to get it.
    - It's simply inefficient and poor design to depend on a specific version/build of an external library, as it makes it difficult for our project to adopt new changes in the future.

    So I'd like to know what you guys think. Does it really make sense to include a specific version of an external library in our svn and keep using it, ignoring all updates?
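
    A sketch of the build-script idea from my first point, which still pins an exact, checksum-verified release while keeping an upgrade a one-line edit (the version, URL, and checksum are placeholders):

        #!/bin/bash
        # Fetch a pinned OpenCV release at build time and verify it, so the
        # bundle is reproducible without committing the library to svn.
        OPENCV_VERSION=2.4.9          # upgrading = editing this line
        OPENCV_SHA256=<checksum of the release tarball>
        OPENCV_URL="https://example.com/opencv-${OPENCV_VERSION}.tar.gz"

        curl -L -o "opencv-${OPENCV_VERSION}.tar.gz" "$OPENCV_URL"
        echo "${OPENCV_SHA256}  opencv-${OPENCV_VERSION}.tar.gz" | sha256sum -c -
        tar xzf "opencv-${OPENCV_VERSION}.tar.gz"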

    Read the article

  • Wireless switch on Dell XT2 - strange behaviour of rfkill

    - by DyP
    I have a Dell Latitude XT2 with an Intel WLAN card (lspci lists it as "Intel Corporation Ultimate N WiFi Link 5300"), running Lubuntu 12.04 with recent updates. The laptop has a hardware WLAN switch. I have problems activating the WLAN when booting with the hardware switch set to "off". The situation is a bit confusing, unfortunately. rfkill lists two WLAN devices (though lspci only shows the Intel one). This is the situation when booting with the hardware switch set to "off":

        0: dell-wifi: Wireless LAN
                Soft blocked: yes
                Hard blocked: yes
        1: dell-bluetooth: Bluetooth
                Soft blocked: yes
                Hard blocked: yes
        2: phy0: Wireless LAN
                Soft blocked: yes
                Hard blocked: yes

    From some tests, I conclude WLAN is only activated when both dell-wifi and phy0 are unblocked in software and hardware. But I can only unblock dell-wifi after the hardware switch is set to "on". Procedure right from boot, with the hardware switch set to "off":
    - Soft-unblocking phy0 works as expected. Could be done by a start-up script.
    - sudo rfkill unblock 0: nothing happens. Soft block of dell-wifi not removed.
    - Set the hardware switch to "on": phy0 gets its hard block removed. Still no WLAN.
    - sudo rfkill unblock 0: both the soft and the hard block of dell-wifi are removed. WLAN is now active and works.
    - sudo rfkill block 0: only adds the soft block, as expected. WLAN goes off again.
    So, in order to activate WLAN, I have to use the hardware switch and afterwards (manually) run a script - that's a bit inconvenient. Does someone know a better solution? Maybe a daemon could help that listens to rfkill events and unblocks dell-wifi after I have set the hardware switch to "on"? (Sounds like another workaround.) When booting with the hardware switch set to "on", nothing is blocked, neither hard nor soft.
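
    The daemon idea can be approximated with a small watcher around `rfkill event`, which blocks and prints a line for every rfkill state change; this sketch (run as root, e.g. from an init script) lifts the soft block on dell-wifi - index 0 as in the listing above - as soon as an event shows its hard block has cleared:

        #!/bin/bash
        # React to the hardware switch: whenever an rfkill event arrives and
        # dell-wifi is no longer hard-blocked, remove its soft block too.
        rfkill event | while read -r line; do
            if ! rfkill list 0 | grep -q 'Hard blocked: yes'; then
                rfkill unblock 0
            fi
        done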

    Read the article

  • Bash: verify that process has stopped

    - by pfac
    I'm working on a script meant to start/stop a set of services. For stopping, it has to terminate many processes which take a while and might hang. The script needs to verify that the process has indeed terminated, and send an email if that does not happen within a given period. This is what I have:

        pkill -f "stuff"

        for i in {1..30}; do
            VERIFICATIONS=$i
            if verification_command; then
                echo "It's gone"
                break
            fi
            sleep 2
        done

        if [ $VERIFICATIONS -ge 30 ]; then
            echo "failed to terminate"
            # send mail
        fi

    Is there a better way to do this?
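
    One possible simplification, assuming GNU coreutils: record the PIDs first, then let `tail --pid` (which simply blocks until a process is gone) and `timeout` do the waiting, so there is no counter to maintain:

        # Wait up to 60 s for every matching process to exit.
        PIDS=$(pgrep -f "stuff")
        pkill -f "stuff"
        for pid in $PIDS; do
            if ! timeout 60 tail --pid="$pid" -f /dev/null; then
                echo "failed to terminate"   # send mail here
                break
            fi
        done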

    Read the article

  • Finding Locked Out Users

    - by Bart Silverstrim
    Active Directory on an up-to-2008 network (our servers are a mix of 2008, 2003...). I'm looking for a quick way to query AD to find out which users are locked out, preferably from a batch or script file, to monitor for possible issues with user accounts being attacked by an automated attack, or just anomalies in the network. I've Googled and my Google-fu has failed. I found a query off Microsoft's own knowledge base that cites a string to use on Server 2003 with the management snap-in's saved queries (http://support.microsoft.com/kb/555131), but when I entered it, the query returned 400 users, and a spot-check showed they did NOT have a checkmark in the "Account is locked out" box under "account". In fact, I don't see anything wrong with their accounts. Is there a simple utility (wisesoft bulkadusers apparently uses this method behind the scenes, since its results were also wrong) that will give a count of users and possibly their user object names? Script? Something?
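
    For a batch-file approach, one candidate is filtering on the lockoutTime attribute directly (a sketch; dsquery ships with the AD admin tools). The caveat, which may also explain the 400 false positives from the KB query: lockoutTime is not reset when the lockout duration expires, only on the next successful logon, so accounts that were once locked can still carry a non-zero value and hits should be spot-checked against the domain's lockout duration.

        rem List accounts with a recorded lockout and show when it happened
        dsquery * -filter "(&(objectCategory=person)(objectClass=user)(lockoutTime>=1))" ^
            -attr sAMAccountName lockoutTime -limit 0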

    Read the article

  • Does my dd-wrt installation support firewall logging?

    - by SpikeX
    I would like to log firewall events. I know this is possible with dd-wrt, but a lot of the documentation I've read states that this isn't possible with a micro installation of dd-wrt (based on BusyBox). I have a Netgear WNDR3700v2 router, and it does have BusyBox installed, but I don't know whether the dd-wrt build is a micro build or not. How can I find this out, or how can I find out whether my router supports firewall logging? Currently, if I enable all firewall logging (setting everything to "Enabled" and/or "High"), I get back blank firewall logs - but syslogd is working, because I can view other system log messages.

    Read the article

  • When I auto-start Supervisord on boot, the [program:start_gunicorn] doesn't start

    - by Charlesliam
    The [program:start_gunicorn] runs with no errors when I start supervisord manually with this setup:

        [program:start_gunicorn]
        command=/env/nafd/bin/gunicorn_start
        priority=1
        autostart=true
        autorestart=unexpected
        user=nafd_it
        redirect_stderr=true
        stdout_logfile=/env/nafd/logs/gunicorn_supervisor.log
        stderr_logfile=/env/nafd/logs/gunicorn_supervisor_err.log

    I can successfully run this init script for my supervisord. But when I use an auto-start init script for supervisord on boot, gunicorn is not running:

        ]# service gunicorn status
        gunicorn: unrecognized service

    What do I need to do to make [program:start_gunicorn] run when auto-starting supervisord on boot? Here's my gunicorn config, /env/nafd/bin/gunicorn_start:

        #!/bin/bash
        NAME="nafd_proj"
        DJANGODIR=/env/nafd/nafd_proj
        SOCKFILE=/env/nafd/run/gunicorn.sock
        NUM_WORKERS=1
        DJANGO_SETTINGS_MODULE=nafd_proj.settings
        DJANGO_WSGI_MODULE=nafd_proj.wsgi

        echo "Starting $NAME as 'NAFD Web Server'"
        source /env/nafd/bin/activate
        export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
        export PYTHONPATH=$DJANGODIR:$PYTHONPATH

        RUNDIR=$(dirname $SOCKFILE)
        test -d $RUNDIR || mkdir -p $RUNDIR

        cd /env/nafd/nafd_proj
        exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
            --bind=127.0.0.1:8001 \
            --name $NAME \
            --workers $NUM_WORKERS \
            --log-level=debug

    Any idea is really appreciated.
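
    A side note that may save a debugging round: `service gunicorn status` is expected to fail, because gunicorn is not an init service here - it is a program managed by supervisord - so its state has to be queried through supervisorctl. A sketch, using the program name from the config above:

        # Did supervisord itself come up at boot?
        service supervisord status

        # Ask supervisord about the managed program:
        supervisorctl status start_gunicorn
        supervisorctl start start_gunicorn
        supervisorctl tail start_gunicorn   # recent output, including startup errors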

    Read the article

  • How do I get PHP to work with UserDir

    - by Callmeed
    I've got a fresh CentOS 5.5 box and have installed Webmin+Virtualmin 3.79. I've enabled UserDir in Apache, and the sites are visible via http://ipaddress/~user/, but PHP does not work. (PHP works fine if I visit a site via its domain.) Here's what I put in my httpd.conf to get where I'm at:

        <IfModule mod_userdir.c>
            UserDir public_html
        </IfModule>

        <Directory /home/*/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            AddHandler fcgid-script .php5
        </Directory>

    When I try to hit a PHP file, I get a 500 error and the following is logged to /var/log/httpd/error_log:

        suexec failure: could not open log file
        fopen: Permission denied

    Any help/direction is appreciated.
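
    Those two lines come from the suexec binary failing to open its own log before any PHP runs. A few hedged checks (the log path below is the usual Red Hat compile-time default - confirm it with suexec -V on the box, and note that an SELinux context mismatch can produce the same permission error):

        # Show suexec's compile-time settings, including AP_LOG (its log path)
        sudo /usr/sbin/suexec -V

        # Make sure the log exists and is writable where suexec expects it
        ls -l /var/log/httpd/suexec.log
        sudo touch /var/log/httpd/suexec.log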

    Read the article

  • How to print a pdf in a new tab? [migrated]

    - by TheDuke777
    I need to print a pdf by opening it in a new window. I've looked at tutorials for days, but nothing is working. I've tried many approaches, but this is the most recent:

        <!DOCTYPE html>
        <html>
        <head>
        <script type="text/javascript">
        function print() {
            window.getElementById("pdf").focus();
            window.getElementById("pdf").print();
        }
        </script>
        </head>
        <body onload="print()">
        <embed id="pdf" src="http://path/to/file" />
        </body>
        </html>

    The page loads fine, with the pdf embedded. But it won't print, and I feel like I've been beating my head against a brick wall trying to figure this out. How can I get this to work? I'm willing to try anything at this point.
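
    Two bugs in that snippet are worth pointing out: getElementById is a method of document, not window, and naming the function print() shadows window.print, so onload="print()" calls the broken function instead of the browser's. A hedged corrected sketch using an iframe, whose content window can be asked to print - whether the embedded PDF viewer honours a scripted print still varies by browser and PDF plugin, and the PDF needs to be loaded before the call:

        <!DOCTYPE html>
        <html>
        <head>
        <script type="text/javascript">
        // Renamed so it no longer shadows window.print
        function printPdf() {
            var frame = document.getElementById("pdf"); // document, not window
            frame.focus();
            // Ask the frame's own window to print; support varies by viewer.
            frame.contentWindow.print();
        }
        </script>
        </head>
        <body onload="printPdf()">
        <iframe id="pdf" src="http://path/to/file" width="100%" height="600"></iframe>
        </body>
        </html>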

    Read the article

  • How to reduce the CPU load on a hosting with WordPress installed as a CMS? [on hold]

    - by Akky Awesøme
    I have been using HostGator's Hatchling plan for three months. I got an email from the host saying that my website is creating an overload on the CPU. They said that I am eating up their processor, and as a precaution they have temporarily suspended my account. When I contacted their customer support, they said: "You have to optimize your database and use some sort of caching mechanism, where the script does not need to generate a new page with every request; this helps to lower the load that a script will cause." I am not a technical geek, and I am wondering how I will do this. I don't have the resources to hire a web developer for this job, and my website has been down for 48 hours. I was using WP Super Cache along with Cloudflare's free plan. Now I have installed the optimize-db plugin and optimized my database. Please give me some more tips on how to optimize my database to reduce CPU usage. Any help would be appreciated.

    Read the article

  • How to kill all screens that have been up longer than 4 weeks?

    - by Darkmage
    I'm creating a script, to be executed every night at 03:00, that will kill all screens that have been running longer than 3 weeks. Has anyone done anything similar that can help? If you have a script or a suggestion for a better method, please help by posting :) I was thinking of something like this. First do a dump to a text file:

        ps -U username -ef | grep SCREEN > dump.txt

    Then loop through all lines of dump.txt with a regex, putting the PIDs of the processes whose STIME is 3 weeks ago or older into an array, then run a kill loop on the array result.
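
    A sketch that skips the dump file and the STIME parsing entirely, assuming a ps that supports the etimes column (elapsed time in seconds; present in procps-ng) - small enough to drop straight into the 03:00 cron job:

        #!/bin/bash
        # Kill this user's screen processes older than three weeks.
        MAX_AGE=$((21 * 24 * 3600))    # three weeks, in seconds
        ps -U username -o pid=,etimes=,comm= | while read -r pid age comm; do
            if [ "$comm" = "screen" ] && [ "$age" -gt "$MAX_AGE" ]; then
                kill "$pid"            # SIGTERM; the session dies with it
            fi
        done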

    Read the article

  • Migrating users and mailboxes from postfix / Maildir to Postfix with Mysql backend [closed]

    - by Chrispy
    So I've got 60 or so users on a hand-rolled Postfix installation on OpenBSD, and I'd like to move their mailboxes to our new mail server running iRedMail (Postfix, vmail/MySQL back end). Does anyone know of a good way to do this - preferably a script I can run to keep syncing the users' mailboxes as MX records get updated? I presume one way (though I don't have all their passwords!) would be to have a command-line IMAP client that simulated the users copying their mail themselves, but I'm sure there must be a shell/PHP script to migrate users? Anyone got any bright ideas? Chris.
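
    If both servers speak IMAP, imapsync is the usual tool for this kind of migration, and because it only copies messages missing on the destination it can be re-run repeatedly while MX records propagate. A per-user sketch - hostnames, users, and the CSV layout are placeholders, and without the users' passwords you would need admin/master-user authentication on the source (imapsync's --authuser1 option) instead of --password1:

        #!/bin/bash
        # Re-runnable sync of each user's mailbox from the old host to iRedMail.
        while IFS=';' read -r user oldpass newpass; do
            imapsync --host1 old.example.com --user1 "$user" --password1 "$oldpass" \
                     --host2 new.example.com --user2 "$user" --password2 "$newpass"
        done < users.csv   # hypothetical "user;oldpass;newpass" list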

    Read the article
