Search Results

Search found 24508 results on 981 pages for 'joel test'.


  • Linux - How to run Firefox with AT command

    - by conualfy
    I am trying to run a specific command at a scheduled time. I found at for this, and it seems to work fine if I run:

        echo "ls -al / > /home/florin/test.txt" | at 4:21am

    But I want to run something different:

        /usr/bin/firefox -new-tab http://google.ro

    I tried adapting the first line to my action (running it in a terminal opens a new Firefox tab with http://google.ro, so the command is correct), but with at it does not work:

        echo "firefox -new-tab http://google.ro" | at 4:23am

    The task seems to be scheduled, but it does not run; the only reply from at is its default warning: commands will be executed using /bin/sh. Should my Firefox command be run differently under sh? Is there a way to do this with at, or some other way? Thanks a lot!
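    One likely culprit (an assumption, not something confirmed above): at runs its jobs through /bin/sh with no X session attached, so a GUI program such as Firefox exits immediately for lack of a display. A minimal sketch that points the job at the running desktop first:

        # tell the job which X display to use; :0 is the usual first session
        echo 'export DISPLAY=:0; firefox -new-tab http://google.ro' | at 4:23am

    If the desktop runs under another user or display number, XAUTHORITY may need to be exported as well.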

    Read the article

  • Javascript Canvas Drawing Efficiency

    - by jujumbura
    I have just recently started experimenting with game development in Javascript/HTML5, and so far it has been going pretty well. I have a simple test scene running with some basic input handling and a hundred-ish drawImage() calls with a few transforms. This all runs great on Chrome, but unfortunately it already chugs on Firefox. I am using a very large canvas (1920 x 1080), but it doesn't seem like I should be hitting my limit already. So on that note, I was hoping to ask a few questions:

    1) What exactly is done on the CPU vs. the GPU in terms of canvas and drawImage()? I'm afraid the answer is probably "it depends on the browser", but can anybody give me some rules of thumb? I naively imagined that each drawImage call results in a textured quad on the GPU, with the canvas effectively being a render target, but I'm wondering if I'm pretty far off base there...

    2) I have seen posts here and there with people saying not to use the translate(), rotate(), scale() functions when drawing on the canvas. Am I adding a lot of overhead just by adding a translate() call, as opposed to passing the x,y to drawImage()? Some people suggest using "translate3d", etc., which are CSS properties, but I'm not sure how to use them within a scene. Can they be used for animated sprites within a single canvas?

    3) I have also seen a lot of posts with people mentioning that pre-building canvases and then re-using them is a lot faster than issuing all the individual draw calls again. I am guessing that my background should definitely be pre-built into a canvas, but how far should I take this? Should I maintain an individual canvas for each sprite, to cache all static image data when not animating?

    Thank you much for your advice!
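    On point 3), a minimal sketch of the usual pattern (drawBackground and drawSprites are assumed helpers, not code from the question): render the static art once into an offscreen canvas, then blit that single cached canvas each frame:

        // build the cache once, at load time
        var bgCache = document.createElement('canvas');
        bgCache.width = 1920;
        bgCache.height = 1080;
        drawBackground(bgCache.getContext('2d')); // the many static drawImage calls

        // per frame: one blit for the background, then only the animated sprites
        function render(ctx) {
            ctx.drawImage(bgCache, 0, 0);
            drawSprites(ctx);
        }

    Whether per-sprite caching pays off depends on how often each sprite's pixels actually change; for genuinely static art it is usually a clear win.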

    Read the article

  • What is the easiest way to get XtraDB for MySQL running on CentOS 5

    - by Jeremy Clarke
    I'm having a lot of issues with a dedicated MySQL server and it seems like upgrading to the XtraDB version of InnoDB will probably have a positive effect, but I'm hesitant to get involved with it since I am not really a sysadmin and prefer to stick with things that start with "yum update". What is the easiest way to get XtraDB installed? Should I use the Percona server? MariaDB? OurDelta? Is there a way to avoid using custom RPMs and sticking to a repo instead? The current yum version of MySQL is 5.0.xx, whereas a lot of the alternate MySQL builds are based on 5.1.xx. How does this factor in? Do I need to figure out 5.1 on CentOS before working on getting XtraDB in? For bonus points: Do I need to seriously test XtraDB with my server before implementing it, or is it relatively safe to have the brief downtime for switching servers followed by putting the site back online with XtraDB?
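    For what it's worth, the common Percona route stays inside yum once their repository is registered. A rough outline only; the repo and package names below are assumptions to verify against Percona's CentOS 5 documentation:

        rpm -Uvh percona-release.rpm        # register Percona's yum repository
        yum remove mysql-server mysql       # the stock 5.0 packages conflict with 5.1-based builds
        yum install Percona-Server-server   # Percona Server ships XtraDB in place of InnoDB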

    Read the article

  • SQL Server periodically gets disconnected

    - by Maulin
    Hi, our environment is:

    - Windows Server 2003, Service Pack 2
    - SQL Server Express 2005
    - SQL Server JDBC driver 1.2 (also tried jTDS)
    - Sun JDK 1.6 (we tried this on JDK 1.5 as well)

    There is no virus protection software on the host, and no firewall is enabled. We have a web application deployed in JBoss 4.0.2. Our problem is that the JDBC connection to SQL Server periodically gets disconnected, and then we can't reconnect to the SQL Server at all unless we physically restart the server on which JBoss is deployed. We are getting the following error in the log: Software caused connection abort: recv failed. Note: we are able to connect to SQL Server using a sample Java test class. Any suggestions would be most appreciated, as this is a serious, mission-critical problem for us right now.
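    When the hang occurs, a standalone probe run on the JBoss host can separate a dead network path from a wedged connection pool. A minimal sketch (the URL and credentials are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class SqlPing {
            public static void main(String[] args) throws Exception {
                // driver class name for the 1.2-era Microsoft JDBC driver
                Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
                Connection c = DriverManager.getConnection(
                        "jdbc:sqlserver://dbhost:1433;databaseName=master", "user", "pass");
                System.out.println("connected: " + !c.isClosed());
                c.close();
            }
        }

    If this probe connects while the application cannot, the fault is in the pool's handling of dropped sockets rather than in the network.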

    Read the article

  • Connect Lenovo Thinkpad Edge e120 to WQHD display (such as Dell U2711) through HDMI

    - by Fulvio
    On paper, the HDMI version of the Lenovo Thinkpad Edge e120 is 1.3, which supports WQHD. Does it actually work? I want to connect the e120 to a WQHD display, such as the Dell U2711 or an Apple Cinema Display. I run Windows 7 Ultimate with the latest drivers; I believe the e120 has the onboard Intel HD Graphics 3000 chipset. I don't have the chance to test the e120 with either of these displays, so I'm seeking an opinion from those who have tried. Thank you

    Read the article

  • No HTTP Response from Tomcat 7 EC2 instance

    - by David Kaczynski
    I am new to EC2 (and Tomcat, for that matter), and I am trying to deploy a vanilla Tomcat 7 server to an Ubuntu 12.04.1 EC2 instance and access the default test site over HTTP. My EC2 instance is running, and its Security Group allows port 80. My /etc/tomcat7/server.xml has been edited so that the HTTP Connector listens on port 80, and I have restarted Tomcat via sudo service tomcat7 restart. However, according to sudo netstat -lnp, Tomcat is not listed as listening on port 80, and I get no response from going to the ...amazonaws.com public DNS in a web browser. What am I missing?
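    One common cause (an assumption here, but it fits the symptoms): the tomcat7 service runs as an unprivileged user, which cannot bind ports below 1024, so the Connector fails at startup. Two things worth trying:

        # check Catalina's log for a bind failure on startup
        sudo tail -n 50 /var/log/tomcat7/catalina.out

        # or leave Tomcat on its default 8080 and redirect port 80 in the kernel
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

    authbind (or running the connector behind a fronting web server) is the other standard way to grant the low port without root.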

    Read the article

  • Why, when I lose my internet connection on Kubuntu, am I unable to reconnect?

    - by Jonathan
    Using Kubuntu 11.10 on a Sony Vaio. Network controller: Intel Corporation WiFi Link 5100. If I connect to a wireless network and the signal drops, I am then unable to connect to any network without a reboot. I can assure you the issue has nothing to do with the computer going to sleep, as I have experienced the above while using the computer continuously. Here is exactly what happens:

    - I connect to a network (at University, where the connection is not so great).
    - The connection is broken.
    - There are three other networks available, but none of them can be connected to; I have tried off and on, sometimes for hours.
    - I am always able to reestablish a connection after a reboot.

    I can only think of two explanations. The first is that a temp file is corrupted when the connection is abruptly dropped. The second is that my computer corrupts something before the loss of connection, which causes both the loss of signal and the inability to reconnect. However, I am not confident that my explanations are complete, nor do I have any idea how to test them.
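    A cheaper experiment than rebooting would narrow down where the corrupted state lives (the driver name is an assumption; Intel 5100 cards used iwlagn in kernels of this era):

        # restart the network management daemon only
        sudo service network-manager restart
        # if that is not enough, reload the wireless driver itself
        sudo rmmod iwlagn && sudo modprobe iwlagn

    If the driver reload restores connectivity where the daemon restart does not, the fault sits below NetworkManager, in the driver or firmware.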

    Read the article

  • How can I use Apache log files to recreate a usage scenario

    - by daigorocub
    Recently I installed a website that had too many requests and was too slow. Many improvements have been made to the website's code, and we've also bought a new server. I want to test the new server with exactly the same requests that made the old server slow. After that, I will double the requests, make new tests, and so on. These requests are logged in the Apache log files, so I can parse those files and write some kind of script to make the same requests. Of course, in this case the requests will be made only by my computer against the server, but hey, better than nothing. Questions:

    - Is there some app that does this already?
    - Would you use wget? ab? A Python script?

    Thanks!
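    For a quick home-grown version (host and paths are placeholders): in the common/combined log format the request path is the seventh field, so the URL list can be cut out with awk and fed to a replay tool such as siege:

        awk '{print "http://newserver" $7}' /var/log/apache2/access.log > urls.txt
        siege -f urls.txt -c 50 -r 1    # 50 concurrent clients, one pass through the list

    This replays paths but not the original timing or client mix; doubling load is then just a matter of raising -c or -r.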

    Read the article

  • Subversion for web designers: repository on a network share and FTP to the live server?

    - by ceatus
    My configuration:

    - htdocs on a Windows network share (Z:)
    - web developers check out with Dreamweaver, modify, and check back in to the Z: drive
    - a LAMP setup on an Ubuntu server virtualized on Hyper-V, with Apache pointed at the Z: drive so the websites can be tested in dev
    - upload by FTP to the live server

    Now: I need multiple access to the repository, I want to keep it on a network share, and we manage about 200 websites. All the web developers, administrators and IT need access to the share. I found out that creating an SVN server is the best way for me, so I created one on an Ubuntu Server virtualized on Hyper-V. Right now I have the repos local on the Ubuntu Server, but I'd like them on my network drive, and I'd like to have a post-commit hook, if possible, in order to FTP directly to my live server. Do you guys think a WebDAV solution would be better? Thanks in advance. Angelo
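    On the post-commit question, the shape of such a hook looks roughly like this (a sketch only; host, credentials and paths are placeholders, and lftp is just one client that can mirror over FTP):

        #!/bin/sh
        # hooks/post-commit: Subversion passes the repository path and revision
        REPOS="$1"
        REV="$2"
        # export a clean working-copy-free snapshot of the committed tree
        svn export --force "file://$REPOS" /tmp/site-export
        # push it to the live server over FTP
        lftp -u user,pass -e "mirror -R /tmp/site-export /htdocs; quit" ftp.example.com

    One caveat worth weighing first: keeping repositories on a network share is fragile with respect to locking, which is part of why serving them from the Ubuntu box over svnserve or WebDAV is the usual recommendation.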

    Read the article

  • Access my Mac's local server from a Windows machine

    - by Simon Davies
    I am running my test sites on my Mac, but I would like to be able to use my Windows machine to access this server and read/write files on it. I can access the sites from Windows by using the IP address 192.168.1.???, but I would also like to be able to open and edit the files themselves. Any ideas much appreciated. Some details:

    - Mac OS X Snow Leopard, file sharing set to on
    - Windows 7 (tried setting up a network map using afp://192.... but nothing)
    - both machines are connected to a local network, with several other (Windows) machines

    Thanks
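    Windows 7 cannot speak AFP natively, which would explain the afp:// attempt going nowhere; SMB is the common ground, and Snow Leopard's file sharing can serve it (Options > "Share files and folders using SMB"). A sketch from the Windows side, with the IP, share name and account as placeholders:

        net use Z: \\192.168.1.100\Sites /user:macusername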

    Read the article

  • Regression testing for firewall changes

    - by James C
    We have a number of firewalls in place around our organisation, and in some cases packets pass through four levels of firewall limiting the flow of TCP traffic. A concept I'm used to from software testing is regression testing, which lets you run a test suite against a changed application to verify that the new changes haven't broken any old features. Does anyone have any experience with, or can anyone offer solutions for, doing the same kind of thing with firewall changes and network testing? The problem is complicated by the fact that you'd ideally want to originate (and test receipt of) packets across many machines.
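    One workable sketch of such a suite (hosts and ports are placeholders): keep a known-good scan as the expected baseline, and diff a fresh scan against it after every rule change:

        nmap -p 22,80,443 -oG scan-after.txt apphost dbhost   # grepable output
        diff scan-before.txt scan-after.txt                   # any drift is a regression

    The multi-origin requirement still means running the scan from a probe host on each source network (for example over ssh) so that every firewall layer between each origin and destination gets exercised.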

    Read the article

  • Odd Suhosin memory alerts

    - by slice
    I am getting a lot of odd Suhosin alerts in my syslog. The following are example entries:

        Jun 9 08:46:11 suhosin[9764]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker '157.55.39.180', file '/var/www/site/index.php')
        Jun 9 08:46:11 suhosin[9744]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker '109.74.2.136', file '/var/www/site/test.php')
        Jun 9 08:46:13 suhosin[9779]: ALERT - script tried to increase memory_limit to 0 bytes which is above the allowed value (attacker 'REMOTE_ADDR not set', file 'unknown')
        Jun 9 08:46:13 suhosin[9779]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker 'REMOTE_ADDR not set', file 'unknown')

    What is happening here? Why 0 bytes, or 2145386496 bytes (2046 MB, just shy of 2 GiB)? Why does it sometimes state the attacker and the requested script, and sometimes 'REMOTE_ADDR not set' and file 'unknown'? How do I proceed to figure this out?
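    One pattern worth grepping the codebase for (a guess, not a diagnosis): libraries often try to lift the memory limit at runtime, and 2145386496 bytes is exactly 2046M, the kind of value such code produces:

        // PHP fragment of the suspected shape, for illustration only
        ini_set('memory_limit', -1);       // "unlimited"; Suhosin rejects and logs it
        ini_set('memory_limit', '2046M');  // 2046 * 1048576 = 2145386496 bytes

    If that holds, the 'REMOTE_ADDR not set', file 'unknown' entries would be the same code paths running from CLI or cron, where no client address or script path exists.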

    Read the article

  • Dual monitors with one above the other?

    - by Felix
    I'm using Gnome 3 and the proprietary Nvidia drivers. In nvidia-settings I have tried setting my external monitor to be "above" my main one (it's a laptop). However, when I try to drag a window up from the main display to the external one, it gets stuck and can't move past a certain point. Trying to maximize it changes its decoration so it looks maximized (i.e. no borders, etc.), but its size and position don't change. Now, if I set my external monitor to be "to the left" of the main one, it works, which is why I suspect this is a Gnome issue, not an Nvidia one. Anyone know how to fix this?

    Update, some versions: Gnome 3.2.2.1, Nvidia 280.13.

    Update 2: I can see that Gnome 3.4 is out, and among the release notes is better external monitor support. However, they only mention a small fix that is unrelated to my problem. Can anyone with Gnome 3.4 and access to an external monitor please test this out and tell me if it works? I don't want to go through the hassle of upgrading my Ubuntu installation unless I know for certain it's going to fix the problem.
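    One way to test whether the limitation is in the Nvidia panel rather than in Gnome itself is to request the same layout through RandR directly (output names are placeholders; xrandr -q lists the real ones):

        xrandr --output HDMI-0 --above LVDS-0

    A caveat, stated as an assumption: proprietary Nvidia drivers of the 280.x era did not expose RandR 1.2, so if xrandr refuses, the layout has to go through nvidia-settings/TwinView metamodes, and that itself narrows down where the bug lives.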

    Read the article

  • File permission mask/mode settings for Samba on FreeNAS?

    - by tkahn
    I'm currently working on the Samba settings on a FreeNAS server. When any user creates a file or a folder on the server, I want it to get the following permissions:

        Folders: drwxrws---
        Files:   -rwxrws---

    Setting the permissions manually with chmod 2770 works great, but I want this to happen automatically, so I've added the following lines to smb.conf:

        create mask = 2770
        directory mask = 2770
        force create mode = 2770
        force directory mode = 2770

    But when I test by creating a file in one of the folders, it gets these permissions:

        Folder: drwxrwx
        File:   -rwxrw----

    What am I overlooking or doing wrong? Is the order of the lines relevant? Does the setgid digit (the 2 in 2770) mess things up?
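    A point worth checking (an aside, not a verified fix): the masks are ANDed with the permissions the client requests, so they can only take bits away, while the force modes are ORed in afterwards. The usual shape is therefore a plain mask plus a forced mode, with the group inheritance coming from setgid on the parent directory itself:

        create mask = 0770
        force create mode = 0770
        directory mask = 2770
        force directory mode = 2770

    combined with a one-time chmod g+s on the share root, which Unix then propagates to new subdirectories. Getting the setgid bit onto plain files (the s in -rwxrws---) is the unusual part and may not survive every Samba version.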

    Read the article

  • Large keepalive_requests values are severely slowing down Nginx

    - by Gil
    When running a beacon (43-byte transparent pixel) load test on Nginx, we have tried several keepalive_requests values (from 10 to 100,000), and the optimal value seems to be 10. Here are the server HTTP headers of this tiny reply:

        HTTP/1.1 200 OK
        Server: nginx/1.5.6
        Date: Wed, 23 Oct 2013 12:39:45 GMT
        Content-Type: image/gif
        Content-Length: 43
        Last-Modified: Mon, 28 Sep 1970 06:00:00 GMT
        Connection: keep-alive

    Nginx is twice as slow with keepalive_requests 100000 as with keepalive_requests 10. Can you help us understand that result, or tell us what we are doing wrong? For reference, here is the nginx.conf file.
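    For readers trying to reproduce this, the directives involved have the following shape (a sketch only; the poster's actual nginx.conf is the file referenced above, not this):

        http {
            keepalive_timeout  65;
            keepalive_requests 10;    # the value swept from 10 to 100000 in the test
        }

    keepalive_requests caps how many requests one connection may serve before Nginx closes it, so the sweep is effectively trading connection-setup cost against per-connection memory held for the life of the socket.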

    Read the article

  • Book Review: Professional ASP.Net MVC4

    - by Sam Abraham
    The past few weeks have been particularly busy as I continue to dedicate a bigger portion of my free time to refreshing my memory and enhancing my knowledge of best practices pertaining to technologies we plan on using for a major upcoming project. In this blog post, I will be providing a brief overview of my latest reading, "Professional ASP.NET MVC4" by Jon Galloway, Phil Haack, Brad Wilson and K. Scott Allen. This book is a must-read for web developers looking to enhance their MVC expertise with best practices and tips shared by recognized industry experts. It takes the reader on a 16-chapter journey towards being a better ASP.NET MVC developer, with chapter 16 putting all the information covered into practical context by dissecting the implementation of Nuget.org, a real-life, open-source ASP.NET MVC project. All code samples referenced in this book are conveniently accessible via NuGet, a free, open-source library package manager that installs as a Visual Studio extension. Chapters 2, 3 and 4 thoroughly cover MVC's various components: Controllers "C", Views "V" and Models "M" respectively. Chapter 5 covers the extension methods (helpers) provided to speed and ease the use of common HTML elements such as forms, textboxes and grids, to name a few. Chapter 6 tackles built-in validation while providing examples and use cases for implementing custom validation that plugs into the MVC framework. Chapters 7 through 13 discuss the latest on membership, Ajax, routing, NuGet and the ASP.NET Web API. Chapters 12 (Dependency Injection) and 13 (Unit Testing) demonstrate a big competitive advantage of MVC: its testability and pluggability. Chapters 14 and 15 target the advanced developer, showcasing how to extend MVC to customize and replace every piece of the framework. In conclusion, I strongly recommend Professional ASP.NET MVC 4 as an excellent read for both developers already using MVC and those getting started with the framework. Many thanks to the Wiley/Wrox User Group Program for their support of our West Palm Beach Developers' Group. You can access my reviews of books I recently read:

    - Professional ASP.NET Design Patterns
    - Professional WCF 4.0
    - Inside Windows Communication Foundation
    - Inside Microsoft SQL Server 2008 series

    Read the article

  • iptables: built-in INPUT chain in nat table?

    - by ughmandaem
    I have a Gentoo Linux system running Linux 2.6.38-rc8, a machine running Ubuntu with Linux 2.6.35-27, and a virtual machine running Debian Unstable with Linux 2.6.37-2. On the Gentoo and Debian systems I have an INPUT chain built into my nat table, in addition to PREROUTING, OUTPUT, and POSTROUTING. On Ubuntu, I only have PREROUTING, OUTPUT, and POSTROUTING. I am able to use this INPUT chain with SNAT to modify the source of a packet that is destined for the local machine (imagine simulating a spoofed source IP to a local application, or just testing a virtual host configuration). This is possible with two firewall rules on Gentoo and Debian, but seemingly not on Ubuntu. I looked around for documentation on changes to the SNAT target and the INPUT chain of the nat table and couldn't find anything. Does anyone know if this is a configuration issue, or is it something that was added in more recent versions of Linux?
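    For reference, the presence test is a one-liner, and the kernels quoted line up with the nat table's INPUT chain having been introduced around 2.6.36 (an assumption worth checking against the netfilter changelog: 2.6.35 lacks it, 2.6.37+ have it):

        iptables -t nat -L INPUT -n   # errors out where the chain does not exist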

    Read the article

  • Are R&D mini-projects a good activity for interns?

    - by dukeofgaming
    I'm going to be in charge of hiring some interns for our software department soon (automotive infotainment systems), and I'm designing an internship program. The main productive activity "menu" I'm planning for them consists of:

    - Verification testing
    - Writing unit tests (automated, with an xUnit-compliant framework [several languages in our projects])
    - Documenting code
    - Updating the wiki
    - Updating diagrams & design docs
    - Helping with low-priority tickets (supervised/mentored)
    - Hunting down & cleaning compiler/run-time warnings
    - Refactoring/cleaning code against our coding standards

    But I also have this idea that having them do small R&D projects would be good to test their talent and get them to have fun. These mini-projects would be:

    - Experimental implementations & optimizations
    - Proof-of-concept implementations for new technologies
    - Small papers (~2-5 pages) doing formal research on the previous two points
    - Apps (from a mini-project pool)

    These projects would be pre-defined and very concrete, although new ideas from the interns themselves would be very welcome. Even if a project is too big or is abandoned, the idea would be to lay the groundwork so it can be retaken by another intern or intern team. While I think this is good in concept, I don't know if it would be good in practice, as it would obviously diminish their productivity on "real work" (work with immediate value to the company). But I think it could help bring aboard very bright people and get them to want to stay in the future (which, I think, is the end goal of any internship program). My question here is whether these activities are too open-ended or difficult for the average intern to accomplish, and whether R&D is an efficient use of an intern's time or whether it makes more sense to assign project work to interns instead.

    Read the article

  • Install on Acer Aspire 4752

    - by user216962
    I am at my wits' end with this computer. I bought an Acer Aspire 4752 with a fully loaded version of Windows 7 on it. I prefer Ubuntu, so I began to install 14.04 from USB. I got the error:

        [Errno 5] Input/output error

        This is often due to a faulty CD/DVD disk or drive, or a faulty hard disk. It may help to clean the CD/DVD, to burn the CD/DVD at a lower speed, to clean the CD/DVD drive lens (cleaning kits are often available from electronics suppliers), to check whether the hard disk is old and in need of replacement, or to move the system to a cooler environment.

    So I tried a different USB stick: same error. I tried different versions of Ubuntu: same error. I've used Startup Disk Creator and UNetbootin to make bootable USB devices, and I can boot from the USB drive and run Ubuntu that way. I even checked the hard drive using the tools in Ubuntu; everything was fine, except it said the hard drive was hot. I tried a different hard drive and got the same error as above. I ran a test with memtest86 and everything was fine. No matter what I do, using the USB gives me the Errno 5 error. I then switched to using DVDs, and now I keep getting a decompression error when installing Ubuntu 14.04 or 12.04. I can't figure out for the life of me why I get nothing but errors. Can anyone help?
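    Since the same error follows across sticks, drives and releases, it is worth ruling out a corrupt downloaded image before blaming hardware (the file name is a placeholder; compare the output against the MD5SUMS list published on the Ubuntu release page):

        md5sum ubuntu-14.04-desktop-amd64.iso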

    Read the article

  • Running a CRON job on an Ubuntu server for SugarCRM

    - by Logik
    I am pretty inexperienced in Linux, so be descriptive in your answer. My environment: a local Linux server (Ubuntu 12.04) hosting SugarCRM 6.5.2. There is an area in SugarCRM called the scheduler, where I can configure some predefined jobs; in my case I am trying to run email reminders (every min/hour/day/month). For this scheduler to be effective, I read somewhere that I need to set up a cron job. So I did some research and finally put the following line in the crontab for the root user, as per the instructions given by SugarCRM:

        cd /var/www/crm; php -f cron.php > /dev/null 2>&1

    Well, I am creating contracts in my SugarCRM (AOS module) and I want email reminders for these contracts to be sent to the concerned person. My SugarCRM email is configured correctly and I can send test emails using it, but the cron + scheduler combination gives no result: I receive no emails. I tried reading /var/log/syslog, and it shows an entry for the following line each minute:

        Oct 27 15:03:01 unicomm CRON[28182]: (root) CMD (cd /var/www/crm; php -f cron.php > /dev/null 2>&1)

    I've a few questions:

    1) What does the cron job line I've added to the crontab mean? cd /var/www/crm; php -f cron.php > /dev/null 2>&1 is not making any sense to me.

    2) How am I supposed to get this thing to work? I've searched a lot (including the SugarCRM forum), but no luck.
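    Taking question 1) piece by piece, the documented SugarCRM entry reads (with the shell redirections restored; they appear to have been eaten by HTML escaping somewhere along the way):

        * * * * * cd /var/www/crm; php -f cron.php > /dev/null 2>&1

    Every minute, change into the CRM root, run its scheduler script under the PHP command-line interpreter, discard standard output, and send standard error to the same place. The jobs configured in the scheduler UI only fire when this script runs, so both this entry and the in-app email settings have to be healthy; removing the redirections temporarily (or redirecting to a log file instead of /dev/null) is the usual way to see what cron.php is actually complaining about.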

    Read the article

  • How to apply verification and validation to the following example

    - by user970696
    I have been following verification and validation questions here with my colleagues, yet we are unable to see the slight differences, probably because of a language barrier in technical English. An example:

    Requirement specification: the user wants to control the lights in 4 rooms by remote command sent from the UI, for each room separately.

    Functional specification: the UI will contain 4 checkboxes labelled according to the rooms they control. When a checkbox is checked, a signal is sent to the corresponding light and a green dot appears next to the checkbox. When a checkbox is unchecked, the turn-off signal is sent to the corresponding light and a red dot appears next to the checkbox.

    Let me start with what I learned here. Verification, according to many great answers here, ensures that the product reflects the specified requirements: since the functional spec is written by the producer based on the customer's requirements, it will be verified for completeness and correctness. Then the design document will be checked against the functional spec (does it design the 4 checkboxes?), and the source code against the design (is there code for the 4 checkboxes, functions to send the signals, etc.; is it traceable to the requirements?).

    Okay, the product is built and we need to test it: validate. Here comes our understanding trouble. Validation should ensure the product meets the requirements for its specific intended use, which is basically the business requirement (does it work? can I control the lights from the UI?), but testers will definitely work with the functional spec, making sure the checkboxes are there, working, labelled, etc. They are basically checking whether the requirements in the functional spec were met in the final product; isn't that verification? (It should not be; let's stick to ISO 12207, in which only validation is the actual testing.)

    Read the article

  • RAID 1 not performing as expected

    - by Faken
    I recently bought a new 320 GB hard drive for my computer to set up RAID 1 for some added security. Installation went as smoothly as could possibly be (plug in power, plug in data cable, start up computer, Intel software recognizes new drive, right click, create RAID 1, done!). However, for some inexplicable reason, I get strange test results when using BENCH32. On my old configuration, a single 7200 rpm drive, I achieved about 60 MB/s write and 70 MB/s read. With the new RAID 1 configuration, I would expect write to be slightly diminished but read to be significantly improved (though not exactly double speed). However, with the new configuration I am getting 90 MB/s write and only about 80 MB/s read. I should NOT be getting improved write performance, especially NOT better than read! What's going on? My system setup is:

    - Q6600 2.4 GHz CPU
    - 4 GB DDR2 667 MHz RAM
    - onboard Intel ICH9R "RAID chip"
    - 2x Seagate 7200 RPM 320 GB drives in RAID 1
    - Windows 7 Home Premium 64-bit

    Read the article

  • cron job executing every minute but should be set up to execute every 4 hours

    - by Frank V
    Note: I've viewed cron: can't lock /var/run/crond.pid, otherpid may be 3759, but I believe my question is different (though with the same resulting problem). I'm very new to cron. I set up a crontab entry to run a Python script every minute, to test that everything was working. It worked great, so I wanted to switch it to run every 4 hours. I changed my * * * * * {...} to * */4 * * * {...}, but the job continues to run every minute. It's been like this for the last hour or so. When I attempt to run cron restart (thinking that would solve the problem), I receive the following error message:

        cron: can't lock /var/run/crond.pid, otherpid may be 2311: Resource temporarily unavailable

    Is my cron syntax wrong? And why might I not be able to restart cron?
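    The minute field is the catch here: in * */4 * * * the leading * still matches every minute of each hour that */4 matches. Pinning the minute gives a true every-four-hours schedule (the script path is a placeholder):

        0 */4 * * * /usr/bin/python /path/to/script.py

    The crond.pid complaint usually just means a second daemon was started by hand while the packaged one was already running; going through the init script, e.g. sudo service cron restart, avoids racing the live daemon. Note also that crontab edits take effect on their own, without a restart.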

    Read the article

  • Problems executing check_mysql plugin

    - by Brian
    I'm attempting to test the check_mysql plugin for Nagios from the command line. It works great when I'm just checking localhost, but when I try to specify a different server (using the -H argument) I keep getting the following error message:

        CRITICAL - Unable to connect to mysql://nagios_user@{remoteServerHostname}/ - Access denied for user 'nagios_user'@'{localServerHostName}' (using password: YES)

    It looks like it's trying to connect using the local server name even though I've specified a different host. I've already set up this user on the remote database with the correct permissions. I'm just trying to figure out why the script keeps inserting the local server name instead of the remote one.
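    One reading of that error, offered as a likely explanation rather than a certainty: the plugin is reaching the remote server, and MySQL is reporting the identity of the client as the remote server sees it, which is user@{localServerHostName}. The grant on the remote server therefore has to name the monitoring host. A sketch, with the host and password as placeholders:

        GRANT SELECT ON *.* TO 'nagios_user'@'monitoring-host.example.com'
            IDENTIFIED BY 'secret';
        FLUSH PRIVILEGES;

    An existing 'nagios_user'@'localhost' grant on the remote box would explain why local checks there succeed while remote ones are denied.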

    Read the article

  • Motherboard rejects identical hard drive, one works the other doesn't

    - by Payson Welch
    I have an interesting situation. I have a Dell XS23-SB server that holds four blades. The blades use Supermicro X7DWT motherboards and interface with the SATA drives through a backplane. I took two identical drives from a factory RAID 0 enclosure (GDrive); one works on all four servers, the other does not. I verified that both drives work by plugging them into a hard drive cradle. This behavior can be repeated with other drives: some drives work and some don't. However, when I test them, they ALL work on my PC. What could cause this?

    Read the article
