Search Results

Search found 10170 results on 407 pages for 'regression testing'.


  • Is '@' Error Suppression a Valid Technique for Testing for an Optional Array Key?

    - by MikeSchinkel
    Rarst and I were debating offline about the use of the '@' error suppression operator in PHP, specifically for testing the existence of "optional" array keys, i.e. array keys that are being used as a switch, where their absence from the array is functionally equivalent to the array having the key with a value equaling false. Here is pseudo-code for this scenario: function do_something( $args = array() ) { if ( @$args['switch'] ) { // Do something with this switch } // continue on... } vs. this approach: function do_something( $args = array() ) { if ( ! empty( $args['switch'] ) && $args['switch'] ) { // Do something with this switch } // continue on... } Of course in most use-cases, suppressing errors would not be A Good Thing(tm). However in this use-case where an array is passed with an optional element, it seems to me that it is actually a very good technique, but I could be wrong and would like to hear others' opinions on the subject before I make up my mind. I do know that there are alleged performance hits for using the former approach, but I'd like to know how they compare with the alternative and whether the performance hits really matter in real-world scenarios. P.S. I decided to post this because, after debating this offline with Rarst, he asked a more general question here on Programmers but didn't actually give a detailed example of the specific use-case we were debating. And since I'm pretty sure he'll want to use the out-of-context answers on that other question as justification for why the above is "bad" I decided I needed to get opinions on this specific use-case.

    Read the article

  • How do I use depth testing and texture transparency together in my 2.5D world?

    - by nbolton
    Note: I've already found an answer (which I will post after this question) - I was just wondering if I was doing it right, or if there is a better way. I'm making a "2.5D" isometric game using OpenGL ES (JOGL). By "2.5D", I mean that the world is 3D, but it is rendered using 2D isometric tiles. The original problem I had to solve was that my textures had to be rendered in order (from back to front), so that the tiles overlapped properly to create the proper effect. After some reading, I quickly realised that this is the "old hat" 2D approach. This became difficult to do efficiently, since the 3D world can be modified by the player (so stuff can appear anywhere in 3D space) - so it seemed logical that I take advantage of the depth buffer. This meant that I didn't have to worry about rendering stuff in the correct order. However, I faced a problem. If you use GL_DEPTH_TEST and GL_BLEND together, it creates an effect where objects are blended with the background before they are "sorted" by z order (meaning that you get a weird kind of overlap where the transparency should be). Here's some pseudo code that should illustrate the problem (incidentally, I'm using libgdx for Android). create() { // ... // some other code here // ... Gdx.gl.glEnable(GL10.GL_DEPTH_TEST); Gdx.gl.glEnable(GL10.GL_BLEND); } render() { Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA); // ... // bind texture and create vertices // ... } So the question is: How do I solve the transparency overlap problem?
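    A minimal sketch of the usual fix on the fixed-function GL10 path that the pseudo code above uses: keep GL_DEPTH_TEST and GL_BLEND, but also enable alpha testing so fully (or nearly) transparent texels are rejected before they can write to the depth buffer. The Gdx.gl10 context and the 0.5 cutoff are assumptions for illustration, not taken from the question.

      public void create() {
          Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
          Gdx.gl.glEnable(GL10.GL_BLEND);
          // Reject fragments whose alpha is at or below the cutoff so they never
          // reach the depth buffer and cannot occlude the tiles behind them.
          Gdx.gl.glEnable(GL10.GL_ALPHA_TEST);
          Gdx.gl10.glAlphaFunc(GL10.GL_GREATER, 0.5f);
      }

      public void render() {
          Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
          Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
          // ... bind texture and create vertices as before ...
      }

    The hard cutoff works well for tiles with crisp edges; genuinely semi-transparent sprites still need to be drawn back to front after the opaque pass.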

    Read the article

  • vector collision on polygon in 3d space detection/testing?

    - by LRFLEW
    In the 3D FPS in Java I'm working on, I need a bullet to be fired and to tell if it hit someone. All visual objects in the game are defined through OpenGL, so the object it can collide with can be any drawable polygon (although they will most likely be triangles and rectangles anyway). The bullet is not an object, but will be treated as a vector that instantaneously moves all the way across the map (like the sniper rifle in Halo). What's the best way to detect/test collisions between the vector and a polygon? I have access to OpenCL, however I have absolutely no experience with it. I am very early in the development stage, so if you think there's a better way of going about this, feel free to tell me (I barely have a player model to collide with anyway, so I'm flexible with it). Thanks
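    Since the bullet is effectively a ray, a common CPU-side approach (no OpenCL needed at this scale) is to break each polygon into triangles and run a ray/triangle test such as Moller-Trumbore against every candidate, keeping the closest hit. A rough Java sketch, with a small hypothetical Vec3 helper:

      public final class RayTriangle {

          public static final class Vec3 {
              public final float x, y, z;
              public Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
              Vec3 sub(Vec3 o)   { return new Vec3(x - o.x, y - o.y, z - o.z); }
              Vec3 cross(Vec3 o) { return new Vec3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x); }
              float dot(Vec3 o)  { return x * o.x + y * o.y + z * o.z; }
          }

          // Returns the distance along the ray to the hit point, or Float.NaN
          // if the ray misses the triangle (v0, v1, v2).
          public static float intersect(Vec3 origin, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
              final float EPS = 1e-6f;
              Vec3 e1 = v1.sub(v0);
              Vec3 e2 = v2.sub(v0);
              Vec3 p = dir.cross(e2);
              float det = e1.dot(p);
              if (Math.abs(det) < EPS) return Float.NaN;   // ray parallel to triangle plane
              float invDet = 1f / det;
              Vec3 t = origin.sub(v0);
              float u = t.dot(p) * invDet;
              if (u < 0f || u > 1f) return Float.NaN;      // outside first barycentric bound
              Vec3 q = t.cross(e1);
              float v = dir.dot(q) * invDet;
              if (v < 0f || u + v > 1f) return Float.NaN;  // outside second barycentric bound
              float dist = e2.dot(q) * invDet;
              return dist > EPS ? dist : Float.NaN;        // hit must be in front of the origin
          }
      }

    Testing the shot against every enemy triangle and keeping the smallest returned distance gives the first thing the bullet hits; a spatial structure (grid or BVH) can prune candidates later if this ever becomes too slow.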

    Read the article

  • From the Tips Box: iPhone Sleep Monitors, Testing IR Remotes with a Camera, and Glowing Easter Eggs Redux

    - by Jason Fitzpatrick
    Once a week we round up some great reader tips and share them with everyone. This week we’re looking at using your iPhone as a sleep monitor that wakes you at an optimum time, how to test your remote with a digital camera, and a clever way to craft glowing Easter eggs.

    Read the article

  • Are "Compile to JavaScript" Frameworks Hostile to Continuous Integration?

    - by joshin4colours
    Lately we've been looking at ways to improve automated testing and related tooling of our enterprise-level GWT web app. I've realized that in some ways, GWT is a bit hostile to automated testing, mainly because of the nature of the long GWT compile times from Java to JS. This makes unit testing somewhat challenging, but it also puts some roadblocks up for testing in a CI environment. I've also found that some of our build and deployment processes are somewhat complicated due to the nature of GWT's compile process. Is this a general problem for "compile to JS" frameworks for webapps? I don't have much experience with them, but I can see some potential problems for automated testing and continuous integration and deployment. Some issues I see: long build and compile times preventing quick deployments; the language the app is developed in != JS, preventing good unit testing; obfuscated JS in the actual app makes it more like an executable than a web app. Are these issues present in other similar frameworks, or is this more a GWT issue?
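    A mitigation commonly used with GWT, offered here only as a hedged sketch: structure the client code MVP-style so most logic has no GWT dependency and can be covered by plain, fast JUnit tests on the JVM, reserving GWTTestCase and browser-driven runs (which pay the compile cost) for a small end-to-end suite in CI. Everything below is hypothetical example code, not taken from the question:

      import static org.junit.Assert.assertEquals;
      import org.junit.Test;

      public class LoginPresenterTest {

          // Hypothetical view interface and presenter, deliberately free of GWT types.
          interface LoginView { void showError(String message); }

          static class LoginPresenter {
              private final LoginView view;
              LoginPresenter(LoginView view) { this.view = view; }
              void onLoginClicked(String user, String password) {
                  if (password == null || password.isEmpty()) {
                      view.showError("Password is required");
                  }
              }
          }

          // Hand-rolled fake so the test needs no browser and no GWT compile.
          static class FakeView implements LoginView {
              String shownError;
              @Override public void showError(String message) { shownError = message; }
          }

          @Test
          public void emptyPasswordIsRejected() {
              FakeView view = new FakeView();
              new LoginPresenter(view).onLoginClicked("alice", "");
              assertEquals("Password is required", view.shownError);
          }
      }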

    Read the article

  • How to structure my AdWords campaign for testing and different groups of keywords?

    - by Romain Dorange
    I am starting an AdWords campaign and I will measure conversion rates using the AdWords conversion tracking pixel. Conversion might be account creation or a concrete sale. As it will be a test campaign to get some insight into CTR, CR, etc. for the future, I am likely to try several configurations: two different ads with different landing URLs and messages (one with a focus on the product, the other containing a discount embedded in the URL), and four different groups or themes of keywords. I guess I have to build four ad groups based on the keywords, create the two ads with the different messages, assign the two ads to each ad group, and follow the campaign precisely in the ads tab where I can see the effectiveness of each ad per ad group (for a total of 8 lines of reporting). Also, what are the key performance indicators that I can get from an AdWords campaign to measure global effectiveness? Measure of return on investment from concrete sales (tracking pixel with e-commerce tag on confirmation page), measure of return on investment from lead acquisition (tracking pixel on account creation), measure of traffic increase with the campaign.

    Read the article

  • Which swing testing frameworks are well suited to TDD?

    - by Niel de Wet
    I am trying to follow a BDD/TDD approach to developing an IntelliJ IDEA plugin, and in order to do a full acceptance test I want to exercise my plugin through the GUI. I know that I can do it using Window Licker, but looking at the commit log there hasn't been any activity since 2010. I see there are several other frameworks, but which are current and suited for TDD? If you have any experience with swing and TDD, please share those as well.
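    Frameworks aside, much of Swing TDD comes down to driving components on the Event Dispatch Thread and asserting on the result, which the GUI-testing libraries then wrap with component lookup and robot APIs. A framework-free JUnit sketch of that pattern, using a small hypothetical CounterPanel defined inline:

      import static org.junit.Assert.assertEquals;

      import javax.swing.JButton;
      import javax.swing.JLabel;
      import javax.swing.JPanel;
      import javax.swing.SwingUtilities;
      import org.junit.Test;

      public class CounterPanelTest {

          // Hypothetical production component: a button that increments a label.
          static class CounterPanel extends JPanel {
              final JLabel count = new JLabel("0");
              final JButton increment = new JButton("+");
              CounterPanel() {
                  increment.addActionListener(
                      e -> count.setText(String.valueOf(Integer.parseInt(count.getText()) + 1)));
                  add(increment);
                  add(count);
              }
          }

          @Test
          public void clickingIncrementUpdatesTheLabel() throws Exception {
              CounterPanel panel = new CounterPanel();
              // Drive the component on the EDT, exactly as a GUI-testing robot would.
              SwingUtilities.invokeAndWait(() -> panel.increment.doClick());
              final String[] text = new String[1];
              SwingUtilities.invokeAndWait(() -> text[0] = panel.count.getText());
              assertEquals("1", text[0]);
          }
      }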

    Read the article

  • Do I lose/gain performance for discarding pixels even if I don't use depth testing?

    - by Gajoo
    When I first searched for the discard instruction, I found experts saying that using discard will result in a performance drain. They said discarding pixels breaks the GPU's ability to use the z-buffer properly, because the GPU has to first run the fragment shader for both objects to check whether the one nearer to the camera is discarded or not. For the 2D game I'm currently working on, I've disabled both depth-test and depth-write. I'm drawing all objects sorted by their depth and that's all, no need for the GPU to do fancy things. Now I'm wondering: is it still bad if I discard pixels in my fragment shader?

    Read the article

  • Setting up a LAMP VM server for Development and Testing?

    - by TdotThomas
    Info: I would like to set up a VM server on my local computer which will serve pages in the exact same way as my current hosting (but only to me on my local computer). I currently pay a big web hosting company to host my website & web store and they are doing a great job, but I would like to be able to work on my website and its corresponding MySQL DB, HTML, and PHP code without being at risk of messing something completely up on the live servers. My current plan of action: Set up a VM webserver with Debian, MySQL, PHP, Apache. Copy web store (PHP/HTML) code to VM server. Copy my current MySQL databases from my hosting provider and install on VM server. Modify and test new features on VM server. Upload MySQL DB and HTML/PHP code back to web host's server where it should work as before but with new modifications. Questions: Now I'm pretty sure I have steps one and two down correctly, but I can't for the life of me figure out how to proceed next, so here are my questions. I have my /etc/hosts file set up so www.MySite.test redirects to the IP address of the local VM webserver. Once I import my PHP/HTML files and MySQL file, what's the best way to navigate around the fact that all of my files and DBs will reference www.MySite.com? I can export my MySQL DBs, but do I also have to export my MySQL users and passwords to access those DBs, or are those coded into my HTML/PHP code?

    Read the article

  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook that runs on a central git repository set up with gitolite to trigger a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it is run. I am trying to track down the source of the error, but have not had any luck. Running the same commands manually does not produce an error. The error changes depending on what was done in the commit that is being pushed to the central repository. For instance, if 'git rm ' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Removed: command not found" and if 'git add ' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Merge: command not found". In either case the 'git pull' run on the staging server works correctly despite the error message. Here is the post-receive script: #!/bin/bash # # This script is triggered by a push to the local git repository. It will # ssh into a remote server and perform a git pull. # # The SSH_USER must be able to log into the remote server with a # passphrase-less SSH key *AND* be able to do a git pull without a passphrase. # # The command to actually perform the pull request on the remost server comes # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered # by the ssh login. SSH_USER="remoteuser" REMOTE_HOST="staging.server.com" `ssh $SSH_USER@$REMOTE_HOST` # This is line 16 echo "Done!" The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file and is: command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key) This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo: ben@tamarack:~/thejibe/testing/web$ git rm ./testing rm 'testing' ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file" [master bb96e13] Remove testing file 1 files changed, 0 insertions(+), 5 deletions(-) delete mode 100644 testing ben@tamarack:~/thejibe/testing/web$ git push Counting objects: 3, done. Delta compression using up to 2 threads. Compressing objects: 100% (2/2), done. Writing objects: 100% (2/2), 221 bytes, done. Total 2 (delta 1), reused 0 (delta 0) remote: From [email protected]:testing remote: aa72ad9..bb96e13 master -> origin/master remote: hooks/post-receive: line 16: Removed: command not found # The error msg remote: Done! To [email protected]:testing aa72ad9..bb96e13 master -> master ben@tamarack:~/thejibe/testing/web$ As you can see the post-receive script gets to the echo "Done!" line and when I look on the staging server the git pull has been successfully run, but there's still that nagging error message. Any suggestions on where to look for the source of the error message would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.

    Read the article

  • snmptt not translating traps, even with translate_log_trap_oid=1

    - by mbrownnyc
    I am having some trouble configuring snmptt to properly translate snmp traps. The following is a problem: /etc/snmp/snmptt.conf reflects: EVENT fgFmTrapIfChange .1.3.6.1.4.1.12356.101.6.0.1004 "Status Events" Critical FORMAT $* EXEC /usr/local/nagios/libexec/eventhandlers/submit_check_result $r "snmp_traps" 2 "$O: $+*" "$*" SDESC Trap is sent to the managing FortiManager if an interface IP is changed Variables: 1: fnSysSerial 2: ifName 3: fgManIfIp 4: fgManIfMask EDESC when a trap is received, /var/log/messages reflects: Sep 6 12:07:32 SNMPMANAGERHOST snmptrapd[15385]: 2012-09-06 12:07:32 <UNKNOWN> [UDP: [192.168.100.2]:162->[192.168.100.31]]: #012.1.3.6.1.2.1.1.3.0 = Timeticks: (707253943) 81 days, 20:35:39.43 #011.1.3.6.1.6.3.1.1.4.1.0 = OID: .1.3.6.1.4.1.12356.101.6.0.1004 #011.1.3.6.1.4.1.12356.100.1.1.1.0 = STRING: FGTNNNNNNNNN #011.1.3.6.1.2.1.31.1.1.1.1.10 = STRING: internal4 #011.1.3.6.1.4.1.12356.101.6.2.1.0 = IpAddress: 192.168.65.100 #011.1.3.6.1.4.1.12356.101.6.2.2.0 = IpAddress: 255.255.255.0 Sep 6 12:07:37 SNMPMANAGERHOST icinga: EXTERNAL COMMAND: PROCESS_SERVICE_CHECK_RESULT; 192.168.100.2; snmp_traps; 2; enterprises.12356.101.6.0.1004: enterprises.12356.100.1.1.1.0:FGTNNNNNNNNN ifName.10:internal4 enterprises.12356.101.6.2.1.0:192.168.65.100 enterprises.12356.101.6.2.2.0:255.255.255.0 Since the icinga entry reflects the EXEC, it's obvious there is no translations occurring by snmptt. I have verified that translate_log_trap_oid and net_snmp_perl_enable is enabled in snmptt.ini When using --debug=1 to start snmptt, I see the following in the --debugfile: ********** Net-SNMP version 5.05 Perl module enabled ********** The main NET-SNMP version is reported as NET-SNMP version: 5.5. What else can be done to verify that snmptt is configured properly to translate traps? I have run snmptt-net-snmp-test to verify whatever net-snmp-perl version I have installed properly supports translations. The output indicates it does. /root/snmptt_1.3/snmptt-net-snmp-test --best_guess=2 SNMPTT Net-SNMP Test v1.0 (c) 2003 Alex Burger http://snmptt.sourceforge.net MIBS:RFC1213-MIB best_guess: 2 Testing translateObj ******************** Testing: .1.3.6.1.2.1.1.1, long_names=disabled, include_module=disabled Test passed. Result: sysDescr Testing: .1.3.6.1.2.1.1.1, long_names=disabled, include_module=enabled Test passed. Result: RFC1213-MIB::sysDescr Testing: .1.3.6.1.2.1.1.1, long_names=enabled, include_module=disabled Test passed. Result: .iso.org.dod.internet.mgmt.mib-2.system.sysDescr Testing: .1.3.6.1.2.1.1.1, long_names=enabled, include_module=enabled Test passed. Result: RFC1213-MIB::.iso.org.dod.internet.mgmt.mib-2.system.sysDescr Testing: sysDescr, long_names=disabled, include_module=disabled Test passed. Result: .1.3.6.1.2.1.1.1 Testing: RFC1213-MIB::sysDescr, long_names=disabled, include_module=disabled Test passed. Result: .1.3.6.1.2.1.1.1 Testing: system.sysDescr, long_names=disabled, include_module=disabled Test passed. Result: .1.3.6.1.2.1.1.1 Testing: RFC1213-MIB::system.sysDescr, long_names=disabled, include_module=disabled Test passed. Result: .1.3.6.1.2.1.1.1 Testing: .iso.org.dod.internet.mgmt.mib-2.system.sysDescr, long_names=disabled, include_module=disabled Test passed. Result: .1.3.6.1.2.1.1.1 Testing getType *************** Testing: .1.3.6.1.2.1.4.1 Test passed. Result: INTEGER Testing: ipForwarding Test passed. Result: INTEGER Testing Description ******************* Test passed. 
Result: ------------------------------------------------- The indication of whether this entity is acting as an IP gateway in respect to the forwarding of datagrams received by, but not addressed to, this entity. IP gateways forward datagrams. IP hosts do not (except those source-routed via the host). Note that for some managed nodes, this object may take on only a subset of the values possible. Accordingly, it is appropriate for an agent to return a `badValue' response if a management station attempts to change this object to an inappropriate value. ------------------------------------------------- I have manually gone through the MIB with the definition that's not resolving, and verified that it is properly linking back to the proper resolved definition. It is: FORTINET-FORTIGATE-MIB.txt contains: fgFmTrapIfChange NOTIFICATION-TYPE OBJECTS { fnSysSerial, ifName, fgManIfIp, fgManIfMask } STATUS current DESCRIPTION "Trap is sent to the managing FortiManager if an interface IP is changed" ::= { fgFmTrapPrefix 1004 } fgFmTrapPrefix OBJECT IDENTIFIER ::= { fgMgmt 0 } fgMgmt OBJECT IDENTIFIER ::= { fnFortiGateMib 6 } fnFortiGateMib ::= { fortinet 101 } IMPORTS FnBoolState, FnIndex, fnAdminEntry, fnSysSerial, fortinet FROM FORTINET-CORE-MIB fortinet MODULE-IDENTITY ::= { enterprises 12356 } LOOKS GOOD!!!!! 1.3.6.1.4.1.12356.101.6.0.1004 I've exhausted all the documentation and even posted fruitlessly in the snmptt-users mailing list. I can not prove it is the MIB. Why would snmptt fail to translate traps? Thanks, Matt

    Read the article

  • ms-access: DB engine cannot find input table or query

    - by every_answer_gets_a_point
    Here's the query: SELECT * FROM (SELECT [Occurrence Number], [Occurrence Date], [1 0 Preanalytical (Before Testing)], [Cup Type], NULL as [2 0 Analytical (Testing Phase)], [2 0 Area], NULL as [3 0 Postanalytical ( After Testing)],NULL as [4 0 Other], [Practice Code], [Specimen ID #] FROM [Lab Occurrence Form] WHERE NOT ([1 0 Preanalytical (Before Testing)] IS NULL) UNION SELECT [Occurrence Number], [Occurrence Date],NULL, [Cup Type],[2 0 Analytical (Testing Phase)], [2 0 Area], NULL,NULL, [Practice Code], [Specimen ID #] FROM [Lab Occurrence Form] WHERE NOT ([2 0 Analytical (Testing Phase)] IS NULL) UNION SELECT [Occurrence Number], [Occurrence Date],NULL, [Cup Type],NULL, [2 0 Area], [3 0 Postanalytical ( After Testing)],NULL, [Practice Code], [Specimen ID #] FROM [Lab Occurrence Form] WHERE NOT ([3 0 Postanalytical ( After Testing)] IS NULL) UNION SELECT [Occurrence Number], [Occurrence Date],NULL, [Cup Type],NULL, [2 0 Area], NULL, [4 0 Other] FROM [Lab Occurrence Form], [Practice Code], [Specimen ID #] WHERE NOT ([4 0 Other] IS NULL) ) AS mySubQuery ORDER BY mySubQuery.[Occurrence Number]; For some reason it doesn't like [Practice Code]. It's definitely a column in the table, so I don't understand the problem. The error is: the Microsoft Office Access database engine cannot find the input table or query 'Practice Code'.

    Read the article

  • Permission forbidden on localhost with apache2

    - by N Alex
    Here is what I am trying to do. I tried to add another folder to Apache and I get the following error when trying to access testing/index.html. The idea is that I would like to have for every customer a folder like /home/neagoe/Work/InterWebs/Projects/[PROJECT NAME]/CustomerProjects/website/dist. Forbidden You don't have permission to access /index.html on this server. Apache/2.2.22 (Ubuntu) Server at testing Port 80 Here are the steps that I followed: Step1: sudo chmod a+x /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist Step2: sudo chown -R www-data:www-data /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist sudo chmod -R 775 /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist Step3: sudo adduser $USER www-data Step4: sudo a2enmod userdir Step5: sudo cp /etc/apache/sites-available/default /etc/apache/sites-available/testing I edited the file /etc/apache/sites-available/testing so it looks like this: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName testing DocumentRoot /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist/ > Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> Step6: I edited hosts ("/etc/hosts") so it looks like this: 127.0.0.1 localhost 127.0.0.1 testing # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters Step7: sudo a2ensite testing sudo service apache2 restart I searched for about 2 hours on the internet but I can't figure out what went wrong. All the pages that I found follow the same steps described above. I know there are similar questions here on the internet, but the answer is to change permission to the directory, which I did in Step 2. I am sorry if this is really a duplicate but I couldn't find the right answer. Thank you! PS. I asked this also on AskUbuntu but didn't get any answers so I'm trying my luck here. Edit: There isn't much in the error log or the access log. 
On the access.log: ::1 - - [10/Aug/2013:11:23:28 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:29 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:31 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:32 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:33 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:34 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:35 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" 127.0.0.1 - - [10/Aug/2013:11:23:23 +0300] "POST /wordpress-testing/wp-cron.php?doing_wp_cron=1376123003.7026669979095458984375 HTTP/1.0" 200 705 "-" "WordPress/3.6; http://localhost/wordpress-testing" ::1 - - [10/Aug/2013:11:23:36 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:37 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:38 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" 127.0.0.1 - - [10/Aug/2013:11:31:32 +0300] "GET /index.html HTTP/1.1" 200 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0" And the last line repeats for about 200 rows. On the error.log: 1. This lines repeat from time to time. PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525 /msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0 [Sat Aug 10 13:06:42 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations [Sat Aug 10 13:07:36 2013] [notice] caught SIGTERM, shutting down PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0 [Sat Aug 10 13:07:37 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations 2. And this is the predominant error. (hundreds of lines) [Sat Aug 10 13:07:40 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /index.html denied

    Read the article

  • Read from a file into an array and stop if a ":" is found in ruby

    - by Minky
    Hi! How can I, in Ruby, read strings from a file into an array, keeping only the part of each line up to a certain marker such as ":" and discarding the rest? Any help would be much appreciated =) For example: 10.199.198.10:111 test/testing/testing (EST-08532522) 10.199.198.12:111 test/testing/testing (EST-08532522) 10.199.198.13:111 test/testing/testing (EST-08532522) Only the following should be read and contained in the array: 10.199.198.10 10.199.198.12 10.199.198.13

    Read the article

  • What Kind of Spam is This? Testing Blog Comment Limits

    - by Yar
    I received this comment on one of my blogs today (on blogger.com): Easily I agree but I about the post should acquire more info then it has. It's the third in a series. Before there was: I will not acquiesce in on it. I over precise post. Expressly the title attracted me to be familiar with the sound story. and before that Your blog keeps getting better and better! Your older articles are not as good as newer ones you have a lot more creativity and originality now keep it up! It is obviously computer-generated (well, not this last one). The comments are from Anonymous, so they're not trying to legitimate a user on Blogger. Is this a spam attack? What might its goal be? Or are they just testing my blog to see if I reject or not? Does this kind of "attack" have a name?

    Read the article

  • Hit Testing with CALayer using the alpha properties of the CALayer contents.

    - by Charliehorse
    I'm writing a game for Mac using Cocoa. I'm currently implementing hit testing and have found that CALayer offers hit testing, but it does not seem to take the alpha properties of the layer contents into account. As I have at times many CALayers stacked on top of each other, I really need to find a way to determine what the user actually meant to click on. I'm thinking that if I could somehow get an array that contains pointers to all of the CALayers that contain the click point, I could filter through them somehow. However the only way I've got so far to create the array is: NSMutableArray* anArrayOfLayers = [NSMutableArray array]; for (CALayer* aLayer in mapLayer.sublayers) { if ([aLayer containsPoint:mouseCoord]) [anArrayOfLayers addObject:aLayer]; } Then sort the array by the CALayers' z-values, then go through checking whether the pixel at that location is transparent or not. However, between the sort and the alpha check this seems to be an incredible performance hog. (How would you even check the alpha?) Is there any way to do this?

    Read the article

  • Beginner, learning as I go - how to get C#/SQLite db set up and ready for testing?

    - by ChrisC
    I've messed with Access a little bit in the past, had one class on OO theory, and one class on console c++ apps. Now, as a hobby project, I'm undertaking to write an actual app, which will be a database app using System.Data.SQLite and C#. I have the db's table structure planned. I have System.Data.SQLite installed and connected to VS Pro. I entered my tables and columns in VS, but that's where I'm stuck. I really don't know how to finish the db set up so I can start creating queries and testing the db structure. Can someone give me guidance to online resources that will help me learn how to get the db properly set up so I can proceed with testing it? I'm hoping for online resources specific to beginners using C# and System.Data.SQLite, but I'll use the closest I can get. Thanks.

    Read the article

  • How to setup a simple Ubuntu Server Tomcat cluster on VirtualBox for testing?

    - by Alex Pakka
    I am looking for step-by-step instructions to set up at least two (and later more) simple Ubuntu Virtual Core 12.10 Server VMs on Oracle VirtualBox under Windows 7 64-bit. The test setup would be: Apache HTTP server on the Windows host acting as a load balancer. The result will be that going to http://localhost:8080 would balance between two nodes and prove session replication. Two lean, small-footprint Ubuntu Server guest nodes with Java 7 and Tomcat 7. The intention is to help everyone doing High Availability / Load Balancing development and testing to create a reasonable environment on the local workstation or mainstream notebook in as little time as possible.

    Read the article

  • how to find and add a string to a file in linux

    - by user2951644
    How can I check a file for a string and, if the string is missing, automatically add it? For example: Input file test.txt this is a test text for testing purpose this is a test for testing purpose this is a test for testing purpose this is a test text for testing purpose I would like to add "text" to all the lines. Desired output: this is a test text for testing purpose this is a test text for testing purpose this is a test text for testing purpose this is a test text for testing purpose Is it possible? Many thanks in advance. Hi guys, thanks for all the help, but my case is not that simple. I won't know which line will be different, and the middle part will not always be a single string. I will give a clearer case. Input file test.txt Group: IT_DEPT,VIP Role: Viewer Dept: IT Group: IT_DEPT,VIP Dept: IT Group: FINANCE LOAN VIEWER Role: Viewer Dept: FINANCE Group: FINANCE LOAN VIEWER Dept: FINANCE Desired output file test2.txt Group: IT_DEPT,VIP Role: Viewer Dept: IT Group: IT_DEPT,VIP Role: - Dept: IT Group: FINANCE LOAN VIEWER Role: Viewer Dept: FINANCE Group: FINANCE LOAN VIEWER Role: - Dept: FINANCE So those that are missing "Role:" will have "Role: - " added; hope this clears things up, thanks in advance again.
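    The question is tool-agnostic (awk or sed would be the usual fit on Linux); purely to illustrate the logic of the second example, here is a rough Java sketch that copies test.txt to test2.txt and inserts a placeholder "Role: -" after any "Group:" line that is not already followed by one. The file names match the example; the placeholder text is an assumption:

      import java.io.IOException;
      import java.nio.charset.StandardCharsets;
      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.util.ArrayList;
      import java.util.List;

      public class AddMissingRole {
          public static void main(String[] args) throws IOException {
              List<String> in = Files.readAllLines(Paths.get("test.txt"), StandardCharsets.UTF_8);
              List<String> out = new ArrayList<>();
              for (int i = 0; i < in.size(); i++) {
                  out.add(in.get(i));
                  // After a "Group:" line, insert a placeholder Role if the next line is not one.
                  boolean isGroup = in.get(i).startsWith("Group:");
                  boolean nextIsRole = i + 1 < in.size() && in.get(i + 1).startsWith("Role:");
                  if (isGroup && !nextIsRole) {
                      out.add("Role: -");
                  }
              }
              Files.write(Paths.get("test2.txt"), out, StandardCharsets.UTF_8);
          }
      }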

    Read the article

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company a2hosting has a specific configuration (symfony,mysql,git) that I don't want to spend time duplicating when I can just ssh and develop remotely or through netbeans remote editing features. My question is how can I use git to separate my site into three areas: live, staging and dev. Here's my initial thought: public_html (live site and git repo) testing: a mirror of the site used for visual tests (full git repo) dev/ticket# : git branches of public_html used for features and bug fixes (full git repo) Version Control with git: Initial setup: cd public_html git init git add * git commit -m ‘initial commit of the site’ cd .. git clone public_html testing mkdir dev Development: cd /dev git clone ../testing ticket# all work is done in ./dev/ticket#, then visit www.domain.com/dev/ticket# to visually test make granular commits as necessary until dev is done git push origin master:ticket# if the above fails: merge latest testing state into current dev work: git merge origin/master then try the push again mark ticket# as ready for integration integration and deployment process: cd ../../testing git merge ticket# -m "integration test for ticket# --no-ff (check for conflicts ) run hudson tests visit www.domain.com/testing for visual test if all tests pass: if this ticket marks the end of a big dev sprint: make a snapshot with git tag git push --tags origin else git push origin cd ../public_html git checkout -f (live site should have the latest dev from ticket#) else: revert the merge: git checkout master~1; git commit -m "reverting ticket#" update ticket# that testing failed with the failure details Snapshots: Each major deployment sprint should have a standard name and should be tracked. Method: git tag Naming convention: TBD Reverting site to previous state If something goes wrong, then revert to previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again. My questions: Does this workflow make sense, if not, any recommendations Is my approach for reverting correct or is there a better way to say 'revert to before x commit'

    Read the article

  • Looking for an example of how a software project can be managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is understanding how to move code between stages of development. We have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production] At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. This movement of code is triggered by advancing the status of the change request to match the stage of the pipeline. I have been searching the internet for a few days trying to find how the process is accomplished elsewhere -- I have read a bit about builds, automated testing, various ALM products, etc. but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc. Can anyone point me to any resources detailing specific workflows or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative. Note: I've cleaned up the question to hopefully make it easier to understand. Also, I found another question (which I can't find now) that referenced this book, which sounds like it might be exactly what I am looking for -- not sure if I want to shell out the cash for it, though.

    Read the article

  • Is it possible to do A/B testing by page rather than by individual?

    - by mojones
    Let's say I have a simple ecommerce site that sells 100 different t-shirt designs. I want to do some A/B testing to optimise my sales. Let's say I want to test two different "buy" buttons. Normally, I would use A/B testing to randomly assign each visitor to see button A or button B (and try to ensure that the user experience is consistent by storing that assignment in session, cookies etc). Would it be possible to take a different approach and instead randomly assign each of my 100 designs to use button A or B, and measure the conversion rate as (number of sales of design n) / (pageviews of design n)? This approach would seem to have some advantages; I would not have to worry about keeping the user experience consistent - a given page (e.g. www.example.com/viewdesign?id=6) would always return the same HTML. If I were to test different prices, it would be far less distressing to the user to see different prices for different designs than different prices for the same design on different computers. I also wonder whether it might be better for SEO - my suspicion is that Google would "prefer" that it always sees the same HTML when crawling a page. Obviously this approach would only be suitable for a limited number of sites; I was just wondering if anyone has tried it?
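    Mechanically this is straightforward: instead of bucketing by visitor, bucket by design id with a deterministic hash so a given page always renders the same button. A small Java sketch (the experiment-name salt is an assumption):

      public class DesignBucketing {

          enum Variant { BUTTON_A, BUTTON_B }

          // Deterministic per-design assignment: the same design id always maps to the
          // same button, and changing the salt reshuffles the buckets for a new test.
          static Variant variantFor(int designId) {
              int h = ("buy-button-test:" + designId).hashCode();
              return Math.floorMod(h, 2) == 0 ? Variant.BUTTON_A : Variant.BUTTON_B;
          }

          public static void main(String[] args) {
              for (int designId = 1; designId <= 5; designId++) {
                  System.out.println("design " + designId + " -> " + variantFor(designId));
              }
          }
      }

    One caveat worth noting: the design, not the visitor, becomes the unit of randomisation, so differences between individual designs add noise that a per-visitor split would average out, and the two buckets are best compared as groups of pages rather than as pooled pageviews.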

    Read the article
