Search Results

Search found 12844 results on 514 pages for 'manual testing'.


  • Cleaning cruft from the stored configs database

    - by Zoredache
    I have set up stored configuration primarily as a way to manage my ssh known_hosts. Unfortunately, as I retire hosts the old configs still exist in my database. The answer seems to be to run the command puppet node clean <hostname>. The problem is that while this command does run, and does clean up some data, it doesn't seem to clean up everything. For example, I can still find values in the puppet_tags table that only applied to hosts that no longer exist. What should I be doing to keep my stored configuration database clean of all the extra junk that seems to be building up? P.S. Can anyone point me to any documentation for the stored configuration schema? If I could find good documentation, or at least an entity-relationship diagram, I would be tempted to just do some manual clean-up.
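
    For illustration, a minimal sketch of the cleanup the question describes. The hostname is hypothetical, and the SQL assumes the old ActiveRecord storeconfigs schema (the puppet_tags and resource_tags table and column names are assumptions), so back up the database and verify names before running anything like it:

        # Retire a node's stored configuration (hostname is hypothetical)
        puppet node clean old-host.example.com

        -- Peek at tags no longer referenced by any resource
        -- (table/column names are assumptions about the storeconfigs schema)
        SELECT * FROM puppet_tags
        WHERE id NOT IN (SELECT puppet_tag_id FROM resource_tags);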

    Read the article

  • Books or other materials to overcome Linux learning curve?

    - by Marek Osvald
    I was born in 1989 and have been an active Windows user since 1993. I've always struggled with Linux: I've never been able to configure the system the way I would like, and despite snooping through blogs and forums for answers, I never actually overcame the barrier. The books I've seen and read are either completely command-line oriented (and don't get me wrong, it's awesome to know this stuff when you're working on a server, for example), which seems rather impractical to me on a desktop computer that's partially my development environment, or they are user manuals describing step by step the controls of the simplest applications, like the Calculator, which are totally useless to me. What would you recommend for a programmer who needs to learn how to work with Linux but already knows the basics? What materials did you use when you started with Linux?

    Read the article

  • External hard disk not showing up as removable media

    - by mark
    Windows 7 Pro x64; my external hard drive is a Western Digital Caviar Black. The drive is connected via USB (I tried all my 2.0 and 3.0 ports on my Gigabyte GA-X58A-UD7), but after initializing it in "Disk Management" (or whatever the proper term is; I'm using a German edition of Windows) I have to assign it a drive letter manually. Otherwise it doesn't show up. I'm confused because externally connected drives (be they USB sticks or real hard disks) usually just show up as "removable media", but this one doesn't. Is this OK/expected? Will there be trouble when I take it to another Windows computer? I formatted the drive with a quick NTFS format and changed the permissions to give read/write to "Everyone" on that drive. Thanks
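
    As an aside, the manual letter assignment the asker describes can also be done from diskpart; a sketch only, where the volume number and letter are examples and will differ per machine:

        diskpart
        DISKPART> list volume            # find the external drive's volume number
        DISKPART> select volume 3        # hypothetical number
        DISKPART> assign letter=E        # hypothetical letter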

    Read the article

  • How can I tell if I'm behind a portal or proxy

    - by user19269
    I'm testing out a content management system on my computer and have emailed their support about a login problem. They replied and asked if my CMS is behind a portal or proxy. I feel kind of stupid for not knowing this, but how do I find out? Everything is local (e.g., hosted on my computer). Thanks in advance!
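
    A quick sketch of checks that usually answer this question; the commands are standard, but what they print is machine-specific:

        # Windows: show any system-wide WinHTTP proxy
        netsh winhttp show proxy

        # Unix-like shells: proxy environment variables, if any are set
        echo $http_proxy $https_proxy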

    Read the article

  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second webserver because I worry about hardware failure of my old server. Now that I have the second server, I wish to do a little more than just have it stand by and replicate all day. As long as it's there, I might as well get some advantage out of it!

    I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5) and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load. The setup I am testing now involves heartbeat for IP failover and two identical servers. Of the two servers, only one has a public IP address; if one server crashes, the other takes over the public IP. On an incoming request, nginx forwards the request to php-fpm on either server A or server B (50/50 if both servers are alive). Once the request has been sent to php-fpm, both servers look at localhost for the MySQL server; I use master-master MySQL replication for this. The file system is synced with lsyncd. This works pretty well, but I'm reading that it's discouraged by the (MySQL) community.

    Another option I could think of is to use one server as a MySQL master and one server as a web/PHP server. The servers would still sync their filesystems and still run the same software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive, I could simply prefer nginx to listen on IP A and MySQL on IP B. If one server is down, the other could take over its tasks simply by IP switching.

    I'm completely new at this, so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this, please let me know!

    PS: virtualisation, hosting in different locations, and active/passive setups are not the solutions I'm looking for. I find virtual servers either too slow or too expensive, and I already have a passive failover at another location; in case of a crash the site was still unreachable for too long due to DNS caching.
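
    For reference, a minimal nginx sketch of the 50/50 forwarding the asker describes; the addresses and ports are hypothetical:

        # Balance PHP requests across php-fpm on both machines
        upstream php_backends {
            server 10.0.0.1:9000;
            server 10.0.0.2:9000;
        }

        server {
            listen 80;
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass php_backends;
            }
        }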

    Read the article

  • How to avoid specifying full path in sudoers file?

    - by s g
    I am trying to add a NOPASSWD entry for sudotest.sh (or any script/binary that requires sudo) to my /etc/sudoers file (on an Ubuntu 12.04 LTS server), but in order to make it work I must specify the full path. The following entry works just fine:

        %jenkins ALL=(ALL)NOPASSWD:/home/vts_share/test/sudotest.sh

    The problem is that the script might move to a different directory. This seems like a great chance to use the * wildcard in the path (i.e. /*/sudotest.sh) so that my script could be in any directory, but the manual states that wildcards will not match the / character when used in a path, and I've confirmed that it doesn't work. I know that I can use the word ALL in place of my script, but that means there is no password prompt for any command, which seems unsafe. How do I solve this?
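
    One common workaround, sketched with hypothetical paths: enumerate each allowed location in a command alias, so the prompt-free grant stays narrow even if the script moves between known directories:

        # /etc/sudoers (edit with visudo); both paths are hypothetical
        Cmnd_Alias SUDOTEST = /home/vts_share/test/sudotest.sh, /opt/scripts/sudotest.sh
        %jenkins ALL=(ALL) NOPASSWD: SUDOTEST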

    Read the article

  • Alternatives to phpSchedule?

    - by paulw1128
    Currently we have a lightly customised version of phpSchedule for booking our development/test machines (including a raft of VMs), which no one uses. It's slow, time-consuming to use, out of date, and therefore pretty much useless. Getting a new system in place suddenly appeared on the management radar, as there have been complaints that people are blocked by not being able to get any machines to do development testing on (running our product on a desktop machine is not an option). Does anyone have any recommendations for alternative booking systems that are worth investigating?

    Read the article

  • RewriteRule to only affect first segment

    - by mschoening
    I have the following RewriteRule in place, which rewrites everything that starts with mac, linux or windows:

        RewriteCond $1 ^(mac|linux|windows) [NC]
        RewriteRule ^(.*)$ /index.php/site/index/$1 [L]

    Unfortunately this also rewrites /mac/test/testing. How can I prevent it from applying if there is another segment after /mac?
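
    One way to express "first segment only", offered as a sketch rather than a tested rule set: anchor the pattern so the match has to end at the segment boundary.

        # Only bare /mac, /linux or /windows reach index.php (sketch, untested)
        RewriteCond $1 ^(mac|linux|windows)$ [NC]
        RewriteRule ^([^/]+)$ /index.php/site/index/$1 [L]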

    Read the article

  • How to know which app a ".automaticDestinations-ms" file relates to?

    - by Timotei Dolean
    Hello! Does anyone know (nobody answered me on the Microsoft forums) how I can find out which app owns which .automaticDestinations-ms file in %appdata%\microsoft\windows\recent\automaticdestinations? That's the folder where Windows 7 stores its jump lists, and I want to find the relation between each file and an application automatically/programmatically. Even manually I haven't found any pattern; the best I could do was look at the file extensions referenced inside the files, but some programs open files with the same extensions (like images), so that method doesn't work for all programs. Do you have any other idea? Maybe the format of those files is known? Thanks.

    Read the article

  • How can I force mail clients like Outlook or Thunderbird to load SSL images with invalid certificate?

    - by Abel
    Essentially, we only encounter this issue in our testing environments, where we send mail containing HTTPS links that point to test servers with self-signed SSL certificates. In a browser I can choose to accept the certificate. Is there something similar for mail clients like Thunderbird or Outlook that I can use? Currently we don't see any images in our test mail, which greatly confuses the UAT team.
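
    One approach that often works, sketched as an assumption rather than a verified recipe: trust the test certificate at the OS level, which Outlook (via Windows) honours; Thunderbird keeps its own certificate store, manageable under its Preferences > Certificates dialog.

        rem Windows, elevated prompt; the .cer filename is hypothetical
        certutil -addstore Root test-server.cer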

    Read the article

  • SVN - managing .htaccess file on various instances

    - by user178087
    I have a question and need a suggestion. We have to manage a .htaccess file on three different instances: dev, QA and prod. Currently we have TortoiseSVN configured for SCM. The issue we are facing is that the .htaccess for dev & QA differs from the one for prod, so at present we have to manually merge differences from the dev .htaccess into the prod .htaccess. Is there any alternative way of managing this file without this manual, error-prone process? Any suggestions in this regard will be highly appreciated.
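
    A common pattern, sketched here with hypothetical filenames and deploy steps: version one copy per environment alongside the shared parts, and have the deployment step put the right one in place, so no hand-merging is needed:

        # in the working copy: .htaccess.dev, .htaccess.qa, .htaccess.prod
        cp .htaccess.prod .htaccess   # run on the prod instance at deploy time
        cp .htaccess.qa   .htaccess   # run on QA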

    Read the article

  • How to save your Linux state (suspend to disk) periodically to recover from crashes?

    - by WoLpH
    Once in a while my laptop crashes/dies because of a bad/empty battery, a crappy wifi driver, or whatever other reason. For a while I've wondered whether it's possible to force Linux to periodically save its state to disk (like VMware snapshots), so you can restore from that with possibly slightly outdated work, but at least with all of your apps open in the same state you left them. I don't really see the point in having to boot everything from scratch constantly; although KDE saves your session on logout, that doesn't happen periodically (by default) either. It would be much nicer to recover from crashes if your RAM were written to disk periodically. Does anyone know if there's a system call to do this without also shutting down the machine? Even a manual button to save the entire state would be nice.
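
    For context, the closest stock interface is the kernel's hibernate trigger below; a sketch only, and note that it does power the machine down, which is exactly the part the asker wants to avoid (it also requires enough swap to hold RAM):

        # Suspend-to-disk via the kernel power interface (machine powers off)
        echo disk | sudo tee /sys/power/state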

    Read the article

  • MySQL max_user_connections

    - by Sheriffen
    We're releasing a site in a couple of weeks that has been developed on a local machine, but now that we're testing on the dev server we get the MySQL error 'max_user_connections'. We have talked to the host company (the biggest in Sweden) and they say that we don't close our connections properly. The thing is that we use the EXACT same code on another host where it works. I also added echo "closed"; to the database_close function, so at the very bottom of every page there is now "closed". To me this means that we do close the connection. Has anyone got any idea of what could be wrong? We connect through PHP's PDO and close the connection by setting it to null, all according to the manual.
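
    Two server-side checks worth running before blaming the code; these are standard MySQL statements, though the output is host-specific:

        -- How low is the per-user cap on this host?
        SHOW VARIABLES LIKE 'max_user_connections';

        -- Are connections piling up in the Sleep state despite the close?
        SHOW PROCESSLIST;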

    Read the article

  • Having problems using haml and rails3

    - by Victor Rodrigues
    After installing rails3, I'm experiencing problems when trying to use haml with it. I have the updated gem installed, and after rails PROJECT_NAME I ran haml --rails in the project root. That apparently worked fine, since I have a haml folder inside plugins, with init.rb, as expected. But when I try rake or rails server, I get:

        rake aborted!
        no such file to load -- haml

    With --trace I get this:

        ** Invoke default (first_time)
        ** Invoke test (first_time)
        ** Execute test
        ** Invoke test:units (first_time)
        ** Invoke db:test:prepare (first_time)
        ** Invoke db:abort_if_pending_migrations (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        rake aborted!
        no such file to load -- haml
        /usr/local/lib/ruby/gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require'
        /usr/local/lib/ruby/gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require'
        /usr/local/lib/ruby/gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:537:in `new_constants_in'
        /usr/local/lib/ruby/gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require'
        RAILS_PROJECT_ROOT/vendor/plugins/haml/init.rb:5
        /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/plugin.rb:49
        /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:25:in `instance_exec'
        /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:25:in `run'
        /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:55:in `run_initializers'
        /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:54:in `each'
        /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:54:in `run_initializers'
        /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/application.rb:71:in `initialize!'
        /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/application.rb:112:in `initialize_tasks'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
        /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain'
        /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain'
        /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain'
        /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
        /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/test_unit/testing.rake:45
        /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/test_unit/testing.rake:43:in `collect'
        /usr/local/lib/ruby/gems/1.8/gems/railties-3.0.0.beta/lib/rails/test_unit/testing.rake:43
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
        /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain'
        /usr/local/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
        /usr/local/bin/rake:19:in `load'
        /usr/local/bin/rake:19

    Read the article

  • Lots of mysql Sleep processes

    - by user259284
    Hello, I am still having trouble with my MySQL server. It seems that since I optimized it, the tables have kept growing, and now it is sometimes very slow again. I have no idea how to optimize it further. The MySQL server has 48 GB of RAM and mysqld is using about 8; most of the tables are InnoDB. The site has about 2000 users online. I also ran EXPLAIN on every query, and every one of them is indexed. MySQL processes: http://www.pik.ba/mysqlStanje.php

    my.cnf:

        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 10.100.27.30
        #
        # * Fine Tuning
        #
        key_buffer = 64M
        key_buffer_size = 512M
        max_allowed_packet = 16M
        thread_stack = 128K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        max_connections = 1000
        table_cache = 1000
        join_buffer_size = 2M
        tmp_table_size = 2G
        max_heap_table_size = 2G
        innodb_buffer_pool_size = 3G
        innodb_additional_mem_pool_size = 128M
        innodb_log_file_size = 100M
        log-slow-queries = /var/log/mysql/slow.log
        sort_buffer_size = 5M
        net_buffer_length = 5M
        read_buffer_size = 2M
        read_rnd_buffer_size = 12M
        thread_concurrency = 10
        ft_min_word_len = 3
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 512M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/
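
    Since the complaint is a pile-up of connections in the Sleep state, one standard lever is to shorten the idle timeouts; these are stock MySQL variables, and the values here are purely illustrative:

        -- Drop idle connections after 60 s instead of the 8-hour default
        -- (affects connections opened after the change)
        SET GLOBAL wait_timeout = 60;
        SET GLOBAL interactive_timeout = 60;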

    Read the article

  • "Phased" execution of functions in javascript

    - by FK82
    Hey there! This is my first post on stackoverflow, so please don't flame me too hard if I come across like a total nitwit or if I'm unable to make myself perfectly clear. :-) Here's my problem: I'm trying to write a javascript function that "ties" two functions to one another by checking the first one's completion and then executing the second one. The easy solution would obviously be to write a meta function that calls both functions within its scope. However, if the first function is asynchronous (specifically an AJAX call) and the second function requires the first one's result data, that simply won't work. My idea for a solution was to give the first function a "flag", i.e. making it create a public property "this.trigger" (initialized as "0", set to "1" upon completion) once it is called; doing that should make it possible for another function to check the flag for its value ([0,1]). If the condition is met ("trigger == 1"), the second function should get called. The following is the abstract example code that I have used for testing:

        <script type="text/javascript" >

        /**/function cllFnc(tgt) { //!! first function
            this.trigger = 0 ;
            var trigger = this.trigger ;
            var _tgt = document.getElementById(tgt) ;
            //!! changes the color of the target div to signalize the function's execution
            _tgt.style.background = '#66f' ;
            alert('Calling! ...') ;
            setTimeout(function() { //!! in place of an AJAX call, duration 5000ms
                trigger = 1 ;
            },5000) ;
        }

        /**/function rcvFnc(tgt) { //!! second function that should get called upon the first function's completion
            var _tgt = document.getElementById(tgt) ;
            //!! changes color of the target div to signalize the function's execution
            _tgt.style.background = '#f63' ;
            alert('... Someone picked up!') ;
        }

        /**/function callCheck(obj) {
            //alert(obj.trigger) ; //!! correctly returns initial "0"
            if(obj.trigger == 1) { //!! here's the problem: trigger never receives change from function on success and thus function two never fires
                alert('trigger is one') ;
                return true ;
            } else if(obj.trigger == 0) {
                return false ;
            }
        }

        /**/function tieExc(fncA,fncB,prms) {
            if(fncA == 'cllFnc') {
                var objA = new cllFnc(prms) ;
                alert(typeof objA + '\n' + objA.trigger) ; //!! returns expected values "object" and "0"
            }
            //room for more case definitions
            var myItv = window.setInterval(function() {
                document.getElementById(prms).innerHTML = new Date() ; //!! displays date in target div to signalize the interval increments
                var myCallCheck = new callCheck(objA) ;
                if( myCallCheck == true ) {
                    if(fncB == 'rcvFnc') {
                        var objB = new rcvFnc(prms) ;
                    }
                    //room for more case definitions
                    window.clearInterval(myItv) ;
                } else if( myCallCheck == false ) {
                    return ;
                }
            },500) ;
        }

        </script>

    The HTML part for testing:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/strict.dtd >
        <html>
        <head>
            <script type="text/javascript" >
                <!-- see above -->
            </script>
            <title> Test page </title>
        </head>
        <body>
            <!-- !! testing area -->
            <div id='target' style='float:left ; height:6em ; width:8em ; padding:0.1em 0 0 0; font-size:5em ; text-align:center ; font-weight:bold ; color:#eee ; background:#fff;border:0.1em solid #555 ; -webkit-border-radius:0.5em ;' >
                Test Div
            </div>
            <div style="float:left;" >
                <input type="button" value="tie calls" onmousedown="tieExc('cllFnc','rcvFnc','target') ;" />
            </div>
        <body>
        </html>

    I'm pretty sure this is some issue with javascript scope, as I have checked whether the trigger gets set to "1" correctly and it does.
    Very likely the "callCheck()" function does not receive the updated object but instead only checks its old instance, which obviously never flags completion by setting "this.trigger" to "1". If so, I don't know how to address that issue. Anyway, I hope someone has an idea or experience with this particular kind of problem. Thanks for reading! FK

    Read the article

  • Job conditions conflicting with personal principles on software development - how much is too much?

    - by Baelnorn
    Sorry for the incoming wall'o'text (and for my probably bad English), but I just need to get this off my chest somehow. I also accept that this question will probably be closed as subjective and argumentative, but I need to know one thing: "how much BS are programmers supposed to put up with before breaking?"

    My background

    I'm 27 years old and have a B.Sc. in computer engineering, with a graduation grade of 1.8, from a university of applied sciences. I went looking for a job right after graduation. I got three offers right away; two of them paid vastly more than the last one, but that last one seemed more interesting, so I went for it.

    My situation

    I've been working for the company for 17 months now, but it feels like more of a drag every day, primarily because the company (which has only 5 other developers besides me, of whom I work with 4) turned out to be pretty much the antithesis of what I expected (and was taught in university) a modern software company to be.

    I agreed to accept less than half of the usual payment appropriate for my qualification for the first year because I was promised a trainee program. However, the trainee program turned out to be "here you've got a computer, there are some links on the stuff we use, now do what your colleagues tell you". Further, during my whole time there (trainee or not) I haven't been given the grace of even a single code review; apparently nobody's interested in my work as long as it "just works".

    I was told in the job interview that "Microsoft technology plays a central role in the company", yet I've been slowly eroding my cognitive functions with Flex/ActionScript/Cairngorm ever since I started (despite having applied as a C#/.NET developer). Actually, the company's primary projects are based on Java/XSLT and Flex/ActionScript (with some SAP/ABAP stuff here and there, but I'm not involved in that), and they had been working on these before I even applied. Having had no experience with that particular technology, the framework, the field (RIA), or developing business-scale applications, I obviously made several mistakes. However, my boss told me that he let me make those mistakes (which ate at least 2 months of development time on their own) on purpose, to provide some "learning experience".

    Even when I was still a trainee I was already tasked with working on a business-critical application. On my own. Without supervision. Without code reviews.

    My boss thinks agile methods are a waste of time/money and deems putting more than one developer on any project inefficient. Documentation is not considered necessary, and each developer should only document what he himself needs for his work. Recently he wanted us to do bug tracking with Excel and email instead of using an already existing Bugzilla, overriding a unanimous decision made by all developers and testers involved in the process; only after another senior developer had another hour-long private discussion with him did he agree to let us use the bug tracker.

    Project management is basically absent; there are only a few Excel sheets floating around in which the senior developer lists some things (not all, mind you) with time estimates ranging from days to months, trying to at least somehow organize the whole mess. A development process is also basically absent; each developer just works on his own, however he wants. There are not even coding conventions in the company.

    Testing is done manually, with a single tester (sometimes two) per project, because automated testing wasn't given the least thought when the whole project was started. I guess it's no big surprise when I say that each developer also has his own share of hundreds of overtime hours (which are, of course, unpaid).

    Each developer is tasked with working on his own project(s), which in turn leads to very extensive knowledge monopolization: if one developer were to have an accident or become ill, there would be absolutely no one who could even hope to do his work. Considering that each developer has his own business-critical application to work on, I guess that's a pretty bad situation.

    I've been trying to change things for the better. I tried to introduce a development process, but my first attempt was pretty much shot down by my boss with "I don't want to discuss agile methods". After that I put together a process that at least resembled how most of the developers were already working, and then included things like automated (or at least organized) testing, coding conventions, etc. However, this was also shot down because it wasn't "simple" enough to be shown on a business slide (actually, I wasn't even given the 15 minutes I'd have needed to present the process at the meeting).

    My problem

    I can't stand working there any longer. Seriously, I'm considering resigning on Monday, which would still leave me with 3 months to work there due to the cancellation period. My primary goal since I started studying computer science has been to be a good computer scientist, working with modern technologies and adhering to modern, proven principles and methods. However, the company I'm working for seems to make that impossible. Some days I feel as if I were living in a perverted real-life version of the Dilbert comics.

    My question

    Am I overreacting? Is this the reality every university graduate has to face? Should I betray my principles and just accept these working conditions, or should I gtfo of there? What's the opinion of other developers on this matter? Would you put up with all that stuff?

    Read the article

  • Can't get Zend Studio and PHPunit to work together

    - by dimbo
    I have created a simple doctrine2/zend skeleton project and am trying to get unit testing working with Zend Studio. The tests work perfectly through the PHPUnit CLI, but I just can't get them to work in Zend Studio. It comes up with an error saying 'No Tests was executed' and the following output in the debug window:

        X-Powered-By: PHP/5.2.14 ZendServer/5.0
        Set-Cookie: ZendDebuggerCookie=127.0.0.1%3A10137%3A0||084|77742D65|1016; path=/
        Content-type: text/html

        <br />
        <b>Warning</b>: Unexpected character in input: '\' (ASCII=92) state=1 in <b>/var/www/z2d2/tests/application/models/UserModelTest.php</b> on line <b>8</b><br />
        <br />
        <b>Warning</b>: Unexpected character in input: '\' (ASCII=92) state=1 in <b>/var/www/z2d2/tests/application/models/UserModelTest.php</b> on line <b>8</b><br />
        <br />
        <b>Parse error</b>: syntax error, unexpected T_STRING in <b>/var/www/z2d2/tests/application/models/UserModelTest.php</b> on line <b>8</b><br />

    The tests are as follows:

        <?php
        require_once 'Zend/Application.php';
        require_once 'Zend/Test/PHPUnit/ControllerTestCase.php';

        abstract class ControllerTestCase extends Zend_Test_PHPUnit_ControllerTestCase
        {
            public function setUp()
            {
                $this->bootstrap = new Zend_Application(
                    'testing',
                    APPLICATION_PATH . '/configs/application.ini'
                );
                parent::setUp();
            }

            public function tearDown()
            {
                parent::tearDown();
            }
        }

        <?php
        class IndexControllerTest extends ControllerTestCase
        {
            public function testDoesHomePageExist()
            {
                $this->dispatch('/');
                $this->assertController('index');
                $this->assertAction('index');
            }
        }

        <?php
        class ModelTestCase extends PHPUnit_Framework_TestCase
        {
            protected $em;

            public function setUp()
            {
                $application = new Zend_Application(
                    'testing',
                    APPLICATION_PATH . '/configs/application.ini'
                );
                $bootstrap = $application->bootstrap()->getBootstrap();
                $this->em = $bootstrap->getResource('entityManager');
                parent::setUp();
            }

            public function tearDown()
            {
                parent::tearDown();
            }
        }

        <?php
        class UserModelTest extends ModelTestCase
        {
            public function testCanInstantiateUser()
            {
                $this->assertInstanceOf('\Entities\User', new \Entities\User);
            }

            public function testCanSaveAndRetrieveUser()
            {
                $user = new \Entities\User;
                $user->setFirstname('wjgilmore-test');
                $user->setemail('[email protected]');
                $user->setpassword('jason');
                $user->setAddress1('calle san antonio');
                $user->setAddress2('albayzin');
                $user->setSurname('testman');
                $user->setConfirmed(TRUE);
                $this->em->persist($user);
                $this->em->flush();
                $user = $this->em->getRepository('Entities\User')->findOneByFirstname('wjgilmore-test');
                $this->assertEquals('wjgilmore-test', $user->getFirstname());
            }

            public function testCanDeleteUser()
            {
                $user = new \Entities\User;
                $user = $this->em->getRepository('Entities\User')->findOneByFirstname('wjgilmore-test');
                $this->em->remove($user);
                $this->em->flush();
            }
        }

    And the bootstrap:

        <?php
        define('BASE_PATH', realpath(dirname(__FILE__) . '/../../'));
        define('APPLICATION_PATH', BASE_PATH . '/application');

        set_include_path(
            '.' . PATH_SEPARATOR . BASE_PATH . '/library' . PATH_SEPARATOR . get_include_path()
        );

        require_once 'controllers/ControllerTestCase.php';
        require_once 'models/ModelTestCase.php';

    Here is the new error after setting the PHP executable to 5.3, as Gordon suggested:

        X-Powered-By: PHP/5.3.3 ZendServer/5.0
        Set-Cookie: ZendDebuggerCookie=127.0.0.1%3A10137%3A0||084|77742D65|1000; path=/
        Content-type: text/html

        <br />
        <b>Fatal error</b>: Class 'ModelTestCase' not found in <b>/var/www/z2d2/tests/application/models/UserModelTest.php</b> on line <b>4</b><br />

    Read the article

  • python socket related question.

    - by paul
    Hello, all. I'm totally new to socket programming in Python. I've read some tutorials and the manual, but I didn't find what I need to write this kind of Python socket script. I want to make a socket script which can send some info to a server and also receive some info back. For example, I want to send my login information to the server and receive the result reply. But I have no idea how to send my login information (ID and password) to the server. I captured the login process with Wireshark: the port number is 5300, the server IP is 58.225.56.152, the ID I sent is 'aaaaaaa', the password is 'bbbbbbb', and I received a 'User Not Found' result from the server. How can I implement this kind of process with Python sockets? If anyone can help with some reference or example, or anything at all, much appreciated!

        0000  00 50 56 f2 c8 cc 00 0c 29 a8 f8 c0 08 00 45 00  .PV.....).....E.
        0010  00 e2 2a 19 40 00 80 06 d0 55 c0 a8 cb 85 3a e1  ..*.@....U....:.
        0020  38 98 05 f3 15 9a b9 86 62 7b 0d ab 0f ba 50 18  8.......b{....P.
        0030  fa f0 26 14 00 00 50 54 3f 09 a2 91 7f 13 00 00  ..&...PT?.......
        0040  00 1f 14 00 02 00 00 00 00 00 00 00 07 00 00 00  ................
        0050  61 61 61 61 61 61 61 50 54 3f 09 a2 91 7f 8b 00  aaaaaaaPT?......
        0060  00 00 1f 15 00 08 00 00 00 07 00 00 00 61 61 61  .............aaa
        0070  61 61 61 61 07 00 00 00 62 62 62 62 62 62 62 01  aaaa....bbbbbbb.
        0080  00 00 00 31 02 00 00 00 4b 52 0f 00 00 00 31 39  ...1....KR....19
        0090  32 2e 31 36 38 2e 32 30 33 2e 31 33 33 30 00 00  2.168.203.1330..
        00a0  00 4d 69 63 72 6f 73 6f 66 74 20 57 69 6e 64 6f  .Microsoft Windo
        00b0  77 73 20 58 50 20 50 72 6f 66 65 73 73 69 6f 6e  ws XP Profession
        00c0  61 6c 20 53 65 72 76 69 63 65 20 50 61 63 6b 20  al Service Pack
        00d0  32 14 00 00 00 31 30 30 31 33 30 30 35 33 31 35  2....10013005315
        00e0  37 38 33 37 32 30 31 32 33 03 00 00 00 34 37 30  783720123....470

        0000  00 0c 29 a8 f8 c0 00 50 56 f2 c8 cc 08 00 45 00  ..)....PV.....E.
        0010  00 28 ae 37 00 00 80 06 8c f1 3a e1 38 98 c0 a8  .(.7......:.8...
        0020  cb 85 15 9a 05 f3 0d ab 0f ba b9 86 63 35 50 10  ............c5P.
        0030  fa f0 5f 8e 00 00 00 00 00 00 00 00              .._.........

        0000  00 0c 29 a8 f8 c0 00 50 56 f2 c8 cc 08 00 45 00  ..)....PV.....E.
        0010  00 4c ae 38 00 00 80 06 8c cc 3a e1 38 98 c0 a8  .L.8......:.8...
        0020  cb 85 15 9a 05 f3 0d ab 0f ba b9 86 63 35 50 18  ............c5P.
        0030  fa f0 3e 75 00 00 50 54 3f 09 a2 91 7f 16 00 00  ..>u..PT?.......
        0040  00 1f 18 00 01 00 00 00 0e 00 00 00 55 73 65 72  ............User
        0050  20 4e 6f 74 20 46 6f 75 6e 64                     Not Found
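
    A minimal sketch of the kind of client the asker is after. The send/receive mechanics below are standard Python; the payload is only a placeholder, because the capture shows the server expects a specific binary framing (length fields, the PT?... header, etc.) that would have to be reproduced byte for byte from the dump:

        import socket

        HOST, PORT = '58.225.56.152', 5300   # from the capture

        # The real protocol is binary; this placeholder is NOT the actual framing.
        payload = b'aaaaaaa\x00bbbbbbb'

        with socket.create_connection((HOST, PORT), timeout=10) as s:
            s.sendall(payload)          # send the login bytes
            reply = s.recv(4096)        # read the server's answer
            print(reply)                # the capture suggests b'User Not Found'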

    Read the article

  • How can I keep my MVC Views, models, and model binders as clean as possible?

    - by MBonig
    I'm rather new to MVC, and as I get deeper into the framework I'm finding that my model binders are becoming tough to maintain. Let me explain... I am writing a basic CRUD-over-database app. My domain models are going to be very rich. In an attempt to keep my controllers as thin as possible, I've set things up so that on Create/Edit commands the parameter for the action is a richly populated instance of my domain model. To do this I've implemented a custom model binder. As a result, though, this custom model binder is very specific to the view and the model. I've decided to just override the DefaultModelBinder that ships with MVC 2. In the case where the field being bound to my model is just a textbox (or something equally simple), I delegate to the base method. However, when I'm working with a dropdown or something more complex (the UI dictates that date and time are separate data-entry fields, but in the model they are one property), I have to perform some checks and some manual data munging. The end result is some pretty tight ties between the view and the binder. I'm architecturally fine with this, but from a code-maintenance standpoint it's a nightmare. For example, the model I'm binding here is of type Log (this is the object I will get as a parameter on my action). ServiceStartTime is a property on Log. The form values "log.ServiceStartDate" and "log.ServiceStartTime" are totally arbitrary and come from two textboxes on the form (Html.TextBox("log.ServiceStartTime",...)):

        protected override object GetPropertyValue(ControllerContext controllerContext,
            ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor,
            IModelBinder propertyBinder)
        {
            if (propertyDescriptor.Name == "ServiceStartTime")
            {
                string date = bindingContext.ValueProvider.GetValue("log.ServiceStartDate").ConvertTo(typeof(string)) as string;
                string time = bindingContext.ValueProvider.GetValue("log.ServiceStartTime").ConvertTo(typeof(string)) as string;
                DateTime dateTime = DateTime.Parse(date + " " + time);
                return dateTime;
            }
            if (propertyDescriptor.Name == "ServiceEndTime")
            {
                string date = bindingContext.ValueProvider.GetValue("log.ServiceEndDate").ConvertTo(typeof(string)) as string;
                string time = bindingContext.ValueProvider.GetValue("log.ServiceEndTime").ConvertTo(typeof(string)) as string;
                DateTime dateTime = DateTime.Parse(date + " " + time);
                return dateTime;
            }

    The Log.ServiceEndTime is a similar field. This doesn't feel very DRY to me. First, if I refactor ServiceStartTime or ServiceEndTime into different field names, the text strings may get missed (and although my refactoring tool of choice, R#, is pretty good at this sort of thing, a miss wouldn't cause a build-time failure and would only get caught by manual testing). Second, if I decided to arbitrarily change the descriptors "log.ServiceStartDate" and "log.ServiceStartTime", I would run into the same problem. To me, silent runtime errors are the worst kind of error out there. So, I see a couple of options here and would love input from people who have run into these issues:

    Refactor any text strings shared between the view and the model binders into const strings attached to the ViewModel object I pass from controller to the aspx/ascx view. This pollutes the ViewModel object, though.

    Provide unit tests around all of the interactions. I'm a big proponent of unit tests and haven't started fleshing this option out, but I've got a gut feeling it won't save me from foot-shootings.

    If it matters, the Log and other entities in the system are persisted to the database using Fluent NHibernate. I really want to keep my controllers as thin as possible. So, any suggestions here are greatly welcomed! Thanks

    Read the article

  • Replace html element with data in javascript

    - by Ultimate
    I'm trying to auto-increment the serial number when a row is added and automatically readjust the numbering when a row gets deleted, in JavaScript. For that I am using the following clone method. Everything else works correctly, except that the srno doesn't increase, because a clone of it is created. Following is my code:

        function addCloneRow(obj) {
            if(obj) {
                var tBody = obj.parentNode.parentNode.parentNode;
                var trTable = tBody.getElementsByTagName("tr")[1];
                var trClone = trTable.cloneNode(true);
                if(trClone) {
                    var txt = trClone.getElementsByTagName("input");
                    var srno = trClone.getElementsByTagName("span");
                    var dd = trClone.getElementsByTagName("select");
                    text = tBody.getElementsByTagName("tr").length;
                    alert(text) //here i am getting the srno in increasing order
                    //I tried something like following but not working
                    //var ele = srno.replace(document.createElement("h1"), srno);
                    //alert(ele);
                    for(var i=0; i<dd.length; i++) {
                        dd[i].options[0].selected=true;
                        var nm = dd[i].name;
                        var nNm = nm.substring((nm.indexOf("_")+1),nm.indexOf("["));
                        dd[i].name = nNm+"[]";
                    }
                    for(var j=0; j<txt.length; j++) {
                        var nm = txt[j].name;
                        var nNm = nm.substring((nm.indexOf("_")+1),nm.indexOf("["));
                        txt[j].name = nNm+"[]";
                        if(txt[j].type == "hidden"){
                            txt[j].value = "0";
                        }else if(txt[j].type == "text") {
                            txt[j].value = "";
                        }else if(txt[j].type == "checkbox") {
                            txt[j].checked = false;
                        }
                    }
                    for(var j=0; j<txt.length; j++) {
                        var nm = txt[j].name;
                        var nNm = nm.substring((nm.indexOf("_")+1),nm.indexOf("["));
                        txt[j].name = nNm+"[]";
                        if(txt[j].type == "hidden"){
                            txt[j].value = "0";
                        }else if(txt[j].type == "text") {
                            txt[j].value = "";
                        }else if(txt[j].type == "checkbox") {
                            txt[j].checked = false;
                        }
                    }
                    tBody.insertBefore(trClone,tBody.childNodes[1]);
                }
            }
        }

    Following is my HTML:

        <table id="step_details" style="display:none;">
            <tr>
                <th width="5">#</th>
                <th width="45%">Step details</th>
                <th>Expected Results</th>
                <th width="25">Execution</th>
                <th><img src="gui/themes/default/images/ico_add.gif" onclick="addCloneRow(this);"/></th>
            </tr>
            <tr>
                <td><span>1</span></td>
                <td><textArea name="step_details[]"></textArea></td>
                <td><textArea name="expected_results[]"></textArea></td>
                <td><select onchange="content_modified = true" name="exec_type[]">
                    <option selected="selected" value="1" label="Manual">Manual</option>
                    <option value="2" label="Automated">Automated</option>
                    </select>
                </td>
                <td><img src="gui/themes/default/images/ico_del.gif" onclick="removeCloneRow(this);"/></td>
            </tr>
        </table>

    I want to change the srno of the span element dynamically as rows are added and removed. Need help, thanks.

    Read the article

  • Excel-based Performance Reviews transformed into Web Application for Performance Management

    - by Webgui
    HR TMS provides enterprise talent management solutions for healthcare, retail and corporate customers, focusing on performance management, compensation management and succession planning. As the competency of nurses and other healthcare workers is critical, the government, via the Joint Commission (JCAHO), tightly monitors their performance. On a regular basis, accredited healthcare organizations are required to review employee performance using a complex set of position-dependent job descriptions and competencies. Middlesex Hospital managed their performance reviews for 2500 employees manually with Excel spreadsheets. This was a labor-intensive process that proved to be error-prone and difficult to manage. Reviews were not always where they belonged, and the job descriptions and competencies for healthcare workers were difficult to keep accurate and up to date. As a result, when the Joint Commission visited and requested to see specific review documentation, there was intense stress. Middlesex Hospital needed to automate their review process, pull in the position information from those spreadsheets, and be able to deliver reviews online. Users needed online access to those reviews from a standard browser. Although the manual system had its issues, it did have the advantage of being very comprehensive and familiar to users. The decision was made to provide a web-based solution that leveraged the look and feel of those spreadsheets in order to ensure user acceptance of the system and minimize the training needed. Read the full article here >

    Read the article

  • CodePlex Daily Summary for Wednesday, October 02, 2013

    CodePlex Daily Summary for Wednesday, October 02, 2013

    Popular Releases

    Ela, functional programming language: Ela, dynamic functional language (PDF, book, 0.6): A book about Ela, dynamic functional language, in PDF format.
    Compact 2013 Tools: Managed Code Version of Apps 1.0: Compact13MinShell Download: Compact13MinShellV3.0.zip, from the CodePlex project downloads page. About Compact13Tools.zip: each app as an OS Content Subproject; includes the CoreCon3 Subproject. Apps.zip: just the apps in a zip file. AppInstallersx86.zip: the apps as separate x86 installers. Compact13MinShell Download: (separate CodePlex project) the MinShell that implements the menu that includes these apps via registr...
    Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.
    C# Intellisense for Notepad++: Release v1.0.7.0: Smart indentation; document formatting. To avoid the DLLs getting locked by the OS, use the MSI file for the installation.
    CS-Script for Notepad++: Release v1.0.7.0: Smart indentation; document formatting. To avoid the DLLs getting locked by the OS, use the MSI file for the installation.
    State of Decay Save Manager: Version 1.0.2: Added Start/Stop button for the timer to manually enable/disable it; quick save routine updated to force a refresh of the folder date; quick save added to the backup listing; manual update button; lower-level hooking for the F5 and F9 buttons now working.
    SharePoint Farm documentation tool: SPDocumentor 0.1: This is a POC version of the tool that will be implemented.
    DotNetNuke® Form and List: 06.00.06: DotNetNuke Form and List 06.00.06. Changes to 6.0.6: •Add in SQL to remove 'text on row' setting for UserDefinedTable to make it SQL Azure compatible. •Add new azureCompatible element to manifest. •Added a fix for importing templates. Changes to 6.0.2: •Fix: MakeThumbnail was broken if the application pool was configured to .NET 4. •Change: data is now stored in nvarchar(max) instead of ntext. Changes to 6.0.1: •Scripts now compatible with SQL Azure. Changes to 6.0.0: •Icons are shown in module action b...
    BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6.
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. Added -nosize switch to turn off the size and gzip calculations done after minification. Removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). In previous builds, the net35 and net20...
    AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes, September 2013 Release (Updated), Version 7.1001. AJAX Control Toolkit .NET 4.5, .NET 4, and .NET 3.5 packages, each with a sample site (Recommended). Important Update: this release has been updated to fix two issues: Upda...
    WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW api plugged in ok. Thanks.
    Visual Log Parser: VisualLogParser: Portable Visual Log Parser for .NET 4.0.
    Trace Reader for Microsoft Dynamics CRM: Trace Reader (1.2013.9.29): Initial release.
    AudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features: a list of words (mp3 files) is available upon typing when a download path is defined; a list of download paths has been added; path history settings added. Bugs fixed: case mismatch in the word search field; path-does-not-exist bug; a previously used path, when filled in from the dialog, was not stored; the autocomplete list now refreshes after a path change; the sought word is deleted when the path is changed; at the end, the sought word list is deleted; word list not refreshed when the download ends. Word lis...
    Wsus Package Publisher: Release v1.3.1309.28: Fix a bug where WPP crashed when running on a computer where Windows was installed in a language other than Fr, En or De and launching the Update Creation Wizard. Fix a bug where WPP crashed if a multi-threaded job was launched with more than 64 items. Add a button to abort the "Install This Update" wizard. Allow WPP to remember which columns were shown last time. Make URLs clickable on the Update Information tab. Add a new feature: when double-clicking on an update, the default action exec...
    Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 puts the emphasis on the FilteredStream and on easing how to manage exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget on 29/09/2013. FilteredStream features provided by the Twitter Stream API: ability to track specific keywords, specific users, and specific locations. Additional features: detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...
    AcDown?????: AcDown????? v4.5.
    Magick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: the ToBitmap method of MagickImage returns a png instead of a bmp; changed the value for full transparency from 255(Q8)/65535(Q16) to 0; MagickColor now uses floats instead of Byte/UInt16.
    Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of movie folders and files, there has been some refinement done, and I would also like to introduce Blu-ray movie folder support for pre-Frodo and Frodo-onwards versions of XBMC. To start with, the context menu option for renaming movies now has three sub-options: Movie & Folder, Movie only & Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...

    New Projects

    All CRM Resources for Microsoft Dynamics CRM: Microsoft Dynamics CRM resources Windows 8 app with news, feeds, forums, blogs, videos & Twitter updates, information, guides & resources for the #MSDynCRM community.
    BasiliskBugTracker: A sample teamwork project for the Telerik Academy's ASP.NET Course 2013.
    CagerAutoPilot: Programmatically control a toy helicopter with Kinect.
    Class Libraries & Database Management: ClassDBManager allows synchronizing classes (creation/modification/deletion) with the tables contained in the database. (Translated from Italian.)
    Command Line Utility: Enables fast, easy creation of object-oriented settings classes in C# that interface directly with command-line input. Minimize code and increase robustness.
    Controles | Versa: Login, main page, and user registration. (Translated from Portuguese.)
    Dispage: DisPage is a system to hide a website under a different browser title (for example, "Vimeo" could look like "Google"; I am working on a way of changing this).
    ExpressiveDataGenerators: Expressive and powerful test data generators.
    Fabrikam Fiber: This project provides download and support to anyone (i.e. trainers) who wants to access the Fabrikam Fiber sample application, setup scripts, notes, etc.
    Get all numbers in between a pair of numbers: Get all integers between two numbers. C#, VB.NET.
    HungryCrowd food lovers market: Food lovers market; food, markets.
    Invalid User Details for SharePoint 2007 and 2010 Sites: Client-based utility to export invalid users from a SharePoint site (2007 and 2010) as a CSV file using native SP web services (UserGroup.asmx and People.asmx).
    Khảo Sát Công Nghệ: Topic: the current situation of, and supporting solutions for improving, the competitiveness of small and medium-sized enterprises in Thanh Hóa province. (Translated from Vietnamese.)
    Lightning: Micro toolkit to make it easy to get content on your site, and serve it fast.
    LovelyCMS: LovelyCMS is a very simple content management system based on ASP.NET MVC4. (Translated from German.)
    MVC Error Handler: Simple library that allows you to easily create error pages for common HTTP errors and application exceptions.
    MVC Table Styling selection to CSS and demo table: Select table styling from drop-down lists and see both the generated CSS and its effect on a demo table.
    MvcWebApiFramework: Main framework.
    NoDemo: It is not only a demo.
    NumbersInWordsRU
    Omnifactotum: Omnifactotum is a .NET library intended to help .NET developers avoid writing the same helper types, methods and extension methods for different projects.
    Outlook Rules Offline Processor: A utility for organizing Microsoft Outlook rules. The utility uses the rules export file, *.RWZ, to make changes.
    SharePoint Farm documentation tool: The SPDocumentor (SharePoint Farm documentation tool) allows you to generate a Word document that includes most of your farm settings.
    Startup Shutdown Mailer: This tool is a simple Windows service which sends an e-mail to a specified account whenever your PC is started up or shut down.
    YüzKitabi: A safer and more interactive YüzKitabi application. (Translated from Turkish.)

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Obtain Windows Azure Connection Information for HPC Server

· Required for each worker node template.
· Copy from the Azure portal: navigation pane > Hosted Services > Storage Accounts & CDN.
· Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
· Subscription certificate thumbprint - a 40-char hex string (you need to remove the spaces). In Management Certificates > Properties pane.
· Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
· Blob storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

Import the Azure Subscription Certificate on the HPC Head Node

· Enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
· Use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password.
· See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
· To open the Certificates snap-in: Run > mmc. File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
· To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import (a command-line alternative is sketched after this section).
· After the certificate is imported, it appears in the details pane of the Certificates snap-in. You can open the certificate to check its status.

Configure a Proxy Client on the HPC Head Node

· The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.
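As an alternative to the MMC wizard in the import section above, a hedged certutil one-liner (the password and file name are placeholders; run elevated). One assumption to flag: passing a target store name to -importpfx works only on newer certutil builds - if yours rejects it, fall back to the snap-in steps.

    # Import the password-protected PFX, private key included, into the
    # Trusted Root Certification Authorities store of the local computer
    certutil -f -p "P@ssw0rd" -importpfx Root HpcAzureMgmt.pfx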
Create a Windows Azure Worker Node Template

· Edit HPC node templates in the HPC Node Template Editor.
· Specify: 1) the Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) the worker node availability policy - rules for deploying / removing worker role instances in Windows Azure.
o HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New (opens the Create Node Template Wizard) or Edit (opens the Node Template Editor)
o Choose Node Template Type page - Windows Azure worker node template
o Specify Template Name page - template name & description
o Provide Connection Information page - Azure subscription ID (text) & subscription certificate (browse)
o Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get the list of available values from Azure - possible LRT)
o Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (bring online / take offline the worker role instances - add / remove) - manual or automatic
o For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select the days and hours for worker nodes to start / stop.
· To validate the Windows Azure connection information, on the template's Connection Information tab, click Validate connection information.
· You can upload a file package to the storage account that is specified in the template - e.g. upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

Add Azure Worker Nodes to the HPC Cluster

· Use the Add Node Wizard - specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
· To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
· All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
· To add Windows Azure worker nodes:
o In HPC Cluster Manager: Node Management > Actions pane > Add Node (opens the Add Node Wizard)
o Select Deployment Method page - Add Azure Worker nodes
o Specify New Nodes page - select a worker node template, then specify the number and size of the worker nodes
· After you add worker nodes to the cluster, they are in the Not-Deployed state and have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online (a PowerShell sketch follows this section).
· Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN - this is non-configurable.
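The add/start/bring-online flow can also be scripted. A hedged sketch against the Microsoft.HPC PowerShell snap-in - the cmdlet and parameter names below (Get-HpcNodeTemplate, Add-HpcNodeSet, Start-HpcNodeSet, Get-HpcNode, Set-HpcNodeState) are my reading of the SP1 cmdlet set, so verify them with Get-Command before relying on this; the template name and node count are placeholders.

    # Load the HPC cmdlets installed with HPC Pack 2008 R2
    Add-PSSnapin Microsoft.HPC

    # Add a set of 4 Medium-size Azure worker nodes from an existing template
    $template = Get-HpcNodeTemplate -Name "Azure Worker Template"
    Add-HpcNodeSet -Template $template -Quantity 4 -Size Medium

    # Start (provision) the whole set - sets deploy together - then bring
    # the provisioned nodes online so they can run cluster jobs
    Start-HpcNodeSet -Template $template
    Get-HpcNode -State Offline | Set-HpcNodeState -State Online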
Deploying Windows Azure Worker Nodes

· To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure worker node template (Azure Availability Policy) to be automatic or manual.
· The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
· Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
· To start worker nodes manually and bring them online:
o In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view, select one or more worker nodes.
o Actions pane > Start - in the Start Azure Worker Nodes dialog, select a node template.
o The state of the worker nodes changes from Not Deployed; track the provisioning progress in the worker node Details Pane > Provisioning Log tab.
o If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
o After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state and click Bring Online.
· Troubleshooting:
o Check the node template.
o Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
o Check node status - deployment status information appears in the service account information in the Windows Azure Portal; HPC queries this. See the node status information for any failed nodes in HPC Node Management.
· When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes (sketched below). See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
· To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
· Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added; however, they appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
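A hedged sketch of the hpcpack flow described above - the package, folder, template and node group names are placeholders, and the exact switches should be checked against the hpcpack documentation linked above:

    # Package an application folder, then upload it to the storage account tied
    # to a worker node template; nodes started afterwards install it automatically
    hpcpack create MyApp.zip C:\Apps\MyApp
    hpcpack upload MyApp.zip /nodetemplate:"Azure Worker Template"

    # For worker nodes already running at upload time, hpcsync on each node pulls
    # newly uploaded packages down; clusrun fans the command out to the group
    # ("AzureWorkerNodes" is an assumed node group name - substitute your own)
    clusrun /nodegroup:AzureWorkerNodes hpcsync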

