Search Results

Search found 9154 results on 367 pages for 'replication jobs job'.


  • Can you do this with Hudson?

    - by damian
    I want to create a Hudson job that takes an id as a parameter and uses that id to compute the svn repo path. Where I work you have an svn path for every issue that you resolve, and all the issues are then joined into a single svn path. What I want to do is run static code analysis on the individual issues. So I think maybe I can have one Ant build.xml that I use for every issue, and parametrize the job with the issue id. I have tried to achieve that, but the svn path doesn't replace the parameter. I have tried #issueId, %issueId%, ${issueId} and ${env.issueId} without success. The build fails with errors like:

        Location 'http://svn-path:8181/svn/devSet/issues/${env.chuid}' does not exist
        Checking out a fresh workspace because C:\Documents and Settings\dnoseda\.hudson\jobs\test\workspace\${env.chuid} doesn't exist
        Checking out http://svn-path:8181/svn/devSet/issues/${env.chuid}
        ERROR: Failed to check out http://svn-path:8181/svn/devSet/issues/${env.chuid}
        org.tmatesoft.svn.core.SVNException: svn: '/svn/!svn/bc/46190/devSet/issues/$%7Benv.chuid%7D' path not found: 404 Not Found (http://svn-path:8181)
            at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
            at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
            at

    I am starting to think that I cannot do what I want. Do you know how I can set up the configuration to achieve this? Thanks for any help.

    Edit: the section of the job configuration where I want to use the parameter is this:

        <scm class="hudson.scm.SubversionSCM">
          <locations>
            <hudson.scm.SubversionSCM_-ModuleLocation>
              <remote>http://svn-path:8181/svn/devSet/issues/${env.issueid}</remote>
            </hudson.scm.SubversionSCM_-ModuleLocation>
          </locations>
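    One thing worth checking, offered as a sketch rather than a confirmed fix: depending on the plugin version, Hudson's Subversion module location can expand build parameters referenced without the env. prefix, but only when the job is marked "This build is parameterized" with a string parameter named issueId. Under that assumption the location would read:

        <remote>http://svn-path:8181/svn/devSet/issues/${issueId}</remote>

    If the installed plugin version predates URL parameter expansion, a fallback is to do the checkout from a batch or Ant build step instead, where %issueId% is expanded by the shell before svn ever sees the URL.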


  • What are your Programming Fallacies/Myths?

    - by pms1969
    I recently started a new job, and as is typical of all jobs, if you've left, you get blamed for everything. Not long after I started, a change was required for a web app that we maintain, and it was quickly pointed out that the actual code for this site had been lost a long time ago, and the only changes we could make to it were ones that required changes to mark-up (it was a pre-compiled site).

    Being new, I needed a little help finding my way around the code, and enlisted the services of one of my colleagues. I made my changes, and then re-enlisted his help to deploy it. While prepping for the deployment (getting the app on the QA server) we discovered that there were actually two different, very similarly named folders in our source repository. It transpired that for the last year or so, mark-up changes had been made to the site directly, and these were the only differences from the code in the slightly misnamed folder in source control. So we did have all the code, and can now properly support the site.

    This put me in mind of a trick we played on a junior programmer once at a previous job, where we told him he couldn't/shouldn't do a certain thing in code as this would likely bring the server to its knees and cost the company thousands of pounds (a gag that lasted months :-). And another one from the first programming job I took on: the batch commission run was just going to crash once a month and there was nothing to be done about it, causing a call out, and call-out compensation for the on-call guy (a bug I fixed as soon as I became the on-call guy; 2am call outs don't work for me).

    So I was wondering: what other programming fallacies/myths are out there that are worth sharing?


  • What do I do about a Java program that spawned two instances of itself?

    - by user288915
    I have a Java JAR file that is triggered by a SQL Server job. It's been running successfully for months: the process pulls in a structured flat file to a staging database, then pushes that data into an XML file. However, yesterday the process was triggered twice at the same time; I can tell from a log file that gets created that the process ran twice simultaneously. This caused a lot of issues, and the XML file that it kicked out was malformed, contained duplicate nodes, etc.

    My question is: is this a known issue with Java JVMs spawning multiple instances of themselves, or should I be looking at SQL Server as the culprit? I'm looking into socket locking or file locking to prevent multiple instances in the future. This is the first instance of this issue that I've ever heard of.

    More info: the job is scheduled to run every minute. The job triggers a .bat file that contains java.exe -jar filename.jar. The Java program runs, scans a directory for a file, and executes a loop to process the file if it finds one. After it processes the file, it runs another loop that kicks out XML messages. I can provide code samples if that would help. Thank you, Kevin
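    Since file locking is already on the table, here is a minimal single-instance guard, a sketch only (the lock-file name and location are assumptions): whichever copy starts second fails to acquire the exclusive lock and exits before touching the staging database.

        import java.io.File;
        import java.io.RandomAccessFile;
        import java.nio.channels.FileLock;

        public class SingleInstanceGuard {
            public static void main(String[] args) throws Exception {
                // well-known lock file shared by every invocation of this job
                File lockFile = new File(System.getProperty("java.io.tmpdir"), "flatfile-job.lock");
                RandomAccessFile raf = new RandomAccessFile(lockFile, "rw");
                FileLock lock = raf.getChannel().tryLock(); // null if another process holds the lock
                if (lock == null) {
                    System.err.println("Another instance is already running; exiting.");
                    System.exit(1);
                }
                try {
                    // ... existing directory-scan / staging / XML-generation loops go here ...
                } finally {
                    lock.release();
                    raf.close();
                }
            }
        }

    It is also worth checking whether two schedules or two Agent jobs point at the same .bat, since SQL Server Agent itself will not start a job that is still running.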


  • Displaying data from linked tables in netbeans JTable

    - by Darc
    I have been writing in Java for a few months now and have just started using NetBeans. I have spent all day today trying to work out how to connect to a SQL database and display data from two tables (i.e. the data from a select statement with an inner join) in a JTable. I have tried JPQL with the following statement:

        SELECT j, cust.name FROM Job j JOIN j.jobnumber cust

    where the job table has a field called customer that references id in the customer table. This throws the exception:

        Caused by: Exception [TOPLINK-8029] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EJBQLException
        Exception Description: Error compiling the query [SELECT j, cust.name FROM Job j JOIN j.jobnumber cust], line 1, column 11:
        invalid navigation expression [cust.name], cannot navigate expression [cust] of type [java.lang.Integer] inside a query.
            at oracle.toplink.essentials.exceptions.EJBQLException.invalidNavigation(EJBQLException.java:430)

    What am I doing wrong? Can anyone point me to some examples of how to make a linked-table Java application? I am still in the very early stages of development, so a complete change is not out of the question if using a MySQL database isn't the best way to go about things. Thanks
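    The exception text is the clue: j.jobnumber is mapped as a plain Integer, and JPQL can only JOIN across mapped relationships, never across raw foreign-key numbers. A sketch of the usual fix, with entity and column names assumed for illustration, is to map the customer column as a real association:

        @Entity
        public class Job {
            @Id
            private Integer jobnumber;

            // Map the FK column "customer" as a relationship instead of an Integer field
            @ManyToOne
            @JoinColumn(name = "customer", referencedColumnName = "id")
            private Customer customer;

            // getters and setters ...
        }

    The query then navigates the relationship: SELECT j, c.name FROM Job j JOIN j.customer c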


  • How to get correct children ids using fields_for "parents[]", parent do |f| using f.fields_for :children, child ?

    - by Anatortoise House
    I'm editing multiple instances of a parent model in an index view in one form, as in Railscasts #198. Each parent has_many :children and accepts_nested_attributes_for :children, as in Railscasts #196 and #197:

        <% form_tag do %>
          <% for parent in @parents %>
            <%= fields_for "parents[]", parent do |f| %>
              <%= f.text_field :job %>
              <%= f.fields_for :children do |cf| %>
                <%= cf.text_field :chore %>
              <% end %>
            <% end %>
          <% end %>
        <% end %>

    Given parent.id == 1, f.text_field :job correctly generates

        <input id="parents_1_job" type="text" value="coding" size="30" name="parents[1][job]">

    But cf.text_field :chore generates ids and names that don't have the parent index:

        id="parents_children_attributes_0_chore" name="parents[children_attributes][0][chore]"

    If I try passing the specific child object to f.fields_for like this:

        <% for child in parent.children %>
          <%= f.fields_for :children, child do |cf| %>
            <%= cf.text_field :chore %>
          <% end %>
        <% end %>

    I get the same. If I change the method from :children to "[]children" I get

        id="parents_1___children_chore"

    which gets the right parent index but doesn't provide an array slot for the child index. "[]children[]" isn't right either:

        id="parents_1__children_3_chore"

    as I was expecting attributes_0_chore instead of 3_chore. Do I need to directly modify an attribute of the FormBuilder object, or subclass FormBuilder to make this work, or is there a syntax that fits this situation? Thanks for any thoughts.
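    One possibility worth trying, hedged because builder-index behaviour varies between Rails versions: both fields_for calls accept explicit index options (:index on the outer builder, :child_index on the nested one), which lets you force the parent and child indexes to agree:

        <%= fields_for "parents[]", parent, :index => parent.id do |f| %>
          <%= f.text_field :job %>
          <% parent.children.each_with_index do |child, i| %>
            <%= f.fields_for :children, child, :child_index => i do |cf| %>
              <%= cf.text_field :chore %>
            <% end %>
          <% end %>
        <% end %>

    If that works, the generated names should come out as parents[1][children_attributes][0][chore], which is the shape accepts_nested_attributes_for expects.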


  • DB Strategy for inserting into a high read table (Sql Server)

    - by Tom
    Looking for strategies for a very large table with data maintained for reporting and historical purposes, where a very small subset of that data is used in daily operations.

    Background: we have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our Visitor and Visits tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and keeping our indexes clean, small, and not fragmented. P.S.: we do not currently have the budget or expertise for a data warehouse.

    The problem: we would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data.

    I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance a Visitor_Archive and a Visitor_Small table), where the larger Visitor and Visits tables exist for inserts and history/reporting, and the smaller table exists for managing leads: sending a lead an email, looking up a lead's phone number, pulling my list of leads, etc.

    The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and Visitor_Small tables in sync. Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?
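    On the replication question specifically: transactional replication supports horizontal (row) filters on an article, so a publication can carry only the lead rows into the small table. A sketch, with the publication, article, and predicate names all illustrative:

        -- attach a row filter to an article in an existing transactional publication
        EXEC sp_articlefilter
            @publication   = N'VisitorLeadsPub',
            @article       = N'Visitor',
            @filter_name   = N'FLTR_Visitor_Leads',
            @filter_clause = N'IsLead = 1';  -- hypothetical flag set when a visitor becomes a lead

    An indexed view over the same predicate is another way to get a small, hot subset without hand-maintaining a second table.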


  • Task Scheduler Cannot Apply My Changes - Adding a User with Permissions

    - by Aaron
    I can log in to the server using a domain account without administrator privileges and create a task in the Task Scheduler. I am allowed to do an initial save of the task but am unable to modify it with the same user account. When changes are complete, a message box prompts for the user password (the same domain user I logged in with), then fails with the following message:

        Task Scheduler cannot apply your changes. The user account is unknown, the password is incorrect, or the account does not have permission to modify the task.

    When I check the Log on as Batch Job properties (I found this in the Help documentation):

        This policy is accessible by opening the Control Panel, Administrative Tools, and then Local Security Policy. In the Local Security Policy window, click Local Policy, User Rights Assignment, and then Logon as batch job.

    everything is grayed out, so I can't add a user. How can I add a user?
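    A grayed-out local policy usually means either that you are viewing it without administrative rights or that a domain GPO owns the setting, in which case the right has to be granted at whichever level wins. Purely as a sketch of the local route (file paths illustrative), an administrator could export the user-rights area, add the account to the SeBatchLogonRight line, and re-apply it:

        secedit /export /cfg C:\temp\rights.inf /areas USER_RIGHTS
        rem edit rights.inf: append DOMAIN\user (or its SID) to the SeBatchLogonRight= line
        secedit /configure /db C:\temp\rights.sdb /cfg C:\temp\rights.inf /areas USER_RIGHTS

    If a domain GPO defines "Log on as a batch job", the same change belongs in that GPO instead.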


  • mysql: Bind on unix socket: Permission denied

    - by Alex
    Can't start mysql with:

        $ sudo /usr/bin/mysqld_safe --datadir=/srv/mysql/myDB --log-error=/srv/mysql/logs/mysqld-myDB.log --pid-file=/srv/mysql/pids/mysqld-myDB.pid --user=mysql --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700
        120222 13:40:48 mysqld_safe Starting mysqld daemon with databases from /srv/mysql/myDB
        120222 13:40:54 mysqld_safe mysqld from pid file /srv/mysql/pids/mysqld-myDB.pid ended

    /srv/mysql/logs/mysqld-myDB.log:

        120222 13:43:53 mysqld_safe Starting mysqld daemon with databases from /srv/mysql/myDB
        120222 13:43:53 [Note] Plugin 'FEDERATED' is disabled.
        /usr/sbin/mysqld: Table 'plugin' is read only
        120222 13:43:53 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        120222 13:43:53 InnoDB: Completed initialization of buffer pool
        120222 13:43:53 InnoDB: Started; log sequence number 32 4232720908
        120222 13:43:53 [ERROR] Can't start server : Bind on unix socket: Permission denied
        120222 13:43:53 [ERROR] Do you already have another mysqld server running on socket: /srv/mysql/sockets/mysql-myDB.sock ?
        120222 13:43:53 [ERROR] Aborting
        120222 13:43:53 InnoDB: Starting shutdown...

    One instance of mysqld is running:

        $ ps aux | grep mysql
        mysql 1093 0.0 0.2 169972 18700 ? Ssl 11:50 0:02 /usr/sbin/mysqld

    Port 3700 is available:

        $ netstat -a | grep 3700
        $

    The directory for sockets is empty:

        $ ls /srv/mysql/sockets/
        $

    All permissions are open:

        $ ls -l /srv/mysql/
        total 20
        drwxrwxrwx  2 mysql mysql 4096 2012-02-22 13:28 logs
        drwxrwxrwx 13 mysql mysql 4096 2012-02-22 13:44 myDB
        drwxrwxrwx  2 mysql mysql 4096 2012-02-22 12:55 pids
        drwxrwxrwx  2 mysql mysql 4096 2012-02-22 12:55 sockets
        drwxrwxrwx  2 mysql mysql 4096 2012-02-22 13:25 version

    AppArmor config:

        $ cat /etc/apparmor.d/usr.sbin.mysqld
        # vim:syntax=apparmor
        # Last Modified: Tue Jun 19 17:37:30 2007
        #include <tunables/global>

        /usr/sbin/mysqld flags=(complain) {
          #include <abstractions/base>
          #include <abstractions/nameservice>
          #include <abstractions/user-tmp>
          #include <abstractions/mysql>
          #include <abstractions/winbind>

          capability dac_override,
          capability sys_resource,
          capability setgid,
          capability setuid,

          network tcp,

          /etc/hosts.allow r,
          /etc/hosts.deny r,
          /etc/mysql/*.pem r,
          /etc/mysql/conf.d/ r,
          /etc/mysql/conf.d/* r,
          /etc/mysql/*.cnf r,
          /usr/lib/mysql/plugin/ r,
          /usr/lib/mysql/plugin/*.so* mr,
          /usr/sbin/mysqld mr,
          /usr/share/mysql/** r,
          /var/log/mysql.log rw,
          /var/log/mysql.err rw,
          /var/lib/mysql/ r,
          /var/lib/mysql/** rwk,
          /var/log/mysql/ r,
          /var/log/mysql/* rw,
          /{,var/}run/mysqld/mysqld.pid w,
          /{,var/}run/mysqld/mysqld.sock w,
          /srv/mysql/ r,
          /srv/mysql/** rwk,
          /sys/devices/system/cpu/ r,

          # Site-specific additions and overrides. See local/README for details.
          #include <local/usr.sbin.mysqld>
        }

    Any suggestions?

    UPD1:

        $ touch /srv/mysql/sockets/mysql-myDB.sock
        $ sudo chown mysql:mysql /srv/mysql/sockets/mysql-myDB.sock
        $ ls -l /srv/mysql/sockets/mysql-myDB.sock
        -rw-rw-r-- 1 mysql mysql 0 2012-02-22 14:29 /srv/mysql/sockets/mysql-myDB.sock
        $ sudo /usr/bin/mysqld_safe --datadir=/srv/mysql/myDB --log-error=/srv/mysql/logs/mysqld-myDB.log --pid-file=/srv/mysql/pids/mysqld-myDB.pid --user=mysql --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700
        120222 14:30:18 mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect.
        120222 14:30:18 mysqld_safe Logging to '/srv/mysql/logs/mysqld-myDB.log'.
        120222 14:30:18 mysqld_safe Starting mysqld daemon with databases from /srv/mysql/myDB
        120222 14:30:24 mysqld_safe mysqld from pid file /srv/mysql/pids/mysqld-myDB.pid ended
        $ ls -l /srv/mysql/sockets/mysql-myDB.sock
        ls: cannot access /srv/mysql/sockets/mysql-myDB.sock: No such file or directory

    UPD2:

        $ sudo netstat -lnp | grep mysql
        tcp   0   0 0.0.0.0:3306   0.0.0.0:*   LISTEN   1093/mysqld
        unix  2   [ ACC ]   STREAM   LISTENING   5912   1093/mysqld   /var/run/mysqld/mysqld.sock
        $ sudo lsof | grep /srv/mysql/sockets/mysql-myDB.sock
        lsof: WARNING: can't stat() fuse.gvfs-fuse-daemon file system /home/sears/.gvfs
              Output information may be incomplete.

    UPD3:

        $ cat /etc/mysql/my.cnf
        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port     = 3306
        socket   = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket   = /var/run/mysqld/mysqld.sock
        nice     = 0

        [mysqld]
        #
        # * Basic Settings
        #
        #
        # * IMPORTANT
        #   If you make changes to these settings and your system uses apparmor, you may
        #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user     = mysql
        socket   = /var/run/mysqld/mysqld.sock
        port     = 3306
        basedir  = /usr
        datadir  = /var/lib/mysql
        tmpdir   = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #bind-address = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer          = 16M
        max_allowed_packet  = 16M
        thread_stack        = 192K
        thread_cache_size   = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover      = BACKUP
        #max_connections    = 100
        #table_cache        = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit   = 1M
        query_cache_size    = 16M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file   = /var/log/mysql/mysql.log
        #general_log        = 1
        log_error           = /var/log/mysql/error.log
        # Here you can see queries with especially long duration
        #log_slow_queries   = /var/log/mysql/mysql-slow.log
        #long_query_time    = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        #       other settings you may need to change.
        #server-id          = 1
        #log_bin            = /var/log/mysql/mysql-bin.log
        expire_logs_days    = 10
        max_binlog_size     = 100M
        #binlog_do_db       = include_database_name
        #binlog_ignore_db   = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/
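    Given that AppArmor is in complain mode and the sockets directory is world-writable, it may be quickest to watch the failing bind() itself. A debugging sketch, running mysqld in the foreground under strace:

        sudo strace -f -e trace=network /usr/sbin/mysqld --datadir=/srv/mysql/myDB \
            --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700 --user=mysql 2>&1 | grep -B1 -A2 bind

    The "Table 'plugin' is read only" message in the first log is also suspicious: it hints that the datadir the daemon actually uses and the one the transcript assumes may not be the same.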


  • eventcreate with multiline description

    - by Adam J.R. Erickson
    I'd like to use eventcreate from a batch file to log the results of a file-copy job (robocopy). What I'd really like to do is use the output of the file-copy job as the description of the event (the /D argument of eventcreate). The trouble is that there are multiple lines in the file-copy output, and I've only been able to get one line into a local variable or a pipe command. I've tried reading a local variable in from a file, like

        set /P myVar=<temp.txt

    but it only gets the first line. How can I write multiple lines to the description of an event from a batch file?
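    cmd cannot easily put literal newlines into a single argument, but the lines can at least be concatenated into one description. A sketch, with the log file name and event details as placeholders:

        setlocal EnableDelayedExpansion
        set "msg="
        rem concatenate every line of the copy log, separated by two spaces
        for /f "usebackq delims=" %%L in ("temp.txt") do set "msg=!msg!%%L  "
        eventcreate /T INFORMATION /ID 100 /L APPLICATION /SO RoboCopyJob /D "!msg!"
        endlocal

    eventcreate caps the description at a fairly small length, so for long logs (or for real line breaks in the event body) PowerShell's Write-EventLog is the more capable route.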


  • Execute a remote command after sudo su - anotheruser in Rundeck

    - by Bera
    I'm new to Rundeck and completely amazed by it, and I'm trying to execute a job. My scenario is detailed below: Rundeck is configured with passwordless ssh authentication for user master between node Server (the Rundeck server) and node Target (a remote Solaris host). On node Target I want to execute the script /app/acme/stopApp.sh as the user appmanager. Normally, when I need to run the script manually, I proceed with ssh master@server then sudo su - appmanager (or simply ssh -t master@server 'sudo su - appmanager'), which works without a password, and finally run /app/acme/stopApp.sh as appmanager. But I'm not sure how to reproduce these steps using Rundeck. I read in some previous messages that Rundeck uses a new ssh connection for each job line, so the workflow above always fails for me with the messages:

        sudo: no tty present and no askpass program specified
        Remote command failed with exit status 1

    Could someone please help me with some information to solve this issue? Without this functionality I wouldn't be able to introduce a little DevOps in my department. I read the user guide and admin guide but couldn't find an easy example to follow, nor could I find one in this forum.
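    The "no tty" failure typically comes from sudo's requiretty default combined with the interactive su - step. The usual Rundeck-friendly shape is to collapse everything into a single non-interactive command and authorize it in sudoers; a sketch, assuming you can edit sudoers on the Solaris target (edited with visudo; names illustrative):

        # /etc/sudoers fragment on node Target
        Defaults:master !requiretty
        master ALL=(appmanager) NOPASSWD: /app/acme/stopApp.sh

    The Rundeck command step then becomes one line, with no su and no tty needed:

        sudo -u appmanager /app/acme/stopApp.sh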


  • Windows 2003 server: can't find a primary authoritative DNS server for the name srv.domain1.local

    - by phill
    I originally tried to rejoin a computer to a network, which led to a "cannot find domain" error; the username/password box doesn't even come up. Some tests I ran: I can ping the server, but I can't ping the domain name domain1.local. nslookup can't find the domain either; it looks to the ISP's DNS instead of my own to resolve the local machines. So I went to the DNS server and ran netdiag.exe, which gives me this error:

        DNS test . . . . . . . . . . . . . : Failed
        [WARNING] Cannot find a primary authoritative DNS server for the name
                  'stmartinsrv.stmartin.local.'. [RCODE_SERVER_FAILURE]
                  The name 'srv.domain1.local.' may not be registered in DNS.
        [WARNING] The DNS entries for this DC are not registered correctly on DNS server '68.94.156.1'. Please wait for 30 minutes for DNS server replication.
        [WARNING] The DNS entries for this DC are not registered correctly on DNS server '68.94.157.1'. Please wait for 30 minutes for DNS server replication.
        [FATAL] No DNS servers have the DNS records for this DC registered.

        Redir and Browser test . . . . . . : Passed
        List of NetBt transports currently bound to the Redir
            NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
        The redir is bound to 1 NetBt transport.
        List of NetBt transports currently bound to the browser
            NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
        The browser is bound to 1 NetBt transport.

    From previous postings, I've tried adding the domain suffix to the NIC IP properties on both the client machine and the DC server, which didn't help. Any ideas? Thanks in advance.
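    The two 68.94.x.x addresses in the netdiag output are the ISP's resolvers, which suggests the DC's NIC points at the ISP for DNS instead of at itself; the DC can then never register or find its own SRV records. A sketch of the usual sequence once the DC's NIC lists only its own IP as DNS server:

        ipconfig /all          (confirm the DC's sole DNS server is its own address)
        ipconfig /registerdns  (re-register the host A records)
        net stop netlogon
        net start netlogon     (re-registers the _ldap/_kerberos SRV records)
        dcdiag /test:dns       (verify registration end to end)

    Clients, including the machine being rejoined, should likewise point only at the DC for DNS, never at the ISP.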


  • Parallel prologue and epilogue in Grid Engine

    - by ajdecon
    We have a cluster being used to run MPI jobs for a customer. Previously this cluster used Torque as the scheduler, but we are transitioning to Grid Engine 6.2u5 (for some other features). Unfortunately, we are having trouble duplicating some of our maintenance scripts in the Grid Engine environment. In Torque, we have a prologue.parallel script which carries out an automated health check on each node. If this script returns a fail condition, Torque will helpfully offline the node and re-queue the job to use a different group of nodes. In Grid Engine, however, the queue "prolog" only runs on the head node of the job. We can manually run our prologue script from the startmpi.sh initialization script for the mpi parallel environment, but I can't figure out how to detect a fail condition and carry out the same "mark offline and requeue" procedure. Any suggestions?
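    A sketch of one approach, hedged on two Grid Engine behaviours worth confirming in the 6.2u5 documentation: a prolog or PE start script that exits with status 99 causes the job to be rescheduled rather than failed, and qmod -d disables a queue instance on a single host. Under those assumptions, a per-node check callable from startmpi.sh could look like:

        #!/bin/sh
        # node_prolog.sh - hypothetical per-node health gate
        if ! /usr/local/sbin/node_healthcheck; then
            qmod -d "*@$(hostname)"   # take this host's queue instances offline
            exit 99                   # conventionally asks Grid Engine to requeue the job
        fi
        exit 0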


  • Binary Log Format in MySQL

    - by amritansu
    The reference manual for MySQL 5.6 states:

        Some changes, however, still use the statement-based format. Examples include all DDL (data definition language) statements such as CREATE TABLE, ALTER TABLE, or DROP TABLE.

    Does this mean that even if we have the ROW format for binary logs, all DDL will be logged in the binary log as statement-based? How does this affect replication? Kindly help me to understand this.
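    Yes: ROW format governs row changes (DML) only. DDL has no row image to log, so it is always written as a statement event, and a replication slave simply executes that statement itself, exactly as it would under statement-based logging. You can see the mix directly in the binary log; a quick check, with the log file name illustrative:

        mysql> SET SESSION binlog_format = 'ROW';
        mysql> CREATE TABLE t1 (id INT);     -- appears as a Query (statement) event
        mysql> INSERT INTO t1 VALUES (1);    -- appears as Table_map + Write_rows events
        mysql> SHOW BINLOG EVENTS IN 'mysql-bin.000001';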


  • How much time do I have to remove a domain controller that was lost using metadata cleanup?

    - by dasko
    I am wondering: how long after a domain controller has been permanently lost from the domain (due to its hardware dying) do I have before I must clean up references to it using metadata cleanup? I am planning on doing it soon but am not sure if there is anything I should be concerned about. This dead domain controller had no roles other than being a domain controller with Active Directory-integrated DNS; it held NO FSMO roles and was just an older replica domain controller.
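    The practical window is the forest's tombstone lifetime (by default 60 days on older forests, 180 on more recent ones; worth verifying on yours): past that, lingering references start causing replication noise, so sooner is better, though nothing extra is lost by waiting. The classic cleanup is an ntdsutil session on a healthy DC; a sketch of the flow (server name illustrative):

        ntdsutil
          metadata cleanup
          connections
            connect to server healthy-dc.domain.local
            quit
          select operation target
            list domains              (then: select domain <n>)
            list sites                (then: select site <n>)
            list servers in site      (then: select server <n>, the dead DC)
            quit
          remove selected server

    Afterwards, remove any leftover A/NS/SRV records for the dead DC from the AD-integrated DNS zones.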


  • Torque and maui node status

    - by Lafada
    I am new to Torque and Maui. I was checking node states to find which nodes are free and which are in use. For Torque there is the command pbsnodes, which gives the status and other info related to each node. While checking Maui I found the command diagnose -n, which also shows node status. I was wondering about the difference between these two: both give different statuses for the same situation. When I do man pbsnodes I get the possible states for a node: "free", "offline", "down", "reserve", "job-exclusive", "job-sharing", "busy", "time-shared", or "state-unknown". But I can't find a state list like this for diagnose -n. How do pbsnodes and diagnose -n get the status of a node? Is there a database involved, like the one xCAT uses, for Torque or Maui? Thanks in advance for your valuable time.
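    Neither tool reads a separate database (nothing like xCAT's). pbsnodes prints the state pbs_server keeps for each pbs_mom, while Maui polls pbs_server through its resource-manager interface and maps those states onto its own scheduler-level vocabulary (Idle, Busy, Drained, Down, and so on), which is why the two listings rarely use the same words for the same node. Comparing the views side by side usually clears it up:

        pbsnodes -a        # Torque: per-node state exactly as pbs_server sees it
        diagnose -n        # Maui: the same nodes after mapping into scheduler states
        checknode node01   # Maui: detailed per-node reasoning (host name illustrative)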


  • Connecting a TV remote control to a Windows 7 Desktop?

    - by Craig Butikofer
    I was just wondering whether it is possible to connect a remote control to a Windows 7 system. I'm connecting the computer to a 24" TV via HDMI to be used as a TV, playing things mainly through VLC. Clearly I don't want to have to walk over to the PC every time I want to change something. The problem is that I have no idea how to configure a remote for a PC, or even which remote will do the job. I'm not looking for anything overly flashy or in the $100-$200 range, just something functional that will get the job done.


  • Automatically Snapshotting AWS instances (or other backup strategy)

    - by user1172468
    I just realized that my AWS instance count has risen into the double digits. I'm currently backing up portions of my folders and DBs and moving them off to a backup instance. What I think I should be doing is taking snapshots of the instances automatically and persisting them on S3, so I have a rolling seven-day collection of daily backups. There is a question asking the same thing here; however, the answers don't go into depth. The closest answer seems to be: use a cron job to snapshot the instance. So do I run the cron job on the instance itself, or do I keep a micro instance to run these snapshots? Could I get an example script or command, say for a Linux flavor? What software must I have installed to get this to run? Thanks.
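    The snapshot call is just an API request, so it can run from any machine with credentials: a micro admin instance is a fine home for it, and EBS snapshots land in S3 automatically. A sketch using the modern aws CLI (the tag, schedule, and paths are assumptions; the 2012-era equivalent was ec2-create-snapshot from the ec2-api-tools):

        # crontab entry on the admin box: daily at 03:00
        0 3 * * * /usr/local/bin/snapshot-daily.sh >> /var/log/ebs-snap.log 2>&1

        #!/bin/sh
        # snapshot-daily.sh - snapshot every EBS volume tagged Backup=daily
        for vol in $(aws ec2 describe-volumes \
                --filters Name=tag:Backup,Values=daily \
                --query 'Volumes[].VolumeId' --output text); do
            aws ec2 create-snapshot --volume-id "$vol" \
                --description "daily backup $(date +%F)"
        done

    Keeping only the last seven snapshots needs a second loop over describe-snapshots, deleting everything older than the newest seven per volume.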


  • OSX: Selecting default application for all unknown and different file types (extensions)

    - by Leo
    I work in cluster computing and am using Mac OS X 10.6. I send off hundreds of computing jobs a day, and each one comes back with a different extension. For example, svmGeneSelect.o12345 is the standard output of my svmGeneSelect job, which is job number 12345. I don't control the extensions. All files are plain text. I want OS X to open any file extension it hasn't seen before with my favorite text editor when I click on it. Or, even better, to set up default file associations for extension patterns, e.g. TextEdit for extensions matching *.o*. I do NOT want to create file associations for individual files, since each extension will only ever exist once, and I do not want to go through the process of selecting the application to use for each file. Thanks for any help you can offer.
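    Launch Services resolves the "open with" choice from the file's type, so one route is to bind an editor to the generic data type that unknown-extension plain files fall back to. A sketch using the third-party duti tool (that the fallback UTI is public.data is an assumption worth checking with mdls on one of the files):

        # after installing duti (e.g. via MacPorts or Homebrew):
        duti -s com.apple.TextEdit public.data all   # TextEdit for any plain-data file nothing else claims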


  • Is there software that can fill in forms on the Internet automatically, and if so, what kind of server would work best? [closed]

    - by Stevew51
    Possible Duplicate: Firefox Form Fill Add On

    I have been looking to get into the fill-in-forms-for-cash type of job/business and have been searching for software that can do the job automatically, I guess through some sort of copy and paste. I have been told that there is software for everything. I need software that can fill in forms on the Internet automatically, taking information from a business site and placing it in a form on that same site. If such software exists, what kind of server does it work best with? I am not asking what brand I should buy, but what kind of server it is.


  • kill process but fail

    - by Tim
    Hi, I am running a bash script as a background job. The bash script calls a time-consuming executable. If I am not wrong, the running bash script is the parent process and the running executable is the child process. I now want to stop the whole run by killing the parent process, which is the background job:

        kill -9 $(jobs -p)

    The terminal shows that the bash script has been killed, but the executable still shows up in the output of top. I just wonder how to also kill the child process? Thanks and regards!
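    kill with a plain PID signals only the script, not the executable it spawned. An interactive shell places each background job in its own process group whose id equals the leader's PID, so signalling the negated number reaches every process in the job. A sketch, assuming a single background job:

        # kill the background job's entire process group (note the -- and the leading minus)
        kill -TERM -- -"$(jobs -p)"

        # alternative: signal just the script's direct children by parent PID
        pkill -TERM -P "$(jobs -p)"

    TERM gives the executable a chance to clean up; escalate to KILL only if it ignores TERM.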


  • What is the standard system architecture for MongoDB

    - by learner
    I know this question is too vague, so I would like to add some key numbers to give insight into the scenario:

        Each document size        - 360 KB
        Total documents           - 1.5 million
        Documents created/day     - 2k
        Read intensive            - yes
        Availability requirement  - high

    With these requirements in mind, here is what I believe the architecture should be, but I'm not too sure; please share your experiences and point me in the right direction:

        2 Linux boxes (Ubuntu 11 would do), on different racks for availability
        64-bit MongoDB
        1 master (for read/write) and 1 slave (read-only, with replication on)
        Sharding not needed at this point in time

    Thank you in advance.
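    One note on the proposed layout: for high availability, MongoDB's replica sets (automatic failover, majority elections) are generally preferred over the older master/slave replication. The minimum sensible deployment across two racks is two data-bearing members plus a cheap arbiter so elections can still reach a majority. A sketch of the shell setup, with host names illustrative:

        // run once from a mongo shell connected to one of the mongod instances
        rs.initiate({
          _id: "rs0",
          members: [
            { _id: 0, host: "db1.example.com:27017" },
            { _id: 1, host: "db2.example.com:27017" },
            { _id: 2, host: "arb.example.com:27017", arbiterOnly: true }
          ]
        })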


  • How can I get Transaction Logs to auto-truncate after Backup

    - by Yaakov Ellis
    Setup: SQL Server 2008 R2, databases set up with the Full recovery model.

    I have set up a maintenance plan that backs up the transaction logs for a number of databases on the server. It is set to create backup files in sub-directories for each database, backup-integrity verification is turned on, and backup compression is used. The job is set to run once every 2 hours during business hours (8am-6pm). I have tested the job and it runs fine, creating the log backup files as it should. However, from what I have read, once the transaction log is backed up, it should be OK to truncate the transaction log. I do not see any option for doing this in the SQL Server maintenance plan designer. How can I set this up?
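    In the Full recovery model there is nothing extra to configure: BACKUP LOG itself marks the backed-up portion of the log as reusable, and that is the truncation, which is why the designer offers no checkbox for it. What truncation does not do is shrink the physical .ldf file; if the file has ballooned from some one-off event, a manual shrink after a log backup is the usual remedy (names illustrative):

        BACKUP LOG [MyDb] TO DISK = N'D:\Backups\MyDb\MyDb_log.trn';
        DBCC SHRINKFILE (MyDb_log, 512);  -- logical log file name, target size in MB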


  • sharepoint administrative server not started causing sharepoint deployment instability

    - by Nathan
    When I deploy anything to a variety of SharePoint servers, I get problems with some packages failing. From what I can see, the only error given is the one below:

        The timer job for this operation has been created, but it will fail because the administrative service for this server is not enabled. If the timer job is scheduled to run at a later time, you can run the jobs all at once using stsadm.exe -o execadmsvcjobs. To avoid this problem in the future, enable the Windows SharePoint Services administrative service, or run your operation through the STSADM.exe command line utility.

    Is this a known problem? I googled for the text but found only one other person with the problem, who simply worked around it. I'm trying to batch-deploy solutions programmatically, and these errors render the whole approach worthless if you have to go back and redo bits by hand afterwards. Is batch deployment simply not possible? Thanks!
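    The message is literal: deployments are executed by the Windows SharePoint Services Administration service on each server, so the options are to flush the queued jobs by hand or to enable that service. A sketch (SPAdmin as the short service name follows the WSS 3.0 convention; verify on your build):

        stsadm -o execadmsvcjobs
        rem runs the queued administrative timer jobs immediately

        sc config SPAdmin start= auto
        sc start SPAdmin
        rem enables the admin service so future deployments complete unattended

    A batch-deployment script can also simply call execadmsvcjobs after each deploysolution as a belt-and-braces step.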


  • Dual Screens with Widescreen monitors?

    - by nmuntz
    I want to build a new computer and purchase new monitor(s). At my old job I had two 20" 4:3 monitors and I absolutely loved that setup. However, the stores in my country only seem to have widescreen monitors nowadays, and the only 4:3 LCDs I have been able to find are 17". My question is: do widescreens suck for use as dual monitors? Can anyone with this setup comment on their experience with multiple widescreen monitors? Would it be better to get three 17" 4:3 LCDs instead of two widescreens? If I go with widescreens, should I go with the smallest ones I can find? Purchasing a single big widescreen monitor is not an option for me, since being able to maximize an app on a specific area of the screen is a must-have, and I'm not willing to use hacky apps that do a crappy job of this. Thanks in advance for your advice.


  • drbd block device as storage for kvm virtual machine

    - by facha
    Hi, everyone. I've set up DRBD replication between two machines and used a DRBD block device as the storage for a KVM virtual machine. Everything is running well. However, I'm unsure whether this setup is OK to use: from what I've read on the internet so far, people tend to use a drbd-to-filesystem-to-qcow2-file stack as the storage for their virtual machines.
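    Pointing the guest straight at the DRBD device is a legitimate and common setup; it simply skips the filesystem and image-format layers and their overhead. The main care point is disabling host page caching so acknowledged writes have really reached DRBD. A sketch of the libvirt disk stanza (device path illustrative):

        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none'/>
          <source dev='/dev/drbd0'/>
          <target dev='vda' bus='virtio'/>
        </disk>

    The qcow2-on-filesystem pattern you have read about trades that directness for features like snapshots and thin provisioning; neither approach is wrong.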

