Search Results

Search found 5527 results on 222 pages for 'unique constraint'.

  • Cron Permission Denied

    - by worldthreat
    Good day. I have a bash script in my home directory that works properly from the command line (the file structure is a default Media Temple DV, noted for certain permission issues), but I receive this error from cron:

        /home/myFile.sh: line 2: /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql: Permission denied

    NOTICE: it's just line 2; the write to the local server works just fine. Below is the bash file:

        #!/bin/bash
        mysqldump -uUSER -pPASSWORD -hHOST dbName > /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql
        mysql -uadmin -pPASSWORD -hlocalhost dbName < /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql

    I can't chmod from the script (yes, I tried). Writing the file elsewhere and setting the permissions before the transfer is useless. I have googled the heck out of this situation and it still seems unique. Any insight is appreciated.

  • Update mysql database with arpwatch textfile database

    - by bVector
    I'm looking to keep arpwatch entries in a MySQL database to cross-reference with other information I'm storing based on MAC addresses. I've manually imported the arpwatch database into my MySQL database, but being a novice with databases I'm not sure of the best way to continually update the database with new entries without creating duplicates. None of the fields can be unique on its own, as even the time is duplicated frequently. I'm not interested in the actual arpwatch events like flip flop or new station, just the mac/ip/time pairings. Would a simple bash (or SQL) shell script do the trick? Would it be possible to make the MAC address plus the time a composite key of some sort? The database is called utility, the table is arpwatch, and the columns are mac, ip, time. A separate table named 'hosts', with columns mac, ip, type, hostname, location, notes, has mac as the primary key; this table will correlate the different IP addresses that a MAC had over time using the arpwatch table. The initial import was done with MySQL Workbench, using INSERT INTO commands with creative search-and-replace on the text file.
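
    A minimal sketch of the composite-key idea the question raises, assuming MySQL and the utility.arpwatch table described above (the constraint name is illustrative): a UNIQUE constraint over (mac, ip, time) lets INSERT IGNORE re-import the arpwatch file without creating duplicates.

        ALTER TABLE arpwatch
            ADD CONSTRAINT uq_mac_ip_time UNIQUE (mac, ip, time);

        -- re-imports are now safe: rows whose (mac, ip, time) already
        -- exist are silently skipped
        INSERT IGNORE INTO arpwatch (mac, ip, time)
        VALUES ('00:11:22:33:44:55', '10.0.0.5', '2010-01-27 16:02:17');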

  • MacBook Pro and Photoshop working canvas lost off screen somewhere

    - by chris
    I seem to have a unique issue. I have a MacBook Pro which I often have connected to one or two additional monitors, and I think that is in part my issue. When I'm at home working with only the built-in monitor and I open something new or existing in Photoshop, my canvas, if you will call it that, is nowhere to be found. I checked my display settings, and it's treating it as a one-monitor setup. In Photoshop I've tried to bring all my windows to front, I've tried cascading and tiling my windows, and I've tried resetting my work area, all to no avail: I can't for the life of me get the canvas to come to my screen so I can work on whatever I intend to work on. Has anyone else had this problem or run into something similar? How did you fix it?

  • Getting MSExchange transport Error on Server 2003 SP2

    - by Scott
    I am getting the following error message and do not know how to fix it:

        Event Type: Error
        Event Source: MSExchangeTransport
        Event Category: (8)
        Event ID: 3017
        Date: 4/29/2010
        Time: 1:21:12 PM
        User: N/A
        Computer: NETSRV
        Description: A non-delivery report with a status code of 5.3.5 was generated for
        recipient rfc822;[email protected] (Message-ID <19104335.51321272561635734.JavaMail.SYSTEM@PARROT).
        Causes: A looping condition was detected. (The server is configured to route mail
        back to itself). If you have multiple SMTP Virtual Servers configured on your
        Exchange server, make sure they are defined by a unique incoming port and that
        the outgoing SMTP port configuration is valid to avoid looping between local
        virtual servers.

    Thanks for any help you can provide.

  • Software to copy non-duplicate files from CD/DVD

    - by John22
    I have several CDs/DVDs with partially overlapping content (the overlapping files are identical but have different names), and some of the files are already on my hard disk. I need to get the remaining unique files copied to my hard disk. I found a really good duplicate file finder, Duplicate Cleaner, which lets you select multiple folders, finds duplicates by checksum (or file name, size, date), is very fast, and is free. It won't help me do what I want, though, unless I copy everything and then delete the duplicates - and I would have to do that in multiple cycles, as I don't have room to copy all the CDs/DVDs to my hard disk. I found a couple of file sync programs, but they don't have a compare-by-content function; the file names must match. (I tried other duplicate file finders on CNET, but they aren't as good as Duplicate Cleaner and also don't have the functionality I need.) Thanks for any help.
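
    In case a script is acceptable where no packaged software fits, here is a rough shell sketch of the copy-only-new-files idea, assuming md5sum is available (a Linux shell, or GnuWin32/Cygwin ports on Windows); all paths and the manifest name are illustrative, and name collisions in the destination are not handled:

        # build a manifest of checksums already on the hard disk
        find /data -type f -exec md5sum {} \; | awk '{print $1}' | sort -u > /tmp/have.md5

        # copy only files from the mounted disc whose checksum is unseen
        find /mnt/cdrom -type f | while read -r f; do
            sum=$(md5sum "$f" | awk '{print $1}')
            if ! grep -q "$sum" /tmp/have.md5; then
                cp "$f" /data/ && echo "$sum" >> /tmp/have.md5
            fi
        done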

  • Multiple WAN interfaces in same subnet on Sonicwall NSA220?

    - by Ttamsen
    Hi, all. I see a bunch of related questions, so I'm hesitant to ask, but: I have a situation where a Sonicwall NSA220 serves as the firewall/router for two internal subnets and two external WAN connections. In some locations these are two separate ISPs; in others, it's the same ISP but with multiple circuits. The problem is that one ISP has been unable to provide unique subnets for each WAN interface. Is there any possibility that I could bond the two WAN interfaces into a single virtual interface, and then use source routing to get the internal subnets communicating out the appropriate physical interface? Or even just use traffic shaping to give each internal network an appropriate share of the bandwidth? I haven't found anything in the docs, but it seemed worth asking. Thanks for any help! -Steve.

  • postgresql deleting old rows

    - by BB
    I have a postgresql database which stores my radius connection information. What I want to do is keep only a month's worth of logs. How would I craft a SQL statement that I can run from cron to delete any rows older than a month? The date is taken from the acctstoptime column, in the format 2010-01-27 16:02:17-05. The table in question:

        -- Table: radacct
        -- DROP TABLE radacct;
        CREATE TABLE radacct
        (
          radacctid bigserial NOT NULL,
          acctsessionid character varying(32) NOT NULL,
          acctuniqueid character varying(32) NOT NULL,
          username character varying(253),
          groupname character varying(253),
          realm character varying(64),
          nasipaddress inet NOT NULL,
          nasportid character varying(15),
          nasporttype character varying(32),
          acctstarttime timestamp with time zone,
          acctstoptime timestamp with time zone,
          acctsessiontime bigint,
          acctauthentic character varying(32),
          connectinfo_start character varying(50),
          connectinfo_stop character varying(50),
          acctinputoctets bigint,
          acctoutputoctets bigint,
          calledstationid character varying(50),
          callingstationid character varying(50),
          acctterminatecause character varying(32),
          servicetype character varying(32),
          xascendsessionsvrkey character varying(10),
          framedprotocol character varying(32),
          framedipaddress inet,
          acctstartdelay integer,
          acctstopdelay integer,
          freesidestatus character varying(32),
          CONSTRAINT radacct_pkey PRIMARY KEY (radacctid)
        )
        WITH (OIDS=FALSE);
        ALTER TABLE radacct OWNER TO radius;

        -- Index: freesidestatus
        -- DROP INDEX freesidestatus;
        CREATE INDEX freesidestatus ON radacct USING btree (freesidestatus);

        -- Index: radacct_active_user_idx
        -- DROP INDEX radacct_active_user_idx;
        CREATE INDEX radacct_active_user_idx ON radacct
          USING btree (username, nasipaddress, acctsessionid)
          WHERE acctstoptime IS NULL;

        -- Index: radacct_start_user_idx
        -- DROP INDEX radacct_start_user_idx;
        CREATE INDEX radacct_start_user_idx ON radacct
          USING btree (acctstarttime, username);
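
    A minimal sketch of the statement being asked for, assuming the radacct schema above and PostgreSQL's interval arithmetic:

        DELETE FROM radacct
        WHERE acctstoptime < now() - interval '1 month';

    Rows with a NULL acctstoptime (sessions still open) are untouched, since a NULL comparison is never true.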

  • SQL-like GROUP BY and SUM for text files on the command line?

    - by dnkb
    I have huge text files with two fields: the first is a string, the second is an integer. The files are sorted by the first field. What I'd like in the output is one line per unique string, with the sum of the numbers for identical strings. Some strings appear only once, while others appear multiple times. E.g., given the sample data below, for the string glehnia I'd like to get 10+22=32 in the result. Any suggestions on how to do this with either the GnuWin32 command-line tools or a Linux shell? Thanks!

        glehnia 10
        glehnia 22
        glehniae 343
        glehnii 923
        glei 1171
        glei 2283
        glei 3466
        gleib 914
        gleiber 652
        gleiberg 495
        gleiberg 709
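
    A sketch of the standard awk idiom for this; awk ships with GnuWin32 as well as every Linux shell. The first form works on any input; the second exploits the sorted order to stream with constant memory, which matters for huge files:

        # accumulate sums in an associative array (output order not guaranteed)
        awk '{ sum[$1] += $2 } END { for (k in sum) print k, sum[k] }' input.txt

        # streaming variant for sorted input: print each group as it ends
        awk '$1 != prev { if (NR > 1) print prev, s; prev = $1; s = 0 }
             { s += $2 }
             END { print prev, s }' input.txt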

  • HTML client-side portable file generation - no external resources or server calls

    - by awashburn
    I have the following situation: I have set up a series of cron jobs on an internal company server to run various PHP scripts designed to check data integrity. Each PHP script queries a company database, formats the returned query data into an HTML file containing one or more <table> elements, and then mails the HTML file to several client emails as an attachment. In my experience, most of the PHP scripts generate HTML files with only a few tables, though a few produce HTML files with around 30 tables. HTML was chosen as the distribution format of these scans because it makes it easy to view many tables at once in a browser window. I would like to add the ability for clients to download a table in the HTML file as a CSV file. I anticipate clients using this feature when they suspect a data integrity issue based on the table data. It would be ideal for them to be able to take the table in question, export the data out to a CSV file, and then study it further. Because the need to export the data to CSV format is at the client's discretion, unpredictable as to which table will be under scrutiny, and intermittent, I do not want to create CSV files for every table up front. Normally creating a CSV file wouldn't be too difficult: use JavaScript/jQuery to perform DOM traversal and build the CSV data into a string, then use a server call or a Flash library to facilitate the download. But I have one limiting constraint: the HTML file needs to be "portable." I would like clients to be able to take their HTML file and perform analysis of the data outside the company intranet. It is also likely these HTML files will be archived, so making the export functionality self-contained in the HTML files is highly desirable for both of these reasons. The "portable" constraint on generating a CSV file from the HTML file means: I cannot make a server call, so ALL the file generation must be done client-side; and I want the single HTML file attached to the email to contain all the resources needed to generate the CSV file, so I cannot use jQuery or Flash libraries to generate it. I understand, for obvious security reasons, that writing files out to disk with JavaScript isn't supported by any browser. I don't want to create a file without the user's knowledge; I would like to generate the file in memory with JavaScript and then prompt the user to "download" it. I have looked into generating the CSV file as a data URI; however, according to my research and testing, this approach has a few problems: URIs for files are not supported by IE (see here), and Firefox saves the file with a random file name and as a .part file. As much as it pains me, I can accept that IE <= 9 won't create a URI for files. I would like a semi-cross-browser solution in which Chrome, Firefox, and Safari create a URI to download the CSV file after the JavaScript DOM traversal compiles the data.
    My example table:

        <table>
          <thead class="resulttitle">
            <tr>
              <th style="text-align:center;" colspan="3">NameOfTheTable</th>
            </tr>
          </thead>
          <tbody>
            <tr class="resultheader">
              <td>VEN_PK</td>
              <td>VEN_CompanyName</td>
              <td>VEN_Order</td>
            </tr>
            <tr>
              <td class='resultfield'>1</td>
              <td class='resultfield'>Brander Ranch</td>
              <td class='resultfield'>Beef</td>
            </tr>
            <tr>
              <td class='resultfield'>2</td>
              <td class='resultfield'>Super Tree Produce</td>
              <td class='resultfield'>Apples</td>
            </tr>
            <tr>
              <td class='resultfield'>3</td>
              <td class='resultfield'>John's Distilery</td>
              <td class='resultfield'>Beer</td>
            </tr>
          </tbody>
          <tfoot>
            <tr>
              <td colspan="3" style="text-align:right;">
                <button onclick="doSomething(this);">Export to CSV File</button></td>
            </tr>
          </tfoot>
        </table>

    My example JavaScript:

        <script type="text/javascript">
        function doSomething(inButton) {
            /* locate elements */
            var table = inButton.parentNode.parentNode.parentNode.parentNode;
            var name = table.rows[0].cells[0].textContent;
            var tbody = table.tBodies[0];

            /* create CSV string through DOM traversal */
            var rows = tbody.rows;
            var csvStr = "";
            for (var i=0; i < rows.length; i++) {
                for (var j=0; j < rows[i].cells.length; j++) {
                    csvStr += rows[i].cells[j].textContent + ",";
                }
                csvStr += "\n";
            }

            /* temporary proof DOM traversal was successful */
            alert("Table Name:\t" + name + "\nCSV String:\n" + csvStr);

            /* Create URI here! (code I am missing) */

            /* Approach 1 : auto-download
             * downloads CSV data but:
             * in Firefox downloads as randomCharacters.part instead of name.csv
             * in Chrome downloads without prompting the user
             * in Safari opens the file in the browser (text file)
             */
            //var hrefData = "data:text/csv;charset=US-ASCII," + encodeURIComponent(csvStr);
            //document.location.href = hrefData;

            /* Approach 2 : right-click Save As... */
            var hrefData = "data:text/csv;charset=US-ASCII," + encodeURIComponent(csvStr);
            var fileLink = document.createElement("a");
            fileLink.href = hrefData;
            fileLink.innerHTML = "download";
            parentTD = inButton.parentNode;
            parentTD.appendChild(fileLink);
            parentTD.removeChild(inButton);
        }
        </script>

    I am looking for an example solution in which the above example table can be downloaded as a CSV file, where:

    - the download uses a URI;
    - the user is prompted to save the file;
    - the default filename is the name of the table;
    - the code works as described in modern versions of Firefox, Safari, and Chrome.

    I have added a <script> tag with the DOM traversal function doSomething(). The real help I need is with formatting the URI to what I want within the doSomething() function.
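
    One possible way to fill in that missing piece, sketched under the assumption that the target browsers support the HTML5 download attribute on anchors (Firefox and Chrome do; Safari added it later): setting download names the file saved from a data URI, which covers the prompted-save and default-filename requirements.

        /* sketch only: replaces the "Approach 2" block above */
        var hrefData = "data:text/csv;charset=US-ASCII," + encodeURIComponent(csvStr);
        var fileLink = document.createElement("a");
        fileLink.href = hrefData;
        /* browsers honoring the download attribute save as "NameOfTheTable.csv" */
        fileLink.download = name + ".csv";
        fileLink.innerHTML = "download";
        inButton.parentNode.appendChild(fileLink);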

  • What are the consequences of giving an AD domain differing NetBIOS and DNS names?

    - by Newt
    In the past, when creating AD domains, I've used the common convention of using a sub-domain of the company's publicly registered domain name, e.g "corp.mycompany.com" or "int.mycompany.com". I've always accepted the default NetBIOS name when running DCPromo, for fear that creating a NetBIOS name that differs from the sub-domain may cause complications. I've recently been doing a bit of research on the consequences of providing an alternate NetBIOS name. The main reasons behind this are:

    - The NetBIOS name isn't particularly descriptive or unique to the company
    - Apparently generic NetBIOS names such as "CORP" or "INT" can cause issues when merging IT systems (although I've not had experience with this myself)
    - Providing something "before the slash" that means more to users (less important)

    In looking at the possible downsides, the only one I can come up with is the disjointed namespace issue when configuring Exchange. Can anybody with more experience than I elaborate on my findings at all? Many thanks

  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
    I've got a large transaction that comprises getting lots of data from database A, doing some manipulation of that data, then inserting the manipulated data into database B. I only have permission to select in database A, but I can create tables and insert/update etc. in database B. The manipulation and insertion part is written in Perl and is already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes. How can I go about doing this so I can easily track back and pick up from where the error happened, if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.)? Doing the transaction in one go doesn't seem like a good option, because the amount of data from database A means it would take at least a day or two for the manipulation and insertion into database B. The data from database A can be grouped into around 1000 groups using unique keys, with each key containing thousands of rows. One way I thought I could do it is to write a script that commits per group, which means I've got to track which groups have already been inserted into database B. The only ways I can think of to track that progress are either a log file or a table in database B. A second way that might work is to dump all the fields needed for loading the classes into a flat file, then read the file to initialize the classes and insert into database B. This also means some logging, but it should narrow a failure down to the exact row in the flat file. The script will look something like this:

        use strict;
        use warnings;
        use DBI;

        # connect to database A
        my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password,
                               { RaiseError => 1, AutoCommit => 0 });

        # statement to get data based on the group unique key
        my $sth = $dbh->prepare($my_sql);

        my @groups; # I have a list of these already

        open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

        eval {
            foreach my $g (@groups) {
                # subroutine to check if a group has already been processed,
                # either from the log file or from a database table
                next if is_processed($g);

                $sth->execute($g);
                my $data = $sth->fetchall_arrayref;

                # manipulate $data, then use it to load perl classes for
                # insertion into database B
                # .
                # .
                # .

                # record the group as done only after its insert succeeds
                print $fh "$g\n";
            }
        };
        if ($@) {
            $dbh->rollback;
            die "something wrong...rollback";
        }

    So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue. Both methods are variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping what has been committed to database B, and processing the remaining data. I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into the other? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.
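
    A minimal sketch of the stubbed-out progress tracking, assuming the table-in-database-B variant; the table name, the handle $dbh_b, and the helper names are illustrative, not part of the poster's code:

        # once, in database B:
        #   CREATE TABLE processed_groups (group_key VARCHAR(64) PRIMARY KEY);

        sub is_processed {
            my ($g) = @_;
            my ($n) = $dbh_b->selectrow_array(
                'SELECT COUNT(*) FROM processed_groups WHERE group_key = ?',
                undef, $g);
            return $n > 0;
        }

        # call after a group's insert into database B succeeds
        sub mark_processed {
            my ($g) = @_;
            $dbh_b->do('INSERT INTO processed_groups (group_key) VALUES (?)',
                       undef, $g);
            $dbh_b->commit;
        }

    Committing the marker row in the same transaction as the group's data makes the "done" flag and the data atomic, which is the main advantage of this variant over the log file.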

  • POST data not being received

    - by Alexander
    I've got an iPhone app that is supposed to send POST data to my server to register the device in a MySQL database so we can send notifications etc. to it. It sends its unique identifier, device name, token, and a few other small things like passwords and usernames as a POST request to our server. The problem is that sometimes the server doesn't receive the data. By this I mean it's not just receiving blank values for the POST inputs; it's not receiving ANY POST data at all. I am logging all POST inputs to my server in log files, and when the script that relies on the POST data from the device fails (detects no data), I notice that it's because NO POST data was sent. Is this a problem on the server, like refusing data or something, or does this have to be on the client's side? What could be causing this?

  • Are there any security concerns when using Windows' default workgroup?

    - by koiyu
    Are there any security concerns one should be aware of when using Windows' default workgroup name? (Or is worrying about it just tinfoiling?) Should it be commonplace to rename the workgroup to something personal/unique after a Windows installation? Are there any other benefits to renaming the workgroup from the default, besides making it look more descriptive? I.e., is renaming worth the hassle, given that it makes the workgroup generally less accessible? It is used on a local area network, after all.

  • How to retain background colors when pasting between documents in Excel

    - by Iain Fraser
    I have a script that programmatically generates Excel spreadsheets, cleaning up ugly reports that are given to us by another organisation. For interest's sake: I'm using PHPExcel to generate the "clean" reports. We get these reports every week for an event that happens every couple of months. The reports contain a list of attendees along with a group ID that allows us to know that some attendees belong together. To help the event organisers out, I've taken the event ID and generated a unique colour code (based on the hash of the event ID, truncated to 6 characters). This unique colour code is set as the background colour of a cell in each row, which helps organisers quickly identify group members visually. Trouble is, when the organisers copy rows from the week's report into the master report (which contains all attendees, not just the ones who signed up this week), all the colour codes snap to the master template's colour palette. Thank you very much for your time. All the best, Iain
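
    For interest's sake, the colour-code generation described above might look roughly like this in PHP; the function name is illustrative, and the fill calls assume PHPExcel's standard style API:

        <?php
        // derive a stable 6-hex-digit colour from an event ID
        function eventColour($eventId) {
            return strtoupper(substr(md5($eventId), 0, 6)); // e.g. "3FA2C8"
        }

        // apply it as a solid cell background via PHPExcel
        $fill = $sheet->getStyle('A2')->getFill();
        $fill->setFillType(PHPExcel_Style_Fill::FILL_SOLID);
        $fill->getStartColor()->setARGB('FF' . eventColour($eventId));
        ?>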

  • Writing more efficient xquery code (avoiding redundant iteration)

    - by Coquelicot
    Here's a simplified version of a problem I'm working on: I have a bunch of XML data that encodes information about people. Each person is uniquely identified by an 'id' attribute, but they may go by many names. For example, in one document I might find

        <person id=1>Paul Mcartney</person>
        <person id=2>Ringo Starr</person>

    and in another I might find:

        <person id=1>Sir Paul McCartney</person>
        <person id=2>Richard Starkey</person>

    I want to use XQuery to produce a new document that lists every name associated with a given id, i.e.:

        <person id=1>
          <name>Paul McCartney</name>
          <name>Sir Paul McCartney</name>
          <name>James Paul McCartney</name>
        </person>
        <person id=2>
          ...
        </person>

    The way I'm doing this now in XQuery is something like this (pseudocode-esque):

        let $ids := distinct-values( [all the id attributes on people] )
        for $id in $ids
        return
          <person id={$id}> {
            for $unique-name in distinct-values (
              for $name in ( [all names] )
              where $name/@id = $id
              return $name
            )
            return <name>{$unique-name}</name>
          } </person>

    The problem is that this is really slow. I imagine the bottleneck is the innermost loop, which executes once for every id (of which there are about 1200). I'm dealing with a fair bit of data (300 MB, spread over about 800 XML files), so even a single execution of the query in the inner loop takes about 12 seconds, which means that repeating it 1200 times will take about 4 hours (which might be optimistic - the process has been running for 3 hours so far). Not only is it slow, it's using a whole lot of virtual memory. I'm using Saxon, and I had to set Java's maximum heap size to 10 GB (!) to avoid out-of-memory errors, and it's currently using 6 GB of physical memory. So here's how I'd really like to do this (in Pythonic pseudocode):

        persons = {}
        for id in ids:
            persons[id] = set()
        for person in all_the_people_in_my_xml_document:
            persons[person.id].add(person.name)

    There, I just did it in linear time, with only one sweep of the XML document. Now, is there some way to do something similar in XQuery? Surely if I can imagine it, a reasonable programming language should be able to do it (he said quixotically). The problem, I suppose, is that unlike Python, XQuery doesn't (as far as I know) have anything like an associative array. Is there some clever way around this? Failing that, is there something better than XQuery that I might use to accomplish my goal? Because really, the computational resources I'm throwing at this relatively simple problem are kind of ridiculous.
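
    For what it's worth, the single-sweep shape described above maps naturally onto XQuery 3.0's group by clause; a sketch, assuming a processor that supports it (recent Saxon releases do) and a collection() call standing in for the 800 files:

        for $p in collection("people")//person
        group by $id := string($p/@id)
        return
          <person id="{$id}">{
            for $n in distinct-values($p)
            return <name>{$n}</name>
          }</person>

    After the group by, $p is bound to the whole group of person elements sharing an id, so distinct-values($p) yields each unique name once and the data is swept a single time.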

  • Sound recognition software

    - by Cawas
    I'm looking for software able to recognize a specific sound and then perform some action. I want to leave my notebook close to the house intercom, so that when someone rings it - a very specific and unique sound - it will send me an email at my office or something. The main issue is that there are a lot of different noises around, but none would be as loud as the intercom in the specific place I've left the microphone. Is there any software out there able to do this? Hopefully with a Mac version. I suspect this is not closely related to speech or voice recognition technologies, or to the software used for those.

  • postgresql deleting old records from log tables

    - by Max
    I have a postgresql database which stores my radius connection information. What I want to do is keep only a month's worth of logs. How would I craft a SQL statement that I can run from cron to delete any rows older than a month? The date is taken from the acctstoptime column, in the format 2010-01-27 16:02:17-05. The table in question:

        -- Table: radacct
        CREATE TABLE radacct
        (
          radacctid bigserial NOT NULL,
          acctsessionid character varying(32) NOT NULL,
          acctuniqueid character varying(32) NOT NULL,
          username character varying(253),
          groupname character varying(253),
          realm character varying(64),
          nasipaddress inet NOT NULL,
          nasportid character varying(15),
          nasporttype character varying(32),
          acctstarttime timestamp with time zone,
          acctstoptime timestamp with time zone,
          acctsessiontime bigint,
          acctauthentic character varying(32),
          connectinfo_start character varying(50),
          connectinfo_stop character varying(50),
          acctinputoctets bigint,
          acctoutputoctets bigint,
          calledstationid character varying(50),
          callingstationid character varying(50),
          acctterminatecause character varying(32),
          servicetype character varying(32),
          xascendsessionsvrkey character varying(10),
          framedprotocol character varying(32),
          framedipaddress inet,
          acctstartdelay integer,
          acctstopdelay integer,
          freesidestatus character varying(32),
          CONSTRAINT radacct_pkey PRIMARY KEY (radacctid)
        )
        WITH (OIDS=FALSE);
        ALTER TABLE radacct OWNER TO radius;

        -- Index: freesidestatus
        CREATE INDEX freesidestatus ON radacct USING btree (freesidestatus);

        -- Index: radacct_active_user_idx
        CREATE INDEX radacct_active_user_idx ON radacct
          USING btree (username, nasipaddress, acctsessionid)
          WHERE acctstoptime IS NULL;

        -- Index: radacct_start_user_idx
        CREATE INDEX radacct_start_user_idx ON radacct
          USING btree (acctstarttime, username);
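
    Since the statement needs to run from cron, a sketch of a possible crontab entry, assuming the database is named radius and authentication is handled non-interactively (e.g. via a .pgpass file); the names and schedule are illustrative:

        # purge closed sessions older than a month, daily at 03:00
        0 3 * * * psql -U radius -d radius -c "DELETE FROM radacct WHERE acctstoptime < now() - interval '1 month';"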

  • matplotlib and python multithread file processing

    - by Napseis
    I have a large number of files to process. I have written a script that gets, sorts, and plots the data I want. So far, so good. I have tested it and it gives the desired result. Then I wanted to do this using multithreading. I have looked into the docs and examples on the internet, and using one thread in my program works fine. But when I use more, at some point I get random matplotlib errors, and I suspect some conflict there, even though I use a function with names for the plots and I can't see where the problem could be. Here is the whole script; should you need more comments, I'll add them. Thank you.

        #!/usr/bin/python
        import matplotlib
        matplotlib.use('GTKAgg')
        import numpy as np
        from scipy.interpolate import griddata
        import matplotlib.pyplot as plt
        import matplotlib.colors as mcl
        from matplotlib import rc # for latex
        import time as tm
        import sys
        import threading
        import Queue # queue in 3.2 and Queue in 2.7!
        import pdb # the debugger

        rc('text', usetex=True) # for latex

        map=0 # initialize the map index. It will be used to index the array like this: array[map,[x,y]]
        time=np.zeros(1) # an array to store the time
        middle_h=np.zeros((0,3)) # x phi c - for the middle of the box

        current_file=open("single_void_cyl_periodic_phi_c_middle_h_out",'r')
        for line in current_file:
            if line.startswith('# === time'):
                map+=1
                np.append(time,[float(line.strip('# === time '))])
            elif line.startswith('#'):
                pass
            else:
                v=np.fromstring(line,dtype=float,sep=' ')
                middle_h=np.vstack( (middle_h,v[[1,3,4]]) )
        current_file.close()
        middle_h=middle_h.reshape((map,-1,3)) # 3d array: map, x, phi, c

        #####
        def load_and_plot():
            # will load a map file, and plot it along with the corresponding profile loaded before
            while not exit_flag:
                print("fetching work ...")
                #try:
                if not tasks_queue.empty():
                    map_index=tasks_queue.get()
                    print("----> working on map: %s" %map_index)
                    x,y,zp=np.loadtxt("single_void_cyl_growth_periodic_post_map_"+str(map_index),
                                      unpack=True, usecols=[1, 2, 3])
                    for i,el in enumerate(zp):
                        if el<0.:
                            zp[i]=0.
                    xv=np.unique(x)
                    yv=np.unique(y)
                    X,Y= np.meshgrid(xv,yv)
                    Z = griddata((x, y), zp, (X, Y), method='nearest')
                    figure=plt.figure(num=map_index, figsize=(14, 8))
                    ax1=plt.subplot2grid((2,2),(0,0))
                    ax1.plot(middle_h[map_index,:,0], middle_h[map_index,:,1], '*b')
                    ax1.grid(True)
                    ax1.axis([-15, 15, 0, 1])
                    ax1.set_title('Profiles')
                    ax1.set_ylabel(r'$\phi$')
                    ax1.set_xlabel('x')
                    ax2=plt.subplot2grid((2,2),(1,0))
                    ax2.plot(middle_h[map_index,:,0], middle_h[map_index,:,2], '*r')
                    ax2.grid(True)
                    ax2.axis([-15, 15, 0, 1])
                    ax2.set_ylabel('c')
                    ax2.set_xlabel('x')
                    ax3=plt.subplot2grid((2,2),(0,1), rowspan=2, aspect='equal')
                    sub_contour=ax3.contourf(X,Y,Z, np.linspace(0,1,11), vmin=0.)
                    figure.colorbar(sub_contour, ax=ax3)
                    figure.savefig('single_void_cyl_'+str(map_index)+'.png')
                    plt.close(map_index)
                    tasks_queue.task_done()
                else:
                    print("nothing left to do, other threads finishing, sleeping 2 seconds...")
                    tm.sleep(2)
                # except:
                #     print("failed this time: %s" %map_index + ". Sleeping 2 seconds")
                #     tm.sleep(2)
        #####

        exit_flag=0
        nb_threads=2
        tasks_queue=Queue.Queue()
        threads_list=[]
        jobs=list(range(map)) # each job is composed of a map

        print("inserting jobs in the queue...")
        for job in jobs:
            tasks_queue.put(job)
        print("done")

        # launch the threads
        for i in range(nb_threads):
            working_bee=threading.Thread(target=load_and_plot)
            working_bee.daemon=True
            print("starting thread "+str(i)+' ...')
            threads_list.append(working_bee)
            working_bee.start()

        # wait for all tasks to be treated
        tasks_queue.join()

        # flip the flag, so the threads know it's time to stop
        exit_flag=1
        for t in threads_list:
            print("waiting for threads %s to stop..."%t)
            t.join()
        print("all threads stopped")

  • cloning a kvm guest os to a vmdk file

    - by Bond
    I have a production environment with 4 guest OSes running on a Ubuntu server that uses kvm. These guests are in an LVM-based setup. I want these virtual machines to be available in vmdk format as well: people will experiment with these virtual machines in a VMware environment (or it could be Xen, too) that is separate from the kvm server. I would not have any control over that other environment, so I want to give people vmdk images of these virtual machines. The production virtual machines will keep running on the kvm server, but the VMs used for experiments would be vmdk (vmdk is a constraint). Here is the output of lvscan:

        ACTIVE   '/dev/abcd/lvm1' [100.00 GiB] inherit
        ACTIVE   '/dev/abcd/lvm2' [150.00 GiB] inherit
        ACTIVE   '/dev/abcd/lvm3' [50.00 GiB] inherit
        ACTIVE   '/dev/abcd/lvm4' [100.00 GiB] inherit

    I was reading the qemu-img man page, and what I understand is that I need to first create a qcow image file, populate it, and then convert that to a vmdk file. Is that understanding correct? Now suppose /dev/abcd/lvm4 is the virtual machine I am going to start this experiment with. I can shut down the production VMs for some time to do this. So is the following the correct thing to run on server 1 (where kvm is running):

        qemu-img convert -c -f raw -O vmdk /dev/abcd/lvm4 /backup/lvm4.img

    or will it affect lvm4 on kvm server 1? I do not want the VM running on the original server to lose any of its content, but I also want a vmdk file for each of the guest OSes on kvm. Before I proceed with any of the above on the production machine, I just want to make sure that I am doing the correct thing, so I am asking here.

  • Apache mod_proxy_html Substitute: how to re-use part of regex match? (regex variables?)

    - by goober
    Hi all, I have a unique URL-rewriting situation in Apache. I need to be able to take a URL that starts with "\u002f[X]" or '\u002f[X]', where X is the rest of some URL, and substitute the text "\u002fmeis2\u002f[X]". I'm not sure how the regex works in Apache - I think it's the same as Perl 5? - but even then I'm a little unsure how this would be done. My hunch is that it has to do with regex grouping and then using $1 to pull the variable out, but I'm entirely unfamiliar with this process in Apache. Hoping someone can help - thanks!
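
    For reference, a sketch of what that grouping hunch might look like with mod_proxy_html's ProxyHTMLURLMap, whose R flag enables regex matching with $1-style backreferences in the replacement; the pattern below is illustrative and untested, and the backslash escaping in particular may need adjusting for the config parser:

        <Location /app>
            ProxyHTMLEnable On
            # capture everything after the literal characters \u002f and
            # re-emit it with \u002fmeis2\u002f prefixed ($1 is the capture)
            ProxyHTMLURLMap "\\u002f(.*)" "\\u002fmeis2\\u002f$1" R
        </Location>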

  • Simple Workstation Imaging Solution?

    - by Will
    Hey guys, I need a fairly cheap imaging solution for Windows XP corporate desktops. Ideally, I'd be able to set up a desktop exactly as we want it, create an image, deploy this image to a server, then boot a new desktop from CD/USB drive/network and quickly set up the workstation. Ideally, each computer would also get a unique workstation name. Any ideas? Right now I'm using a custom-built Linux dd solution, but it's slow, not network-based, can't image multiple computers at the same time (as there's only one copy on a USB drive), and can't uniquely name the computers. Thanks, Will

  • Import a puppet manifest from the node itself?

    - by bobinabottle
    I have a somewhat unique situation. Our systems team manages our main puppet master, and the development team is fine with everything; however, they are thinking of using it to control some elements on their desktop machines while still being connected to our central puppet master. Since we don't want the changes they make to go into our puppet master, is there a way to have puppet import a manifest from the node itself? That is, on the developer machine they put a file such as "/root/development.pp", and then on our puppet master we put something like:

        node { "developermachine":
            # Do the majority of normal things
            # import "/root/development.pp"
        }

    We have a few different options for securing write access to the puppet manifests, but if puppet supported something like this it would probably be the cleanest for us. Any help is appreciated :)

  • How to Configure IIS 7.5 to Host Several Classic ASP Sites using 1 IP Address?

    - by SidC
    We are running Windows Server 2008 R2 with IIS 7.5 and have 4-5 classic ASP sites to install. The main site is stored in wwwroot and the other sites are stored in folders below wwwroot. We have one IP address for the server. How do I configure IIS to allow folks to browse/test the sites before the domain names are pointed at the server? When I set up one of the sites in a subfolder of wwwroot and assign a separate port to it, I receive a message stating:

        Config Error: Cannot add duplicate collection entry of type 'add' with unique key attribute 'value' set to 'index.asp'

    How do I remedy this error and permit IIS to render the site?
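
    A sketch of the usual fix for that particular error, assuming it is caused by the child application's web.config re-adding a default document that is already inherited from the parent site: remove the inherited entry before adding it again.

        <configuration>
          <system.webServer>
            <defaultDocument>
              <files>
                <!-- drop the entry inherited from the parent, then re-add it -->
                <remove value="index.asp" />
                <add value="index.asp" />
              </files>
            </defaultDocument>
          </system.webServer>
        </configuration>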

  • How to deal with the extremely big *.ost files in a Terminal Server environment which is running out of space

    - by Wolfgang Kuehne
    Our Terminal Server is running out of hard disk space, and the files occupying most of the space are the *.ost files belonging to the users who use the Terminal Server through remote desktop all the time. Outlook is installed on the Terminal Server, and various users can use it. What would be a solution in this case? Is there a way to limit the size of the *.ost files? I read in forums that having Outlook 2010 set up in Cached Exchange Mode isn't best practice for an environment where hard disk space is a major constraint. The first thing that came to my mind was folder redirection: place the .ost files (together with the AppData folder) on a network share. But this does not help, because the .ost files are saved in a part of the AppData folder that cannot be redirected. Then I wondered whether it is possible to limit the size of the .ost file, or to limit how long it keeps email cached - say, just emails from the last 6 months. Another solution that came to mind was moving the .ost files somewhere else; this would require removing the old .ost file and creating a new one. I am not quite sure whether the new .ost file would still have the emails that were cached in the old one, or whether it would start caching from scratch. What do you suggest?

  • Duplicate pseudo terminals in linux

    - by bobtheowl2
    On a Red Hat box [ Red Hat Enterprise Linux AS release 4 (Nahant Update 3) ] we frequently notice two people being assigned the same pseudo terminal. For example:

        $ who am i
        user1 pts/4 Dec 29 08:38 (localhost:13.0)
        user2 pts/4 Dec 29 09:43 (199.xxx.xxx.xxx)
        $ who -m
        user1 pts/4 Dec 29 08:38 (localhost:13.0)
        user2 pts/4 Dec 29 09:43 (199.xxx.xxx.xxx)
        $ whoami
        user2

    This causes problems in a script because "who am i" returns two rows. I know there are differences between the two commands, and obviously we can change the script to fix the problem. But it still bothers me that two users are being returned with the same terminal. We suspect it may be related to dead sessions. Can anyone explain why two (non-unique) pts numbers are being assigned, and/or how that can be prevented in the future?
