Search Results

Search found 13283 results on 532 pages for 'master pdf editor'.

  • TicTac Photo and Windows 7

    - by Ben
    Hello, my wife has been creating a TicTac Photo album. I had had enough of Vista, so I upgraded to Windows 7: I backed up the TicTac Photo file and the photos to an external hard disk and performed a fresh install of Win7. Now here is the problem: TicTacPhoto says it can't find the photos in the album. The locations were as follows: Vista: C:\Users\Kelly\Pictures; Win 7: C:\Users\Kelly\My Pictures. When I try to create a Pictures folder under Kelly, Windows pops up a message about merging the two folders and simply moves the pictures to the My Pictures folder. Does anyone know a way to make a folder called Pictures, so I can eliminate the file path problem and then try again with TicTacPhoto support to get them to fix my file? My wife is going to kill me: it's our wedding album, she has spent upwards of 30 hrs designing it, and me upgrading to Win 7 means it's all my fault. She does not understand file paths etc. I'm going to try to open the album file in a text editor and see if I can see anything, but thought I would ask here as well. Any help appreciated.
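
    If the application only needs the old Vista-style path to resolve, one workaround worth trying is to bypass Explorer (which applies the "My Pictures" display name and the merge prompt) and work on the real on-disk names from a command prompt. A hedged sketch; whether the junction is needed depends on what actually exists on disk:

        rem see what the folder is really called on disk (Explorer may be
        rem showing a localized display name via desktop.ini)
        dir C:\Users\Kelly

        rem if there is no real "Pictures" directory, a junction makes the old
        rem Vista path resolve to the existing folder without moving any files
        mklink /J C:\Users\Kelly\Pictures "C:\Users\Kelly\My Pictures"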

  • How can the little guys effectively learn and use puppet?

    - by drumfire
    Six months ago, in our not-for-profit project, we decided to start migrating our system management to a Puppet-controlled environment because we expect our number of servers to grow substantially between now and a year from now. Since the decision was made, our IT guys have become a bit too annoyed a bit too often. Their biggest objections are: "We're not programmers, we're sysadmins"; modules are available online but many differ from one another, and wheels are being reinvented too often, so how do you decide which one fits the bill?; code in our repo is not transparent enough, so to find how something works they have to recurse through manifests and modules they might even have written themselves a while ago; one new daemon requires writing a new module, and conventions have to be similar to other modules, which makes for a difficult "let's just run it and see how it works" process; and there are tons of hardly known 'extensions' in community modules ('trocla', 'augeas', 'hiera'...), so how can our sysadmins keep track? I can see why a large organisation would dispatch their sysadmins to Puppet courses to become Puppet masters. But how would smaller players get to learn Puppet to a professional level if they do not go to courses and basically learn it via their browser and editor?

  • How do I effectively use WinSCP on my GoDaddy Dedicated Hosting

    - by Scott
    After being told that Virtual Private Servers would not fit the scope of my project, I have timidly entered the world of dedicated hosting. Unfortunately, this is forcing me to learn the basics of being a Linux server admin. GoDaddy has a master account for the server; when you use SSH, they want you to use "su" to switch to the root user. So far, I have been able to do everything I have needed via the command line as this root user. However, now I need to upload files to my server. I'm used to using WinSCP to upload files. I can use my general server account to view the files, but when I try to drag or create files it says that I cannot, because I do not have permission to do so. I have researched the WinSCP documentation and it seems that this "su" function is beyond the scope of the program. How am I to grant myself access to upload these files using SSH? Should I create a user with the proper permissions? I'm happy to do this, but so far I have not been able to make sense of what I have found online. I'm going to try to move forward, but any help and/or insight is appreciated.
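
    One common approach is to keep using the unprivileged account in WinSCP but give it write access to the upload target via a shared group. A minimal sketch, run once over SSH as root; the /var/www/html web root and the apache group are assumptions about a stock CentOS setup, and "scott" stands in for your login:

        usermod -aG apache scott            # add your login to the web server's group
        chown -R root:apache /var/www/html  # hand the web root to that group
        chmod -R g+w /var/www/html          # and let the group write to it
        # log out and back in so the new group membership takes effect,
        # then connect with WinSCP as that user and upload normally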

  • Switch Text Paragraphs in OpenOffice.org Writer mail merge

    - by Glen S. Dalton
    I am using mail merge to write the same letter with minor differences to many people. I found that switching text paragraphs depending on database values was not easy; I ended up putting huge text paragraphs into the database because switching did not really work for me. Actually I don't understand how Writer does it, and maybe the boolean evaluation is buggy? There is some facility for making paragraphs invisible depending on database fields, but it was frustrating: after marking a paragraph as invisible (depending on a condition) it went invisible in the main document and did not come back, and I lost the content. An example in pseudocode of what I want in my mail merge document: {if [[balance]] > 10} We owe you money. Please can you send your bank details. {end if} {if [[balance]] < -10} Please transfer the remaining amount to our bank account 123... {end if} Maybe this could be done with macros? But how would I combine macros with mail merge? Can you tell me what the pitfalls are and how to master them? I once did this with MS Word and it was a lot easier. The normal mail merge (including database fields in the letters) works fine for me in OpenOffice Writer.
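
    For reference, Writer's built-in mechanism for this is the Hidden Paragraph field (Insert > Fields > Other > Functions > Hidden Paragraph): the paragraph is suppressed when its condition is true. A hedged sketch of the two conditions, assuming the data source is registered as Payments with a table Accounts (both names hypothetical, not taken from the question):

        Paragraph "We owe you money. ..."   hidden when:  Payments.Accounts.balance <= 10
        Paragraph "Please transfer ..."     hidden when:  Payments.Accounts.balance >= -10

    To avoid "losing" paragraphs while editing, turn on display of hidden paragraphs (depending on version, a View menu toggle and/or Tools > Options > Writer > Formatting Aids): the text is still in the document, just suppressed at print/merge time.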

  • Interpreting Inkscape SVG path coordinates for HTML map

    - by tovare
    I needed some coordinates for an HTML MAP, and tried to use Inkscape by opening the image and just drawing a path with my polygon coordinates. My document properties are set to 256 x 256 pixels and units: px. When opening the SVG file I get coordinates which are not immediately apparent: <path style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" d="m 23.864407,126.91525 3.254237,44.47458 35.79661,44.47458 71.593216,19.52542 71.59322,-37.9661 22.77967,-72.67797 L 218.0339,64 192,49.898305 l -32.54237,8.677966 -18.44068,-35.79661 1.08474,-17.3559322 -71.593215,0 L 45.559322,34.711864 35.79661,57.491525 5.4237288,74.847458 6.5084746,101.9661 23.864407,126.91525 z" id="path2840" /> How can I get coordinates I can use? (The original image and the SVG file from Inkscape are linked in the original post.) Progress: I tried a tool called InkscapeMap which looks promising and simple, but unfortunately it didn't work with this particular SVG file. Solved! Saving the file as a Plain SVG solved the problem and InkscapeMap worked perfectly. (Btw. saving as an optimized SVG caused a parsing error.) Update 13.11: Using InkscapeMap 0.6 and Inkscape 0.48 I needed to uncheck relative coordinates in the SVG output preferences. Also, if you get a C error message, hunt down the polygon with a C in it and redraw that polygon using the XML editor in Inkscape. Update 25.11.2011: I modified the source to improve parsing. http://tovare.com/articles/createhtmlimagemapsusinginkscape/
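
    For anyone scripting the conversion, old Inkscape can also produce a Plain SVG from the command line; a sketch assuming Inkscape 0.48's flags:

        # strips the Inkscape-specific attributes that confuse downstream parsers
        inkscape --export-plain-svg=plain.svg drawing.svg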

  • bind9 "error sending response: host unreachable"

    - by wolfgangsz
    I have a number of DNS servers, all running bind9 (9.5.1, to be specific) under Fedora. Four of them are slaves, fed by a common master for our public DNS. These are all located on the public gateways of our various offices. One of them has tons of messages in its log files similar to these: Jul 21 17:26:18 gateway named[3487]: client 10.171.3.8#52500: view internal: error sending response: host unreachable I wonder where that comes from. The firewall is open on port 53 between the two machines (10.171.3.8 is an internal DNS server located on a Windows domain controller). The internal domains do NOT list the gateway as a name server (so there should not be any attempts at replicating the domains), and the gateway does not handle any internal DNS. The clients in these messages vary between the two domain controllers on the internal network and a third internal name server (running bind9 on Debian in a different segment of the network). Any pointers are highly welcome. In response to the first reply: the issue with this really is that tcpdump doesn't show any problems. Here is an extract from "tcpdump -i any port 53": 09:13:38.283308 IP valine.aminocom.com.61815 > ns-pri.ripe.net.domain: 14075 PTR? 166.225.58.95.in-addr.arpa. (44) 09:13:42.007410 IP gateway-eng.aminocom.com.37047 > alanine.aminocom.com.domain: 35410+ PTR? 12.3.172.10.in-addr.arpa. (42) At the same time, the DNS log shows: Jul 22 09:13:38 gateway named[3487]: client 10.171.3.6#61300: view internal: error sending response: host unreachable Jul 22 09:13:40 gateway named[3487]: client 10.172.3.12#56230: view internal: error sending response: host unreachable Jul 22 09:13:40 gateway named[3487]: client 10.171.3.8#55221: view internal: error sending response: host unreachable Jul 22 09:13:49 gateway named[3487]: client 10.171.3.8#51342: view internal: error sending response: host unreachable So clearly at 09:13:40 there were two unsuccessful attempts to connect to internal machines (10.172.3.12 and 10.171.3.8, both of which are DNS servers), but nothing in the tcpdump output.
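
    Since "host unreachable" is what named reports when the kernel refuses the send (EHOSTUNREACH), the evidence is often at the ICMP or routing/ARP layer rather than on port 53, which would explain a clean port-53 capture. A couple of hedged checks:

        # watch for ICMP host-unreachable messages instead of DNS traffic
        tcpdump -ni any icmp

        # at the moment of a failure, check the route and ARP entry for a client
        ip route get 10.171.3.8
        arp -n | grep 10.171.3.8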

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output: $ echo -n | openssl s_client -connect www.google.com:443 CONNECTED(00000003) depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA verify error:num=20:unable to get local issuer certificate verify return:0 --- Certificate chain 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA 1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority --- Server certificate -----BEGIN CERTIFICATE----- MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L 05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0 ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5 u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6 z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw== -----END CERTIFICATE----- subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA --- No client certificate CA names sent --- SSL handshake has read 1777 bytes and written 316 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C Session-ID-ctx: Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5 Key-Arg : None Start Time: 1266259321 Timeout : 300 (sec) Verify return code: 20 (unable to get local issuer certificate) --- it just shows that the cipher suite is something with AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing. Update: GregS points out below that the SSL server picks from the cipher suites of the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does particularly this?
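
    In the absence of a dedicated tool, the brute force described in the update can be scripted with plain openssl: offer each suite by itself and see whether the handshake succeeds. A rough sketch (slow, and limited to the suites the local openssl build knows about):

        for c in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
          if echo | openssl s_client -connect www.google.com:443 -cipher "$c" >/dev/null 2>&1; then
            echo "accepted  $c"
          else
            echo "rejected  $c"
          fi
        done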

  • How can I install Satchmo?

    - by Jonathan Hayward
    I am trying to install Satchmo 0.9 on an Ubuntu 9.10 32-bit guest, following the instructions at http://bitbucket.org/chris1610/satchmo/downloads/Satchmo.pdf. I run into difficulties at 2.1.2: pip install -r http://bitbucket.org/chris1610/satchmo/raw/tip/scripts/requirements.txt pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo The first command fails because of a compile error while it tries to build PIL. So I ran "aptitude install python-imaging", made a local copy of the first line's requirements.txt, and removed the line that was unsuccessfully trying to build PIL. The first command then completes without reported error, as does the second. The next step tells me to change directory to the /path/to/new/store and run: python clonesatchmo.py A little bit of trouble here: I am told that clonesatchmo.py will be in /bin by now, and it isn't there, but I put some Satchmo stuff under /usr/local, created a symlink in /bin, and ran: python /bin/clonesatchmo.py This gives: jonathan@ubuntu:~/store$ python /bin/clonesatchmo.py Creating the Satchmo Application Traceback (most recent call last): File "/bin/clonesatchmo.py", line 108, in <module> create_satchmo_site(opts.site_name) File "/bin/clonesatchmo.py", line 47, in create_satchmo_site import satchmo_skeleton ImportError: No module named satchmo_skeleton A find after apparently checking out the repository reveals that there is no file with a name like satchmo*skeleton* on my system. I thought that bash might be prone to take part of the second pip invocation's URL as the beginning of a comment; I tried both: pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9\#egg=satchmo pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo Neither way of doing it seems to take care of the import error mentioned above. How can I get a Satchmo installation under Ubuntu, or at least enough of a Satchmo installation that I am able to start with a skeleton of a store and then flesh it out the way I want? Thanks, Jonathan

  • Chroot jail of Nginx and php

    - by sqren
    I'm hosting multiple websites on one VPS and want to chroot each website, e.g. /chroot/website1, /chroot/website2. I'm using makejail, which is a high-level tool, for creating the jails and copying the libraries and dependencies. Easy peasy. Each website will need nginx, php and mysql. For php I'm using php5-fpm, which actually supports chroot by configuration; however, I'm not using this (maybe I should?). My question is which of the following three approaches is best: 1) Every website has its own separate instance of nginx, php and mysql. The downside is that each web server + php has to listen on a different port. I also need a "master" nginx web server in front of them, reverse proxying to the chrooted servers behind it. Probably most secure, but also most advanced. 2) I don't make any chroot jails manually. I set up one nginx web server that proxies php requests to php-fpm on different ports. I can have multiple php-fpm configurations, each with its own chrooted folder. This is quite manageable; however, only php will be chrooted, not the actual web server. Is this secure enough? Also, I tried this option out, and it seems I will need to use TCP instead of sockets for connecting to MySQL. 3) You tell me ;) I'm quite new to chroot jailing, so please correct me if I'm wrong in my assumptions. I've been reading all the tutorials I could find; however, I find the market for chroot guides very scarce. Any help or input much appreciated!
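
    For option 2, php-fpm's per-pool chroot is just a few directives. A minimal pool sketch; the pool name, user and paths are assumptions, not taken from the question:

        ; /etc/php5/fpm/pool.d/website1.conf
        [website1]
        user = website1
        group = website1
        listen = 127.0.0.1:9001    ; one port (or unix socket) per site
        chroot = /chroot/website1
        chdir = /
        ; note: the script paths nginx passes in must be relative to the chroot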

  • MS SQL Server slows down over time?

    - by Dave Holland
    Have any of you experienced the following, and have you found a solution: A large part of our website's back-end is MS SQL Server 2005. Every week or two weeks the site begins running slower - and I see queries taking longer and longer to complete in SQL. I have a query that I like to use: USE master select text,wait_time,blocking_session_id AS "Block", percent_complete, * from sys.dm_exec_requests CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2 order by start_time asc Which is fairly useful... it gives a snapshot of everything that's running right at that moment against your SQL server. What's nice is that even if your CPU is pegged at 100% for some reason and Activity Monitor is refusing to load (I'm sure some of you have been there) this query still returns and you can see what query is killing your DB. When I run this, or Activity Monitor during the times that SQL has begun to slow down I don't see any specific queries causing the issue - they are ALL running slower across the board. If I restart the MS SQL Service then everything is fine, it speeds right up - for a week or two until it happens again. Nothing that I can think of has changed, but this just started a few months ago... Ideas? --Added Please note that when this database slowdown happens it doesn't matter if we are getting 100K page views an hour (busier time of day) or 10K page views an hour (slow time) the queries all take a longer time to complete than normal. The server isn't really under stress - the CPU isn't high, the disk usage doesn't seem to be out of control... it feels like index fragmentation or something of the sort but that doesn't seem to be the case. As far as pasting results of the query I pasted above I really can't do that. The Query above lists the login of the user performing the task, the entire query, etc etc.. and I'd really not like to hand out the names of my databases, tables, columns and the logins online :)... I can tell you that the queries running at that time are normal, standard queries for our site that run all the time, nothing out of the norm.
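
    Since a service restart fixes it, a hedged experiment for the next slowdown is to clear the plan cache instead of restarting; if that alone restores performance, the culprit is more likely plan-cache bloat or parameter sniffing than fragmentation or hardware:

        -- run during a slowdown; if speed returns, suspect cached plans
        DBCC FREEPROCCACHE;

        -- also worth a look: what the server has mostly been waiting on
        SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;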

  • Can't successfully run Sharepoint Foundation 2010 first time configuration

    - by Robert Koritnik
    I'm trying to run the non-GUI version of the configuration wizard using PowerShell, because I would like to set the config and admin database names. The GUI wizard doesn't give you all possible options for configuration (though it turns out the command line doesn't either). I run this command: New-SPConfigurationDatabase -DatabaseName "Sharepoint2010Config" -DatabaseServer "developer.mydomain.pri" -AdministrationContentDatabaseName "Sharepoint2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force) Of course all of this goes on one line; I've broken it up here to make it easier to read. When I run this command I get this error: New-SPConfigurationDatabase : Cannot connect to database master at SQL server at developer.mydomain.pri. The database might not exist, or the current user does not have permission to connect to it. At line:1 char:28 + New-SPConfigurationDatabase <<<< -DatabaseName "Sharepoint2010Config" -DatabaseServer "developer.mydomain.pri" -AdministrationContentDatabaseName "Sharepoint2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force) + CategoryInfo : InvalidData: (Microsoft.Share...urationDatabase:SPCmdletNewSPConfigurationDatabase) [New-SPConfigurationDatabase], SPException + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPConfigurationDatabase I created two domain accounts and haven't added them to any group: SPF_DATABASE (database account) and SPF_ADMIN (farm account). I'm running the PowerShell console as the domain administrator. I've tried running SQL Management Studio as the domain admin and created a dummy database, and it worked without a problem. I'm running: Windows 7 x64 on the machine where SharePoint Foundation 2010 should be installed, which also has SQL Server 2008 R2 preinstalled; Windows Server 2008 R2 Server Core is my domain controller, which just serves domain features and nothing else. I've installed SharePoint according to the MS guide at http://msdn.microsoft.com/en-us/library/ee554869%28office.14%29.aspx, installing all additional patches related to my configuration. Any ideas what I should do to make it work?
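
    The usual cause of "Cannot connect to database master" from this cmdlet is missing server roles on the SQL instance rather than actual connectivity. A hedged sketch of the conventional grants for the account running the wizard (MYDOMAIN stands in for your domain; the account name is the one from the question):

        -- run in SQL Server Management Studio as an instance administrator
        EXEC sp_addsrvrolemember 'MYDOMAIN\SPF_ADMIN', 'dbcreator';
        EXEC sp_addsrvrolemember 'MYDOMAIN\SPF_ADMIN', 'securityadmin';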

  • SSH garbling characters in vim/nano on remote server

    - by geerlingguy
    ... and it's driving me insane. Basically (this has been happening over the past couple months), I log into a few different CentOS servers (one Linode, another VPS, and a shared host to which I have shell access), running 5.5, 5.7, and 6, from my Mac running OS X Lion, using Terminal. Basically: $ ssh [email protected] [remote-host] $ nano somefile.txt Once I start editing the file, if I use the arrow keys to move around the cursor, or start deleting, then typing again, the cursor jumps around a bit, and if I save the file and reopen it, it's obvious that the cursor was, in fact, jumping all over the place on a line for no apparent reason. I end up getting things like "This is a neof text." When I had typed in (to the cursor-crazy editor) "This is a line of text." It's a big problem when it comes to editing configuration files, because I often have to edit one line, save and close, then reopen just to make sure that line is right... then edit another line... and it's getting quite annoying. I found Linode Lish Shell Vim and Nano rendering troubles: lines not appearing / cursor positions wrong, but I don't know if that relates much, since that's specifically referring to lish.
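
    One hedged first check: OS X Lion's Terminal advertises TERM=xterm-256color, and older CentOS releases may lack a terminfo entry for it; a missing or wrong entry garbles full-screen editors in exactly this way. Quick test on the remote host:

        echo $TERM                      # what the server thinks it is talking to
        TERM=xterm nano somefile.txt    # force a plain terminal type for one session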

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon. Clients: 10, all Apple machines running either Leopard or Snow Leopard. Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution). The situation: I work in a small design company with 8 people. At present we are looking to consolidate all our image files into one location; right now we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The file types commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files. Ideally what is needed: centralised on one server, but allowing us to search via Spotlight (not essential, but would be nice); searchable metadata such as date, location, title; open-source or as low-cost as possible; simultaneous users able to import files. So far I have looked at a few open-source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM; while these are brilliant and open-source, they don't integrate as smoothly with the desktop as iPhoto and Aperture. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also putting a library on a drive with no permissions and letting each client read from it; however, that only supports one user at a time writing to the library... Any ideas what could fulfil our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

  • BizTalk 2009 log shipping with SQL 2008

    - by Manjot
    Hi, I am setting up BizTalk log shipping for a BizTalk 2009 database. Following the article at http://msdn.microsoft.com/en-us/library/aa560961.aspx, I am doing the following to set up BizTalk log shipping on the destination server: Enable ad hoc queries by: sp_configure 'show advanced options',1 go reconfigure go sp_configure 'Ad Hoc Distributed Queries',1 go reconfigure go sp_configure 'show advanced options',0 go reconfigure go Execute LogShipping_Destination_Schema & LogShipping_Destination_Logic in master on the destination server. Run: exec bts_ConfigureBizTalkLogShipping @nvcDescription = '', @nvcMgmtDatabaseName = '', @nvcMgmtServerName = '', @SourceServerName = null, -- null indicates that this destination server restores all databases @fLinkServers = 1 -- 1 automatically links the server to the management database When I run this I receive the following error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. After some research I found some info: usually this error means that the SQL Server Service Principal Name (SPN) was not configured, and NTLM was not being used as an authentication mechanism. The SQL services are running under different domain accounts. So I asked the domain admin to create SPNs for the SQL service accounts of both the source and destination servers, using both the name and the FQDN, and to enable the computer accounts and service accounts for delegation. When I run the following: select * from sys.dm_exec_connections I get the same error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Any help please?
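
    For reference, the SPN registrations for the SQL service accounts usually look like the sketch below; the host and account names are placeholders, not taken from the question:

        REM register both the short name and the FQDN for the SQL service account
        setspn -A MSSQLSvc/sqlsrv01:1433 MYDOMAIN\sqlservice
        setspn -A MSSQLSvc/sqlsrv01.mydomain.local:1433 MYDOMAIN\sqlservice

        REM verify what ended up registered
        setspn -L MYDOMAIN\sqlservice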

  • Windows Server 2008 Remote Desktop printing blank pages

    - by Colin Pickard
    I have a Windows Server 2008 (not R2) machine which has problems with redirected printing. Clients connecting via Remote Desktop have their printers redirected and appearing for them to print to, but printing from applications on the server to local printers gives blank pages, missing pages, or pages with headers/footers but no middle section. The issues are consistent for similar prints, but sometimes other prints and/or applications will work correctly. I have installed PDFCreator locally on the server, and the same print jobs sent by the same application appear correctly in the PDFs. Printing that PDF via the redirected printer prints correctly. I have tried the following: installing drivers (several different drivers, for both the client and server operating systems and architectures, on both the client and the server); reinstalling the printers (on remote print servers, the clients, and the host server, and with different client machines); granting everyone full permissions on the print spool folder on the server; editing the registry to forward non-USB ports (http://support.microsoft.com/kb/302361). None of these have made any difference. The clients are using Windows 7 or Windows XP and none of them have any issues printing locally. Any ideas? Thanks!

  • Ripping CD Audio simultaneously from 2 drives on one PC via USB or PATA - rip accuracy preserved?

    - by Rob
    I'm considering ripping audio (reading audio) from CDs using two drives simultaneously to speed up the process, i.e. two CDs at a time rather than one. Are there any issues with achieving maximum rip accuracy? In general I wondered if people have tried this, and whether the simultaneous streams from both rip activities would overload the host machine and cause packet loss or read retries, resulting in a sub-standard CD-DA audio CD rip. If it just means each rip is slightly slower (but still faster than doing one rip after another sequentially) while remaining of maximum accuracy, then that is OK for me. I will be using dBpoweramp to rip the CDs, converting to the lossless FLAC format. Specific examples: there are two machines I intend to do it on. A Toshiba NB100 1.6 GHz Atom netbook, 2 GB RAM, running Windows XP Home, with one external LG DVD/CD burner and one external LG Blu-ray burner attached via USB 2.0, ripping to the machine's 5400 rpm internal hard drive; this rips from one CD drive very well, more than adequately, and it is a nippy, fast little machine for its specification. And a desktop PC running Windows 7 Home Premium with an MSI P4M900M2-L/MS-7255 v2.0 motherboard, a 1.86 GHz Intel Core 2 Duo E6320, a 7200 rpm hard drive and 2 GB RAM, with an internal LG PATA DVD/CD burner (master) and a Philips DVD/CD burner (slave) on the same PATA bus (perhaps separate buses would be another option to consider here). Thoughts?

  • NIS: which mechanism hides shadow.byname from unprivileged users?

    - by Mark Salzer
    On a Linux box (SLES 11.1) which is a NIS client, I can run as root: ypcat shadow.byname and get output, i.e. some lines with the encrypted passwords, among other information. On the same Linux box, if I run the same command as an unprivileged user, I get: No such map shadow.byname. Reason: No such map in server's domain. Now I am surprised. My good old knowledge says that shadow passwords in NIS are absurd, because there is no access control or authentication in the protocol, and thus every (unprivileged) user can access the shadow map and thereby obtain the encrypted passwords. Obviously we have a different picture here. Unfortunately I don't have access to the NIS server to figure out what is happening. My only guess is that the NIS master gives the map only to clients connecting from a privileged port (< 1024), but this is only an uneducated guess. What mechanisms are there in current NIS implementations that lead to behavior like the above? How "secure" are they? Can they be circumvented easily? Or are shadow passwords in NIS as secure as the good old shadow files?
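
    The privileged-port guess matches a real mechanism: ypserv's per-map access rules. A hedged sketch of what the server side may be running (rule format host:domain:map:security, per ypserv.conf(5); details vary by implementation):

        # /etc/ypserv.conf
        # "port" refuses the map to requests from non-privileged source ports (>= 1024)
        *: *: shadow.byname: port
        *: *: passwd.adjunct.byname: port

    Note that this is only a mild hurdle: anyone who is root on any machine on the network (or who plugs in their own machine) can bind a low port and read the map anyway.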

  • New Computer freezing up at random

    - by Benjamin Frost
    Since I built my system about 4 weeks ago I've been getting random freezes. Sometimes it happens directly after startup, and sometimes it won't freeze for 3-4 days of 24/7 running. It seems to happen under all stress loads, but mainly when the CPU is under 10% load. It doesn't give me a BSOD or anything; it simply freezes and repeats the last sound before the freeze until I shut it down by the power button. I've re-seated everything in the system except the CPU, and cleaned the RAM sockets and gold fittings. None of the components have been clocked above their factory settings as of yet; I don't want to overclock them until I sort out these freezes. Temps are all well under the rated maximums; the highest they have been are below. CPU: low load 16-21°C, full load (100%) 40-43°C (from HWMonitor by CPUID). GPU 1: low load 25-30°C, full load (100%) 45-50°C. GPU 2: low load 23-27°C, full load (100%) 45-50°C (GPU temps from Catalyst Control Center). General case temps: rear 18-20°C, mid 20-21°C, front (HDD/SSD bays) 14-19°C (case temps may be a little off as they're from the Kaze Master Pro fan controller). I have un-installed EVERY driver for the motherboard, GPUs & sound card and re-installed twice. Windows is all up to date. To date I've tried the following: running Memtest for 24 hrs straight, no errors; running Memtest on each individual RAM module, no errors; re-seating everything except the CPU; cleaning the DIMM sockets and gold inputs; testing the graphics cards one at a time; re-arranging all the SATA devices to run on chipset-controlled ports; re-installing all drivers. OS: Win7 Professional 64-bit. Motherboard: ASRock X79 Extreme9. CPU: i7 3930K 3.2 GHz. GPU: Sapphire 7950 OC Edition V2 (2-card CrossFire). RAM: G.Skill Ripjaws Z F3-17000CL11Q-16GBZL 16GB (4x4GB) DDR3. Boot drive: OCZ Agility 4 128GB. Data drive 1: Western Digital Black 2TB. Data drive 2: Western Digital Black 2TB. Data drive 3: Western Digital Green 3TB. Power supply: Corsair AX1200 Gold.

  • nginx starts up before apache

    - by paullb
    I've been fumbling through setting up Redmine on an Ubuntu (12.04) box, and somewhere along the line nginx got set up; now Apache no longer loads because nginx has already grabbed the port. I tried removing nginx with the command below, but that didn't seem to make any difference: when I restarted the server and pointed my web browser at it, I still got the "Welcome to nginx" message. sudo apt-get purge nginx I have confirmed that nginx is gone, because when I run the above now I get as output: Package nginx is not installed, so not removed. Yet every time I start the machine it is running again. I noticed the following in the running processes (if that is helpful): root 923 0.0 0.0 76784 1280 ? Ss 03:00 0:00 nginx: master process /usr/sbin/nginx www-data 925 0.0 0.0 77092 1704 ? S 03:00 0:00 nginx: worker process www-data 926 0.0 0.1 77092 2204 ? S 03:00 0:00 nginx: worker process www-data 927 0.0 0.0 77092 1704 ? S 03:00 0:00 nginx: worker process www-data 928 0.0 0.0 77092 1704 ? S 03:00 0:00 nginx: worker process Any advice for bringing back apache2 as the "default" (for lack of a better term) web server?
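
    On Ubuntu, "nginx" is a metapackage; purging it can leave the package that actually ships the binary (nginx-full, nginx-light or nginx-common) installed and registered to start at boot. A hedged check-and-remove sketch:

        dpkg -l | grep nginx                          # what is really still installed?
        sudo service nginx stop
        sudo apt-get purge nginx-full nginx-common    # whichever the first command showed
        sudo update-rc.d -f nginx remove              # drop any leftover boot links
        sudo service apache2 start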

  • what's the difference between a Volume and a Partition in Windows 7 diskpart

    - by user170232
    I was trying to follow the Intel guide for setting up iRST (Intel Rapid Start Technology) on my new laptop. The Intel manual says you need to create a volume that is as big as or bigger than your available memory, set it to a specific id (id=84), then go into the iRST tool and adjust some settings. Looking at the disk manager on the laptop, I see there is already a partition labeled "Hibernation Partition" which is a little bigger than the memory in my system, so it looks like iRST was already set up... BUT it's a partition, not a volume. Here's what the manual says to do (from http://download.intel.com/support/motherboards/desktop/sb/rapid_start_technology_user_guide.pdf): diskpart; list disk; select disk x (where x is the disk to use; there's only one disk in this laptop); create partition primary size=X000 (where X000 is the size to create); detail disk (which lists details for the disk; this is where I get hung up); select volume Z (where Z is the partition you created previously). The manual says the 'detail disk' command will list the volume number, but it doesn't: 'detail disk' only lists two volumes, for Recovery and OS. (If I do 'list partition', I see the 8 GB partition labeled "Hibernation Partition".) So I can't continue with the following steps: set id=84 override; exit. The reason I went looking for the manual is that when iRST is enabled in the BIOS, the system won't resume from sleep. When it's disabled, it works fine, but the system goes into (legacy?) hibernation mode and takes a while to come out of it; iRST is supposed to resume from deep sleep very quickly. So, what's the difference between a volume and a partition? Should I delete the hibernation partition and create a hibernation volume? Anyone have any ideas? (If it matters, this is on a Dell XPS 13 with BIOS A08.) Thanks! J
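
    In diskpart terms, a volume is something mountable/formattable (it can even span partitions), while the iRST store is a raw partition with type id 84, so it will never appear in the volume list. A hedged sketch that addresses the partition directly, skipping the manual's volume step (X is whatever number 'list partition' shows for the 8 GB partition):

        diskpart
        select disk 0
        list partition
        select partition X
        set id=84 override
        exit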

  • Puppet and Vim fighting over Ruby version

    - by devians
    I have installed puppet from the .dmg from Puppet Labs. If I remove ruby 1.9.3, puppet works, but other things, like my vim install (dependent plugins), do not. According to http://docs.puppetlabs.com/guides/platforms.html#ruby-versions, 1.9.3 is supported. So what's going wrong with puppet? % uname -a Darwin Kusanagi.local 11.4.2 Darwin Kernel Version 11.4.2: Thu Aug 23 16:25:48 PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64 x86_64 % which ruby /usr/local/bin/ruby % ruby --version ruby 1.9.3p327 (2012-11-10 revision 37606) [x86_64-darwin11.4.2] % /usr/bin/ruby --version ruby 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin11.0] % brew info ruby ruby: stable 1.9.3-p327, HEAD http://www.ruby-lang.org/en/ Depends on: pkg-config, readline, gdbm, libyaml /usr/local/Cellar/ruby/1.9.3-p327 (796 files, 17M) * https://github.com/mxcl/homebrew/commits/master/Library/Formula/ruby.rb ==> Options --with-tcltk Install with Tcl/Tk support --with-suffix Suffix commands with "19" --universal Build a universal binary --with-doc Install documentation ==> Caveats NOTE: By default, gem installed binaries will be placed into: /usr/local/Cellar/ruby/1.9.3-p327/bin You may want to add this to your PATH. % puppet /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- puppet/util/command_line (LoadError) from /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from /usr/bin/puppet:3:in `<main>'
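
    The .dmg installs puppet's libraries for Apple's system ruby (1.8.7), so the LoadError is consistent with the Homebrew ruby winning in PATH while lacking puppet on its load path. A hedged workaround that keeps both rubies (the /usr/bin/puppet path is an assumption about where the installer put the executable; the traceback above suggests it):

        # run puppet explicitly under the system ruby
        /usr/bin/ruby /usr/bin/puppet --version

        # or shadow it with a small wrapper earlier in PATH
        printf '#!/bin/sh\nexec /usr/bin/ruby /usr/bin/puppet "$@"\n' | sudo tee /usr/local/bin/puppet
        sudo chmod +x /usr/local/bin/puppet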

  • NetBackup prefers "Scratch" tapes over dedicated tapes

    - by wfaulk
    I have a NetBackup 6.0MP7 installation running on Windows Server 2003. It functions as the only master server and media server. I swap a full set of tapes in and out every week, but leave a set of tapes with their volume pool set to "Scratch" in all the time. The weekly tape sets then get rotated back in after a period of time. Largely, this works fine. I seldom actually need the scratch tapes, but every once in a while a backup will run over what I have dedicated to the task. However, one week's set of tapes consistently gets declined in favor of the scratch pool. The backup policies are the same for every week, they all have "Policy Volume Pool" set to "NetBackup", and all of the tapes for every week (besides the scratch tapes) have had their pools assigned as "NetBackup", definitely including the week that always gets ignored. That said, it doesn't ignore all of the NetBackup pool tapes for that week. It usually writes to two or three of them, but it writes to something like 20 of the scratch tapes. (I haven't thought to look to see if it's always the same two or three tapes.) And this problem never seems to occur for any other week. It doesn't load the tapes and then reject them; it never seems to try to use them at all. They are not flagged as frozen, and they are all active and unassigned when I swap them in. The tapes are in a Quantum PX510 tape library. The NetBackup server is attached to the library/robot via Fibre Channel going through an HP-branded Brocade switch. I'm not an expert on NetBackup at all, and I don't really even know where to look. Any advice on logs to look at, logging to enable, or really anything at all would be appreciated. I'll keep an eye on the question and update it if anyone needs more info to help.

  • puppetca never returns anything

    - by mrisher
    Hi: I'm trying to configure Puppet on Ubuntu, and strangely I am never able to generate a certificate because my server never shows any pending certificate requests. Put differently, on the server I am running puppetmasterd and on the client I am able to connect to the server, but the client continues printing notice: Did not receive certificate warning: peer certificate won't be verified in this SSL session and yet the server never sees the request mrisher@lab2$ puppetca --list [nothing shows up] mrisher@lab2$ puppetca --sign clientname.domain.com clientname.domain.com err: Could not call sign: Could not find certificate request for clientname.domain.com Edit: There was a suggestion that autosign was happening, but that does not seem to be it. There is no autosign.conf file, and when I run puppetmasterd --no-daemonize -d -v I receive the following output: info: Could not find certificate for 'clientname.domain.com' every time the client says notice: Did not receive certificate I checked the certs on the server and there don't seem to be any: mrisher@lab2:~$ puppetca --list --all mrisher@lab2:~$ sudo puppetca --list --all + lab2.domain.com // this is the server (master) mrisher@lab2:~$ sudo puppetca --list [blank line] mrisher@lab2:~$ Note: This is mostly running the default install from Ubuntu, if that gives any leads. Thanks for any help out there.
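
    A hedged way to narrow this down: force the agent to (re)submit a request with verbose output, and watch the master's requests directory to see whether the CSR ever lands. Paths and flags below are the 0.2x-era defaults and may differ on your install; the server name is the one from the question:

        # on the client
        puppetd --test --server lab2.domain.com --waitforcert 30 --debug
        # if a stale client cert is blocking a fresh request, clearing the
        # client's ssl directory forces a new CSR:  rm -rf /var/lib/puppet/ssl

        # on the master: did a CSR arrive?
        ls /var/lib/puppet/ssl/ca/requests/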

  • How to use a macro command button in Mac Excel 2011

    - by user21255
    I'm using Mac Excel 2011 and I can't seem to get the macro to work. What I am trying to do: in the first worksheet, I want all of the data entered in the Cases table at the bottom to be automatically inserted into the table in the "Cases" worksheet when I click on the "Update" button. But instead I keep getting a pop-up about a runtime error, which then asks if I want to End, Debug or something else. I don't know if it is because I am not using Mac Excel correctly, as I am used to Windows, because I believe my code in the VBA editor is correct. Could anyone with Mac Excel 2011 check whether the button works in the file provided? If anyone has Windows Excel, please feel free to check whether it works there as well. If it is a coding problem, please let me know. My question is simply how to run and stop a macro in Mac Excel 2011. The file can be accessed below: http://ge.tt/76qNwIx/v/0 Thanks
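
    A hedged sketch of what an Update button's macro conventionally looks like, in case comparing helps; the sheet names and ranges below are assumptions, not taken from the workbook:

        ' copy the entry row from the input table to the end of the Cases sheet
        Sub UpdateCases()
            Dim src As Worksheet, dst As Worksheet
            Dim nextRow As Long
            Set src = Worksheets("Sheet1")      ' assumed input sheet
            Set dst = Worksheets("Cases")
            nextRow = dst.Cells(dst.Rows.Count, 1).End(xlUp).Row + 1
            src.Range("A20:E20").Copy dst.Cells(nextRow, 1)   ' assumed input range
        End Sub

    If the runtime error persists, choosing Debug in the error dialog highlights the offending line, which is usually the quickest way to see whether it is a bad sheet name or range.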

  • Convert a Linksys WAG54GP2 ADSL router into an access point only, to extend my WiFi range

    - by Preet Sangha
    I have a wireless LAN running on my ADSL2 connection. The router (a Siemens Gigaset SX763) is provided by the ISP and is generally good. However, I have a couple of dead spots at the far end of the house, and since I have my old router sitting in a drawer I thought I'd try to convert it into a simple WAP. Downloading the manual from Linksys, it seems to be for an earlier firmware, but the very first option on the very first page looks promising: WAN Mode: Router or ADSL. After this, however, I'm a bit lost. I know that the wireless side of this box will need an IP address on the LAN, and it must get its address from the master router (I thought static might be best). But again the manual is out of date: I have the option of DHCP ON, OFF or RELAY. I've not even got to the more complex options yet. The question is, can this device even work this way (it seems like it, but I cannot find any docs on it), and if so, how? Edit: having now fiddled around, I'm of the opinion that this cannot be done.
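
    For what it's worth, the generic recipe for demoting an old router to a plain access point goes like this (a hedged sketch; menu names vary by firmware, and the example address is an assumption):

        1. Give its LAN interface a fixed IP inside the main router's subnet (e.g. 192.168.1.2).
        2. Turn its DHCP server OFF, so the Gigaset remains the only DHCP source.
        3. Leave the WAN/ADSL side unconfigured and cable a main-router LAN port to an old-router LAN port.
        4. Set the SSID and a non-overlapping channel on the wireless side.

    If the WAG54GP2's firmware insists on routing between its wireless and LAN sides, this won't work, which may be exactly what you ran into.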
