Search Results

Search found 11982 results on 480 pages for 'non greedy'.


  • UPS vs Solar Power in case of power failure for a server [on hold]

    - by Zen 8000k
    I am looking for a low-power, low-end PC able to run 24/7 without overheating, plus a way to keep it running in case of a power failure. Power failures can last up to 72 hours. The PC doesn't need a monitor or keyboard. A modem must also be kept powered during an outage. When I say low end, I don't mean junk: the CPU needs to be x86 and score at least 1,000 in this chart: http://www.cpubenchmark.net/index.php What's the best way to do this?
    EDIT: more info. I need to run a home server that will mainly perform light tasks; an x86 CPU is sadly the only route for my use. I want to be able to run the server and the router/modem during a power failure. Regarding how long the power may be out:
    1) 1 hour is OK for most situations (say 90%).
    2) 3 hours is OK (say 98%).
    3) 6 hours is more than OK (say 99.5%).
    4) In extreme cases the power might fail for days. I believe this is very unlikely; more runtime is great but, really, how often will power fail for more than 3 hours? Once a year at most, I'd guess, which is too rare to plan for.
    Given the above, I am looking for a cost-effective way to achieve 1-3 hours of backup power, or 6 hours if possible.
    Solutions suggested so far:
    1) Power generator: no good, since power would still drop for about 10 seconds before the generator takes over. I also read online that "clean" generators cost $1.5k+, which is out of budget. And a non-clean generator might damage electronics, right?
    2) Solar power: I don't know enough about this. It sounds like a great idea, honestly too good to be true. For only $200 I get 100+ W? What are the drawbacks?
    3) UPS: this seems the best fit. The only problem is cost: under $200 would be great; $400 is the budget limit.
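
    For sizing a UPS against these runtime targets, the arithmetic is simple; below is a rough sketch where the load figures are assumptions for a low-power PC plus a modem, not measurements of the actual hardware.

        # Rough UPS sizing sketch (Python). Wattage figures are assumptions; adjust
        # them to the real hardware once measured with a power meter.
        LOAD_WATTS = 30 + 10          # assumed: low-power x86 box (~30 W) + modem/router (~10 W)
        INVERTER_EFFICIENCY = 0.85    # typical figure for a small line-interactive UPS
        TARGET_HOURS = [1, 3, 6]

        for hours in TARGET_HOURS:
            # energy the battery must deliver, corrected for inverter losses
            battery_wh = LOAD_WATTS * hours / INVERTER_EFFICIENCY
            print(f"{hours} h at {LOAD_WATTS} W needs roughly {battery_wh:.0f} Wh of battery")

    Multi-hour runtimes, even at modest loads, usually point to a UPS with external battery packs or a small inverter plus deep-cycle battery rather than a desktop UPS, which is where the cost climbs toward the stated budget limit.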

    Read the article

  • VBA + Polymorphism: Override worksheet functions from 3rd party

    - by phi
    My company makes extensive use of a data provider through a (closed-source) VBA plugin. In principle, every query follows a certain structure: one cell is filled with a formula whose arguments specify the query; the range of that formula is then extended (not an array formula!) and the cells below/right are filled with data. For this to work, however, a user has to have a terminal program installed on the machine, as well as a COM plugin referenced in VBA/Excel.
    My problem: these Excel sheets are used and extended by multiple users, and not all of them have access to the data provider. While they can open the sheet, it will recalculate and the data will be gone. However, frequent recalculation is required. I would like every user to be able to use the sheets without executing a very specific set of formulas.
    Attempts so far:
    - Remove the reference on those computers where I do not have terminal access. This generates a NAME error in the cell containing the query (acceptable), but the query overrides parts of the data (not acceptable). If you allow the program to refresh, all data is gone after a failed query.
    - Replace all formulas with their plain-text results in the respective cells (press a button and loop over every cell...). This obviously destroys any refresh capability the queries offer for subsequent users, so it is pretty bad too.
    - A theoretical idea, and I'm not sure how to implement it: replace the functions offered by the plugin with something that is called either first (and relays the query through to the original function, if that is available) or instead of the original function (by deploying the replacement only on non-terminal machines), and which simply returns the original value.
    More specifically, if my query function is used like this: =GETALLDATA(Startdate, Enddate, Stockticker, etc) then I would like to transparently swap the function behind the call. Do you see any hope, or am I lost? I appreciate your help. PS: Of course I'm talking about Bloomberg...
    Some additional points to clarify issues raised by Frank:
    - The formulas in the sheets may not be changed. This is mission-critical software, and it is far too complex for any sane person to try and touch.
    - Only Excel and VBA may be used (which is the reason for the previous point...).
    - It would be sufficient to prevent execution of these few specific formulas/functions on a specific machine, for all Excel sheets to come.
    This looks more and more like a problem for Stack Overflow ;-)
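
    The "relay" idea can be prototyped in VBA with Application.Run, though it requires pointing cells at a wrapper name, which collides with the no-formula-changes constraint; the function and argument names below are hypothetical, a sketch rather than a drop-in solution.

        ' Hypothetical wrapper, deployed only on machines without the terminal/add-in.
        ' It forwards to the provider's function when available and returns #N/A
        ' otherwise, instead of erroring out and disturbing the surrounding data.
        Public Function GETALLDATA_SAFE(StartDate As Variant, EndDate As Variant, _
                                        Ticker As Variant) As Variant
            On Error GoTo NoProvider
            ' Application.Run resolves the add-in function by name at call time
            GETALLDATA_SAFE = Application.Run("GETALLDATA", StartDate, EndDate, Ticker)
            Exit Function
        NoProvider:
            GETALLDATA_SAFE = CVErr(xlErrNA)   ' provider missing: degrade gracefully
        End Function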

    Read the article

  • Why many applications close after opening a document or doing a specific actions?

    - by Mohsen Farjami
    I have some encrypted pdf files that have no problem and in my last windows, I could open them easily with Adobe Reader 9.2 and other pdf readers. But now, I can only open non-encrypted pdf files and one encrypted file with Adobe Reader. every time I open almost any encrypted pdf, it closes itself. Also, when I try to search a folder for a keyword with Foxit Reader, once it closed. This is not related to Adobe Reader, because I have the same problem with Word 2007. When I open a document, sometimes it closes instantly and sometimes it closes after a few seconds and sometimes it is stable. My windows is Fresh. I have installed it a few days ago. I have ESET Smart Security 5.2 and I have updated it today. OS: XP Pro SP3, RAM: 3 GB, CPU: 2 GHZ, HDD: 320 GB My installed applications: Adobe AIR Adobe Flash Player 11 ActiveX Adobe Flash Player 11 Plugin Adobe Photoshop CS4 Adobe Reader 9.2 Atheros Wireless LAN Client Adapter Babylon Bluetooth Stack for Windows by Toshiba CCleaner Conexant HD Audio Dell Touchpad ESET Smart Security Farsi (101) Custom Foxit Reader Framing Studio 3.27 Google Chrome Hard Disk Sentinel PRO HDAUDIO Soft Data Fax Modem with SmartCP Intel(R) Graphics Media Accelerator Driver IrfanView (remove only) Java(TM) 6 Update 18 K-Lite Mega Codec Pack 8.8.0 Microsoft .NET Framework 2.0 Service Pack 1 Microsoft .NET Framework 3.0 Service Pack 1 Microsoft .NET Framework 3.5 Microsoft Data Access Components KB870669 Microsoft Office 2007 Primary Interop Assemblies Microsoft Office Enterprise 2007 Microsoft User-Mode Driver Framework Feature Pack 1.0.0 (Pre-Release 5348) Mozilla Firefox 7.0.1 (x86 en-US) Notepad++ Office Tab FreeEdition 8.50 ParsQuran PerfectDisk 12 Professional Registry First Aid RICOH R5C83x/84x Flash Media Controller Driver Ver.3.54.06 Sahar Money Manager 2.5 Stickies 7.1d The KMPlayer (remove only) TurboLaunch 5.1.2 Unlocker 1.9.1 USB Safely Remove 4.2 Virastyar Visual Studio 2005 Tools for Office Second Edition Runtime Winamp Windows Internet Explorer 8 Windows Media Player 11.0.5358.4826 Windows XP Service Pack 3 WinRAR 4.11 (32-bit) WorkPause 1.2 Z Dictionary My startup applications: WorkPause USB Safely Remove TurboLaunch SunJavaUpdateSched Stickies rfagent Persistence ParsQuran Daily Verse ITSecMng IgfxTray HotKeysCmds Hard Disk Sentinel egui disable shift+delete CTFMON.EXE Bluetooth Manager Babylon Client Apoint AdobeCS4ServiceManager Adobe Reader Speed Launcher Adobe ARM What should I do to solve it? If you recommend installing Windows again, what guarantees that it won't happen again?

    Read the article

  • filtering itunes library items by file location

    - by Cawas
    3 answers and unfortunately no solution yet.
    The problem: I've got well over 1000 duplicated items in my iTunes Library pointing to a non-existent place (the "Where" field under the "Get Info" window), along with other duplicated items and other MIAs (Missing In Action). Is there any simple way to delete all of them, and only them, from the library? By that I mean some MIAs point to /Volumes while some point to .../music/Music/... or just .../music/.... I want to delete all the ones pointing to /Volumes and recover the rest later. Check the image below.
    Some background: I tried searching for a specific keyword in the path and creating a smart playlist, but with no result. Being able to sort the whole library by path would be a perfect solution! I believe old versions of iTunes could do that. PowerTunes can do it (sort by path) but I can't do anything with its list. I would also welcome any program able to handle this, then import and properly export back the iTunes library.
    Since this seems not to be clear enough: AppleScript doesn't work, because AppleScript simply can't gather the missing info anywhere in the iTunes Library. Maybe we could use AppleScript by opening the XML file, but that's a whole other issue. Here's a quote from my conversation with Doug Adams (the man himself) last December:
    "I don't think you do understand. There is no way to get the path to the file of a dead track because iTunes has 'forgotten' it. That is, by definition, what a dead track is. Doug"
    On Dec 21, 2010, at 7:08 AM, Caue Rego wrote: "yes I understand that and have seen the script. but I'm not looking for the file. just the old broken path reference to it. Sent from my iPhone"
    On 21/12/2010, at 10:00, Doug Adams wrote: "You cannot locate missing files of dead tracks because, by definition, a dead track is one that doesn't have any file information. If you look at 'Super Remove Dead Tracks', you will notice it looks for tracks that have 'missing value' for the location property."
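
    Since AppleScript cannot see the forgotten paths, one workaround is to read the library XML directly. A Python sketch follows; the library path is an assumption, and it relies on the XML still recording the last-known Location for those tracks, which is worth verifying on a copy first.

        # Sketch: list tracks whose last-known location points under /Volumes,
        # reading the iTunes XML export directly.
        import plistlib
        from urllib.parse import unquote, urlparse

        LIBRARY_XML = "/Users/you/Music/iTunes/iTunes Music Library.xml"  # hypothetical path

        with open(LIBRARY_XML, "rb") as f:
            library = plistlib.load(f)

        for track in library["Tracks"].values():
            location = track.get("Location", "")           # file:// URL; may be absent
            path = unquote(urlparse(location).path)
            if path.startswith("/Volumes"):
                print(f'{track.get("Artist", "?")} - {track.get("Name", "?")} -> {path}')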

    Read the article

  • pam_unix(sshd:session) session opened for user NOT ROOT by (uid=0), then closes immediately when using TortoiseSVN

    - by codewaggle
    I'm having problems accessing an SVN repository using TortoiseSVN 1.7.8. The SVN repository is on a CentOS 6.3 box and appears to be functioning correctly. # svnadmin --version # svnadmin, version 1.6.11 (r934486) I can access the repository from another CentOS box with this command: svn list svn+ssh://[email protected]/var/svn/joetest But when I attempt to browse the repository using TortiseSVN from a Win 7 workstation I'm unable to do so using the following path: svn+ssh://[email protected]/var/svn/joetest I'm able to login via SSH from the workstation using Putty. The results are the same if I attempt access as root. I've given ownership of the repository to USER:USER and ran chmod 2700 -R /var/svn/. Because I can access the repository via ssh from another Linux box, permissions don't appear to be the problem. When I watch the log file using tail -fn 2000 /var/log/secure, I see the following each time TortiseSVN asks for the password: Sep 26 17:34:31 dev sshd[30361]: Accepted password for USER from xx.xxx.xx.xxx port 59101 ssh2 Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session opened for user USER by (uid=0) Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session closed for user USER I'm actually able to login, but the session is then closed immediately. It caught my eye that the session is being opened for USER by root (uid=0), which may be correct, but I'll mention it in case it has something to do with the problem. I looked into modifying the svnserve.conf, but as far as I can tell, it's not used when accessing the repository via svn+ssh, a private svnserve instance is created for each log in via this method. From the manual: There's still a third way to invoke svnserve, and that's in “tunnel mode”, with the -t option. This mode assumes that a remote-service program such as RSH or SSH has successfully authenticated a user and is now invoking a private svnserve process as that user. The svnserve program behaves normally (communicating via stdin and stdout), and assumes that the traffic is being automatically redirected over some sort of tunnel back to the client. When svnserve is invoked by a tunnel agent like this, be sure that the authenticated user has full read and write access to the repository database files. (See Servers and Permissions: A Word of Warning.) It's essentially the same as a local user accessing the repository via file:/// URLs. The only non-default settings in sshd_config are: Protocol 2 # to disable Protocol 1 SyslogFacility AUTHPRIV ChallengeResponseAuthentication no GSSAPIAuthentication yes GSSAPICleanupCredentials yes UsePAM yes AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE AcceptEnv XMODIFIERS X11Forwarding no Subsystem sftp /usr/libexec/openssh/sftp-server Any thoughts?
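
    One frequent cause of an open-then-immediately-closed session with svn+ssh is that svnserve is not found (or not on the PATH) in a non-interactive SSH session. A quick check, reusing the placeholders from the question; the expected responses are only a guess until tried:

        # from the Linux box that already works:
        ssh [email protected] "svnserve -t"
        # from the Windows workstation (TortoiseSVN normally drives TortoisePlink):
        plink.exe -l USER xx.xxx.xx.xxx svnserve -t
        # a reachable server answers with the svn protocol greeting, e.g. "( success ( 2 2 ( ) ...";
        # "command not found" here would explain the open/close pattern in /var/log/secure.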

    Read the article

  • Can Vagrant point to a directory of Puppet manifests for execution?

    - by SeligkeitIstInGott
    I am using Vagrant to jump start some initial Puppet config and am confused on how to include/run multiple manifests (other than just site.pp) in the puppet execution workflow without making the extra manifests into modules and including them that way. In the puppet manifests directory that I point Vagrant to (see below) I have two manifests that I want executed: site.pp and hierasetup.pp. config.vm.provision "puppet" do |puppet| puppet.manifests_path = "puppet_files/manifests" puppet.module_path = "puppet_files/modules" puppet.manifest_file = "site.pp" puppet.options = "--verbose --debug" end Currently I am having site.pp be the manifest that calls hierasetup.pp. My site.pp looks like this: File { owner => 'root', group => 'root', mode => '0644', } import "hierasetup.pp" include jboss But I get this error about the deprecation of "import": Warning: The use of 'import' is deprecated at /tmp/vagrant-puppet-1/manifests/site.pp:33. See http://links.puppetlabs.com/puppet-import-deprecation (at grammar.ra:610:in `_reduce_190') According to the referenced URL under "Things to try instead" it says "To keep your node definitions in separate files, specify a directory as your main manifest". Further this puppet doc on main manifests says: "Recommended: If you’re using the main manifest heavily instead of relying on an ENC, consider changing the manifest setting to $confdir/manifests. This lets you split up your top-level code into multiple files while avoiding the import keyword. It will also match the behavior of simple environments." It appears that Puppet can reference an entire directory instead of just a specific manifest file, such that I would expect that Vagrant would make a provision for this and allow me to drop the "puppet.manifest_file = "site.pp" line and point to the parent directory instead in which all the *.pp files there will be executed. However removing that line in Vagrant merely generates a complaint about an expected "default.pp" in its stead: puppet provisioner: * The configured Puppet manifest is missing. Please specify a path to an existing manifest: /some/path/puppet_files/manifests/default.pp So: Firstly, do I understand the "new" (non-import) way of calling multiple manifests correctly, in that a directory is to be pointed to in which all the *.pp files inside it will be executed? And secondly, has Vagrant "caught up" with this new change to accommodate the referencing of directories in conjunction with Puppet's deprecation of "import"? Update: Thanks to Shane the issue with #2 (Vagrant's code not being caught up to allow pointing to puppet manifest directories) was reported on Vagrant's GitHub issue tracker site and has since been patched: https://github.com/mitchellh/vagrant/issues/4169
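
    For reference, the import-free layout that the deprecation notice nudges toward is sketched below. It is the module route the question hoped to avoid, so treat it only as a fallback in case pointing Vagrant at a manifests directory stays awkward; the module and class names are assumptions.

        # puppet_files/modules/hierasetup/manifests/init.pp  (hypothetical module)
        class hierasetup {
          # ... contents formerly in hierasetup.pp ...
        }

        # puppet_files/manifests/site.pp
        File {
          owner => 'root',
          group => 'root',
          mode  => '0644',
        }
        include hierasetup   # autoloaded from puppet.module_path, no "import" needed
        include jboss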

    Read the article

  • networked storage for a research group, 10-100 TB

    - by Marc
    This is related to this post: http://serverfault.com/questions/80854/scalable-24-tb-nas-for-research-department but perhaps a little more general.
    Background: We're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing them on one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at a rate of about 10 TB/year. Right now we are storing the data on a 6-bay Netgear ReadyNAS Pro, but even with 2 TB drives this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short-term backup plan is to get a second ReadyNAS, put it in a different building and mirror one onto the other. Obviously, this is somewhat non-ideal.
    Our options:
    1) We can pay our university $400/TB/year for "backed up" online storage. We trust them more than we trust ourselves, but not a whole lot.
    2) We can continue to buy small NASes and mirror them between offices. One limit, although a silly one, is that we don't have an unlimited number of Ethernet jacks.
    3) We can try to implement our own data storage solution, which is why I'm asking you.
    One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and graduate students, who are here the longest, have a 5-6 year time scale. So nothing can require expert oversight. Our data transfer rates are low: most of the data will just sit on the server waiting for someone to look at it once or twice, so we don't need a really high-speed system.
    Given these constraints, can someone recommend a fairly low-cost, scalable, more or less turnkey shared data storage system with backup in a separate physical location? Does such a thing exist, or should we just pay the university to take care of it for us?
    As a second question: our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So, the same question, minus the low-cost requirement. Without budget constraints, can you recommend a scalable, turnkey, backed-up storage system? Thanks

    Read the article

  • Adaptec 5805 not recognized after reboot

    - by Rakedko ShotGuns
    After rebooting the system, the controller is not recognized. It only works if the computer is shut down and turned off. I have recently updated the firmware to "Adaptec RAID 5805 Firmware Build 18948". How do I fix the problem? Configuration summary --------------------------- 1. Server name.....................raid_test Adaptec Storage Manager agent...7.31.00 (18856) Adaptec Storage Manager console.7.31.00 (18856) Number of controllers...........1 Operating system................Windows Configuration information for controller 1 ------------------------------------------------------- Type............................Controller Model...........................Adaptec 5805 Controller number...............1 Physical slot...................2 Installed memory size...........512 MB Serial number...................8C4510C6C9E Boot ROM........................5.2-0 (18948) Firmware........................5.2-0 (18948) Device driver...................5.2-0 (16119) Controller status...............Optimal Battery status..................Charging Battery temperature.............Normal Battery charge amount (%).......37 Estimated charge remaining......0 days, 16 hours, 12 minutes Background consistency check....Disabled Copy back.......................Disabled Controller temperature..........Normal (40C / 104F) Default logical drive task priority High Performance mode................Dynamic Number of logical devices.......1 Number of hot-spare drives......0 Number of ready drives..........0 Number of drive(s) assigned to MaxCache cache0 Maximum drives allowed for MaxCache cache8 MaxCache Read Cache Pool Size...0 GB NCQ status......................Enabled Stay awake status...............Disabled Internal drive spinup limit.....0 External drive spinup limit.....0 Phy 0...........................No device attached Phy 1...........................No device attached Phy 2...........................No device attached Phy 3...........................1.50 Gb/s Phy 4...........................No device attached Phy 5...........................No device attached Phy 6...........................No device attached Phy 7...........................No device attached Statistics version..............2.0 SSD Cache size..................0 Pages on fetch list.............0 Fetch list candidates...........0 Candidate replacements..........0 69319...........................31293 Logical device..................0 Logical device name............. 
RAID level......................Simple volume Data space......................148,916 GB Date created....................09/19/2012 Interface type..................Serial ATA State...........................Optimal Read-cache mode.................Enabled Preferred MaxCache read cache settingEnabled Actual MaxCache read cache setting Disabled Write-cache mode................Enabled (write-back) Write-cache setting.............Enabled (write-back) Partitioned.....................Yes Protected by hot spare..........No Bootable........................Yes Bad stripes.....................No Power Status....................Disabled Power State.....................Active Reduce RPM timer................Never Power off timer.................Never Verify timer....................Never Segment 0.......................Present: controller 1, connector 0, device 0, S/N 9RX3KZMT Overall host IOs................99075 Overall MB......................4411203 DRAM cache hits.................71929 SSD cache hits..................0 Uncached IOs....................29239 Overall disk failures...........0 DRAM cache full hits............71929 DRAM cache fetch / flush wait...0 DRAM cache hybrid reads.........3476 DRAM cache flushes..............-- Read hits.......................0 Write hits......................0 Valid Pages.....................0 Updates on writes...............0 Invalidations by large writes...0 Invalidations by R/W balance....0 Invalidations by replacement....0 Invalidations by other..........0 Page Fetches....................0 0...............................0 73..............................10822 8...............................3 46138...........................4916 27184...........................15226 20875...........................323 16982...........................1771 1563............................5317 1948............................2969 Serial attached SCSI ----------------------- Type............................Disk drive Vendor..........................Unknown Model...........................ST3160815AS Serial Number...................9RX3KZMT Firmware level..................3.AAD Reported channel................0 Reported SCSI device ID.........0 Interface type..................Serial ATA Size............................149,05 GB Negotiated transfer speed.......1.50 Gb/s State...........................Optimal S.M.A.R.T. error................No Write-cache mode................Write back Hardware errors.................0 Medium errors...................0 Parity errors...................0 Link failures...................0 Aborted commands................0 S.M.A.R.T. 
warnings.............0 Solid-state disk (non-spinning).false MaxCache cache capable..........false MaxCache cache assigned.........false NCQ status......................Enabled Phy 0...........................1.50 Gb/s Power State.....................Full rpm Supported power states..........Full rpm, Powered off 0x01............................113 0x03............................98 0x04............................99 0x05............................100 0x07............................83 0x09............................75 0x0A............................100 0x0C............................99 0xBB............................100 0xBD............................100 0xBE............................61 0xC2............................39 0xC3............................69 0xC5............................100 0xC6............................100 0xC7............................200 0xC8............................100 0xCA............................100 Aborted commands................0 Link failures...................0 Medium errors...................0 Parity errors...................0 Hardware errors.................0 SMART errors....................0 End of the configuration information for controller 1

    Read the article

  • Unable to locate Windows Server Error log files

    - by Sam007
    I am getting an error in my application on 500 Internal Server Error. The firebug gives me, NetworkError: 500 Internal Server Error - http://webgis.arizona.edu/ArcGIS/rest/services/webGIS/Shock_Models/GPServer/Income_Log/jobs/jc09c501156564f71abc5d98393581267/results/final_shp?dpi=96&transparent=true&format=png8&imageSR=102100&f=image&bbox=%7B%22xmin%22%3A-14519891.438356264%2C%22ymin%22%3A637618.0139790997%2C%22xmax%22%3A-6692739.741956295%2C%22ymax%22%3A6507981.786279075%2C%22spatialReference%22%3A%7B%22wkid%22%3A102100%7D%7D&bboxSR=102100&size=800%2C600 And when I goto that particular link, I get this error, Server Error - Object reference not set to an instance of an object. Any idea how I can correct it? UPDATE I was told to use fiddler and see the error details that is occurring over the network and this is the output that I got, SESSION STATE: Done. Response Entity Size: 849 bytes. == FLAGS ================== BitFlags: [ClientPipeReused, ServerPipeReused] 0x18 X-CLIENTPORT: 2010 X-RESPONSEBODYTRANSFERLENGTH: 849 X-EGRESSPORT: 2023 X-HOSTIP: 128.196.53.161 X-PROCESSINFO: firefox:2248 X-CLIENTIP: 127.0.0.1 X-SERVERSOCKET: REUSE ServerPipe#2 == TIMING INFO ============ ClientConnected: 15:53:51.383 ClientBeginRequest: 15:53:51.494 GotRequestHeaders: 15:53:51.494 ClientDoneRequest: 15:53:51.494 Determine Gateway: 0ms DNS Lookup: 0ms TCP/IP Connect: 0ms HTTPS Handshake: 0ms ServerConnected: 15:52:45.077 FiddlerBeginRequest: 15:53:51.495 ServerGotRequest: 15:53:51.495 ServerBeginResponse: 15:53:51.679 GotResponseHeaders: 15:53:51.679 ServerDoneResponse: 15:53:51.679 ClientBeginResponse: 15:53:51.679 ClientDoneResponse: 15:53:51.679 Overall Elapsed: 00:00:00.1850106 The response was buffered before delivery to the client. == WININET CACHE INFO ============ This URL is not present in the WinINET cache. [Code: 2] * Note: Data above shows WinINET's current cache state, not the state at the time of the request. * Note: Data above shows WinINET's Medium Integrity (non-Protected Mode) cache only. But I am still confused as to what the error is? This is the application. I am not sure if the error is due to the ArcGIS-Server or the Windows Server 2008. I am new on working with the Windows Server and wanted to know where can I look for the error lof files? This is the link which gives the details and the log info of the job executed. This is the output.

    Read the article

  • Unextending Sharepoint 2007 Web Application from a zone

    - by dunxd
    When our Sharepoint was migrated from Sharepoint 2003 to Sharepoint 2007 (both fully paid versions), the consultants who carried it out extended each web app into two IIS sites/zones (e.g. the original Web App was http://intranet, then http://newintranet and http://intranet would be created for Sharepoint 2007 - each with its own IIS site). The idea was that during the migration period we would set up DNS to point the old url to SP2003 servers and the new one to SP2007, then once the migration was complete, do a DNS change so the SP2007 would recieve the requests to the http://intranet type URLs. Unfortunately the contractors did not tidy up the application extensions and IIS sites after the migration, and for some time both URLs were in use, resulting in many document links pointing to the http://newintranet type URLs. This means I need to maintain these URLs. Due to a rejig of organisation structure we now need to relocate some Sharepoint sites, and I'd like to use the RDA Collaboration Sharepoint URL Redirector feature. However a limitation of this is that it doesn't work for Web Applications which have been extended into multiple zones. So I have a need to tidy up the situation that our consultants left behind. I think the right thing to do is use the "Remove Sharepoint from IIS Web Site" page in Central Admin to remove the zone for the newintranet type sites, and select the option to also delete the IIS site. That should result in having no IIS sites listening for http://newintranet type URLs. Is this the right procedure? Once I have done that I need to set up Sharepoint to receive requests sent to the http://newintranet type URLs so they will continue to work. I am not sure if I should do this: using Alternative Access Mappings or, by adding a host header to the IIS site or, creating a non Sharepoint IIS site for each http://newintranet type URL, and use IIS redirection to forward the requests to the new URL using variables to pass the path to the Sharepoint site. Does anyone have any thoughts on these options, or any other way of achieving this? Sharepoint 2007 is running on Windows 2003 with IIS6. We don't currently have plans/budget to upgrade to Sharepoint 2010.

    Read the article

  • Innodb Queries Slow

    - by user105196
    I have redHat 5.3 (Tikanga) with Mysql 5.0.86 configued with RIAD 10 HW, I run an application inquiries from Mysql/InnoDB and MyIsam tables, the queries are super fast,but some quires on Innodb tables sometime slow down and took more than 1-3 seconds to run and these queries are simple and optimized, this problem occurred just on innodb tables in different time with random queries. Why is this happening only to Innodb tables? the below is the Innodb status and some Mysql variables: show innodb status\G ************* 1. row ************* Status: 120325 10:54:08 INNODB MONITOR OUTPUT Per second averages calculated from the last 19 seconds SEMAPHORES OS WAIT ARRAY INFO: reservation count 22943, signal count 22947 Mutex spin waits 0, rounds 561745, OS waits 7664 RW-shared spins 24427, OS waits 12201; RW-excl spins 1461, OS waits 1277 TRANSACTIONS Trx id counter 0 119069326 Purge done for trx's n:o < 0 119069326 undo n:o < 0 0 History list length 41 Total number of lock structs in row lock hash table 0 LIST OF TRANSACTIONS FOR EACH SESSION: ---TRANSACTION 0 0, not started, process no 29093, OS thread id 1166043456 MySQL thread id 703985, query id 5807220 localhost root show innodb status FILE I/O I/O thread 0 state: waiting for i/o request (insert buffer thread) I/O thread 1 state: waiting for i/o request (log thread) I/O thread 2 state: waiting for i/o request (read thread) I/O thread 3 state: waiting for i/o request (write thread) Pending normal aio reads: 0, aio writes: 0, ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0 Pending flushes (fsync) log: 0; buffer pool: 0 132777 OS file reads, 689086 OS file writes, 252010 OS fsyncs 0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s INSERT BUFFER AND ADAPTIVE HASH INDEX Ibuf: size 1, free list len 366, seg size 368, 62237 inserts, 62237 merged recs, 52881 merges Hash table size 8850487, used cells 3698960, node heap has 7061 buffer(s) 0.00 hash searches/s, 0.00 non-hash searches/s LOG Log sequence number 15 3415398745 Log flushed up to 15 3415398745 Last checkpoint at 15 3415398745 0 pending log writes, 0 pending chkp writes 218214 log i/o's done, 0.00 log i/o's/second BUFFER POOL AND MEMORY Total memory allocated 4798817080; in additional pool allocated 12342784 Buffer pool size 262144 Free buffers 101603 Database pages 153480 Modified db pages 0 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages read 151954, created 1526, written 494505 0.00 reads/s, 0.00 creates/s, 0.00 writes/s No buffer pool page gets since the last printout ROW OPERATIONS 0 queries inside InnoDB, 0 queries in queue 1 read views open inside InnoDB Main thread process no. 29093, id 1162049856, state: waiting for server activity Number of rows inserted 77675, updated 85439, deleted 0, read 14377072495 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s END OF INNODB MONITOR OUTPUT 1 row in set, 1 warning (0.02 sec) read_buffer_size = 128M sort_buffer_size = 256M tmp_table_size = 1024M innodb_additional_mem_pool_size = 20M innodb_log_file_size=10M innodb_lock_wait_timeout=100 innodb_buffer_pool_size=4G join_buffer_size = 128M key_buffer_size = 1G can any one help me ?
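
    A few of the posted settings are commonly flagged for exactly this symptom (occasional multi-second stalls that hit only InnoDB tables). The values below are illustrative starting points, not tuned recommendations for this server; note that resizing the InnoDB log files on MySQL 5.0 requires a clean shutdown and removal of the old ib_logfile* files before restart.

        # hypothetical my.cnf adjustments; sketch only, verify against the workload:
        innodb_log_file_size           = 256M   # 10M forces very frequent checkpoint stalls
        innodb_flush_log_at_trx_commit = 2      # fewer fsync pauses, about 1s of durability traded away
        sort_buffer_size               = 2M     # allocated per connection; 256M is far too large
        join_buffer_size               = 2M     # per connection as well
        read_buffer_size               = 1M     # per connection as well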

    Read the article

  • Unable to burn Windows ISO from Fedora

    - by user331947
    First of all, English is not my native tongue, so apologies for any mistakes. My computer recently started reporting that it couldn't launch Windows successfully, so I just chose "Start Windows normally". Then I found that startup freezes at the Windows logo screen (before the login prompt). I tried rebooting several times and got the same result, so I gave up for a while. After a few days I tried to boot the laptop again. This time it got to the desktop, but it's extremely slow and the icons on my desktop don't show up. I decided to format the Windows partition and do a clean reinstall (it is usually faster that way, since I keep my 400 GB+ of data on another partition and the programs and everything else on the same partition as Windows). The thing is, I don't have the Windows disc with me at the moment (I'm traveling abroad), but I do have a Windows 7 disc image on my hard disk. So I downloaded Ubuntu 14.04 LTS, made a live USB, and then tried to burn the image from Ubuntu, but the program just freezes and I don't know why. I tried several times with the same result. So I tried Fedora instead, just to see if it would work. Its Disk Image Writer reports something like this: Error unmounting /dev/dm-0: Command-line `umount "/dev/dm-0"' exited with non-zero exit status 32: umount: /: target is busy (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1).) (udisks-error-quark, 14) I also tried installing Linux on the Windows partition, but the installer freezes too (both Ubuntu and Fedora). So I thought that maybe something is wrong with my hard disk. I searched for a solution online and found that GParted can be used to format a partition, and it also froze, at "Searching /dev/sda/ partition ...". I'm using a Lenovo Y570; spec below. http://www.notebookreview.com/notebookreview/lenovo-ideapad-y570-review-a-lenovo-bestseller/3/ Can anyone suggest a next step in diagnosing and fixing this problem? Thanks in advance.

    Read the article

  • What Are All the Variables Necessary to Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing in the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format, like a blackbox flight recorder on an aircraft, that gathers information used to profile server performance, missing from the hit tracking log formats: Keep alive status, remote port, child processes, bytes sent, etc. LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox I'm trying to recreate as much of the format for Nginx, and would like help filling in the blanks. Here's what Nginx blackbox format would look like, the unmapped Apache directives have question marks after their names. access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"' 's?/$status $pid/0 T?/D? I?/O?/B?' Here's a table of the variables I've been able to map from the Nginx documentation. %a = $remote_addr - The IP address of the remote client. %S = $remote_port - The port of the remote client. %X = ? - Keep alive status. %t = $time_local - The start time of the request. %r = $request - The first line of request containing method verb, path and protocol. %s = ? - Status before any redirections. %>s = $status - Status after any redirections. %{pid}P = $pid - The process id. %{tid}P = N/A - The thread id, which is non-applicable to Nignx. %T = ? - The time in seconds to handle the request. %D = ? - The time in milliseconds to handle the request. %I = ? - The count of bytes received including headers. %O = ? - The count of bytes sent including headers. %B = ? - The count of bytes sent excluding headers, but with a 0 for none instead of '-'. Looking for help filling in the missing variables, or confirmation that the missing variables are in fact, unavailable in Nginx.

    Read the article

  • How Can We Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing in the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format, like a blackbox flight recorder on an aircraft, that gathers information used to profile server performance, missing from the hit tracking log formats: Keep alive status, remote port, child processes, bytes sent, etc. LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox I'm trying to recreate as much of the format for Nginx, and would like help filling in the blanks. Here's what Nginx blackbox format would look like, the unmapped Apache directives have question marks after their names. access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"' 's?/$status $pid/0 T?/D? I?/$bytes_sent/$body_bytes_sent' Here's a table of the variables I've been able to map from the Nginx documentation. %a = $remote_addr - The IP address of the remote client. %S = $remote_port - The port of the remote client. %X = ? - Keep alive status. %t = $time_local - The start time of the request. %r = $request - The first line of request containing method verb, path and protocol. %s = ? - Status before any redirections. %>s = $status - Status after any redirections. %{pid}P = $pid - The process id. %{tid}P = N/A - The thread id, which is non-applicable to Nignx. %T = ? - The time in seconds to handle the request. %D = $request_time - The time in milliseconds to handle the request. %I = ? - The count of bytes received including headers. %O = $bytes_sent - The count of bytes sent including headers. %B = $body_bytes_sent - The count of bytes sent excluding headers, but with a 0 for none instead of '-'. Looking for help filling in the missing variables, or confirmation that the missing variables are in fact, unavailable in Nginx.
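
    Pulling the known mappings together, a close approximation of the blackbox format in nginx could look like the sketch below. The keep-alive flag, pre-redirect status and thread id have no direct nginx counterpart here and are logged as literals; $request_length and $request_time are the nearest stand-ins for %I and %T/%D (seconds with millisecond resolution).

        # approximate "blackbox" format for nginx; unmapped Apache fields logged as "-" or 0
        log_format blackbox '$remote_addr/$remote_port - [$time_local] "$request" '
                            '-/$status $pid/0 $request_time '
                            '$request_length/$bytes_sent/$body_bytes_sent';

        access_log /var/log/nginx/blackbox.log blackbox;

    If the running nginx is new enough, $connection_requests (the number of requests served on the current connection) can stand in for the keep-alive counter.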

    Read the article

  • PDF has garbled text when copy pasting

    - by ngm
    I'm trying to copy and paste text from a PDF file. However, whenever I paste the original text it is a huge mess of garbled characters. The text looks like the following (this is just one small extract): 4$/)5=$13! ,4&1*%-! )5'$! 1$2$)&,$40! 65))! .*5)1! -#$! )/'8*/8$03! (4/+$6&4;0!/'1!-&&)0!*0$1!.9!/,,)5%/-5&'!1$2$)&,$403!5'!+*%#!-#$! 0/+$!6/9! -#/-! &,$4/-5'8! 090-$+! 1$2$)&,$40! .*5)1!1$25%$! 1452$40! /'1! &-#$4! 090-$+! 0&(-6/4$! %&+,&'$'-0! *0$1! .9! /,,)5%/-5&'! 1$2$)&,$40!-&1/97!"#$!+5M!&(!,4&1*%-!)5'$!/'1!,4&1*%-!1$2$)&,$40! 65))! .$!+*%#!+&4$! $2$')9! ./)/'%$13! #&6$2$43! -#/'! -#$!+5M! &(! &,$4/-5'8!090-$+!/'1!/,,)5%/-5&'!1$2$)&,$40!-&1/97! )*+*+, C<88,?>8513AG<5A14, I've tried it in both Adobe and Foxit PDF readers. I did a 'Save as text' in Adobe Reader and the resultant text file is the same garbled text. Any ideas how I can get this text out non-garbled? (Other than manual typing... there's a lot of text to extract.)
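
    When a PDF's embedded fonts ship without a usable ToUnicode mapping, every extractor produces this kind of soup, so the usual workaround is to OCR a rendered copy. A sketch with commonly available tools (their presence on the system is an assumption):

        pdftoppm -r 300 -png input.pdf page                        # render pages to page-1.png, page-2.png, ...
        for f in page-*.png; do tesseract "$f" "${f%.png}"; done   # OCR each page back to page-N.txt
        cat page-*.txt > extracted.txt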

    Read the article

  • POSTFIX bouncing when destination is my domain

    - by ZeC
    I am using provider mail hosting to send emails. On my Webserver I also have Postfix running and configured. Here is my main.cf smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no append_dot_mydomain = yes readme_directory = no smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache myhostname = 2-5-8.bih.net.ba alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = bhcom.info, 2-5-8.bih.net.ba, localhost.bih.net.ba, localhost relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = mailbox_size_limit = 10485760 recipient_delimiter = + inet_interfaces = 80.65.85.114 When I try sending email to my hosted domain name, every message gets bounced with this error: Nov 4 20:38:34 2-5-8 postfix/pickup[802]: 1492A3E0C6C: uid=0 from=<[email protected]> Nov 4 20:38:34 2-5-8 postfix/cleanup[988]: 1492A3E0C6C: message-id=<[email protected]> Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 1492A3E0C6C: from=<[email protected]>, size=348, nrcpt=1 (queue active) Nov 4 20:38:34 2-5-8 postfix/local[990]: 1492A3E0C6C: to=<[email protected]>, relay=local, delay=0.12, delays=0.08/0.01/0/0.04, dsn=5.1.1, status=bounced (unknown user: "info") Nov 4 20:38:34 2-5-8 postfix/cleanup[988]: 28ED53E0C6D: message-id=<[email protected]> Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 28ED53E0C6D: from=<>, size=2056, nrcpt=1 (queue active) Nov 4 20:38:34 2-5-8 postfix/bounce[991]: 1492A3E0C6C: sender non-delivery notification: 28ED53E0C6D Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 1492A3E0C6C: removed Nov 4 20:38:34 2-5-8 postfix/local[990]: 28ED53E0C6D: to=<[email protected]>, relay=local, delay=0.06, delays=0.03/0/0/0.02, dsn=5.1.1, status=bounced (unknown user: "razvoj") Nov 4 20:38:34 2-5-8 postfix/qmgr[803]: 28ED53E0C6D: removed However, when I try to @gmail.com, it sends message without problems, and here is log. What might be the issue? Nov 4 20:41:23 2-5-8 postfix/pickup[802]: B2EC63E0C6C: uid=0 from=<[email protected]> Nov 4 20:41:23 2-5-8 postfix/cleanup[1022]: B2EC63E0C6C: message-id=<[email protected]> Nov 4 20:41:23 2-5-8 postfix/qmgr[803]: B2EC63E0C6C: from=<[email protected]>, size=350, nrcpt=1 (queue active) Nov 4 20:41:23 2-5-8 postfix/smtp[1024]: connect to gmail-smtp-in.l.google.com[2a00:1450:4001:c02::1a]:25: Network is unreachable Nov 4 20:41:24 2-5-8 postfix/smtp[1024]: B2EC63E0C6C: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[173.194.70.26]:25, delay=0.97, delays=0.08/0.01/0.27/0.62, dsn=2.0.0, status=sent (250 2.0.0 OK 1352058066 f7si2180442eeo.46) Nov 4 20:41:24 2-5-8 postfix/qmgr[803]: B2EC63E0C6C: removed
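
    The "User unknown in virtual alias table" bounce is what Postfix reports when it treats the domain as local: because bhcom.info is listed in mydestination, mail for it never leaves the box. If that domain's mailboxes really live at the provider, a hedged sketch of the change is simply to drop it from the local destination list:

        # /etc/postfix/main.cf: stop claiming the hosted domain as a local destination
        mydestination = 2-5-8.bih.net.ba, localhost.bih.net.ba, localhost

        # then:
        postfix reload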

    Read the article

  • Nvidia Drivers on Debian / Lenny (Stable) -> Installation successful -> Monitors gets black

    - by David
    I have successfully installed the proprietary drivers for my nvidia (geforce 7300 gt) graphics card on debian/lenny. I know its not the best way to chose for driver installation ( see this link: http://wiki.debian.org/NvidiaGraphicsDrivers#non-freedrivers ). but the two ways seem to be possible for me (nvidia-kernel module compilation). Now the problem is that the monitors gets black, the power light starts blinking after i launch the x-server. Have a short look a the logs (output truncated from /var/log/Xorg.0.log): (II) Setting vga for screen 0. (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 (==) NVIDIA(0): RGB weight 888 (==) NVIDIA(0): Default visual is TrueColor (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) (**) Jul 28 17:10:11 NVIDIA(0): Enabling RENDER acceleration (II) Jul 28 17:10:11 NVIDIA(0): Support for GLX with the Damage and Composite X extensions is (II) Jul 28 17:10:11 NVIDIA(0): enabled. (II) Jul 28 17:10:11 NVIDIA(0): NVIDIA GPU GeForce 7300 GT (G73) at PCI:1:0:0 (GPU-0) (--) Jul 28 17:10:11 NVIDIA(0): Memory: 262144 kBytes (--) Jul 28 17:10:11 NVIDIA(0): VideoBIOS: 05.73.22.25.00 (II) Jul 28 17:10:11 NVIDIA(0): Detected PCI Express Link width: 16X (--) Jul 28 17:10:11 NVIDIA(0): Interlaced video modes are supported on this GPU (--) Jul 28 17:10:11 NVIDIA(0): Connected display device(s) on GeForce 7300 GT at PCI:1:0:0: (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (CRT-0) (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0) (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (CRT-0): 400.0 MHz maximum pixel clock (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): 165.0 MHz maximum pixel clock (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): Internal Single Link TMDS (II) Jul 28 17:10:11 NVIDIA(0): Assigned Display Device: CRT-0 (==) Jul 28 17:10:11 NVIDIA(0): (==) Jul 28 17:10:11 NVIDIA(0): No modes were requested; the default mode "nvidia-auto-select" (==) Jul 28 17:10:11 NVIDIA(0): will be used as the requested mode. (==) Jul 28 17:10:11 NVIDIA(0): (II) Jul 28 17:10:11 NVIDIA(0): Validated modes: (II) Jul 28 17:10:11 NVIDIA(0): "nvidia-auto-select" (II) Jul 28 17:10:11 NVIDIA(0): Virtual screen size determined to be 1280 x 1024 (--) Jul 28 17:10:11 NVIDIA(0): DPI set to (85, 86); computed from "UseEdidDpi" X config (--) Jul 28 17:10:11 NVIDIA(0): option (==) Jul 28 17:10:11 NVIDIA(0): Enabling 32-bit ARGB GLX visuals. (--) Depth 24 pixmap format is 32 bpp Here is the complete /etc/X11/xorg.conf file as generated by nvidia-xconfig: # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 256.35 (buildmeister@builder101) Wed Jun 16 19:25:59 PDT 2010 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" EndSection Section "Module" Load "dbe" Load "extmod" Load "type1" Load "freetype" Load "glx" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" Hor

    Read the article

  • Fedora 11 System - Failed Hard Drive Removed, and Boot gets GRUB Hard Disk Error

    - by Mindful
    Greetings, I have a machine with a 120GB ATA drive that has what I thought to be non-essential data on it. I also have a 320GB SATA hard drive with the OS/Application/Files (good data I want to keep). My 120GB ATA is failing I believe, as my computer kept slowing to a halt. However, when I move the drive from BIOS my computer will not start, says "GRUB Hard Disk Error". I know that my Fedora system has an LVM setup. I am looking to just remove the 120GB drive from "the mix", and just have one hard drive. How do I recover ? Thank you. I have access to a Linux Live CD right now and can make any changes. However, it won't boot into my OS - it fails. UPDATE: here's my Grub.Conf # grub.conf generated by anaconda # # Note that you do not have to rerun grub after making changes to this file # NOTICE: You have a /boot partition. This means that # all kernel and initrd paths are relative to /boot/, eg. # root (hd1,0) # kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00 # initrd /initrd-version.img #boot=/dev/sda1 default=0 timeout=5 splashimage=(hd1,0)/grub/splash.xpm.gz hiddenmenu title Fedora (2.6.30.10-105.2.23.fc11.i686.PAE) root (hd1,0) kernel /vmlinuz-2.6.30.10-105.2.23.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.30.10-105.2.23.fc11.i686.PAE.img title Fedora (2.6.30.9-102.fc11.i686.PAE) root (hd1,0) kernel /vmlinuz-2.6.30.9-102.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.30.9-102.fc11.i686.PAE.img title Fedora (2.6.27.24-170.2.68.fc10.i686.PAE) root (hd1,0) kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.24-170.2.68.fc10.i686.PAE.img title Fedora (2.6.27.24-170.2.68.fc10.i686) root (hd1,0) kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.24-170.2.68.fc10.i686.img title Fedora (2.6.27.21-170.2.56.fc10.i686) root (hd1,0) kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.21-170.2.56.fc10.i686.img title Fedora (2.6.27.19-170.2.35.fc10.i686) root (hd1,0) kernel /vmlinuz-2.6.27.19-170.2.35.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.27.19-170.2.35.fc10.i686.img title Upgrade to Fedora 10 (Cambridge) kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade stage2=http://chi-10g-1-mirror.fastsoft.net/pub/linux/fedora/linux/releases/10/Fedora/i386/os/images/install.img ks=hd:UUID=f11769ba-29bc-46de-8c40-a949720a438e:/upgrade/ks.cfg initrd /upgrade/initrd.img title Win rootnoverify (hd0,0) chainloader +1
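
    With the 120 GB drive gone, the surviving disk is now (hd0), but grub.conf still points at (hd1,0), and the GRUB stage1 being loaded may have lived on the removed disk's MBR. A sketch of the usual repair, assuming the live environment (or a chroot into the installed system) provides the legacy GRUB shell; confirm device names with fdisk -l first:

        grub
        grub> find /grub/stage1      # reports which (hdX,Y) now holds the /boot partition
        grub> root (hd0,0)           # use whatever find reported
        grub> setup (hd0)            # reinstall stage1 into the remaining disk's MBR
        grub> quit

        # then edit /boot/grub/grub.conf and change every "(hd1,0)" reference
        # (the root lines and the splashimage line) to the new "(hd0,0)"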

    Read the article

  • What issues ensue from having multiple versions of Office installed?

    - by Michael Sorens
    My ultimate question is embodied in the title but I thought it might be helpful to others if I detail what instigated my inquiry and my examination of the problem. To me, the first rule of software updates is Primum non nocere -- first, do no harm. So with my Windows 7 system containing both Office 2003 and Office 2010 I blithely proceeded to install this month's updates from Microsoft, containing updates for both versions of Office. While Microsoft officially does not recommend running multiple versions (see, for example, Running Multiple Versions of Microsoft Excel it is possible; I have had two versions installed for a year or more and have never run into an issue before. One thing that is always mentioned is installation order, i.e., the one you want to open files by default should be installed last. I wanted 2010 as my default so I had indeed installed 2003 first then, years later, 2010. So with this round of Windows updates, either it installed patches to 2010 before 2003, knocking out the file association, or the 2003 patch was more comprehensive, in the sense of touching the file association while the 2010 did not. In any case, after updates, double-clicking a .xls file opened 2003 rather than 2010. Web search indicated either: Use the file associations control panel to re-associate .xls files with the correct version of excel. I looked at this first, but it showed what seemed to be an unversioned "Excel" associated with .xls files so I did not check further. (This turned out to be an error on my part; more later.) Re-install versions in the desired order; I find this unreasonable. Run the repair option of the Office installer on the desired version; still seems more work than one should need. Run excel from the command line with "/regserver" on the one to be the default and "/unregserver" on the other. Good idea, but further search indicated that neither 2007 nor 2010 support "/regserver" contrary to some posts (e.g. Default Program With Multiple Versions Installed). Since this was a Windows Update issue and Microsoft provides free support for such, I inquired there as well, but succeeded only in getting the suggestion to uninstall all other versions, period; not acceptable to me. What worked for me was going back to the file associations control panel and manually selected the Office 2010 version of Excel. While it appeared no different in the control panel, it did fix the double-click issue. So if all it takes is this simple fix after an update, I can live with that. What I am wondering is: Has anyone seen any other problems related to having multiple versions of Office installed?

    Read the article

  • How to encrypt dual boot windows 7 and xp (bitlocker, truecrypt combo?) on sdd (recommended?)

    - by therobyouknow
    I would like to setup a dual boot Windows 7 and Windows XP laptop/notebook computer where each operation system's partition is fully encrypted. I would like to do this on a SSD - a 128Gb Crucial M4. My research Dual boot of truecrypt encrypted OSs on one drive (not possible - in Truecript 7.x at time of writing) This cannot be done on a standard Truecrypt setup - it will only support encrypting one of the operating systems. I have tried this and also read about it here on superuser.com However, I did see a solution here that uses grub4dos as the initial bootloader to chain to separate truecrypt encrypted OSs, in my case Windows 7 and Windows XP: http://yyzyyz.blogspot.co.uk/2010/06/truecrypt-how-to-encrypt-multiple.html I am not going to consider this solution as it relies upon some custom code for use in the bootloader that is provided by the author. I would prefer a solution that can be fully understood so that I can be sure that there is nothing undesirable occuring (i.e. malware or just simply bugs in the code). I would like to believe such a solution doesn't have those risks but I can't be sure. BitLocker and Truecrypt combination - possible solution? So I am now considering a combination of encryption programs: I now aim to encrypt Windows XP with Truecrypt and Windows 7 with BitLocker. Assuming Truecrypt bootloader can boot into non-Truecrypt OSs (e.g. via hitting Escape to go to another menu), then this solution may be viable. SSDs and Encryption (use fastest possible spinning hard disk instead (?)) I read on various superuser.com posts and elsewhere that current SSDs are not suited to whole drive encryption for various reasons: impact of performance algorithms that give SSDs advantage over spinning harddisks. Algorithms used in compression of data for example. Wear on the SSD, shortening its life Security issues whereby data is repeated, as indicated in some Truecrypt documentation So I am now considering not using SSD. But with the aim to have the fastest drive possible, I am considering using the Western Digital Scorpion black 2.5" 7200rpm harddisk as this appears to be top rated among spinning platter-based harddrives (don't work for Western Digital). Summary So to achieve whole drive encrypted dual boot Windows 7 and Windows XP with minimal performance impact I intend to use a combination of Truecrypt and Bitlocker on a top-rated conventional spinning platter-based harddisk. Questions Will my summary: achieve whole disk encryption of the dual-boot Windows XP, Windows 7? OR an you suggest a simpler solution, including one that only requires only Truecrypt (BitLocker not available on XP). Or another encryption tool, including paid-for? provide the highest performance. Am I correct to avoid using SDD with encryption for the reasons I discovered? Are the concerns about SSDs and encryption still very real (some articles I read go back to 2010) Thanks for your input!

    Read the article

  • I want to virtualize my workstation (Tier 1), Looking for Bare Metal Hypervisor for consumer grade components

    - by Chase Florell
    I find myself in this similar bind at least once a year. The bind whereby I'm either upgrading a motherboard, or an OS hard drive. It drives me crazy to have to reinstall Windows, Visual Studio, all my addins, reconfigure my settings etc... every single time. I have a layout and I like and I want to stick with it. My question is... Is there a Bare Metal Hypervisor on the market that will enable me to virtualize my consumer grade workstation? I really want to avoid Host/Client virtualization. Bare Metal is definitely a better way to go for my needs. Is this a good approach, or am I going to suffer some other undesirable side effects by doing this? Clarification My machine has very limited purposes. My primary use is Visual Studio 2010 Professional where I develop ASP.NET MVC Web Applications. The second piece of software that I use (that's system intensive) is Photoshop CS3. Beyond that, my applications are limited to Outlook, Internet Explorer, Firefox, Opera, Chrome, LinqPad, and various other (small) apps. Beyond this, I'm considering working on a node.js project and might run ubuntu on the same hypervisor if possible. System Specs: Gigabyte Motherboard Intel i7 920 12 GB Ram basic 500GB 7200RPM HDD for OS 4 VelociRaptors in Raid 1/0 for build disk Dual GTS250 (512MB) Graphics cards (non SLI) for quad monitors On a side note I also wouldn't be opposed to an alternative suggestion if the limitations are too great. I could install the ESXi (or Zen Server) on my box, and build a separate "thin client" to RDP into the virtual machine. It appears as though RDP supports dual monitors. Edit (Dec 9, 2011) It's been nearly a year since I first asked this question. Since then, there have been a lot of great strides in Hypervisor technology... AND MokaFive is now released for corporate use. I'd love to dig into this question a little more and find out if there is a solid BareMetal Hypervisor for workstations running consumer grade components (IE: not Dell, HP, Lenovo, Etc).

    Read the article

  • Postfix issues sending mail to addresses under domain located on server

    - by iamthewit
    I recently installed virtualmin on my nice shiny new rackspace cloud. Everything went seemlessly but I've been having some issues getting emails to send properly. The problem seems to be that the server can not send mail to email addresses where the domain is owned by my server. For example, on my server I run multiple virtual domains, lets call this one test.com. When I run the mail command from shell (mail [email protected]) I get the following back from my maillog: Oct 6 14:55:18 test postfix/pickup[8737]: DC1131612CC: uid=0 from= Oct 6 14:55:18 test postfix/cleanup[8769]: DC1131612CC: [email protected] Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: [email protected], size=353, nrcpt=1 (queue active) Oct 6 14:55:18 test postfix/error[8771]: DC1131612CC: [email protected], relay=none, delay=0, delays=0/0/0/0, dsn=5.0.0, status=bounced (User unknown in virtual alias table) Oct 6 14:55:18 test postfix/cleanup[8769]: DD07D1612D1: [email protected] Oct 6 14:55:18 test postfix/bounce[8772]: DC1131612CC: sender non-delivery notification: DD07D1612D1 Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: from=<, size=2268, nrcpt=1 (queue active) Oct 6 14:55:18 test postfix/qmgr[8738]: DC1131612CC: removed Oct 6 14:55:18 test postfix/local[8773]: DD07D1612D1: [email protected], relay=local, delay=0.03, delays=0/0/0/0.03, dsn=2.0.0, status=sent (delivered to command: /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME) Oct 6 14:55:18 test postfix/qmgr[8738]: DD07D1612D1: removed when I run mail [email protected] the message is sent and received perfectly fine. I'm a bit of a noob when it comes to servers, but I pick things up fairly quickly, so please excuse any incorrect terminology and my general noobiness. Any help would be greatly appreciated, I've been googling for quite a while but I haven't found a solution yet, I'll add a copy of my main.cf file in a response below cheers guys here is the reformatted postconf, do you want the reformatted main.cf file too, or is this enough? alias_database = hash:/etc/postfix/aliases alias_maps = hash:/etc/postfix/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 home_mailbox = Maildir/ html_directory = no mailbox_command = /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man myhostname = server.test.com newaliases_path = /usr/bin/newaliases.postfix readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES sample_directory = /usr/share/doc/postfix-2.3.3/samples sender_bcc_maps = hash:/etc/postfix/bcc sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous unknown_local_recipient_reject_code = 550 virtual_alias_maps = hash:/etc/postfix/virtual
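
    The bounce comes from virtual_alias_maps: Postfix accepts the domain but finds no entry for info@ or razvoj@ in /etc/postfix/virtual, the file Virtualmin normally maintains when a mailbox is created. A hand-edited sketch of what working entries look like; the right-hand sides are placeholders for the real local users:

        # /etc/postfix/virtual
        [email protected]      test-info
        [email protected]    test-razvoj

        # then rebuild the hash map and reload:
        postmap /etc/postfix/virtual
        postfix reload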

    Read the article

  • How to circumvent ISP Limiting "Unknown" traffic - (SSH)Proxy, VPN

    - by connery
    I am having issues using a proxy/VPN with my current ISP (Comenersol, Spain). From my point of view they limit traffic by protocol, or by traffic they "know" and "don't know". I'll explain my findings so far below.

    Setup:
    - Internet connection in Spain: ~400-420 KByte/sec (speedtest.net)
    - OpenVPN server in Sweden (pfSense): 100/100 Mbit, LZO compression, TCP, tun, AES-128
    - Squid proxy server in Sweden (pfSense): 100/100, same box as the VPN server; plain, no encryption, running in stealth mode to hide the use of the proxy

    NOT running OpenVPN or the Squid proxy, these are my findings:
    - When I download a file from my pfSense box in Sweden, I get maximum speed
    - When I run speedtest.net and choose any European server (including Swedish), I get max speed
    - When I download a torrent (on a non-default port above 10K, with encryption turned off), I get limited to ~100 KByte/sec
    - If I download something through HTTPS, I get max speed

    Running either the Squid proxy or the VPN, these are my findings:
    - When I download a file from my pfSense box in Sweden, I get ~100 KByte/sec
    - When I run speedtest.net and choose any European server (including Swedish and Spanish), I get ~100 KByte/sec
    - When I download a torrent, I get the same limitation, ~100 KByte/sec
    - When I download something through HTTPS, I get ~100 KByte/sec

    I verify the speeds above with the speedtest.net measurement and Firefox's measurement, in addition to having bmon running in a terminal in the background, so I am certain the reported speeds are correct. If I connect through a different ISP with the VPN or Squid proxy, I get better speeds (400 KByte/sec and up).

    In short: whenever I tunnel my traffic through Sweden, my Spanish ISP throttles it. I thought tunneling through Squid would solve the issue, since I would then no longer be hiding my traffic behind encryption, but that does not seem to be the case. wget and fetch give the same result; I did not try nc, but I assume it would too.

    Does anyone know how to circumvent this? I would very much like to get full speed with a Swedish IP, as that would let me stream TV at higher quality than today; 100 KByte/sec just does not cut it quality-wise. Thanks for reading, and I'm looking forward to your help.
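    One way to narrow down whether the throttling keys on port/protocol detection or on something else (destination IP, sustained volume) is to make the tunnel look like ordinary HTTPS by running OpenVPN over TCP port 443. This is only a diagnostic sketch, assuming nothing on the pfSense box already listens on 443; the hostname is a placeholder:

        # server side (the pfSense OpenVPN instance)
        proto tcp
        port 443

        # client side
        proto tcp
        remote vpn.example.se 443

    If the tunnel still caps out at ~100 KByte/sec on 443, the shaping is more likely based on the destination address or transfer volume than on recognising the protocol, and moving ports or changing encryption is unlikely to help.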

    Read the article

  • VIM "upgraded" to expandtab and tabstop=8 on Python files

    - by dotancohen
    After reinstalling my OS from Kubuntu 12.10 to Kubuntu 14.04, VIM has changed its behaviour when editing Python files. Before the reinstall, all file types had noexpandtab and tabstop=4 set; now in Python those values are expandtab and tabstop=8, checked both via VIM's behaviour and by asking VIM with :set foo?. Non-Python files retain the noexpandtab and tabstop=4 behaviour that I prefer. The .vim directory and .vimrc were not touched during the reinstall. It can be seen that no files in ~/.vim have been touched in months (with the exception of the irrelevant .netrwhist):

        bruno():~$ ls -lat ~/.vim
        total 68
        drwxr-xr-x 85 dotancohen dotancohen 12288 Aug 25 13:00 ..
        drwxr-xr-x 12 dotancohen dotancohen  4096 Aug 21 11:11 .
        -rw-r--r--  1 dotancohen dotancohen   268 Aug 21 11:11 .netrwhist
        drwxr-xr-x  2 dotancohen dotancohen  4096 Mar  6 18:31 plugin
        drwxr-xr-x  2 dotancohen dotancohen  4096 Mar  6 18:31 doc
        drwxrwxr-x  2 dotancohen dotancohen  4096 Nov 29  2013 syntax
        drwxrwxr-x  2 dotancohen dotancohen  4096 Nov 29  2013 ftplugin
        drwxr-xr-x  4 dotancohen dotancohen  4096 Nov 29  2013 autoload
        drwxrwxr-x  5 dotancohen dotancohen  4096 May 27  2013 after
        drwxr-xr-x  2 dotancohen dotancohen  4096 Nov  1  2012 spell
        -rw-------  1 dotancohen dotancohen   138 Aug 14  2012 .directory
        -rw-rw-r--  1 dotancohen dotancohen   190 Jul  3  2012 .VimballRecord
        drwxrwxr-x  2 dotancohen dotancohen  4096 May 12  2012 colors
        drwxrwxr-x  2 dotancohen dotancohen  4096 Mar 16  2012 mytags
        drwxrwxr-x  2 dotancohen dotancohen  4096 Feb 14  2012 keymap

    Though .vimrc has been touched since the reinstall, that was only me testing to see where the problem is. How can I tell what is setting expandtab and tabstop?

    Side note: I'm not even sure what I should read in the built-in help for this issue. I started with ":h plugin", but that did not help other than showing me that the following plugins are loaded (possibly relevant):

        standard-plugin-list   Standard plugins
        pi_getscript.txt       Downloading latest version of Vim scripts
        pi_gzip.txt            Reading and writing compressed files
        pi_netrw.txt           Reading and writing files over a network
        pi_paren.txt           Highlight matching parens
        pi_tar.txt             Tar file explorer
        pi_vimball.txt         Create a self-installing Vim script
        pi_zip.txt             Zip archive explorer

        LOCAL ADDITIONS:       local-additions
        DynamicSigns.txt       Using Signs for different things
        NrrwRgn.txt            A Narrow Region Plugin (similar to Emacs)
        fugitive.txt           A Git wrapper so awesome, it should be illegal
        indent-object.txt      Text objects based on indent levels.
        taglist.txt            Plugin for browsing source code
        vimwiki.txt            A Personal Wiki for Vim
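    Vim can report which script last set each option: prefix the query with :verbose. As a sketch (the path in the sample output is illustrative only; the actual file will vary by install):

        :verbose set expandtab? tabstop? softtabstop? shiftwidth?
        "   expandtab
        "         Last set from /usr/share/vim/vim74/ftplugin/python.vim

    If it does point at the stock Python filetype plugin (newer Vim runtimes ship PEP 8 style defaults for Python, which is worth verifying on this install), the usual override that survives upgrades is a personal after-plugin, e.g. a file named ~/.vim/after/ftplugin/python.vim containing:

        " restore the preferred indentation for Python buffers only
        setlocal noexpandtab tabstop=4 shiftwidth=4 softtabstop=4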

    Read the article

  • Recovering data from failed Raid configuration with 4 drives and two raid sets (Asus P6T / Intel ICH10r)

    - by user56365
    I've added the complete detailed version of my question below for those who can help, but I want to quickly summarize it first. I set up two RAID arrays using four WD Raptors: a striped set for the OS and a 1+0 set for crucial data. On one boot out of fifty a cable fell out, and after that the drive was no longer recognized in the array. While I was trying to fix it, another drive did the same. I now have two drives remaining, luckily with the parity information. I know the striped set is gone, but I need the data on the other set. Can anyone recommend a way to recover the data, or to fix whatever is preventing the RAID controller from recognizing the two drives, even though the utility screen lists them as still part of the configuration but "not found"?

    More details: I recently upgraded to an ASUS P6T motherboard with an Intel ICH10R RAID controller and changed my previous 4-drive RAID array from strictly a RAID 1+0 set to a RAID 0 set for the OS/page/scratch drive and a RAID 1+0 set for crucial data. I never had problems with this configuration after upgrading, even when a drive died and was replaced; I managed to rebuild the array fine. Unfortunately, this time around a cable came unattached and I booted the system as far as the RAID status screen, which showed a degraded error. This shouldn't have been a problem, but after I reattached the drive it was no longer recognized as a member of the array. Both drives actually show up as non-member disks. I've spent a very, very long time online trying to find information or support and haven't had much luck. After spending time trying to scan the drive for errors, damaged partition info, etc., another drive in the set decided it didn't want to be recognized as part of the array either.

    At this point I have two out of the four drives still functioning, but the RAID 1+0 array went from degraded to failed, and I must find a way to retrieve that data. I think the two drives still in the array have the parity information, because they show up as OS (110GB), BACKUP (80GB) and OS:1 (110GB), BACKUP (80GB) under Windows disk management; the other two are simply 74 GB raw unallocated. Is it possible to recover the data using those two drives only, and which tool would I use? Could it be a simple partition-table error, or some other error that is repairable with the hard drive utilities out there? I know the RAID 0 set is done for, but I would assume that because the right drives failed in a 1+0 configuration to preserve the data, I can retrieve it somehow.
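    Whichever recovery route you try, a common first step is to image the two surviving members read-only and work on the copies, so that an experiment with a RAID or partition utility cannot make things worse. A sketch using GNU ddrescue from a Linux live environment (the device names and destination paths are placeholders for the actual drives and a separate, larger disk):

        # clone each surviving RAID member before pointing any repair tool at it
        ddrescue -d -r3 /dev/sdb /mnt/backupdisk/member1.img /mnt/backupdisk/member1.map
        ddrescue -d -r3 /dev/sdc /mnt/backupdisk/member2.img /mnt/backupdisk/member2.map

    Partition-repair tools such as TestDisk, or a RAID-reconstruction utility, can then be run against the image files instead of the physical disks.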

    Read the article

< Previous Page | 420 421 422 423 424 425 426 427 428 429 430 431  | Next Page >