Search Results

Search found 6907 results on 277 pages for 'smart folders'.


  • Puppet gives SSL error because master is not running?

    - by Daniel Huger
    I started with two clean machines this time. My master is running 12.04 Version: 2.7.11-1ubuntu2 Depends: ruby1.8, puppetmaster-common (= 2.7.11-1ubuntu2) My client is 10.04 Version: 2.6.3-0ubuntu1~lucid1 Depends: puppet-common (= 2.6.3-0ubuntu1~lucid1), ruby1.8 Tutorial to set up Puppet: http://shapeshed.com/setting-up-puppet-on-ubuntu-10-04/ To connect master and client: http://shapeshed.com/connecting-clients-to-a-puppet-master/ The first time I tried to connect the master to the client it failed with an SSL_connect error. So I did rm -rf /etc/puppet/ssl/ to remove all the keys inside the ssl folders. It looked like it worked.... BUT client# puppet agent --server puppet --waitforce 60 --test /usr/lib/ruby/1.8/facter/util/resolution.rb:46: warning: Insecure world writable dir /etc/condor in PATH, mode 040777 /usr/lib/ruby/1.8/puppet/defaults.rb:67: warning: Insecure world writable dir /etc/condor in PATH, mode 040777 info: Creating a new SSL key for giab10 warning: peer certificate won't be verified in this SSL session info: Caching certificate for ca warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session info: Creating a new SSL certificate request for mybox123 info: Certificate Request fingerprint (md5): XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session info: Caching certificate for mybox123 err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed warning: Not using cache on failed catalog It cached but then it couldn't retrieve it. Let me stop here.... worrying I would mess something up. But let's check the master's status. * master is not running WoW.... ??? master# service puppetmaster start * Starting puppet master [OK] master# service puppetmaster status * master is not running I think the time is in sync. Well, we are behind a firewall so the port to sync time is disabled. I checked with date and they seem okay. What about the master not running? Is that the cause? Any help is appreciated. Thanks! /var/lib/puppet/log/masterhttp.log [2012-06-30 00:13:25] INFO WEBrick 1.3.1 [2012-06-30 00:13:25] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 00:13:25] WARN TCPServer Error: Address already in use - bind(2) [2012-06-30 00:19:40] INFO WEBrick 1.3.1 [2012-06-30 00:19:40] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 00:19:40] WARN TCPServer Error: Address already in use - bind(2) [2012-06-30 00:28:58] INFO WEBrick 1.3.1 [2012-06-30 00:28:58] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 00:28:58] WARN TCPServer Error: Address already in use - bind(2) [2012-06-30 15:31:25] INFO WEBrick 1.3.1 [2012-06-30 15:31:25] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 15:31:25] WARN TCPServer Error: Address already in use - bind(2) 1 S puppet 5186 1 0 80 0 - 29410 poll_s 15:44 ? 00:00:00 /usr/bin/ruby1.8 /usr/bin/puppet master --masterport=8140 4 S root 5235 5005 0 80 0 - 2344 pipe_w 15:45 pts/0 00:00:00 grep --color=auto puppet kill -9 5186 puppet master service puppetmaster status * master is not running I always have this error, but I always ignored it. http://pastebin.com/exbpArjv What could it mean? Time sync? Package not installed? Then how could we do puppetca in the first place?
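    For reference: the repeated "TCPServer Error: Address already in use - bind(2)" lines in masterhttp.log, together with the ps output, suggest a stray WEBrick puppet master is already holding port 8140, which would also explain why the init script reports "master is not running". A minimal diagnostic sketch, assuming that stray process is the culprit (hostnames come from the question above; the PID is a placeholder):

      # on the master: see what is bound to 8140, stop it, then restart via the init script
      sudo netstat -tlnp | grep 8140
      sudo kill <pid-of-the-stray-puppet-master>
      sudo service puppetmaster start && sudo service puppetmaster status

      # then re-issue the client certificate (2.6/2.7-era commands)
      sudo puppetca --clean mybox123                 # on the master
      sudo rm -rf /var/lib/puppet/ssl                # on the client; this is the default ssldir
      sudo puppet agent --server puppet --waitforcert 60 --test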

    Read the article

  • Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)?

    - by András Szepesházi
    I'm a web application developer using my notebook as a standalone development environment (WAMP stack). I just switched from a Core2-duo Vista 32 bit notebook with 2Gb RAM and SATA HDD, to an i5-2520M Win7 64 bit with 4Gb RAM and a 128 GB SSD (Corsair P3 128). My initial experience was what I expected, fast boot, quick load of all the applications (Eclipse now takes 5 seconds as opposed to 30s on my old notebook), overall great experience. Then I started to build up my development stack, both as LAMP (using VirtualBox with a debian guest) and WAMP (windows native apache + mysql + php). I wanted to compare those two. This all still worked out great, then I started to pull my projects into these stacks. And here came the nasty surprise: one of those projects produced far worse response times than on my old notebook (that was true for both the VirtualBox and WAMP stack). Apache, php and mysql configurations were practically identical in all environments. I started to do a lot of benchmarking and profiling, and here is what I've found: All general benchmarks (Performance Test 7.0, HDTune Pro, wPrime2 and some more) gave a big advantage to the new notebook. Nothing surprising here. Disc specific tests showed that read/write operations peaked around 380M/160M for the SSD, and all the different sized block operations also performed very well. Started apache performance benchmarking with Apache Benchmark for a small static html file (10 concurrent threads, 500 iterations). Old notebook: min 47ms, median 111ms, max 156ms New WAMP stack: min 71ms, median 135ms, max 296ms New LAMP stack (in VirtualBox): min 6ms, median 46ms, max 175ms Right here I don't get why the native WAMP stack performed so badly, but at least the LAMP environment brought the expected speed. Apache performance measurement for non-cached php content. The php runs a loop of 1000 and generates sha1(uniqid()) inside. Again, 10 concurrent threads, 500 iterations were used for the benchmark. Old notebook: min 0ms, median 39ms, max 218ms New WAMP stack: min 20ms, median 61ms, max 186ms New LAMP stack (in VirtualBox): min 124ms, median 704ms, max 2463ms What the hell? The new LAMP stack performed miserably, and even the new native WAMP was outperformed by the old notebook. php + mysql test. The test consists of connecting to a database and reading a single record from a table using INNER JOIN on 3 more (indexed) tables, repeated 100 times within a loop. Databases were identical. 10 concurrent threads, 100 iterations were used for the benchmark. Old notebook: min 1201ms, median 1734ms, max 3728ms New WAMP stack: min 367ms, median 675ms, max 1893ms New LAMP stack (in VirtualBox): min 1410ms, median 3659ms, max 5045ms And the same test with concurrency set to 1 (instead of 10): Old notebook: min 1201ms, median 1261ms, max 1357ms New WAMP stack: min 399ms, median 483ms, max 539ms New LAMP stack (in VirtualBox): min 285ms, median 348ms, max 444ms Strictly for my purposes, as I'm using a self contained development environment (= low concurrency), I could be satisfied with the second test's result. Though I have no idea why the VirtualBox environment performed so badly with higher concurrency. Finally I performed a test of including many php files. The application that I mentioned at the beginning, the one that was performing so badly, has a heavy bootstrap and loads hundreds of small library and configuration files while initializing. So this test does nothing else, it just includes about 100 files.
Concurrency set to 1, 100 iterations: Old notebook: min 140ms, median 168ms, max 406ms New WAMP stack: min 434ms, median 488ms, max 604ms New LAMP stack (in VirtualBox): min 413ms, median 1040ms, max 1921ms Even if I consider that VirtualBox reached those files via shared folders, which slows things down a bit, I still don't see how the old notebook could outperform both new configurations so heavily. And I think this is the real root of the slow performance, as the application uses even more includes, and the whole bootstrap occurs several times within a page request (for each ajax call, for example). To sum it up, here I am with a brand new high-performance notebook that loads the same page in 20 seconds that my old notebook can do in 5-7 seconds. Needless to say, I'm not a very happy person right now. Why do you think I experience these poor performance values? What are my options to remedy this situation?
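    For anyone reproducing these numbers, they read like plain ApacheBench runs; a sketch of the presumed invocations (URLs and file names are placeholders, and the exact flags are an assumption rather than something stated in the question):

      ab -c 10 -n 500 http://localhost/static.html      # small static html file
      ab -c 10 -n 500 http://localhost/sha1-loop.php    # the 1000x sha1(uniqid()) loop
      ab -c 10 -n 100 http://localhost/db-join.php      # the php + mysql INNER JOIN test
      ab -c 1  -n 100 http://localhost/db-join.php      # same test rerun at concurrency 1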

    Read the article

  • Moving users folder on Windows-7 to another partition - bad idea?

    - by Donat
    Hi, I'd like to re-submit here a question posted by Benjol on Aug 17 at 5:57, "Moving users folder on Windows Vista to another partition - bad idea?" (I can't post more than one link until I earn "10 reputation" and removed my "answer" there to post my follow-up questions here). I am anxiously getting ready at long last to carry out a clean install (using the custom install option) from Vista to Windows-7 Home Premium 64bit with the free upgrade I received late October. For my Vista system I successfully set up a multi-partition scheme last summer with Users and Program Data on a different partition than the operating system (see link below, and its subsequent links in my comment for details). http://tuts4tech.net/2009/08/05/windows-7-move-the-users-and-program-files-directories-to-a-different-partition/comment-page-1/#comment-562 I was planning a similar set-up for windows 7, a little more streamlined, with OS, Program Files on C:, Users and Program Data on D:, and TV media recording on a separate partition. Reading the question submitted by Benjol, I am second guessing too. Is moving Users and Program Data to a different partition than the default primary partition with OS and Program Files such a good idea? The couple of people I talked to at the official Microsoft Windows 7 booth at CES 2010 gave the same answer when I mentioned my intention of moving the Users profile folder to another partition. In a nutshell, they all told me that they used to do this in XP and less in Vista but not anymore with Windows 7... "It is stable, after two months still no problem" I had the feeling it was a scripted answer to emphasize how Windows 7 is so stable and efficient... (Will a Windows-7 system not become bogged down over the course of several months to a year or two? Only time will tell) Long story short, I share the same view that Benjol expressed with respect to being "able to backup and restore system and user data independently." I just received a 2TB usb2, eSATA external hard drive as a back-up drive, which includes NTI Shadow 4 (4.1.0.150) as a back-up solution. I took note of the issue with NTUSER.DAT and I will read more about Volume Shadow Copy Service (VSS) for Windows 7. I am willing to put in the effort if placing Users and Program Data on a different partition would allow me to restore a fresher OS+Program image when the system gets bogged down. Questions: Is it such a bad idea? What is the "easy route" referred to by Benjol in his post? Is it to just relocate folders to another partition using the Folder property tool? (It is not practical for several users and might not provide a straightforward restore process of just OS and Program Files when needed.) I am starting to learn about Windows 7 libraries. Would Windows 7 libraries be another alternative to achieve this? All this reading to decide how to organize the partition scheme for my custom system is starting to be confusing. I apologize for this lengthy question. It is my first day here on SuperUser and I am just learning how different from a discussion thread it is. Thank you in advance for all your suggestions and comments. Donat

    Read the article

  • Backup a hosted Sharepoint

    - by David Mackintosh
    One of my customers has outsourced their Sharepoint and Exchange services to a hosted services provider. I believe it is a Sharepoint 2007 service. It is a shared hosting solution, so we do not have any kind of access to the server itself; we only have user-level and sharepoint-administrator-level access to the Sharepoint application. They have come to the point where they would like to have a copy of everything that is on the Sharepoint server. I have downloaded the Office Sharepoint Designer 2007, and it features three (!) ways to backup a Sharepoint server, none (!) of which work for me: File-Export-Personal Web Package: When selecting everything, it calculates a negative size. Barfs with No "content-type" in CGI environment error. File-Export-Sharepoint Template: barfs with a A World Wide Web browser, such as Windows Internet Explorer, is required to use this feature error. Site-Administration-Backup Web Site: wants to create the backup .cmp file on the sharepoint server itself. I don't have access to any servers on the same network so I can't redirect it to any form of the suggested \\server\place. Barfs with a The Web application at $URL could not be found. [...] error. Possibly moot because Google tells me that bad things happen using OSD to back up sites larger than 24MB (which this site is most definitely). So I called the helpdesk of the outsource provider, and got told that they recommend using OSD, but no they don't actually provide any application support for OSD (not that I blame them for that), but they could do a stsadm.exe backup and provide us with that, and OSD should be able to read the resulting cmp file. Then for authorization reasons they had my customer call them directly (since I can't authorize such an operation), and they told him that he didn't want a stsadm.exe backup, he wanted to get into an 'explorer view' and deal with things that way (they were vague). Google hasn't been much help in figuring out what an 'explorer view' is, let alone how I bring one up. The end goal of this operation is to have a backup of the site as it exists (hopefully today, but shortly anyways) in such a format that we don't need another sharepoint server to restore it to. Ie we'd like to be able to pick individual content directly out of this backup. We are not excessively concerned with things like formatting. We just want the documents. This is a fairly complex site with multiple subsites and multiple folders per subsite, so sitting there and manually downloading each file isn't really going to happen if there is a better easier way. So, my questions: Is the stsadm.exe backup what I want? If not, what do I want? If I manage to convince them that I do want the stsadm.exe backup, can I pick files out of the resulting backup file with OSD? If OSD isn't going to let me extract individual files, is there a tool I can use that can?

    Read the article

  • Bypass cache for mobile user agents, VARNISH+NGINX+W3CACHE

    - by Mike McGhee
    Right now I'm running Wordpress w/ W3 Cache on nginx with varnish front end. I'm trying to use the WP Touch Pro plugin for wordpress to display mobile sites, but it is not working. Shows the desktop theme still. I've put the mobile user agents in the rejected user agents box in w3 cache. Here is the nginx config w3 cache spit out: BEGIN W3TC Page Cache cache location ~ /wp-content/w3tc/pgcache.*html$ { expires modified 3600s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; add_header Vary "Accept-Encoding, Cookie"; } location ~ /wp-content/w3tc/pgcache.*gzip$ { gzip off; types {} default_type text/html; expires modified 3600s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; add_header Vary "Accept-Encoding, Cookie"; add_header Content-Encoding gzip; } # END W3TC Page Cache cache # BEGIN W3TC Browser Cache gzip on; gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon; location ~ \.(css|js|htc)$ { expires 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; } location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 3600s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; } location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.4"; } # END W3TC Browser Cache # BEGIN W3TC Minify core rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last; rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; # END W3TC Minify core # BEGIN W3TC Page Cache core rewrite ^(.*\/)?w3tc_rewrite_test$ $1?w3tc_rewrite_test=1 last; set $w3tc_rewrite 1; if ($request_method = POST) { set $w3tc_rewrite 0; } if ($query_string != "") { set $w3tc_rewrite 0; } if ($http_host != "mysite.com") { set $w3tc_rewrite 0; } set $w3tc_rewrite2 1; if ($request_uri !~ \/$) { set $w3tc_rewrite2 0; } if ($request_uri ~* "(sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { set $w3tc_rewrite2 1; } if ($w3tc_rewrite2 != 1) { set $w3tc_rewrite 0; } set $w3tc_rewrite3 1; if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|\/feed\/|wp-.*\.php|index\.php)") { set $w3tc_rewrite3 0; } if ($request_uri ~* "(wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") { set $w3tc_rewrite3 1; } if ($w3tc_rewrite3 != 1) { set $w3tc_rewrite 0; } if ($http_cookie ~* "(comment_author|wp\-postpass|wordpress_\[a\-f0\-9\]\+|wordpress_logged_in)") { set $w3tc_rewrite 0; } if ($http_user_agent ~* "(W3\ Total\ Cache/0\.9\.2\.4|iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry9500|blackberry9520|blackberry9530|blackberry9550|blackberry\ 9800|blackberry\ 9780|webos|s8000|bada)") { set $w3tc_rewrite 0; } set $w3tc_ua ""; if ($http_user_agent ~* "(acer\ s100|android|archos5|blackberry9500|blackberry9530|blackberry9550|blackberry\ 9800|cupcake|docomo\ ht\-03a|dream|htc\ hero|htc\ magic|htc_dream|htc_magic|incognito|ipad|iphone|ipod|kindle|lg\-gw620|liquid\ build|maemo|mot\-mb200|mot\-mb300|nexus\ one|opera\ mini|samsung\-s8000|series60.*webkit|series60/5\.0|sonyericssone10|sonyericssonu20|sonyericssonx10|t\-mobile\ mytouch\ 3g|t\-mobile\ opal|tattoo|webmate|webos)") { set 
$w3tc_ua _high; } set $w3tc_ref ""; set $w3tc_ssl ""; set $w3tc_enc ""; if ($http_accept_encoding ~ gzip) { set $w3tc_enc _gzip; } set $w3tc_ext ""; if (-f "$document_root/wp-content/w3tc/pgcache/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl.html$w3tc_enc") { set $w3tc_ext .html; } if ($w3tc_ext = "") { set $w3tc_rewrite 0; } if ($w3tc_rewrite = 1) { rewrite .* "/wp-content/w3tc/pgcache/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl$w3tc_ext$w3tc_enc" last; } # END W3TC Page Cache core And here is what I have in my varnish vcl: sub vcl_recv { # Add a unique header containing the client address remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Device detection set req.http.X-Device = "desktop"; if ( req.http.User-Agent ~ "iP(hone|od|ad)" || req.http.User-Agent ~ "Android" ) { set req.http.X-Device = "smart"; } elseif ( req.http.User-Agent ~ "(SymbianOS|BlackBerry|SonyEricsson|Nokia|SAMSUNG|^LG)" ) { set req.http.X-Device = "cell"; } Any help is greatly appreciated, I've been banging my head against this for 2 days...
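    One quick probe (not a fix) is to compare response headers for a desktop and a mobile User-Agent from the shell; the header names below are the ones W3TC and Varnish typically emit, and mysite.com is the placeholder host from the config above:

      curl -sI http://mysite.com/ | egrep -i 'x-powered-by|x-varnish|age|vary'
      curl -sI -A 'Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X)' http://mysite.com/ | egrep -i 'x-powered-by|x-varnish|age|vary'

    If both requests come back with the same cached object (same Age / X-Varnish pattern), Varnish is handing one cached page to every client before the W3TC user-agent rules ever run, so the cache would need to be keyed or varied on the X-Device header that vcl_recv sets.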

    Read the article

  • Wordpress Permissions OS X & MAMP

    - by Matt2020
    I have installed several local versions of Wordpress for development purposes. After the install I can create posts, pages and edit admin options. However, as soon as I try to upload images, which would be saved in wp-content/uploads, I get an error: Upload Error: Unable to create directory ...../blog/wp-content/uploads/2011/05. Is its parent directory writable by the server? Looks like the MAMP server runs as user _www. The blog directory is owned by User1 and the group User1. _www is not in the User1 group, should it be? I do not want to chmod 777 or 765 on the directories just to get it going. Googled up a couple of references: http://codex.wordpress.org/Changing_File_Permissions in "Permission Scheme for WordPress" All files should be owned by your user (ftp) account on your web server, and should be writable by that account. On shared hosts, files should never be owned by the webserver process itself (sometimes this is www, or apache, or nobody user). Any file that needs write access from WordPress should be owned or group-owned by the user account used by the WordPress (which may be different than the server account). For example, you may have a user account that lets you FTP files back and forth to your server, but your server itself may run using a separate user, in a separate usergroup, such as dhapache or nobody. If WordPress is running as the FTP account, that account needs to have write access, i.e., be the owner of the files, or belong to a group that has write access. In the latter case, that would mean permissions are set more permissively than default (for example, 775 rather than 755 for folders, and 664 instead of 644). User and group are User1 (which is admin). Running "ps aux | grep httpd" shows httpd running as _www. So I think this means Wordpress is running as user _www. So the advice seems contradictory: "files should never be owned by the webserver process" i.e. _www, but then later it says "Any file that needs write access from WordPress should be owned or group-owned by the user account used by the WordPress". So isn't this _www again? Another search found this url http://dancingengineer.com/computing/2009/07/how-to-install-wordpress-on-mac-os-x-leopard which says: My preferred way to do this is to change the group of the wordpress directory and its contents to _www and give write permissions to the group. Keep the owner as your "username". $ cd /Users/"username"/Sites $ sudo chown -R username:_www wordpress_directory $ sudo chmod -R g+w wordpress_directory However, when I tried this, it did not work for automatic upgrades to newer versions of WordPress although it worked for automatically updating the .htaccess file for pretty permalinks. It is not entirely clear to me what should be done. This last suggestion seems to be saying change the group from User1 to _www and give the group write access, but Wordpress upgrades won't work. Is this the right solution? I would have thought there would be a clear way to set this up on OS X 10.6. It would be great if there was a plugin that could run a script for each of the main OS's that Wordpress runs on.
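    A middle ground between the two quoted recommendations is to keep User1 as the owner of the whole install and only make the directories WordPress actually writes to group-writable by _www. A minimal sketch, assuming the blog lives under /Users/User1/Sites/blog (the path is a guess; User1 and _www come from the question):

      cd /Users/User1/Sites/blog
      sudo mkdir -p wp-content/uploads
      sudo chown -R User1:_www wp-content
      sudo find wp-content -type d -exec chmod 775 {} \;    # group-writable folders, per the Codex scheme above
      sudo find wp-content -type f -exec chmod 664 {} \;    # group-writable files

    This is usually enough for media uploads and .htaccess rewrites; one-click core upgrades may still balk because the PHP process does not own the files, which matches what the dancingengineer article reports.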

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era Netware 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes: RAID 1, 2 17GB scsi disks, Seagate ST318417W RAID 5, 3 4GB scsi disks, 2 Seagate ST34573W and 1 ST34572W. We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that and so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close nature of these failures I have serious doubts that I'll be able to avoid catastrophic failure from this server through the November target as is without restoring the RAID redundancy — it'll only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact match "spares" lying around for both drives, but the spares are in unknown condition. I tried swapping just them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable. As for the RAID controller itself, there is utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot in to Netware, at which point I can use CI/O Array Management Software Version 2.0 to actually look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone. Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good. As for finding reliable spares, I have no clue where to even begin looking to find a new 4GB scsi drive, or even which exact scsi system I'm looking for, as it's gone through a few different iterations over time. Another option is to migrate this to a virtual machine (hyper-v), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower level knowledge of netware and dos than I ever developed, or if I did have since forgotten (I'm not exactly a dos neophyte, either). Part of my problem is this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well. As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a hyper-v vm from an old netware server, a line on a floppy with better software for the RAID controller, recommendation on a good Novell consultant in Nebraska that would be able to put things right, a whole other option I haven't considered yet, etc. 
Update: For backups, we have good (recently verified via restore) backups of the data only -- nothing for the software that actually runs things. Update 2: Just a progress report that I currently have a working Netware 3.12 install in VMWare Virtual Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and netware volumes on my existing server, and figuring out from that information what modules need added to netware, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then. Final Update (Jan 5, 2011): I was able to get spares working in both raid arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMWare Server 2.0. The spare can run and use our erp software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • Why is my mdadm raid-1 recovery so slow?

    - by dimmer
    On a system I'm running Ubuntu 10.04. My raid-1 restore started out fast but quickly became ridiculously slow (at this rate the restore will take 150 days!): dimmer@paimon:~$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid1 sdc1[2] sdb1[1] 1953513408 blocks [2/1] [_U] [====>................] recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec unused devices: <none> Eventhough I have set the kernel variables to reasonably quick values: dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min 1000000 dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max 100000000 I am using 2 2.0TB Western Digital Hard Disks, WDC WD20EARS-00M and WDC WD20EARS-00J. I believe they have been partitioned such that their sectors are aligned. dimmer@paimon:/sys$ sudo parted /dev/sdb GNU Parted 2.2 Using /dev/sdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00M (scsi) Disk /dev/sdb: 2000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 (parted) unit s (parted) p Number Start End Size File system Name Flags 1 2048s 3907028991s 3907026944s ext4 (parted) q dimmer@paimon:/sys$ sudo parted /dev/sdc GNU Parted 2.2 Using /dev/sdc Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00J (scsi) Disk /dev/sdc: 2000GB Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 I am beginning to think that I have a hardware problem, otherwise I can't imagine why the mdadm restore should be so slow. I have done a benchmark on /dev/sdc using Ubuntu's disk utility GUI app, and the results looked normal so I know that sdc has the capability to write faster than this. I also had the same problem on a similar WD drive that I RMAd because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although there are no SMART values showing them yet. Any ideas? Thanks. As requested, output of top sorted by cpu usage (notice there is ~0 cpu usage). iowait is also zero which seems strange: top - 11:35:13 up 2 days, 9:40, 3 users, load average: 2.87, 2.58, 2.30 Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3096304k total, 1482164k used, 1614140k free, 617672k buffers Swap: 1526132k total, 0k used, 1526132k free, 535416k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 45 root 20 0 0 0 0 S 0 0.0 2:17.02 scsi_eh_0 1 root 20 0 2808 1752 1204 S 0 0.1 0:00.46 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.17 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/1 ... 
dmesg errors, definitely looking like hardware: [202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [202884.007015] ata5.00: failed command: FLUSH CACHE EXT [202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 [202884.013730] res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout) [202884.033667] ata5.00: status: { DRDY } [202884.040329] ata5: hard resetting link [202889.400050] ata5: link is slow to respond, please be patient (ready=0) [202894.048087] ata5: COMRESET failed (errno=-16) [202894.054663] ata5: hard resetting link [202899.412049] ata5: link is slow to respond, please be patient (ready=0) [202904.060107] ata5: COMRESET failed (errno=-16) [202904.066646] ata5: hard resetting link [202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [202905.849178] ata5.00: configured for UDMA/133 [202905.849188] ata5: EH complete [203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [203899.007096] ata5.00: failed command: IDENTIFY DEVICE [203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [203899.013843] res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout) [203899.041232] ata5.00: status: { DRDY } [203899.048133] ata5: hard resetting link [203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [203899.826062] ata5.00: configured for UDMA/133 [203899.826079] ata5: EH complete [204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204375.007421] ata5.00: failed command: IDENTIFY DEVICE [204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204375.014800] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204375.044374] ata5.00: status: { DRDY } [204375.051842] ata5: hard resetting link [204380.408049] ata5: link is slow to respond, please be patient (ready=0) [204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204384.449938] ata5.00: configured for UDMA/133 [204384.449955] ata5: EH complete [204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204395.988140] ata5.00: failed command: IDENTIFY DEVICE [204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204395.988149] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204395.988151] ata5.00: status: { DRDY } [204395.988156] ata5: hard resetting link [204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204399.330487] ata5.00: configured for UDMA/133 [204399.330503] ata5: EH complete
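    The repeated ata5 resets point at a drive, cable or SATA port problem rather than at md tuning, so the usual first step is to work out which disk sits on ata5 and check its SMART counters before touching the speed limits. A short checklist in shell form (device names are the ones from the question):

      ls -l /sys/block/sd*/device                     # the symlink targets usually show which ataN each disk sits on
      sudo smartctl -a /dev/sdc | egrep -i 'reallocated|pending|crc|raw_read_error'
      dmesg | grep -i ata5 | tail -n 20

      # only once the hardware checks out, raise the resync floor (values are examples)
      echo 50000  | sudo tee /proc/sys/dev/raid/speed_limit_min
      echo 200000 | sudo tee /proc/sys/dev/raid/speed_limit_max
      watch -n 5 cat /proc/mdstat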

    Read the article

  • Howto Nginx + git-http-backend + fcgiwrap (Debian Squeeze)

    - by brainsqueezer
    I am trying to setup git-http-backend with Nginx but after 24 hours wasting time and reading everything I could I think this config should work but doesn't. server { listen 80; server_name mydevserver; access_log /var/log/nginx/dev.access.log; error_log /var/log/nginx/dev.error.log; location / { root /var/repos; } location ~ /git(/.*) { gzip off; root /usr/lib/git-core; fastcgi_pass unix:/var/run/fcgiwrap.socket; include /etc/nginx/fastcgi_params2; fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend; fastcgi_param DOCUMENT_ROOT /usr/lib/git-core/; fastcgi_param SCRIPT_NAME git-http-backend; fastcgi_param GIT_HTTP_EXPORT_ALL ""; fastcgi_param GIT_PROJECT_ROOT /var/repos; fastcgi_param PATH_INFO $1; #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; } } Content of /etc/nginx/fastcgi_params2 fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param REMOTE_USER $remote_user; # required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; but config seems not working $ git clone http://mydevserver/git/myprojectname/ Cloning into myprojectname... warning: remote HEAD refers to nonexistent ref, unable to checkout. and I can request an unexistant project and I will get the same answer $ git clone http://mydevserver/git/thisprojectdoesntexist/ Cloning into thisprojectdoesntexist... warning: remote HEAD refers to nonexistent ref, unable to checkout. If I change root to /usr/lib I will get a 403 error and this will be reported to nginx error log: 2011/11/23 15:52:46 [error] 5224#0: *55 FastCGI sent in stderr: "Cannot get script name, is DOCUMENT_ROOT and SCRIPT_NAME set and is the script executable?" while reading response header from upstream, client: 198.168.0.4, server: mydevserver, request: "GET /git/myprojectname/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "mydevserver" My main trouble is with the correct root value with this configuration. Maybe there are some permissions problems. Notes: /var/repos/ is owned by www-data and contains folders bit git bare repos. All this works perfectly using ssh. If I go with my browser to http://mydevserver/git/myproject/info/refs it is answered by git-http-backend asking me to send a command. /var/run/fcgiwrap.socket has 777 permissions.
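    Getting the identical "nonexistent ref" warning for a repository that does not exist suggests the requests may never be reaching git-http-backend at all. One way to check from the shell is to request the smart-HTTP ref advertisement directly; a working backend answers with Content-Type: application/x-git-upload-pack-advertisement, while a plain text or HTML response means the request was handled as a dumb file fetch or an error page (host, repo name and paths are the ones from the question):

      curl -si 'http://mydevserver/git/myprojectname/info/refs?service=git-upload-pack' | head -n 15

      # sanity checks on the CGI side
      ls -l /usr/lib/git-core/git-http-backend                        # must exist and be executable
      sudo -u www-data test -r /var/repos/myprojectname/HEAD && echo readable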

    Read the article

  • What NAS setup for two-way syncing over the internet?

    - by Jamse
    I have family living a few hours away and have a lot of files that I would like to share - especially lots of folders of digital photos, but also documents etc. - partially so they can see them, partially so I can have access when I visit them and partially for backup / redundancy purposes. My current hard drives on my main machine are getting pretty full anyway, and I have a MythTV box where my music is currently stored, so I was thinking of getting a NAS anyway. And at the other end my family have a few computers, so they would probably benefit from a NAS too. My general idea (though I'm willing to shift on this if there are any bright ideas about other ways of achieving my objectives) is to get a matching pair of NASs and have them sync over the internet. (To cut down on bandwidth use I would get them in sync locally to start with.) Having read around as best I can it seems that syncing over the internet is generally only a feature on quite high end units. However, I have seen that QNAP seem to feature this on their TS-110 and TS-210 units, which might work (they call it "remote replication"). They seem pretty reasonably priced for what they are, but of course with buying 2 of them and then adding the drives (say 1TB or 2TB each) I'd be looking at about £400 total. So, I'm looking for recommendations really. I don't want to spend more than the QNAPs would cost me, but any other ideas would be most appreciated. I am comfortable with technology and tinkering around, but I don't have as much time for that as I would like, so I guess I would favour solutions that require less tinkering rather than more (even though that's less fun!). Any thoughts would be welcome, as would any comments from people who have used the QNAP boxes for this. Thanks in advance. Some specifications: Two-way syncing. Changes made at either end should be synced to the other. There shouldn't be one unit that is effectively a read-only mirror of the other. Not real time. The syncing doesn't need to be real time - if it updated, say, daily overnight that would be fine. Set and forget. I would prefer minimal user interaction once set up - it would be great if syncs were scheduled and automatic. OS independence. I am running Windows XP plus an Ubuntu-based MythTV box. At the other end there are Windows 7 and Windows XP machines, plus a networked TV set top box which I think can play files off the network. Machine independence. I would favour a system that is self-contained, i.e. not reliant on any particular PC being switched on. If the system had enough else going for it I could perhaps work around it at this end, where I only have one PC that's used as such, but it would be harder at the other where there are at least two PCs that might be accessing the files. Notifications. I guess things like getting an email notification if the syncing fell over for any reason would be useful, though it's not a deal breaker. Update I've been digging some more and it looks like QNAP's Remote Replication function is actually just Rsync, so only really suitable for one-way syncing. I've posted on their forum to double check, but I think that's the case. In which case, I think the focus of my question is now either: do any reasonably-priced NASs support bidirectional syncing over the internet?, or has anyone had any luck installing onto NASs for this purpose? (Also, updated question to clarify that I'm after two-way syncing.)
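    If either end can run ordinary Linux tools (an always-on box, or a NAS that accepts extra packages), the two-way requirement does not have to come from the NAS vendor's firmware. A hedged sketch using Unison over SSH, which reconciles changes in both directions and can run unattended from cron (host and paths are placeholders):

      unison /srv/share ssh://family-nas//srv/share -batch -auto -times
      # nightly at 02:30 from crontab; -silent keeps it quiet unless something fails
      # 30 2 * * *  unison /srv/share ssh://family-nas//srv/share -batch -auto -times -silent

    Cron's usual mail-on-output behaviour then covers the notification requirement, since errors are the only thing that gets printed.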

    Read the article

  • Need help identifying a nasty rootkit in Windows

    - by goofrider
    I have a nasty rootkit that no tools seem to be able to identify. I know for sure it's a rootkit, but I can't figure out which rootkit it is. Here's what I gathered so far: It creates multiple copies of itself in %HOME%\Local Settings\Temp with names like Q.EXE, IAJARZ.exe, etc., and installs them as hidden services. These EXE have SysInternals identifiers in them so they're definitely rootkits. It hooks very deep into the system, including file read/write, security policies, registry read/write, and possibly WinSock/TCP/IP. When going to Sophos.com to download their software, the rootkit injects something called Microsoft Ajax Toolkit into the page, which injects code into the email submission form in order to redirect it. (EDIT: I might have panicked. Looks like Sophos does use an AJAX email form, their form is just broken on Chrome so it looked like a mail form injection attack, the link is http://www.sophos.com/en-us/products/free-tools/virus-removal-tool/download.aspx ) Super-Antispyware found a lot of spyware cookies, in the name of .kaspersky.2o7.net, etc. (just check 2o7.net, looks like it's a legit ad company) I tried comparing DNS lookups from the infected systems and from systems in other physical locations, no DNS redirections it seems. I used dd to copy the MBR and compared it with the MBR provided by the ms-sys package, no differences so it's not infecting the MBR. No antivirus or rootkit scanner is able to identify it. Most of them can't even find it. I tried scanning in-situ (normal mode), in safe mode, and booting to a linux live CD. Scanners used: Avast, Sophos anti rootkit, Kaspersky TDSSKiller, GMER, RootkitRevealer, and many others. Kaspersky reported some unsigned system files that ought to be signed (e.g. tcpip.sys), and reported a number of MD5 mismatches. But otherwise couldn't identify anything based on signature. When running Sysinternal RootkitRevealer and Sophos AntiRootkit, CPU usage goes up to 100% and gets stuck. The rootkit is blocking them. When trying to run/install HiJackThis, RootkitRevealer and some other scanners, it tells me the system security policy prevents running/installing it. The list of malicious activities goes on and on. Here's a sample of logs from all my scans. In particular, aswSnx.SYS, apnenfno.sys and PROCMON20.SYS have a huge number of hooks. It's hard to tell if the rootkit replaced legit program files like aswSnx.SYS (from Avast) and PROCMON20.SYS (from Sysinternal Process Monitor). I can't find whether apnenfno.sys is from a legit program. Help to identify it is appreciated. Trend Micro RootkitBuster ------ [HIDDEN_REGISTRY][Hidden Reg Value]: KeyPath : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\sptd\Cfg Root : 586bfc0 SubKey : Cfg ValueName : g0 Data : 38 23 E8 D0 BF F2 2D 6F ...
ValueType : 3 AccessType: 0 FullLength: 61 DataSize : 32 [HOOKED_SERVICE_API]: Service API : ZwCreateMutant Image Path : C:\WINDOWS\System32\Drivers\aswSnx.SYS OriginalHandler : 0x8061758e CurrentHandler : 0xaa66cce8 ServiceNumber : 0x2b ModuleName : aswSnx.SYS SDTType : 0x0 [HOOKED_SERVICE_API]: Service API : ZwCreateThread Image Path : c:\windows\system32\drivers\apnenfno.sys OriginalHandler : 0x805d1038 CurrentHandler : 0xaa5f118c ServiceNumber : 0x35 ModuleName : apnenfno.sys SDTType : 0x0 [HOOKED_SERVICE_API]: Service API : ZwDeleteKey Image Path : C:\WINDOWS\system32\Drivers\PROCMON20.SYS OriginalHandler : 0x80624472 CurrentHandler : 0xa709b0f8 ServiceNumber : 0x3f ModuleName : PROCMON20.SYS SDTType : 0x0 HiJackThis ------ O23 - Service: JWAHQAGZ - Sysinternals - www.sysinternals.com - C:\DOCUME~1\jeff\LOCALS~1\Temp\JWAHQAGZ.exe O23 - Service: LHIJ - Sysinternals - www.sysinternals.com - C:\DOCUME~1\jeff\LOCALS~1\Temp\LHIJ.exe Kaspersky TDSSKiller ------ 21:05:58.0375 3936 C:\WINDOWS\system32\ati2sgag.exe - copied to quarantine 21:05:59.0217 3936 ATI Smart ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:05:59.0342 3936 C:\WINDOWS\system32\BUFADPT.SYS - copied to quarantine 21:05:59.0856 3936 BUFADPT ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:05:59.0965 3936 C:\Program Files\CrashPlan\CrashPlanService.exe - copied to quarantine 21:06:00.0152 3936 CrashPlanService ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0246 3936 C:\WINDOWS\system32\epmntdrv.sys - copied to quarantine 21:06:00.0433 3936 epmntdrv ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0464 3936 C:\WINDOWS\system32\EuGdiDrv.sys - copied to quarantine 21:06:00.0526 3936 EuGdiDrv ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0604 3936 C:\Program Files\Common Files\Macrovision Shared\FLEXnet Publisher\FNPLicensingService.exe - copied to quarantine 21:06:01.0181 3936 FLEXnet Licensing Service ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0321 3936 C:\Program Files\AddinForUNCFAT\UNCFATDMS.exe - copied to quarantine 21:06:01.0430 3936 OTFSDMS ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0492 3936 C:\WINDOWS\system32\DRIVERS\tcpip.sys - copied to quarantine 21:06:01.0539 3936 Tcpip ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0601 3936 C:\DOCUME~1\jeff\LOCALS~1\Temp\TULPUWOX.exe - copied to quarantine 21:06:01.0664 3936 HKLM\SYSTEM\ControlSet003\services\TULPUWOX - will be deleted on reboot 21:06:01.0664 3936 C:\DOCUME~1\jeff\LOCALS~1\Temp\TULPUWOX.exe - will be deleted on reboot 21:06:01.0664 3936 TULPUWOX ( UnsignedFile.Multi.Generic ) - User select action: Delete 21:06:01.0757 3936 C:\WINDOWS\system32\Drivers\usbaapl.sys - copied to quarantine 21:06:01.0866 3936 USBAAPL ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0913 3936 C:\Program Files\VMware\VMware Player\vmware-authd.exe - copied to quarantine 21:06:02.0443 3936 VMAuthdService ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:02.0443 3936 vmount2 ( UnsignedFile.Multi.Generic ) - skipped by user 21:06:02.0443 3936 vmount2 ( UnsignedFile.Multi.Generic ) - User select action: Skip 21:06:02.0459 3936 vstor2 ( UnsignedFile.Multi.Generic ) - skipped by user 21:06:02.0459 3936 vstor2 ( UnsignedFile.Multi.Generic ) - User select action: Skip

    Read the article

  • File Server - Storage configuration: RAID vs LVM vs ZFS vs something else?

    - by privatehuff
    We are a small company that does video editing, among other things, and need a place to keep backup copies of large media files and make it easy to share them. I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this: 500 GB is not really big enough (some projects are larger) It is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now and that will only get worse once there are multiple servers. ("the project is on server2 in share4" etc) So, I need a way to combine hard drives in such a way as to avoid complete data loss with the failure of a single drive, and so users see only a single share on each server. I've done linux software RAID5 and had a bad experience with it, but would try it again. LVM looks ok but it seems like no one uses it. ZFS seems interesting but it is relatively "new". What is the most efficient and least risky way to combine the HDDs that is convenient for my users? Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective. (i.e. they see one "folder" per server) Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together. My prior experience with RAID5 was also on an Ubuntu Server box and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again but was left with a feeling that I was adding an unnecessary additional point of failure to the system. I haven't used RAID10 but we are on commodity hardware and the most data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives and 1.5 TB is pretty small. (Still an option for at least one server, however) I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and only lose whichever files had a portion stored on that drive (and stored most files on a single drive only) we could even live with that. But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system (?) so that increases the distance between where I am and where I want to be. I used a version of solaris at uni and wouldn't mind it terribly, though. On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.
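    Whichever layer is picked, the end state described here - one redundant array, one filesystem, one share per server - looks roughly the same from the user's side. A sketch with plain md RAID, just to show the shape (device names are placeholders; RAID10 across six disks would be the same commands with --level=10):

      # four 500 GB data disks into one redundant device, one filesystem, one share
      sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
      sudo mkfs.ext4 /dev/md0
      sudo mkdir -p /srv/media && sudo mount /dev/md0 /srv/media
      # then export /srv/media as the single Samba share the workstations see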

    Read the article

  • Where to download replacement "Explorerframe.DLL" Files for x64 Windows 7 Pro?

    - by Ben Franchuk
    After posting this question, I did some research to reveal what the problem likely was, and found what I need to fix. Following this is the original post, then my updated question. A few months ago I ended up requiring to change my computer's SID to fix a problem it had been having- Instead of fixing the problem, though, it screwed up my at-the-time current install of windows, to the point of me needing to do a fresh install. As I am in possession of an OEM copy of Windows 7 Pro 64 Bit, I successfully reinstalled over the dead copy with that (all the files that were on the computer previous to this windows install were put in a Windows.old folder). Everything installed and worked absolutely fine, except for one thing. The problem I am experiencing is that, in some Windows Explorer windows, the explorer pane doesn't show. Instead, it simply shows a white area where the pane would show. This makes some software not usable, I recently realized; Software such as Cubase, which use just the explorer pane to select file save locations, cannot save at all as the pane itself is... not operational. Below is a screenshot of this problem as it occurs in cubase; ...and again as it shows in UTorrent in the save location selector window. The highlighted area is where the sidebar would NORMALLY be. Pardon my scribbling over some of the things in the window- I would personally rather the internet did not get a glimpse of my files. I have yet to find a common reason why the pane works in some applications when they pull explorer, and others not. I have yet to see it go away, and the software affected by it has been affected since I reinstalled my copy of windows. Initially, I was able to live with it as I can type out save locations in the file name bar to navigate, but with software like Cubase, I do not have this option. Reinstalling windows again is NOT an option. Here's the updated question. After posting this question originally, I did some research on the problem in question, and it turns out that this is extremely easily fixable via replacing the file "ExplorerFrame.DLL" which is located in the System32 and SystemWOW64 Folders, in the windows folder, on the C:\ drive. As I quite frequently customize my computer, this is a normal thing for me to do and I know exactly how to safely and properly replace this file. The only problem is that I cannot for the life of me find a download of this file that actually works with my computer. I tried a couple from some different sites but they all caused explorer to not restart (I was given an error when starting the application from Task Manager) and was forced to revert to the broken .DLL files. Since there are 2 separate "ExplorerFrame.DLL" files; one for 64 bit and the other for 32 bit, I am assuming that I will need to download 2 separate versions to replace the corrupted ones. Where can I acquire these files? I am currently running Windows 7 Professional x64 Bit.

    Read the article

  • Windows 8, NVIDIA graphics recognition fails

    - by Roy Grubb
    I just installed Windows 8 Pro OEM 64-bit (clean install) and it won't properly recognize my graphics adapter. When I installed Win8, it automatically installed the BasicDisplay.sys driver dated 6/21/2006. 6.2.9200.16384 (win8_rtm.120725-1247). Hardware - Mobo:MSi G41M-P33 Combo CPU:Intel CoreDuo 6600 Graphics:NVIDIA GeForce 9400GT *OS* - Windows 8 Pro 64-bit OEM The graphics adapter worked fine in Windows XP. The PC is a generic box, bought locally and its mobo failed recently, so I replaced it with the G41M. Microsoft wouldn't let me re-activate Windows XP with a different mobo, so I installed Win8, which appears to work except as described next. Win8 only partially recognizes the graphics adapter and won't allow NVIDIA latest driver installer to see that it's an NVIDIA card. As a result, OpenGL doesn't work, and this is needed by the software I most use. Other than that the graphics look OK. When I say 'partially recognizes', I mean that via the Control Panel, I can see that the adapter is described as NVIDIA, but the driver remains stuck at Microsoft Basic Display Adapter no matter what I try, including "Update driver..." in adapter properties. Display Screen Resolution Advanced Settings Adapter shows: Adapter Type: Microsoft Basic Display Adapter Chip Type: NVIDIA DAC Type: NVIDIA Corporation Bios Information: G27 Board - p381n17 Don't know what this means ... no mention of 9400GT Total Available Graphics Memory: 256 MB Dedicated Video Memory: 0 MB In fact the adapter has 512MB on-board video memory. System Video Memory: 0 MB Shared System Memory: 256 MB And Control Panel Device Manager Display adapters just shows Microsoft Basic Display Adapter. No other graphics adapter, and no unknown device or yellow question mark. What I have tried so far: 1. Cleared CMOS and reset. Updated BIOS and all mobo drivers as follows: 1st I used Driver Reviver to see if any driver updates were required. It found some but I didn't use that to get the drivers. Then I switched to MSi's own mobo driver utility Live Update 5. This also showed the board needed to update several so I used it to fetch the new drivers. After that it showed that everything was up to date and I checked with Driver Reviver again, which also reported no drivers now needed updating. Rebooted. Went to the NVIDIA site to get the latest graphics adapter driver. Their auto-detect "Option 2: Automatically find drivers for my NVIDIA products" said "The NVIDIA Smart Scan was unable to evaluate your system hardware. Please use Option 1 to manually find drivers for your NVIDIA products." So I downloaded 310.70-desktop-win8-win7-winvista-64bit-international-whql.exe, which lists 9400 GT under supported products, but when I run it, it says: "NVIDIA Installer cannot continue This graphics driver could not find compatible graphics hardware." Connected the display to the on-board Intel graphics (G41 Intel Express), removed the NVIDIA card and rebooted, changed to internal graphics in CMOS. Again it installs the MS Basic Display Adapter, and can't properly run my s/w that needs OpenGL. It runs on other machines with Intel Express graphics (WinXP and 7) Shut down and pulled out the power cord. Held start button to discharge all capacitors. Removed and re-inserted NVIDIA adapter in PCI-E slot and made sure properly seated. Connected the monitor to the card, screwed plug to socket. Reconnected power cord. Started and checked in BIOS that Primary Graphics Adapter was set to PCI-E. Started Windows. 
Uninstalled MS Basic Display Adapter in Device Manager. Screen blanks briefly, reappears. No Graphics adapter entry was then visible in Device Manager. Restarted PC. MS Basic Display Adapter Visible again in Device Manager. Clicked in Device Manager View Show hidden devices. No other graphics adapter appears, no unknown devices. Rebooted. Tried Scan for Hardware changes. None detected. Tried right-click on MS Basic Display Adapter Properties Driver Update Driver... Search automatically. It replied that it had determined driver was up to date. I checked that there were no graphic driver-related entries in Programs and Features that I could delete (none). Searched for any other drivers with nvidia in their name and deleted them, just keeping the 306.97 installer exe file. Did a Windows Update. Ran GPU-Z which shows (main items): Microsoft Basic Display Adapter GPU G72 BIOS 5.72.22.76.88 Device ID 10DE - 01D5 DDR2 Bus Width 32 Bit Memory size 64MB Driver Version nvlddmkm 6.2.9200.16384 (ForceWare 0.00) / Win8 64 NVIDIA SLI Unknown in the drop-down at the foot, "Microsoft Basic Display Adapter" is the only option If I swap hard disks in that machine to one with a Ubuntu 10.4 installation (originally installed on the same PC), lspci shows "VGA compatible controller as NVIDIA Corporation Device 01d5 (rev a1) (prog-if 00 [VGA controller])" and "kernel driver in use: nvidia" I'm out of ideas for new things to try and would be really grateful of suggestions. Thanks!

    Read the article

  • What NAS setup for syncing over the internet?

    - by Jamse
    I have family living a few hours away and have a lot of files that I would like to share - especially lots of folders of digital photos, but also documents etc. - partially so they can see them, partially so I can have access when I visit them and partially for backup / redundancy purposes. My current hard drives on my main machine are getting pretty full anyway, and I have a MythTV box where my music is currently stored, so I was thinking of getting a NAS anyway. And at the other end my family have a few computers, so they would probably benefit from a NAS too. My general idea (though I'm willing to shift on this if there are any bright ideas about other ways of achieving my objectives) is to get a matching pair of NASs and have them sync over the internet. (To cut down on bandwidth use I would get them in sync locally to start with.) Having read around as best I can it seems that syncing over the internet is generally only a feature on quite high end units. However, I have seen that QNAP seem to feature this on their TS-110 and TS-210 units, which might work (they call it "remote replication"). They seem pretty reasonably priced for what they are, but of course with buying 2 of them and then adding the drives (say 1TB or 2TB each) I'd be looking at about £400 total. So, I'm looking for recommendations really. I don't want to spend more than the QNAPs would cost me, but any other ideas would be most appreciated. I am comfortable with technology and tinkering around, but I don't have as much time for that as I would like, so I guess I would favour solutions that require less tinkering rather than more (even though that's less fun!). Any thoughts would be welcome, as would any comments from people who have used the QNAP boxes for this. Thanks in advance. Some specifications: Two-way syncing. Changes made at either end should be synced to the other. There shouldn't be one unit that is effectively a read-only mirror of the other. Not real time. The syncing doesn't need to be real time - if it updated, say, daily overnight that would be fine. Set and forget. I would prefer minimal user interaction once set up - it would be great if syncs were scheduled and automatic. OS independence. I am running Windows XP plus an Ubuntu-based MythTV box. At the other end there are Windows 7 and Windows XP machines, plus a networked TV set top box which I think can play files off the network. Machine independence. I would favour a system that is self-contained, i.e. not reliant on any particular PC being switched on. If the system had enough else going for it I could perhaps work around it at this end, where I only have one PC that's used as such, but it would be harder at the other where there are at least two PCs that might be accessing the files. Notifications. I guess things like getting an email notification if the syncing fell over for any reason would be useful, though it's not a deal breaker.
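
    For comparison with the QNAP "remote replication" route, the low-tech equivalent on any pair of always-on boxes is a scheduled rsync over SSH, run from each end. The following is a minimal sketch only: the hostname and paths are placeholders, it assumes rsync and SSH are available on both devices, and a one-way push per site only approximates two-way syncing (a tool like unison handles true bidirectional sync with conflict handling).

    import subprocess
    import sys

    REMOTE = "family-nas.example.org"   # placeholder hostname
    LOCAL_DIR = "/share/photos/"        # trailing slash: sync the directory contents
    REMOTE_DIR = "/share/photos/"

    def push_changes():
        # -a preserve attributes, -z compress (helps on a slow uplink),
        # --update never overwrite a file that is newer on the receiving side
        cmd = ["rsync", "-az", "--partial", "--update", "-e", "ssh",
               LOCAL_DIR, "%s:%s" % (REMOTE, REMOTE_DIR)]
        status = subprocess.call(cmd)
        if status != 0:
            # anything written to stderr gets mailed by cron, which crudely
            # covers the "tell me if the sync fell over" requirement
            sys.stderr.write("rsync exited with status %d\n" % status)
        return status

    if __name__ == "__main__":
        sys.exit(push_changes())

    Run nightly from cron (or the NAS firmware's scheduled-jobs feature) on both boxes, each side pushing its own new files to the other, and that covers the "daily overnight, set and forget" requirement without either unit being a read-only mirror.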

    Read the article

  • How do I make an external hard drive keep the same drive letter permanently?

    - by andygrunt
    I have a desktop PC (2002 vintage) running Windows XP that I turn on about 2 or 3 times per week. I have a mains powered 250Gb Western Digital hard disk connected to it via USB. I always turn the hard disk on before the PC so it's up and running as the PC boots. When I first connected the external hard disk, the PC assigned it a letter ('i' if it matters) and I've installed software to it, created shortcuts to various files and folders on the disk using that letter. For years everything was fine, then I would boot the PC and the hard disk was assigned a different letter. I'd then have to go into 'my computer/manage/disk management' and manually change the letter back to 'i'. If I then rebooted the PC, the hard disk would usually still be 'i' but after the next reboot would be some other random letter and I'd have to manually change it back to 'i'. This would go on for some time, then there'd be periods when it would always be 'i'; then, for no apparent reason (no new devices added, for example), the drive letter would start changing again. At the moment it's in random drive letter mood so I thought I'd ask the following question... How do I assign the external hard disk to be 'i' permanently? Answer: Thanks Molly, that seems to have done the trick (after a little fiddling) - slightly disappointed there wasn't a way to do it within Windows without installing something else though. For anyone else trying this, it wasn't completely straightforward so here's what happened with me. I installed USBDLM as per the instructions on its website. I guessed that I had to assign the first USB letter to i so replaced the 'Letter1=' lines to 'Letter=I' in the ini file. To test it, I rebooted the PC only to find it came back up with the display set to 640x480 in 16 colours. After some investigation, I re-installed the display drivers and rebooted and set the display back to its usual setting. The external hard disk now gets set to 'i' but I found I had to re-apply sharing status to it so it was seen from my laptop which is on the same network. The end result of all this is that it now does what I wanted although it does act as though the hard drive has just been plugged in a few seconds after the Windows desktop appears i.e. the little box appears with a progress bar as it searches through the contents of the 'new' hard drive and I eventually get a dialogue box saying 'This disk or device contains more than one type of content. What do you want Windows to do?' and lists options such as play media files, print the pictures or open folder to view the files. This is a tiny pain I wish didn't happen but not exactly a huge price to pay. Other than that - it seems to work fine :) Looks like I spoke too soon... Every time I reboot, I have to re-share the 'i' drive (which I didn't have to do before) so it can be seen by my laptop on the same network. Any ideas how to make that permanent?
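
    On the re-sharing problem in the update above, one workaround (a sketch only, not a polished answer) is a small script run at logon via Task Scheduler or the Startup folder that waits for the drive to appear and then re-creates the share with the standard net share command. It assumes a Python installation on the XP box (the same logic would fit in a batch file), and the share name is a made-up placeholder.

    import os
    import subprocess
    import time

    DRIVE = "I:\\"
    SHARE_NAME = "ExternalI"   # placeholder share name

    def wait_for_drive(path, timeout=120):
        # the USB disk only shows up a few seconds after the desktop loads
        deadline = time.time() + timeout
        while time.time() < deadline:
            if os.path.exists(path):
                return True
            time.sleep(2)
        return False

    def recreate_share():
        if not wait_for_drive(DRIVE):
            return
        # "net share" with no arguments lists the existing shares
        existing = subprocess.check_output(["net", "share"], universal_newlines=True)
        if SHARE_NAME.lower() not in existing.lower():
            subprocess.call(["net", "share", "%s=%s" % (SHARE_NAME, DRIVE)])

    if __name__ == "__main__":
        recreate_share()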

    Read the article

  • Lockdown users on Windows Server 2012

    - by el.severo
    I set up a Active Directory on a server machine with Windows Server 2012 and I'd like to create some users with limitations like Windows Steady State does in Windows XP (locally). Seen already the Windows SteadyState Handbook (with Windows Server 2008), but I'd like to know if anyone has tried this before, the limitations are the following: 1. Prevent locked or roaming user profiles that cannot be found on the computer from logging on 2. Do not cache copies of locked or roaming user profiles for users who have previously logged on to this computer 3. Do not allow Windows to compute and store passwords using LAN Manager Hash values 4. Do not store usernames or passwords used to log on to the Windows Live ID or the domain 5. Prevent users from creating folders and files on drive C:\ 6. Lock profile to prevent the user from making permanent changes 7. Remove the Control Panel, Printer and Network Settings from the Classic Start menu 8. Remove the Favorites icon 9. Remove the My Network Places icon 10. Remove the Frequently Used Program list 11. Remove the Shared documents folder from My Computer 12. Remove control Panel icon 13. Remove the Set Program Access and Defaults icon 14. Remove the Network Connection(Connect To)icon 15. Remove the Printers and Faxes icon 16. Remove the Run icon 17. Prevent access to Windows Explorer features: Folder Options, Customize Toolbar, and the Notification Area 18. Prevent access to the taskbar 19. Prevent access to the command prompt 20. Prevent access to the registry editor 21. Prevent access to the Task Manager 22. Prevent access to Microsoft Management Console utilities 23. Prevent users from adding or removing printers 24. Prevent users from locking the computer 25. Prevent password changes (also requires the Control Panel icon to be removed) 26. Disable System Tools and other management programs 27. Prevent users from saving files to the desktop 28. Hide A Drive 29. Hide B Drive 30. Hide C Drive 31. Prevent changes to Internet Explorer registry settings 32. Empty the Temporary Internet Files folder when Internet Explorer is closed 33. Remove Internet Options 34. Remove General tab in Internet Options 35. Remove Security tab in Internet Options 36. Remove Privacy tab in Internet Options 37. Remove Content tab in Internet Options 38. Remove Connections tab in Internet Options 39. Remove Programs tab in Internet Options 40. Remove Advanced tab in Internet Options 41. Set a home page (Internet Explorer) 42. Restrict the possibility to change desktop image 43. Restrict the possibility to change wallpaper 44. Restrict usb flash drives Any suggestions for this? UPDATE: As @Dan suggested me I'd like to specify that would be applied to a educational scenario where students can login from a computer and want to add some restrictions to them.
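
    Most of the items in that list correspond to Group Policy settings under User Configuration > Administrative Templates, which is the supported way to apply them to the students' OU on Server 2012; under the hood many of them are simply per-user registry values beneath the Policies keys. Purely as an illustration of that mapping (a sketch for testing individual restrictions, not a substitute for a GPO), a handful of the items look like this:

    import winreg  # Python 3 standard library

    EXPLORER = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"
    SYSTEM = r"Software\Microsoft\Windows\CurrentVersion\Policies\System"

    def set_policy(key_path, name, value):
        # create the key if needed and write a REG_DWORD policy value
        key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path)
        try:
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
        finally:
            winreg.CloseKey(key)

    if __name__ == "__main__":
        set_policy(EXPLORER, "NoControlPanel", 1)       # item 12: remove Control Panel
        set_policy(EXPLORER, "NoRun", 1)                # item 16: remove the Run icon
        set_policy(EXPLORER, "NoDrives", 0x7)           # items 28-30: hide drives A, B and C
        set_policy(SYSTEM, "DisableTaskMgr", 1)         # item 21: block Task Manager
        set_policy(SYSTEM, "DisableRegistryTools", 1)   # item 20: block the registry editor

    The same values delivered through a GPO are applied to every user in the OU and re-applied automatically at policy refresh, so the script form is mainly useful for checking what an individual restriction actually does before building the policy.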

    Read the article

  • Paying great programmers more than average programmers

    - by Kelly French
    It's fairly well recognized that some programmers are up to 10 times more productive than others. Joel mentions this topic on his blog. There is a whole blog devoted to the idea of the "10x productive programmer". In years since the original study, the general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al 2000). Fred Brooks mentions the wide range in the quality of designers in his "No Silver Bullet" article: The differences are not minor--they are rather like the differences between Salieri and Mozart. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The differences between the great and the average approach an order of magnitude. The study that Brooks cites is: H. Sackman, W.J. Erikson, and E.E. Grant, "Exploratory Experimental Studies Comparing Online and Offline Programming Performance," Communications of the ACM, Vol. 11, No. 1 (January 1968), pp. 3-11. The way programmers are paid by employers these days makes it almost impossible to pay the great programmers a large multiple of what the entry-level salary is. When the starting salary for a just-graduated entry-level programmer, we'll call him Asok (From Dilbert), is $40K, even if the top programmer, we'll call him Linus, makes $120K, that is only a multiple of 3. I'd be willing to bet that Linus does much more than 3 times what Asok does, so why wouldn't we expect him to get paid more as well? Here is a quote from Stroustrup: "The companies are complaining because they are hurting. They can't produce quality products as cheaply, as reliably, and as quickly as they would like. They correctly see a shortage of good developers as a part of the problem. What they generally don't see is that inserting a good developer into a culture designed to constrain semi-skilled programmers from doing harm is pointless because the rules/culture will constrain the new developer from doing anything significantly new and better." This leads to two questions. I'm excluding self-employed programmers and contractors. If you disagree that's fine but please include your rationale. It might be that the self-employed or contract programmers are where you find the top-10 earners, but please provide an explanation/story/rationale along with any anecdotes. [EDIT] I thought up some other areas in which talent/ability affects pay: Financial traders (commodities, stock, derivatives, etc.), designers (fashion, interior decorators, architects, etc.), professionals (doctor, lawyer, accountant, etc.), sales. Questions: Why aren't the top 1% of programmers paid like A-list movie stars? What would the industry be like if we did pay the "Smart and gets things done" programmers 6, 8, or 10 times what an intern makes? [Footnote: I posted this question after submitting it to the Stackoverflow podcast. It was included in episode 77 and I've written more about it as a Codewright's Tale post 'Of Rockstars and Bricklayers'] Epilogue: It's probably unfair to exclude contractors and the self-employed. One aspect of the highest earners in other fields is that they are free-agents. The competition for their skills is what drives up their earning power. This means they can not be interchangeable or otherwise treated as a plug-and-play resource.
I liked the example in one answer of a major league baseball team trying to field two first-basemen. Also, something that Joel mentioned in the Stackoverflow podcast (#77): there are natural dynamics to shrink any extreme performance/pay ranges between the highs and lows. One is the peer pressure of organizations to pay within a given range; another is the likelihood that the high performer will realize their undercompensation and seek greener pastures.

    Read the article

  • Synapse and string problems with HTTPSend in Delphi 2010

    - by Mick
    I have been trying to get to the bottom of this problem off and on for the past 2 days and I'm really stuck. Hopefully some smart folks can help me out. The issue is that I have a function that I call in a thread that downloads a file (using Synapse libraries) from a website that is passed to it. However, I've found that every once in a while there are sites where it will not pull down a file, but wget or Firefox/IE will download it without issue. Digging into it, I've found some curious things. Here is the relevant code: uses //[..] HTTPSend, blcksock; //[..] type TMyThread = class(TThread) protected procedure Execute; override; private { Private declarations } fTheUrl: string; procedure GetFile(const TheUrl: string); public property thrd_TheUrl: string read fTheUrl write fTheUrl; end; implementation [..] procedure TMyThread.GetFile(const TheUrl: string); var HTTP: THTTPSend; success: boolean; sLocalUrl: string; IsSame : boolean; begin HTTP := THTTPSend.Create; try HTTP.UserAgent := 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 2.0.50727)'; HTTP.ProxyHost := 'MYPROXY.COM'; HTTP.ProxyPort := '80'; sLocalUrl := 'http://web.archive.org/web/20071212205017/energizer.com/usbcharger/download/UsbCharger_setup_V1_1_1.exe'; IsSame := SameText(sLocalUrl, sTheUrl); //this equals True when I debug /// /// /// THIS IS WHERE THE ISSUE BEGINS /// I will comment out 1 of the following when debugging /// HTTP.HTTPMethod('GET', sLocalUrl); // ----this works and WILL download the file HTTP.HTTPMethod('GET', sTheUrl); // --- this always fails, and HTTP.ResultString contains "Not Found" success := SysUtils.UpperCase(HTTP.ResultString) = 'OK'; if HTTP.ResultCode > 0 then success := True; //this is here just to keep the value around while debugging finally HTTP.Free; end; end; procedure TMyThread.Execute begin //fTheURL contains this value: http://web.archive.org/web/20071212205017/energizer.com/usbcharger/download/UsbCharger_setup_V1_1_1.exe GetFile(fTheUrl); end; The problem is that when I assign a local variable to the function and give it the URL directly, everything works. However, when passing the variable into the function, it fails. Anyone have any ideas? HTTP.HTTPMethod('GET', sLocalUrl); // ----this works and WILL download the file HTTP.HTTPMethod('GET', sTheUrl); // --- this always fails, and HTTP.ResultString contains "Not Found" I'm using the latest version of Synapse from their SVN repository (version from 2 days ago). NOTE: The file I am attempting to download is known to have a virus, the program I am writing is meant to download malicious files for analysis. So, don't execute the file once you download it. However, I'm using this URL b/c this is the one I can reproduce the issue with.

    Read the article

  • Getting JAX-WS client work on Weblogic 9.2 with ant

    - by michuk
    I've recently had lots of issues trying to deploy a JAX-WS web service client on Weblogic 9.2. It turns out there is no straightforward guide on how to achieve this, so I decided to put together this short wiki entry hoping it might be useful for others. Firstly, Weblogic 9.2 does not support web services using JAX-WS in general. It comes with old versions of XML-related java libraries that are incompatible with the latest JAX-WS (similar issues occur with Axis2, only Axis1 seems to be working flawlessly with Weblogic 9.x but that's a very old and unsupported library). So, in order to get it working, some hacking is required. This is how I did it (note that we're using ant in our legacy corporate project, you probably should be using maven which should eliminate 50% of those steps below): Download the most recent JAX-WS distribution from https://jax-ws.dev.java.net/ (The exact version I got was JAXWS2.2-20091203.zip) Place the JAX-WS jars with the dependencies in a separate folder like lib/webservices. Create a patternset in ant to reference those jars: Include the patternset in your WAR-related goal. This could look something like: (note the flatten="true" parameter - it's important, as Weblogic 9.x is by default not smart enough to access jars located in a different location than WEB-INF/lib inside your WAR file) In case of clashes, Weblogic uses its own jars by default. We want it to use the JAX-WS jars from our application instead. This is achieved by preparing a weblogic-application.xml file and placing it in the META-INF folder of the deployed EAR file. It should list the packages to prefer from the application: javax.jws.*, javax.xml.bind.*, javax.xml.crypto.*, javax.xml.registry.*, javax.xml.rpc.*, javax.xml.soap.*, javax.xml.stream.*, javax.xml.ws.*, com.sun.xml.api.streaming.* Remember to place that weblogic-application.xml file in your EAR! The ant goal for that may look similar to: <jar destfile="${warfile}" basedir="${wardir}"/> <ear destfile="${earfile}" appxml="resources/${app.name}/application.xml"> <fileset dir="${dist}" includes="${app.name}.war"/> <metainf dir="resources/META-INF"/> </ear> Also you need to tell weblogic to prefer your WEB-INF classes to those in the distribution. You do that by placing the following lines in your WEB-INF/weblogic.xml file: <prefer-web-inf-classes>true</prefer-web-inf-classes> (inside the container-descriptor element). And that's it for the weblogic-related configuration. Now just set up your JAX-WS goal. The one below is going to simply generate the web service stubs and classes based on a locally deployed WSDL file and place them in a folder in your app: Remember about the keep="true" parameter. Without it, wsimport generates the classes and... deletes them, believe it or not! For mocking a web service I suggest using SOAPUI, an open source project. Very easy to deploy, crucial for web services integration testing. We're almost there. The final thing is to write a Java class for testing the web service, try to run it as a standalone app first (or as part of your unit tests), and then try to run the same code from within Weblogic. It should work. It worked for me. After some 3 days of frustration. And yes, I know I should've put 9 and 10 under a single bullet-point, but the title "10 steps to deploy a JAX-WS web service under Web logic 9.2 using ant" sounds just so much better. Please, edit this post and improve it if you find something missing!

    Read the article

  • NHibernate which cache to use for WinForms application

    - by chiccodoro
    I have a C# WinForms application with a database backend (oracle) and use NHibernate for O/R mapping. I would like to reduce communication to the database as much as possible since the network in here is quite slow, so I read about second level caching. I found this quite good introduction, which lists the following available cache implementations. I'm wondering which implementation I should use for my application. The caching should be simple, it should not significantly slow down the first occurrence of a query, and it should not take much memory to load the implementing assemblies. (With NHibernate and Castle, the application already takes up to 80 MB of RAM!) Velocity: uses Microsoft Velocity which is a highly scalable in-memory application cache for all kinds of data. Prevalence: uses Bamboo.Prevalence as the cache provider. Bamboo.Prevalence is a .NET implementation of the object prevalence concept brought to life by Klaus Wuestefeld in Prevayler. Bamboo.Prevalence provides transparent object persistence to deterministic systems targeting the CLR. It offers persistent caching for smart client applications. SysCache: Uses System.Web.Caching.Cache as the cache provider. This means that you can rely on ASP.NET caching feature to understand how it works. SysCache2: Similar to NHibernate.Caches.SysCache, uses ASP.NET cache. This provider also supports SQL dependency-based expiration, meaning that it is possible to configure certain cache regions to automatically expire when the relevant data in the database changes. MemCache: uses memcached; memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Basically a distributed hash table. SharedCache: high-performance, distributed and replicated memory object caching system. See here and here for more info My considerations so far were: Velocity seems quite heavyweight and overkill (the files totally take 467 KB of disk space, haven't measured the RAM it takes so far because I didn't manage to make it run, see below) Prevalence, at least in my first attempt, slowed down my query from ~0.5 secs to ~5 secs, and caching didn't work (see below) SysCache seems to be for ASP.NET, not for winforms. MemCache and SharedCache seem to be for distributed scenarios. Which one would you suggest me to use? There would also be a built-in implementation, which of course is very lightweight, but the referenced article tells me that I "(...) should never use this cache provider for production code but only for testing." Besides the question which fits best into my situation I also faced problems with applying them: Velocity complained that "dcacheClient" tag not specified in the application configuration file. Specify valid tag in configuration file," although I created an app.config file for the assembly and pasted the example from this article. Prevalence, as mentioned above, heavily slowed down my first query, and the next time the exact same query was executed, another select was sent to the database. Maybe I should "externalize" this topic into another post. I will do that if someone tells me it is absolutely unusual that a query is slowed down so much and he needs further details to help me.

    Read the article

  • Are PackageMaker installations with preinstall scripts broken on Snow Leopard?

    - by Stu Thompson
    Everything worked on 10.5, but now my PackageMaker installation project is broken. I've been fighting a problem for a few days now, and either Snow Leopard (OS X 10.6.1) has broken PackageMaker installations or I am lacking a very, very basic tidbit of knowledge. To narrow down the problem, I've gotten to this point: Create a new PackageMaker installation. Have it install a jpeg image into my home directory. Define a preinstall script that does nothing (#/bin/sh exit 0). Run the above...and watch it fail with the below error message like clockwork: Sep 14 15:09:45 manoa installd[5620]: PackageKit: ----- Begin install ----- Sep 14 15:09:45 manoa installd[5620]: PackageKit: request=PKInstallRequest <1 packages, destination=/> Sep 14 15:09:45 manoa installd[5620]: PackageKit: packages=(\n "PKLeopardPackage <file://localhost/Users/stu/Desktop/asdf.pkg>"\n) Sep 14 15:09:46 manoa installd[5620]: PackageKit: Extracting /Users/stu/Desktop/asdf.pkg (destination=/var/folders/Hb/HbXJFyEpFaupt5QyLN-pTk+++TI/-Tmp-/PKInstallSandbox-tmp/Root/~, uid=501) Sep 14 15:09:46 manoa installd[5620]: PackageKit: Executing script "./preinstall" in /private/tmp/PKInstallSandbox.cmlS2H/Scripts/test.test.5year_header.pkg.PFrHNB Sep 14 15:09:46 manoa installd[5620]: PackageKit: *** launch path not accessible Sep 14 15:09:46 manoa installd[5620]: PackageKit: Install Failed: PKG: pre-install scripts for "test.test.5year_header.pkg"\nError Domain=PKInstallErrorDomain Code=112 UserInfo=0x100149430 "An error occurred while running scripts from the package “asdf”." {\n NSFilePath = "./preinstall";\n NSLocalizedDescription = "An error occurred while running scripts from the package \U201casdf\U201d.";\n NSURL = "file://localhost/Users/stu/Desktop/asdf.pkg";\n PKInstallPackageIdentifier = "test.test.5year_header.pkg";\n} Sep 14 15:09:46 manoa Installer[5614]: install:didFailWithError:Error Domain=PKInstallErrorDomain Code=112 UserInfo=0x1195917c0 "An error occurred while running scripts from the package “asdf”." Sep 14 15:09:46 manoa Installer[5614]: Install failed: The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance. Sep 14 15:09:47 manoa Installer[5614]: IFDInstallController 144040 state = 7 Sep 14 15:09:47 manoa Installer[5614]: Displaying 'Install Failed' UI. Sep 14 15:09:47 manoa Installer[5614]: 'Install Failed' UI displayed message:'The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.'. There is no file in /private/tmp/PKInstallSandbox.cmlS2H/Scripts/test.test.5year_header.pkg.PFrHNB/, which makes me think the problem is with PackageMaker, and not me. But I'm new to the world of OS X software installation, so doubts remain. So, the question: Is PackageMaker with a preinstall script broken on OS X 10.6? Or is there some requirement regarding preinstall scripts that I do not understand?
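
    For what it's worth, "launch path not accessible" is the kind of error that typically appears when the script itself cannot be executed, for example a missing or malformed #! interpreter line or a script without the executable bit, so a quick sanity check over the package's scripts before building can save a round trip. Below is a rough sketch only; the Scripts folder path and script names are assumptions about the project layout.

    import os
    import stat

    def check_script(path):
        problems = []
        with open(path, "rb") as f:
            first_line = f.readline()
        if not first_line.startswith(b"#!"):
            problems.append("first line is not a #! interpreter line")
        if not os.stat(path).st_mode & stat.S_IXUSR:
            problems.append("executable bit is not set")
        return problems

    if __name__ == "__main__":
        # placeholder path: wherever the preinstall/postinstall scripts are kept
        for name in ("preinstall", "postinstall"):
            path = os.path.join("Scripts", name)
            if os.path.exists(path):
                issues = check_script(path)
                print(name, "OK" if not issues else "; ".join(issues))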

    Read the article

  • UIWebView leak? Can someone confirm?

    - by Shaggy Frog
    I was leak-testing my current project and I'm stumped. I've been browsing like crazy and tried everything except chicken sacrifice. I just created a tiny toy project app from scratch and I can duplicate the leak in there. So either UIWebView has a leak or I'm doing something really silly. Essentially, it boils down to a loadRequest: call to a UIWebView object, given an URLRequest built from an NSURL which references a file URL, for a file in the app bundle, which lives inside a folder that Xcode is including by reference. Phew. The leak is intermittent but still happens ~75% of the time (in about 20 tests it happened about 15 times). It only happens on the device -- this does not leak in the simulator. I am testing targeting both iPhone OS 3.1.2 and 3.1.3, on an original (1st Gen) iPod Touch that is using iPhone OS 3.1.3. To reproduce, just create a project from scratch. Add a UIWebView to the RootViewController's .xib, hook it up via IBOutlet. In the Finder, create a folder named "html" inside your project's folder. Inside that folder, create a file named "dummy.html" that has the word "Test" in it. (Does not need to be valid HTML.) Then add the html folder to your project in Xcode by choosing "Create Folder References for any added folders" Add the following to viewDidLoad NSString* resourcePath = [[NSBundle mainBundle] resourcePath]; NSString* filePath = [[resourcePath stringByAppendingPathComponent:@"html"] stringByAppendingPathComponent:@"dummy.html"]; NSURL* url = [[NSURL alloc] initFileURLWithPath:filePath]; NSURLRequest* request = [NSURLRequest requestWithURL:url]; // <-- this creates the leak! [browserView loadRequest:request]; [url release]; I've tried everything from setting delegate for the UIWebView and implementing UIWebViewDelegate, to not setting a delegate in IB, to not setting a delegate in IB and explicitly setting the web view's delegate property to nil, to using alloc/init instead of getting autoreleased NSURLRequests (and/or NSURLs)... I tried the answer to a similar question (setting the shared URL cache to empty) and that did not help. Can anyone help?

    Read the article

  • VBScript Out of String space

    - by MalsiaPro
    I got the following code to capture information for files on a specified drive, I ran the script againts a 600 GB hard drive on one of our servers and after a while I get the error Out of String space; "Join". Line 34, Char 2 For this code, file script.vbs: Option Explicit Dim objFS, objFld Dim objArgs Dim strFolder, strDestFile, blnRecursiveSearch ''Dim strLines Dim strCsv ''Dim i '' i = 0 ' 'Get the commandline parameters ' Set objArgs = WScript.Arguments ' strFolder = objArgs(0) ' strDestFile = objArgs(1) ' blnRecursiveSearch = objArgs(2) '######################################## 'SPECIFY THE DRIVE YOU WANT TO SCAN BELOW '######################################## strFolder = "C:\" strDestFile = "C:\InformationOutput.csv" blnRecursiveSearch = True 'Create the FileSystemObject Set objFS=CreateObject("Scripting.FileSystemObject") 'Get the directory you are working in Set objFld = objFS.GetFolder(strFolder) 'Open the csv file Set strCsv = objFS.CreateTextFile(strDestFile, True) '' 'Write the csv file '' Set strCsv = objFS.CreateTextFile(strDestFile, True) strCsv.WriteLine "File Path,File Size,Date Created,Date Last Modified,Date Last Accessed" '' strCsv.Write Join(strLines, vbCrLf) 'Now get the file details GetFileDetails objFld, blnRecursiveSearch '' 'Close and cleanup objects '' strCsv.Close '' 'Write the csv file '' Set strCsv = objFS.CreateTextFile(strDestFile, True) '' For i = 0 to UBound(strLines) '' strCsv.WriteLine strLines(i) '' Next 'Close and cleanup objects strCsv.Close Set strCsv = Nothing Set objFld = Nothing Set strFolder = Nothing Set objArgs = Nothing '---------------------------SCAN SPECIFIED LOCATION------------------------------- Private Sub GetFileDetails(fold, blnRecursive) Dim fld, fil dim strLine(4) on error resume next If InStr(fold.Path, "System Volume Information") < 1 Then If blnRecursive Then 'Work through all the folders and subfolders For Each fld In fold.SubFolders GetFileDetails fld, True If err.number <> 0 then LogError err.Description & vbcrlf & "Folder - " & fold.Path err.Clear End If Next End If 'Now work on the files For Each fil in fold.Files strLine(0) = fil.Path strLine(1) = fil.Size strLine(2) = fil.DateCreated strLine(3) = fil.DateLastModified strLine(4) = fil.DateLastAccessed strCsv.WriteLine Join(strLine, ",") if err.number <> 0 then LogError err.Description & vbcrlf & "Folder - " & fold.Path & vbcrlf & "File - " & fil.Name err.Clear End If Next End If end sub Private sub LogError(strError) dim strErr 'Write the csv file Set strErr = objFS.CreateTextFile("C:\test\err.log", false) strErr.WriteLine strError strErr.Close Set strErr = nothing End Sub RunMe.cmd wscript.exe "C:\temp\script\script.vbs" How can I avoid getting this error? The server drives are quite a bit <???? and I would imagine that the CSV file would be at least 40 MB. Edit by Guffa: I commented out some lines in the code, using double ticks ('') so you can see where.
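
    A general note on the error itself: VBScript's "Out of string space" appears when a single string, for example the result of Join over a very large array, outgrows what the engine will allocate, so on a 600 GB drive the safe pattern is the streaming one the active code already uses: write each row as it is produced and never accumulate the whole report in memory. Purely to illustrate that streaming pattern, here is the same walk in Python 3 (using the same drive and output file as the script above; a sketch, not a drop-in replacement):

    import csv
    import os
    import time

    ROOT = "C:\\"                          # drive to scan
    OUT = r"C:\InformationOutput.csv"      # same output file as the VBScript

    def dump_file_info(root, out_path):
        fmt = "%Y-%m-%d %H:%M:%S"
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["File Path", "File Size", "Date Created",
                             "Date Last Modified", "Date Last Accessed"])
            for dirpath, dirnames, filenames in os.walk(root):
                if "System Volume Information" in dirpath:
                    continue
                for name in filenames:
                    full = os.path.join(dirpath, name)
                    try:
                        st = os.stat(full)
                    except OSError:
                        continue  # unreadable file - skip it, as the VBScript does
                    # one row written per file; nothing is accumulated in memory
                    writer.writerow([
                        full, st.st_size,
                        time.strftime(fmt, time.localtime(st.st_ctime)),
                        time.strftime(fmt, time.localtime(st.st_mtime)),
                        time.strftime(fmt, time.localtime(st.st_atime))])

    if __name__ == "__main__":
        dump_file_info(ROOT, OUT)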

    Read the article

  • Help to Install STLPort in VC6

    - by Yan Cheng CHEOK
    I tried to follow these steps to install STLPort in VC6: 1) I change C:\Program Files\Microsoft Visual Studio\VC98\Bin\VCVARS32.BAT content set INCLUDE=%MSVCDir%\ATL\INCLUDE;%MSVCDir%\INCLUDE;%MSVCDir%\MFC\INCLUDE;%INCLUDE% set LIB=%MSVCDir%\LIB;%MSVCDir%\MFC\LIB;%LIB% to (The directory C:\STLport-5.2.1\lib does not exist?) set INCLUDE=C:\STLport-5.2.1\stlport;%MSVCDir%\ATL\INCLUDE;%MSVCDir%\INCLUDE;%MSVCDir%\MFC\INCLUDE;%INCLUDE% set LIB=C:\STLport-5.2.1\lib;%MSVCDir%\LIB;%MSVCDir%\MFC\LIB;%LIB% 2) I run: C:\>cd C:\STLport-5.2.1\build\lib C:\STLport-5.2.1\build\lib>"c:\Program Files\Microsoft Visual Studio\VC98\bin\VCVARS32.BAT" Setting environment for using Microsoft Visual C++ tools. C:\STLport-5.2.1\build\lib>c:\STLport-5.2.1\configure.bat msvc6 STLport Configuration Tool for Windows The system cannot find the path specified. The system cannot find the path specified. The system cannot find the path specified. The system cannot find the path specified. The system cannot find the path specified. Setting compiler: Microsoft Visual C++ 6.0 The system cannot find the path specified. The system cannot find the path specified. The system cannot find the path specified. The system cannot find the path specified. The system cannot find the path specified. Setting platform: Windows XP The system cannot find the path specified. Done configuring STLport. Go to build/lib folder and type "nmake clean install" to build and install STLport to the "lib" and "bin" folders. Go to build/test/unit folder and type nmake clean install to build unit tests and install them in bin folder. 3) The code within configure.bat which gives me "The system cannot find the path specified" is: REM initially create/overwrite config.mak echo # STLport Configuration Tool for Windows > build\Makefiles\nmake\config.mak echo # >> build\Makefiles\nmake\config.mak echo # config.mak generated with command line: >> build\Makefiles\nmake\config.mak echo # configure %1 %2 %3 %4 %5 %6 %7 %8 %9 >> build\Makefiles\nmake\config.mak echo # >> build\Makefiles\nmake\config.mak What shall I change them to? How can I continue the build? Thanks.

    Read the article

< Previous Page | 257 258 259 260 261 262 263 264 265 266 267 268  | Next Page >