Search Results

Search found 24117 results on 965 pages for 'write through'.

Page 736/965 | < Previous Page | 732 733 734 735 736 737 738 739 740 741 742 743  | Next Page >

  • Windows 7, going crazy with environment variables

    - by roymustang86
    So, I am trying to learn Java. I installed the JDK and proceeded to write a few programs. Each time, I have to give the full path to javac.exe to compile the .java file, so I decided to tweak the %PATH% variable. No matter what I change it to, it doesn't work. When I do an echo %PATH%, I get:
      'Program' is not recognized as an internal or external command, operable program or batch file.
    This is the content of my Path variable:
      C:\app\product\11.1.0\client_1\bin;%CommonProgramFiles%\Microsoft Shared\Windows Live;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;"C:\Program Files (x86)\Common Files\Roxio Shared\DLLShared\";"C:\Program Files\Broadcom\Broadcom 802.11";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\DLLShared\";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\DLLShared\";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\12.0\DLLShared\";"C:\Program Files (x86)\Roxio\OEM\AudioCore\";"C:\Program Files (x86)\Intel\Services\IPT\"
    How do I work around this? The double quotes were not there before; I added them thinking the spaces were the problem.
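
    A minimal sketch of the usual fix: entries inside %PATH% do not need quotes even when they contain spaces, and the JDK bin directory can simply be appended (the JDK path below is an assumption; adjust it to the installed version):

      rem run in a Command Prompt; the JDK path is hypothetical
      set "JDK_HOME=C:\Program Files\Java\jdk1.6.0_45"
      setx PATH "%PATH%;%JDK_HOME%\bin"
      rem open a new Command Prompt so the change is picked up, then verify:
      javac -version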

    Read the article

  • Fix bad superblock on logical partition

    - by Chris
    I was following http://www.howtoforge.com/linux_resi...xt3_partitions and when I reboot and run:
      root@Microknoppix:/home/knoppix# fsck -n /dev/sda7
      fsck from util-linux-ng 2.17.2
      e2fsck 1.41.12 (17-May-2010)
      fsck.ext2: Superblock invalid, trying backup blocks...
      fsck.ext2: Bad magic number in super-block while trying to open /dev/sda7
      The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
        e2fsck -b 8193 <device>
    So I ran e2fsck with all the backup block numbers (I forget exactly what tool I used to find where the superblocks are hidden); no dice. Then I ran testdisk and had it look for the superblock; no results. Anyone have any ideas? fdisk -l for reference:
      root@Microknoppix:/home/knoppix# fdisk -l
      Disk /dev/sda: 320.1 GB, 320072933376 bytes
      255 heads, 63 sectors/track, 38913 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x97646c29
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1               1          64      512000   83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/sda2              64       38912   312046593    f  W95 Ext'd (LBA)
      /dev/sda5              64         326     2104320   82  Linux swap / Solaris
      /dev/sda6   *         327        2938    20972544   83  Linux
      /dev/sda7            2938       38912   288968672+  83  Linux
    To be honest it looks like I lost it... The next step, if that happens, is to dump the partition to an image file and hope I can find or write some software to parse through the data looking for known file headers, I think.
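
    A sketch of the usual next attempt, assuming the filesystem was created with default parameters: mke2fs -n is a dry run that only prints where the backup superblocks would be, and each candidate can then be handed to e2fsck.

      # -n: simulate only, print backup superblock locations, write nothing
      sudo mke2fs -n /dev/sda7
      # then try each reported backup in turn, e.g.:
      sudo e2fsck -b 32768 /dev/sda7
      sudo e2fsck -b 98304 /dev/sda7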

    Read the article

  • Understanding Red Hat's recommended tuned profiles

    - by espenfjo
    We are going to roll out tuned (and numad) on ~1000 servers, the majority of them being VMware servers on either NetApp or 3Par storage. According to Red Hat's documentation we should choose the virtual-guest profile. What it does can be seen here: tuned.conf. We are changing the IO scheduler to NOOP, as both VMware and the NetApp/3Par should do sufficient scheduling for us. However, after investigating a bit I am not sure why they are increasing vm.dirty_ratio and kernel.sched_min_granularity_ns. As far as I have understood, increasing vm.dirty_ratio to 40% means that for a server with 20 GB of RAM, 8 GB can be dirty at any given time unless vm.dirty_writeback_centisecs is hit first. And while flushing these 8 GB, all IO for the application will be blocked until the dirty pages are freed. Increasing dirty_ratio would probably mean higher write performance at peaks, as we now have a larger cache, but then again when the cache fills, IO will be blocked for a considerably longer time (several seconds). The other question is why they are increasing sched_min_granularity_ns. If I understand it correctly, increasing this value will decrease the number of time slices per epoch (sched_latency_ns), meaning that running tasks will get more time to finish their work. I can understand this being a very good thing for applications with very few threads, but for e.g. Apache or other processes with a lot of threads, would this not be counter-productive?
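
    For reference, a quick sketch of how to compare the values before and after the profile is applied (the 8 GB figure above is just 40% of 20 GB of RAM):

      # current values on a running system
      sysctl vm.dirty_ratio vm.dirty_background_ratio kernel.sched_min_granularity_ns
      # switch to the profile under test, then re-check
      sudo tuned-adm profile virtual-guest
      sysctl vm.dirty_ratio kernel.sched_min_granularity_ns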

    Read the article

  • How to test nginx proxy timeouts

    - by mkorszun
    Target: I would like to test all Nginx proxy timeout parameters in a very simple scenario. My first approach was to create a really simple HTTP server and put in some delays: between listen and accept to test proxy_connect_timeout, between accept and read to test proxy_send_timeout, and between read and send to test proxy_read_timeout.
    Test:
    1) Server code (python):
      import socket
      import os
      import time
      import threading

      def http_resp(conn):
          conn.send("HTTP/1.1 200 OK\r\n")
          conn.send("Content-Length: 0\r\n")
          conn.send("Content-Type: text/xml\r\n\r\n\r\n")

      def do(conn, addr):
          print 'Connected by', addr
          print 'Sleeping before reading data...'
          time.sleep(0)    # Set to test proxy_send_timeout
          data = conn.recv(1024)
          print 'Sleeping before sending data...'
          time.sleep(0)    # Set to test proxy_read_timeout
          http_resp(conn)
          print 'End of data stream, closing connection'
          conn.close()

      def main():
          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          s.bind(('', int(os.environ['PORT'])))
          s.listen(1)
          print 'Sleeping before accept...'
          time.sleep(130)    # Set to test proxy_connect_timeout
          while 1:
              conn, addr = s.accept()
              t = threading.Thread(target=do, args=(conn, addr))
              t.start()

      if __name__ == "__main__":
          main()
    2) Nginx configuration: I have extended the Nginx default configuration by setting proxy_connect_timeout explicitly and adding a proxy_pass pointing to my local HTTP server:
      location / {
          proxy_pass http://localhost:8888;
          proxy_connect_timeout 200;
      }
    3) Observations:
    proxy_connect_timeout - Even though I set it to 200s and sleep only 130s between listen and accept, Nginx returns 504 after ~60s, which might be because of the default proxy_read_timeout value. I do not understand how proxy_read_timeout could affect the connection at such an early stage (before accept). I would expect 200 here. Please explain!
    proxy_send_timeout - I am not sure if my approach to testing proxy_send_timeout is correct; I think I still do not understand this parameter correctly. After all, a delay between accept and read does not trigger proxy_send_timeout.
    proxy_read_timeout - this one seems to be pretty straightforward. Setting a delay between read and write does the job.
    So I guess my assumptions are wrong and I probably do not understand the proxy_connect and proxy_send timeouts properly. Can someone explain them to me using the above test if possible (or by modifying it if required)?
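
    For completeness, a minimal location block that pins all three timeouts explicitly (values are arbitrary), so each directive can be ruled in or out during the test instead of falling back to its 60s default:

      location / {
          proxy_pass            http://localhost:8888;
          proxy_connect_timeout 200s;
          proxy_send_timeout    200s;
          proxy_read_timeout    200s;
      }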

    Read the article

  • iptables configuration under Ubuntu

    - by aioobe
    I'm following a tutorial on setting up a DNS tunnel. I've run into the following instruction: "Now you need to enable forwarding on this server. I use iptables to implement masquerading. There are many HOWTOs about this (a simple one, for example). On Debian, the configuration file for iptables is in /var/lib/iptables/active. The relevant bit is:
      *nat
      :PREROUTING ACCEPT [6:1596]
      :POSTROUTING ACCEPT [1:76]
      :OUTPUT ACCEPT [1:76]
      -A POSTROUTING -s 10.0.0.0/8 -j MASQUERADE
      COMMIT
    Restart iptables: /etc/init.d/iptables restart"
    The problem is that I don't have any /var/lib/iptables/active. (I'm on Ubuntu.) How can I accomplish this? I suspect that I should just interact with the iptables command somehow, but I have no clue what to write. Best would probably be if I could put the commands in a script somehow, I suppose. (A side note: if I execute a few iptables commands they won't be there forever, right? The rules will be discarded on reboot?)
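
    A sketch of the Ubuntu equivalent: the rules are entered with the iptables command and then saved and restored by hand, since there is no /var/lib/iptables/active (the subnet comes from the tutorial; the file paths are conventional choices, not requirements):

      # enable forwarding and add the masquerade rule
      sudo sysctl -w net.ipv4.ip_forward=1
      sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -j MASQUERADE
      # persist the current rules across reboots
      sudo sh -c 'iptables-save > /etc/iptables.rules'
      # then add this line under the relevant iface stanza in /etc/network/interfaces:
      #   pre-up iptables-restore < /etc/iptables.rules
      # and make forwarding permanent by uncommenting net.ipv4.ip_forward=1 in /etc/sysctl.conf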

    Read the article

  • Missing whole disk device in OpenSolaris

    - by Jeff Mc
    I have begun experimenting with Solaris and ZFS as a NAS. All was going very smoothly until I had a drive failure. When I replaced the drive, I no longer had a device file mapped to the whole disk: /dev/dsk/c7t3d0 does not exist, but c7t2d0 and c7t4d0 both do. Also, the sd@3,0:wd file under the /devices/ tree is non-existent. Do I have to prepare/partition the disk somehow to cause the whole disk device to exist? Here are a few outputs that might be useful.
      jeffmc@ats-ds2:/dev/dsk$ zpool status
        pool: datapool
       state: DEGRADED
      status: One or more devices could not be opened. Sufficient replicas exist for
              the pool to continue functioning in a degraded state.
      action: Attach the missing device and online it using 'zpool online'.
         see: http://www.sun.com/msg/ZFS-8000-2Q
       scrub: none requested
      config:
              NAME        STATE     READ WRITE CKSUM
              datapool    DEGRADED     0     0     0
                mirror-0  DEGRADED     0     0     0
                  c7t2d0  ONLINE       0     0     0
                  c7t3d0  UNAVAIL      0     0     0  cannot open
                mirror-1  ONLINE       0     0     0
                  c7t4d0  ONLINE       0     0     0
                  c7t5d0  ONLINE       0     0     0

      jeffmc@ats-ds2:/dev/dsk$ zpool replace datapool c7t3d0
      cannot open 'c7t3d0': no such device in /dev/dsk
      must be a full path or shorthand device name

      jeffmc@ats-ds2:/dev/dsk$ sudo format
      Searching for disks...done
      AVAILABLE DISK SELECTIONS:
        0. c7t0d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@0,0
        1. c7t1d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@1,0
        2. c7t2d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@2,0
        3. c7t3d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@3,0
        4. c7t4d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@4,0
        5. c7t5d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@5,0
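
    A sketch of the sequence usually tried after a hot swap on Solaris, before any manual partitioning: rebuild the /dev links from the device tree, confirm the disk is configured, then retry the replace (device names follow the output above):

      # clean up stale links and rebuild /dev/dsk entries for newly attached disks
      sudo devfsadm -Cv
      # check that the slot holding the new disk shows up as connected/configured
      cfgadm -al
      # once /dev/dsk/c7t3d0* exists, retry the replacement
      sudo zpool replace datapool c7t3d0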

    Read the article

  • Running phpMyAdmin with XAMPP on Ubuntu 12.10

    - by Luigi Tiburzi
    I know this is a common problem and there are many solutions on the web, but I'm trying everything and nothing is working: I can't get phpMyAdmin running on my machine. I installed XAMPP with:
      sudo tar xvfz ./Downloads/xampp-linux-1.8.1.tar.gz -C /opt
    then I did the chmod trick that is supposed to put an end to access issues, and I changed the default location of my PHP projects from /var/www to Dropbox/php. Then I started XAMPP in the usual way:
      sudo /opt/lampp/lampp start
    When I run one of my PHP projects the output on the web is fine, but if, for example, I type localhost in my browser I get "It works" and not the usual XAMPP interface, and most of all, when I try to access localhost/phpmyadmin I get the login page, insert the username (root) and password, and then get:
      You don't have permission to access /phpmyadmin/index.php on this server.
      Apache/2.2.22 (Ubuntu) Server at localhost Port 80
    I tried the "Require all granted" trick and some others, but nothing is working. I even tried to uninstall phpMyAdmin and reinstall it, but that is not working either. I don't know how to proceed. Thanks for your help.
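
    One hedged observation worth checking first: the "It works" page and the "Apache/2.2.22 (Ubuntu)" signature look like the distribution's own Apache answering port 80 rather than XAMPP's bundled server, so the phpMyAdmin being hit may not be XAMPP's at all. A quick sketch to confirm:

      # which process owns port 80?
      sudo netstat -tlnp | grep ':80 '
      # if it is the system apache2, stop it and let XAMPP take the port
      sudo service apache2 stop
      sudo /opt/lampp/lampp restart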

    Read the article

  • CLOSE_WAIT sockets burst - perhaps because of iptables settings?

    - by Fabrizio Giudici
    I have an Ubuntu 12.04 server virtual box where basically the installed software and configuration are the defaults, plus the installation of a Jetty 6 server which serves a few websites. To keep things simple I didn't install Apache httpd and used iptables to expose Jetty (which runs on port 8080) on port 80. These are the results of /sbin/iptables -t nat -L:
      Chain PREROUTING (policy ACCEPT)
      target     prot opt source     destination
      REDIRECT   tcp  --  anywhere   localhost                      tcp dpt:http redir ports 8080
      REDIRECT   tcp  --  anywhere   Ubuntu-1104-natty-64-minimal   tcp dpt:http redir ports 8080

      Chain INPUT (policy ACCEPT)
      target     prot opt source     destination

      Chain OUTPUT (policy ACCEPT)
      target     prot opt source     destination
      REDIRECT   tcp  --  anywhere   localhost                      tcp dpt:http redir ports 8080
      REDIRECT   tcp  --  anywhere   Ubuntu-1104-natty-64-minimal   tcp dpt:http redir ports 8080

      Chain POSTROUTING (policy ACCEPT)
      target     prot opt source     destination
    I must confess I have a shallow comprehension of how iptables works, in particular of the different kinds of chains. This thing works, but sometimes I have an explosion of sockets that stay permanently in the CLOSE_WAIT state. I know what this state means, but since I didn't write the code that manages the servlets (they are handled by Jetty) I can't fix the problem by patching my code. Eventually the number of CLOSE_WAIT sockets builds up and makes the server unresponsive, so I have to restart Jetty. I've looked around for similar problems with CLOSE_WAIT and only found cases related to the programmer's code, or problems with Tomcat, not Jetty. I was wondering whether they could be related to a partially broken iptables configuration (the alternative is a bug in Jetty 6, but I first want to exclude other possible causes). Thanks.
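
    A small diagnostic sketch, just to see on which side of the redirect the leaked sockets sit (local port 80 vs 8080, and which remote ends are involved) before blaming either iptables or Jetty:

      # list CLOSE_WAIT sockets grouped by local address:port
      netstat -tan | awk '$6 == "CLOSE_WAIT" {print $4}' | sort | uniq -c
      # watch the total grow over time
      watch -n 10 'netstat -tan | grep -c CLOSE_WAIT'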

    Read the article

  • AWS own email domain and some generic questions

    - by John Brunner
    I'm getting started with Amazon Web Services and I have a few questions I'm not sure about. Like every (company) webpage, I want to use an "[email protected]" email address, but how is that done? I looked at godaddy.com (for domain registration); they offer me an email address like I want, but for 3 dollars per month. Is this possible with AWS? Because at AWS you just get a complex domain which is not very user-friendly or serious-looking. I also want to host my dynamic webpage in the Amazon cloud, but I'm not sure if I'm doing that right. I've read many guides, and all I know is that I have to purchase an Elastic Compute Cloud instance and a Simple Storage Service bucket... and every guide works with the basic Linux package; why not Windows? Is it more expensive? I just want to host a MySQL server for the dynamic webpage, which is reached over a normal domain. And one last question: when I sign up for an AWS account it asks me for an email account, but I find it a little unserious to enter my free-webmailer address there... How is it normally done? Thanks in advance! Best regards, john.

    Read the article

  • Two servers, two domains, one IP. mod_proxy beginner

    - by Gutsav
    I run two virtual web servers (both running Apache 2 on Debian). I have just one external IP but two domains, and I want one domain going to each of the servers. I've understood that I need a reverse proxy, and I enabled both the mod_proxy and mod_proxy_http modules on the "primary server". Do I need to enable anything on the "secondary server"? I also understood that I need to write some things in a virtual host file, but what? On the primary server, I have a virtual host file for one of the domains, and some for subdomains. I want domain1.tld to go to the primary server (port 80 is forwarded to it, so that works) and domain2.tld to go to the other server (internal IP 192.168.0.x). No ports need to be forwarded to it, right? So, what do I add, and in which virtual host file? Or a new one? Other questions suggest adding ProxyPass and ProxyPassReverse, but I'm lost anyway, and I just don't understand the Apache documentation. Thanks in advance.
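
    A minimal sketch of a name-based vhost on the primary server that proxies domain2.tld to the secondary box; the internal IP is left as the placeholder from the question, and nothing special needs to be enabled on the secondary server beyond plain Apache listening on port 80:

      # e.g. /etc/apache2/sites-available/domain2.tld (enable with a2ensite, then reload)
      <VirtualHost *:80>
          ServerName domain2.tld
          ProxyPreserveHost On
          ProxyPass        / http://192.168.0.x/
          ProxyPassReverse / http://192.168.0.x/
      </VirtualHost>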

    Read the article

  • Redirect port 10000 to HTTPS in Apache

    - by Hamid Elaosta
    I have been reading around and trying different configurations to get a request to my server on port 10000 to redirect from HTTP to HTTPS. For some reason I can't figure out how to make it happen when I use port 10000, although I can set a rewrite rule for port 80 (implicit) to do it. All I want is for a request as follows:
      http://127.0.0.1:10000
    to redirect me to
      https://127.0.0.1:10000
    but it needs to be written so that it also works when accessed via my domain name externally. My current vhost, the last of many different attempts, is currently set as follows, but it doesn't seem to work at all:
      <VirtualHost *:10000>
          RewriteEngine On
          RewriteCond %{HTTPS} off
          RewriteRule (.*) https://%{HTTP_POST}%{REQUEST_URI}
          ErrorLog "/var/log/httpd/webmin-redirect_error_log.log"
          CustomLog "/var/log/httpd/webmin-redirect_access_log.log" common
      </VirtualHost>
    I've also tried a few other things but nothing seems to work; any help would be appreciated. EDIT: I already have a rewrite in my httpd.conf that redirects port 80 to HTTPS. If I access port 10000 externally it is redirected to HTTPS, but from the LAN ("http://192.168.0.2:10000") it doesn't.
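
    For what it's worth, a sketch of the same rule with two likely fixes: %{HTTP_POST} in the vhost above is almost certainly meant to be %{HTTP_HOST}, and the rule needs redirect flags to actually send the client elsewhere (this still assumes the port 10000 vhost accepts plain HTTP in the first place):

      <VirtualHost *:10000>
          RewriteEngine On
          RewriteCond %{HTTPS} off
          RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
      </VirtualHost>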

    Read the article

  • Redeploy using Active Directory

    - by Noam Gal
    I am trying to use Group Policy to deploy our msi through AD. For some strange reason, when I overwrite the msi with a newer version and then go to the policy and click on "Redeploy Application", the application gets uninstalled on the users' machines, and all registry keys, binaries and shortcuts are gone from them. The "Add/Remove Programs" list still contains the application entry. I have managed to create a minimal vdproj that does nothing but write its current ProductVersion to a registry key, and created two versions of it (1.0.0 and 1.1.0). I still face the same problems when using this msi in my AD environment. I did check that the Package Codes and Product Codes are different for the two versions, and that the Upgrade Codes are identical. I also set RemovePreviousVersions to true. Checking with some other msi (Firefox 3.0.0 and 3.6.3) downloaded from a site dedicated to AD deployment, it worked just as expected (first installing 3.0.0, then overwriting the msi and clicking on "Redeploy", and the users got 3.6.3 after the next log-off/log-on). What am I missing here?

    Read the article

  • Why does running "$ sudo chmod -R 664 . " cause me to get access denied on all affected directories?

    - by Codemonkey
    I have a project folder with messy permissions on all files. I've had the bad tendency of setting everything to octal permissions 777 because it solved all non-security-related issues. Then FTP uploads, files created by text editors, etc. each got their own set of permissions, making everything a mess. I've decided to pull myself together and start using permissions the way they were meant to be used. I figured 664 was a good default for all my files and folders, and I'd just remove permissions for others on private files and add +x for executable files. The second I changed my project folder to 664, however:
      $ sudo chmod -R 664 .
      $ ls
      ls: cannot open directory .: Permission denied
    Which makes no sense to me. I have read/write permissions, and I'm the owner of the project folder. The leftmost part of ls -l in my project folder looks like this:
      -rw-rw-r-- 1 codemonkey codemonkey ...
      drw-rw-r-- 5 codemonkey codemonkey ...
      -rw-rw-r-- 1 codemonkey codemonkey ...
      -rw-rw-r-- 1 codemonkey codemonkey ...
      drw-rw-r-- 3 codemonkey codemonkey ...
      -rw-rw-r-- 1 codemonkey codemonkey ...
      -rw-rw-r-- 1 codemonkey codemonkey ...
      -rw-rw-r-- 1 codemonkey codemonkey ...
      drw-rw-r-- 4 codemonkey codemonkey ...
      drw-rw-r-- 5 codemonkey codemonkey ...
    I assume this has something to do with the permissions on the directories, but what?
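
    A sketch of the usual way out: directories need the execute (search) bit to be entered and listed, so files and directories are best handled separately rather than with one recursive mode.

      # run from the project root: 664 for files, 775 for directories
      find . -type f -exec chmod 664 {} +
      find . -type d -exec chmod 775 {} +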

    Read the article

  • Redirection of outbound UDP port for NTP

    - by pboin
    For my residential service I changed ISPs to Zoom/Armstrong. Just after that, my NTP daemons stopped working. I dug deep and diagnosed the problem: only unprivileged ports are getting out. When I run 'ntpdate', for example, I go out on a high, unprivileged port and get a response on UDP 123. That's fine. The 'ntpd' daemon, though, expects to go out on 123 and get its reply there as well. This must be a common problem, because it's directly addressed in the NTP troubleshooting guide. Just to see what would happen, I wrote a detailed email to the general support address at Armstrong. They replied almost immediately with a complete technical answer! They have everything <1024 blocked, except for a few ports to support outbound VPN. So, the question: can I use iptables to essentially rewrite my outbound UDP 123 up to 2123 or something like that? If I do, does there need to be a corresponding 2123-to-123 rule to translate the reply? This seems like NAT, but with ports, not addresses. True, I could run ntpdate from cron, but that loses all of the adjustment smarts of NTP.
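
    A sketch of the port rewrite in the nat table (the interface name is an assumption). Connection tracking remembers the mapping, so replies arriving for port 2123 are translated back to 123 automatically; no second rule is needed:

      # rewrite the source port of outgoing NTP packets from 123 to 2123
      iptables -t nat -A POSTROUTING -o eth0 -p udp --sport 123 --dport 123 -j MASQUERADE --to-ports 2123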

    Read the article

  • How to have a shell script available everywhere I SSH to

    - by aib
    I have a shell script which I simply cannot do without: bar from Theiling Online. I use SSH a lot, on a variety of *nix servers. However, I am not a system administrator and usually don't have the time or privileges to install it on every server I connect to. It is apparently a very portable sh script and has command-line options to export itself as a shell function, which got me thinking: could I use one of OpenSSH's subjectively obscure features to export it everywhere I go? My first thought was to assign the source to an environment variable like BAR="cat -v" and then execute it on the other side as `$BAR`, but 1) I can't even get the cat example to work locally, 2) I don't know how to put the script's actual multiline source into an environment variable, and 3) I have yet to see a machine with PermitUserEnvironment enabled. I guess I could even live with an ssh option that writes a file called ~/bar at logon, but a more volatile solution would be better. Calling wget http://.../bar at logon would be unacceptable. Any ideas? P.S. PuTTY-specific solutions, though I doubt any exist, are also fine.
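
    One low-tech sketch: a local wrapper that copies the script into the remote host's /tmp and then opens an interactive shell with it on the PATH (the function name and local script location are made up; the remote side only needs a writable /tmp):

      # add to ~/.bashrc on the local machine
      sshbar() {
          scp -q ~/bin/bar "$1":/tmp/bar &&
          ssh -t "$1" 'chmod +x /tmp/bar; export PATH=/tmp:$PATH; exec bash'
      }
      # usage: sshbar user@server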

    Read the article

  • Multidimensional data table?

    - by ShreevatsaR
    [Apologies if this sort of question is off-topic for SuperUser. Please redirect to the right place if so.] There is a 3-dimensional array of values. (That is, instead of a table/2-dimensional array with values in a grid, the values can be thought of in a cube instead.) Is there a way to display this "cube" interactively, ideally on a webpage? Specifically, given the data, it would work something like this: the user selects two of the 3 variables. He then sees a "stack" of tables, one for each value of the third variable (cross-sections, in other words). By selecting the appropriate table from the stack, he can see the (i,j,k) value he wants. The "technology" for displaying such a thing (stacked tables, rotation, etc.) already exists, so this seems the sort of thing that someone ought to have written already. To be clear: I don't need sophisticated graphics necessarily, just the ability to select from cross-sections of variables. But I have no experience with (say, for displaying on a webpage) what web gadgets exist, so I'm clueless how to even search for one. (Google searches like "multidimensional data visualization" didn't throw up anything useful. Google Spreadsheets can do a few kinds of charts which can be embedded on a webpage, but I cannot tell if this is one of them.) [I can imagine how it ought to work for higher dimensions. For four-dimensions, instead of selecting just a stack, you'd first select an (i,j) from an "outer table", which would show all (k,l) values for that (i,j). For higher dimensions, inductively: you select (i,j), and then repeat what you'd do with 2 fewer dimensions.] So has this been written? Is this easy to write? Where ought one to look for such a thing?

    Read the article

  • What can cause a kernel hang on Red Hat 4?

    - by Ivan Buttinoni
    I have to solve a nasty problem on a ten-machine "cluster": randomly, one of these machines hangs during a heavy computation; sometimes it still pings, sometimes not. The problem was described to me over the phone; I've not yet touched or seen these machines, so I can't be more precise. It seems there's no (real) keyboard or monitor attached to them, so I have nothing in terms of keyboard LEDs or messages on a monitor. Don't worry: what I really need are suggestions on where to look for the problem, and on what can cause a kernel hang on a working machine. I also saw this post, but it seems to be the same need in a different situation. My ideas so far:
    - HW problem (RAM, CPU, fan, etc.)
    - bad autofs configuration
    - bad nfs(?) configuration
    - presence of a trojan/hacker/etc.
    - /dev/"swap" linked to /dev/zero
    - kernel out of memory(??)
    - kernel bug
    In other words, I am trying to imagine what kind of event could occur that crashes the kernel instead of the application that generated the event. What hangs have YOU experienced before? Write to me! TIA
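
    A small sketch of what can be prepared on each node in advance so the next hang leaves a trace, assuming some form of console (serial, KVM or IPMI) can eventually be attached:

      # let the magic SysRq key dump task and memory state even when userspace is dead
      sysctl -w kernel.sysrq=1
      # reboot automatically 30 seconds after a panic instead of sitting there hung
      sysctl -w kernel.panic=30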

    Read the article

  • apt-mirror does not mirror the i18n directory

    - by Fred
    I need to set up a local Ubuntu mirror so the whole network doesn't need to hit remote servers in order to update and install new packages. Following a brief tutorial found here, I managed to get a server up and running that correctly mirrors packages from the main and restricted categories. However, when I call apt-get update on a client, I get a couple of errors such as:
      Ign http://192.168.1.18 karmic/main Translation-fr
      Ign http://192.168.1.18 karmic/restricted Translation-fr
    Checking back on the server, I see that apt-mirror only took the binary-amd64 directory of the mirror, and didn't take the i18n directory that would provide Translation-fr. The manpage for apt-mirror doesn't say anything about i18n, and Google is of no help either. How do I properly mirror i18n? My current mirror.list file is as follows:
      ############# config ##################
      #
      # set base_path    /var/spool/apt-mirror
      #
      # if you change the base path you must create the directories below with write privileges
      #
      # set mirror_path  $base_path/mirror
      # set skel_path    $base_path/skel
      # set var_path     $base_path/var
      # set cleanscript $var_path/clean.sh
      # set defaultarch  <running host architecture>
      # set postmirror_script $var_path/postmirror.sh
      set run_postmirror 0
      set nthreads     20
      set _tilde 0
      #
      ############# end config ##############
      deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic main restricted
      deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic-updates main restricted
      clean http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive
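
    If the installed apt-mirror version simply ignores Translation files, one workaround people use is to fetch them in the postmirror hook. A sketch only: run_postmirror must be set to 1, the paths follow the mirror.list above, and the Translation-fr.bz2 filename is an assumption about what the upstream archive offers:

      #!/bin/sh
      # /var/spool/apt-mirror/var/postmirror.sh
      base=http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive/dists
      dest=/var/spool/apt-mirror/mirror/mirror.cc.columbia.edu/pub/linux/ubuntu/archive/dists
      for suite in karmic karmic-updates; do
        for comp in main restricted; do
          mkdir -p "$dest/$suite/$comp/i18n"
          wget -q -N -P "$dest/$suite/$comp/i18n" "$base/$suite/$comp/i18n/Translation-fr.bz2"
        done
      done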

    Read the article

  • Acer recovery disks not bootable?

    - by user13743
    We got a new Acer laptop with Vista installed at work. As it's getting ready to go out into the field, we wanted to do a burn-in test on it. We made the recovery DVDs before we ran the test. Part of the burn-in was bonnie++, which does a destructive read/write test of the hard drive. The machine passed with flying colors, but after trying to boot from the recovery DVD to begin re-installing the system, the machine fell back to attempting a PXE boot after a while. After doing some googling, it appears these 'recovery' disks expect a certain recovery partition to exist on the hard drive, are in fact not bootable at all, and are useless in the absence of the recovery partition. Is this the case, and is this "The Way Things Are" with all PC manufacturers and Windows Vista+ nowadays? How do I get my hands on actual bootable DVDs? I've emailed Acer support. I see an option on their site to purchase recovery disks, but I have the suspicion that these are the same non-bootable disks that I burned on the new system. Will Acer provide actual boot disks?

    Read the article

  • EFS Remote Encryption

    - by Apoulet
    We have been trying to set up EFS across our domain. Unfortunately, reading/writing files over a network share does not work; we get an "Access Denied" error. Another worrying fact is that I managed to get it working for one machine, but no other would work. The machines are all Windows 2008 R2, running as VMs under an ESXi host. According to http://technet.microsoft.com/en-us/library/bb457116.aspx#EHAA we set up the involved machines to be trusted for delegation. The users are not restricted and can be trusted for delegation. The users have logged in on both sides and can read/write encrypted files without issues locally. I enabled Kerberos logging in the registry, and these are the relevant logs I get on the machine that holds the encrypted files, for every certificate that the user possesses (only the Key Name changes):
      Event ID 5058: Audit Success, "Other System Events"
      Key file operation.
      Subject:
        Security ID:     {MyDOMAIN}\{MyID}
        Account Name:    {MyID}
        Account Domain:  {MyDOMAIN}
        Logon ID:        0xbXXXXXXX
      Cryptographic Parameters:
        Provider Name:   Microsoft Software Key Storage Provider
        Algorithm Name:  Not Available.
        Key Name:        {CE885431-9B4F-47C2-8415-2D766B999999}
        Key Type:        User key.
      Key File Operation Information:
        File Path:       C:\Users\{MyID}\AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-4585646465656-260371901-2912106767-1207\66099999999991e891f187e791277da03d_dfe9ecd8-31c4-4b0f-9b57-6fd3cab90760
        Operation:       Read persisted key from file.
        Return Code:     0x0

      Event ID 5061: Audit Failure, "System Integrity"
      Cryptographic operation.
      Subject:
        Security ID:     {MyDOMAIN}\{MyID}
        Account Name:    {MyID}
        Account Domain:  {MyDOMAIN}
        Logon ID:        0xbXXXXXXX
      Cryptographic Parameters:
        Provider Name:   Microsoft Software Key Storage Provider
        Algorithm Name:  RSA
        Key Name:        {CE885431-9B4F-47C2-8415-2D766B999999}
        Key Type:        User key.
      Cryptographic Operation:
        Operation:       Open Key.
        Return Code:     0x8009000b
    Could this be related to this error from the CryptAcquireContext function? NTE_BAD_KEY_STATE 0x8009000B: "The user password has changed since the private keys were encrypted." The problem is that the users I am using at the moment cannot change their password.
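
    One quick check worth sketching: from the file server, list which certificates are actually able to decrypt one of the affected files and compare the thumbprints with what the user currently holds on the client (the file path is a placeholder):

      rem which users/certificates can decrypt this file?
      cipher /c "D:\Share\document.docx"
      rem the user's current certificates in the personal store, run as that user
      certutil -user -store My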

    Read the article

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 x 2 TB disks. The disk array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an xfs and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful:
    - writing the inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, and then the next 50 are written (according to the console display)
    - when I try to cancel the operation with Control+C it hangs for half a minute before it is really cancelled
    The performance of the disks individually is very good; I've run bonnie++ on each one separately with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel the values are only reduced by about 10 MB/s. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and the readahead size without success, but I don't think they are that relevant for the filesystem creation operation. The server details:
      Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux
      mdadm - v2.6.7.1
    Does anyone have a suggestion on how to debug this further?
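
    A sketch of two things usually checked before deeper debugging: whether the array is still resyncing underneath mkfs, and whether the filesystem is aligned to the RAID geometry. Assuming the 64k figure is the per-disk chunk size and the block size is 4 KiB, stride = 64/4 = 16, and with 3 data disks stripe-width = 16 x 3 = 48 (the /dev/md0 device name is an assumption):

      # is a resync/check still eating the bandwidth?
      cat /proc/mdstat
      # per-device utilisation while mkfs is running
      iostat -x 1
      # re-create the filesystem aligned to the RAID5 stripe
      mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0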

    Read the article

  • Flash drive suddenly died. Why? Can I recover it?

    - by mg
    Hi, I have a flash drive that I haven't used much, but after a few months of inactivity it died. I know that flash drives have a limited number of write cycles, but I am sure that this is not the problem. I tried to create a new partition table and to format the drive; nothing worked. This is the output of mkfs.ext2:
      marco@pinguina:~$ sudo LANG=en.UTF-8 mkfs.ext2 -v -c /dev/sdc1
      [sudo] password for marco:
      mke2fs 1.41.11 (14-Mar-2010)
      fs_types for mke2fs.conf resolution: 'ext2', 'default'
      Calling BLKDISCARD from 0 to 4001431552 failed.
      Filesystem label=
      OS type: Linux
      Block size=4096 (log=2)
      Fragment size=4096 (log=2)
      Stride=0 blocks, Stripe width=0 blocks
      244320 inodes, 976912 blocks
      48845 blocks (5.00%) reserved for the super user
      First data block=0
      Maximum filesystem blocks=1002438656
      30 block groups
      32768 blocks per group, 32768 fragments per group
      8144 inodes per group
      Superblock backups stored on blocks:
              32768, 98304, 163840, 229376, 294912, 819200, 884736
      Running command: badblocks -b 4096 -X -s /dev/sdc1 976911
      badblocks: Input/output error during ext2fs_sync_device
      Checking for bad blocks (read-only test): done
      Block 0 in primary superblock/group descriptor area bad.
      Blocks 0 through 2 must be good in order to build a filesystem.
      Aborting....
    Is there something I can do to recover it?
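
    A quick sketch to separate a dead controller from a merely broken filesystem; if raw reads of the device already fail with I/O errors, no mkfs option will help and the drive is most likely gone:

      # kernel messages right after plugging the stick in
      dmesg | tail -n 30
      # can the first megabytes be read at all?
      sudo dd if=/dev/sdc of=/dev/null bs=1M count=64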

    Read the article

  • Apache address-based access control

    - by stijn
    I have an Apache instance serving different locations, e.g.
      https://host.com/jira
      https://host.com/svn
      https://host.com/websvn
      https://host.com/phpmyadmin
    Each of these has access control rules based on IP address/hostname. Some of them use the same configuration, though, so I have to repeat the same rules each time:
      Order Deny,Allow
      Deny from All
      Allow from 10.35 myhome.com mycollegueshome.com
    Is there a way to make these reusable so that I don't have to change each instance every time something changes? I.e., can I write this once and then use it for a couple of locations? Using SetEnvIf maybe? It would be nice if I could do something like this pseudo-config:
      <myaccessrule>
        Order Deny,Allow
        Deny from All
        Allow from 10.35 myhome.com mycollegueshome.com
      </myaccessrule>

      <Proxy /jira*>
        AccessRule = myaccessrule
      </Proxy>
      <Location /svn>
        AccessRule = myaccessrule
      </Location>
      <Directory /websvn>
        AccessRule = myaccessrule
      </Directory>
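
    This is close to what mod_macro provides; a sketch of the pseudo-config above expressed with it (the module has to be installed and enabled first; it ships as a third-party module for Apache 2.2 and is bundled from 2.4 on):

      <Macro myaccessrule>
          Order Deny,Allow
          Deny from All
          Allow from 10.35 myhome.com mycollegueshome.com
      </Macro>

      <Proxy /jira*>
          Use myaccessrule
      </Proxy>
      <Location /svn>
          Use myaccessrule
      </Location>
      <Directory /websvn>
          Use myaccessrule
      </Directory>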

    Read the article

  • How do I enable additional debugging output from Ansible and Vagrant?

    - by Brian Lyttle
    I'm investigating Ansible for server and application provisioning. My application is currently provisioned with shell scripts in Vagrant. Rather than rewrite my scripts, I've taken a sample and attempted to deploy it. It appears to deploy fine, but I'm seeing a failure message after what looks like a series of successful steps:
      » vagrant provision    ~/vm/blvagrant 1 ?
      [default] Running provisioner: ansible...

      PLAY [web-servers] ************************************************************

      GATHERING FACTS ***************************************************************
      ok: [192.168.9.149]

      TASK: [install python-software-properties] ************************************
      ok: [192.168.9.149] => {"changed": false, "item": ""}

      TASK: [add nginx ppa if it ubuntu 10.04 and up] *******************************
      ok: [192.168.9.149] => {"changed": false, "item": "", "repo": "ppa:nginx/stable", "state": "present"}

      TASK: [update apt repo] *******************************************************
      ok: [192.168.9.149] => {"changed": false, "item": ""}

      TASK: [install nginx] *********************************************************
      ok: [192.168.9.149] => {"changed": false, "item": ""}

      TASK: [copy fixed init for nginx] *********************************************
      ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0755", "owner": "root", "path": "/etc/init.d/nginx", "size": 2321, "state": "file", "uid": 0}

      TASK: [service nginx] *********************************************************
      ok: [192.168.9.149] => {"changed": false, "item": "", "name": "nginx", "state": "started"}

      TASK: [write nginx.conf] ******************************************************
      ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0644", "owner": "root", "path": "/etc/nginx/nginx.conf", "size": 1067, "state": "file", "uid": 0}

      PLAY RECAP ********************************************************************
      192.168.9.149 : ok=8 changed=0 unreachable=0 failed=0

      Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
    How do I go about getting additional debug information? I've already added ansible.verbose = true to my Vagrant config, which results in the dictionaries being displayed in the output above.
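
    A sketch of the knobs usually turned next (Vagrantfile syntax for the Ansible provisioner; the playbook path is a placeholder). ansible.verbose accepts a verbosity string as well as true, which passes the flag through to ansible-playbook, and Vagrant's own logging can be raised independently with the VAGRANT_LOG environment variable:

      # Vagrantfile
      config.vm.provision "ansible" do |ansible|
        ansible.playbook = "provisioning/site.yml"   # placeholder path
        ansible.verbose  = "vvvv"                    # connection-level debug from ansible-playbook
      end

      # shell: full Vagrant-side debug output for the same run
      VAGRANT_LOG=debug vagrant provision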

    Read the article

  • GVIM hangs when saving through GVFS' FTP

    - by Lie Ryan
    I loved GNOME's Nautilus and its FTP integration: being able to mount a remote FTP directory as a regular bookmark/directory and double-click any remote file to open it in any unmodified program. I also loved editing text files with GVim. However, if I double-click a file in Nautilus to open a text file in GVim, saving the file takes about 10 seconds and GVim hangs for that amount of time. The major irritant is that I cannot continue editing while the text editor is waiting for the write to finish; this delay interrupts my workflow and my train of thought, and saving becomes a painful process. The other problem is that I don't think simply uploading a file should take that much time. I'm aware of GVim's internal FTP support, but it is not as well integrated with Nautilus's FTP. So a few questions: Is there a way to make GVim or GVFS save in the background while I continue editing? Why is GVFS so slow? Is there any way to make GVFS use a single persistent FTP connection instead of creating a new FTP connection each time? I'm on Gentoo Linux x86-64.

    Read the article
