Search Results

Search found 20226 results on 810 pages for 'window parent'.


  • Squid3 not caching simple request and response

    - by Nick Spacek
    Hi folks, I've pared down my squid.conf to try to figure this out: http_port 80 accel defaultsite=host.to.cache cache_peer ip.to.cache parent 80 0 no-query originserver acl our_sites dstdomain host.to.cache http_access allow our_sites refresh_pattern . 1 20% 4320 Requests are being proxied correctly, so that's a start. Here's a request: GET http://host.to.cache/path?some_param=true Accept: */* Accept-Charset: ISO-8859-1,utf-8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en Connection: keep-alive Host: host.to.cache User-Agent: myuseragent And the response: Connection: keep-alive Content-Length: 585 Content-Type: application/xml Date: Thu, 06 Jan 2011 18:33:11 GMT Via: 1.0 localhost (squid/3.0.STABLE19) X-Cache: MISS from localhost X-Cache-Lookup: MISS from localhost:80 The response has no caching-related headers, but I thought that refresh_pattern would set a default behavior for responses without caching-related headers. For my test, I wanted to cache everything for one minute at minimum. Am I missing something obvious? I did take a peek at this question: Squid isn't caching ...and ran through the page here: http://www.mnot.net/cache_docs/ briefly, but didn't see anything relevant (not to say that there isn't, I could have missed something). Thanks for any help.
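
    A hedged aside, not from the original exchange: two things are worth ruling out on Squid 3.0 before blaming refresh_pattern. The stock squid.conf of that era usually ships an "acl QUERY urlpath_regex cgi-bin \?" plus "cache deny QUERY" pair, which blocks caching of any URL containing "?" (and the test request here has a query string). Also, the min/max fields of refresh_pattern are in minutes and only apply when the response carries no explicit freshness information, which is the case above. A sketch of the relevant lines for a one-minute-minimum test (the values are assumptions, the directive syntax is real):

        # refresh_pattern <regex> <min minutes> <percent> <max minutes>
        refresh_pattern . 1 20% 60
        # and make sure no leftover "cache deny QUERY" rule is still active,
        # or URLs with a query string will never be stored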


  • How do I prevent spawning of zombie-like apache2 processes on Dreamhost VPS?

    - by Jonathan Hayward
    I have had a website for months or longer on a DreamHost VPS, and I have had vague memories on, in initial setup, having to turn off some customized Apache under /dh to get a standard Apache 2.x to work with. Things have been going along on an even keel, when I started making some changes lately and I found that when I tried to bounce Apache (/usr/sbin/apachectl restart), it couldn't bind to port 80, and my site had been converted from a big literature site to a small parking site. I tried to see what was listening on 80, and it was a DreamHost-customized Apache that had spawned. I killed all of them, restarted Apache, and changed the parent directory under /dh to mode 000. That was a day or two ago. I was bouncing Apache again in trying to get a new site to load under HTTPS, and I found that once again DreamHost's apache had spawned, from the directory I set to mode 000, and once again converted my site to a parking page. I renamed the directory, but I am very skeptical of whether I have permanently killed the DreamHost-customized Apache. Besides duct tape options like a crontab to kill and delete each minute, how can I permanently turn off the Apache processes that are spawning from a location under /dh and interfering with standard Apache? What should I be doing that I am not? Can DreamHost's technical support stop the interference? Thanks,
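
    Not part of the original question, but a generic way to find out what keeps resurrecting the DreamHost Apache is to walk up the parent chain of whatever owns port 80 and then look for the supervisor that restarts it (often a watchdog run from init, cron or a monitoring daemon). A rough sketch, where <PID> and <PPID> are placeholders for the values you see:

        sudo lsof -nP -i :80                 # which process is holding port 80 right now
        ps -o pid,ppid,user,cmd -p <PID>     # note its parent...
        ps -o pid,ppid,user,cmd -p <PPID>    # ...and keep walking up to the thing that respawns it
        grep -r /dh /etc/cron* /etc/init.d 2>/dev/null   # look for a watchdog referencing /dh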


  • Removing/modifying LDAP objectclasses/attributes using olc

    - by Foezjie
    I'm having trouble using openldap's olc to modify a schema without shutting down the server. To test some things out, I made the following schema: objectIdentifier tests orgUlyssisOID:4 objectIdentifier testAttribute tests:1 objectIdentifier testObjectClass tests:2 attributeType ( testAttribute:1 NAME 'attr1' DESC 'attribuut 1' SYNTAX '1.3.6.1.4.1.1466.115.121.1.40' ) attributeType ( testAttribute:2 NAME 'attr2' DESC 'attribuut 2' SUP userPassword SINGLE-VALUE ) objectclass ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST (attr1 $ attr2 ) ) And added it to a new schema called test. (cn={9}test.ldif in cn=schema). Now I can't seem to figure out how to delete class1 from that schema. I use the following LDIF (and tried lots of variations too, to no avail) dn : cn={9}test,cn=schema,cn=config changetype: modify delete: olcObjectClasses olcObjectClasses: ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) ) Running ldapmodify -x -W -D cn=admin,cn=config -f test.ldif -d 0 gives no output. -d 1 gives this: ldap_create ldap_sasl_bind ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP localhost:389 ldap_new_socket: 4 ldap_prepare_socket: 4 ldap_connect_to_host: Trying 127.0.0.1:389 ldap_pvt_connect: fd: 4 tm: -1 async: 0 ldap_open_defconn: successful ldap_send_server_request ber_scanf fmt ({it) ber: ber_scanf fmt ({i) ber: ber_flush2: 38 bytes to sd 4 ldap_result ld 0x7f2a8ccf3430 msgid 1 wait4msg ld 0x7f2a8ccf3430 msgid 1 (infinite timeout) wait4msg continue ld 0x7f2a8ccf3430 msgid 1 all 1 ** ld 0x7f2a8ccf3430 Connections: * host: localhost port: 389 (default) refcnt: 2 status: Connected last used: Mon Sep 10 11:29:57 2012 ** ld 0x7f2a8ccf3430 Outstanding Requests: * msgid 1, origid 1, status InProgress outstanding referrals 0, parent count 0 ld 0x7f2a8ccf3430 request count 1 (abandoned 0) ** ld 0x7f2a8ccf3430 Response Queue: Empty ld 0x7f2a8ccf3430 response count 0 ldap_chkResponseList ld 0x7f2a8ccf3430 msgid 1 all 1 ldap_chkResponseList returns ld 0x7f2a8ccf3430 NULL ldap_int_select read1msg: ld 0x7f2a8ccf3430 msgid 1 all 1 ber_get_next ber_get_next: tag 0x30 len 12 contents: read1msg: ld 0x7f2a8ccf3430 msgid 1 message type bind ber_scanf fmt ({eAA) ber: read1msg: ld 0x7f2a8ccf3430 0 new referrals read1msg: mark request completed, ld 0x7f2a8ccf3430 msgid 1 request done: ld 0x7f2a8ccf3430 msgid 1 res_errno: 0, res_error: <>, res_matched: <> ldap_free_request (origid 1, msgid 1) ldap_parse_result ber_scanf fmt ({iAA) ber: ber_scanf fmt (}) ber: ldap_msgfree ldap_free_connection 1 1 ldap_send_unbind ber_flush2: 7 bytes to sd 4 ldap_free_connection: actually freed So no real indication of an error. Where am I doing it wrong? Bonus question: If I have some entries of a certain objectclass, can I modify it (add/remove attributeTypes) without removing the entries? Thanks in advance for all help.
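
    Two hedged observations rather than a verified fix: the LDIF as pasted has a space before the colon in "dn :", which is not valid LDIF, so if that is really in the file (and not just a paste artifact) ldapmodify may never parse a change record, which would match the debug trace showing only a bind and an unbind. Also, because cn=config values are X-ORDERED, a single olcObjectClasses value can be deleted by its {index} instead of by retyping it byte for byte. A sketch, where the {0} index is an assumption to be confirmed with ldapsearch first:

        dn: cn={9}test,cn=schema,cn=config
        changetype: modify
        delete: olcObjectClasses
        olcObjectClasses: {0}

    On the bonus question: schema changes are generally refused only when they would invalidate existing entries, so adding optional (MAY) attributes to an objectClass is normally fine, while removing attributes that existing entries still carry is not.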


  • background jobs and ssh connections

    - by petrelharp
    This question has come up quite a lot (really a lot), but I'm finding the answers to be generally incomplete. The general question is "Why does/doesn't my job get killed when I exit/kill ssh?", and here's what I've found. The first question is: How general is the following information? The following seems to be true for modern Debian linux, but I am missing some bits; and what do others need to know? All child processes, backgrounded or not of a shell opened over an ssh connection are killed with SIGHUP when the ssh connection is closed only if the huponexit option is set: run shopt huponexit to see if this is true. If huponexit is true, then you can use nohup or disown to dissociate the process from the shell so it does not get killed when you exit. If huponexit is false, which is the default on at least some linuxes these days, then backgrounded jobs will not be killed on normal logout. But even if huponexit is false, then if the ssh connection gets killed, or drops (different than normal logout), then backgrounded processes will still get killed. This can be avoided by disown or nohup as in (2). There is some distinction between (a) processes whose parent process is the terminal and (b) processes that have stdin, stdout, or stderr connected to the terminal. I don't know what happens to processes that are (a) and not (b), or vice versa. Final question: How can I avoid behavior (3)? In other words, by default in Debian backgrounded processes run along merrily by themselves after logout but not after the ssh connection is killed. I'd like the same thing to happen to processes regardless of whether the connection was closed normally or killed. Or, is this a bad idea?
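
    A short sketch of the usual ways to make a job survive both a clean logout and a dropped connection, regardless of the huponexit setting (./long_job.sh is a stand-in for whatever you actually run):

        # start it detached in the first place
        nohup ./long_job.sh > job.log 2>&1 &

        # or detach a job that is already running in the background
        disown -h %1            # the shell will no longer send it SIGHUP

        # or give it its own session so the terminal is never its controlling tty
        setsid ./long_job.sh > job.log 2>&1 < /dev/null &

        # check what the current shell will do on exit
        shopt huponexit

    Running the job inside screen or tmux gets the same effect, with the bonus that you can reattach later.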


  • Under FreeBSD, can a VLAN interface have a smaller MTU than the primary interface?

    - by larsks
    I have a system with two physical interfaces, combined into a LACP aggregation group. That LACP channel has two VLANs, one untagged (the "native vlan") and one using VLAN tagging. This gives us: lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4> ether 00:25:90:1d:fe:8e inet 10.243.24.23 netmask 0xffffff00 broadcast 10.243.24.255 media: Ethernet autoselect status: active laggproto lacp laggport: em1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING> vlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=3<RXCSUM,TXCSUM> ether 00:25:90:1d:fe:8e inet 10.243.16.23 netmask 0xffffff80 broadcast 10.243.16.127 media: Ethernet autoselect status: active vlan: 610 parent interface: lagg0 Is it possible to set a 9K MTU on lagg0 while preserving the 1500 byte MTU on vlan0? Normally I would simply try this out, but this is actually on a vendor-supported platform and I am loathe to make changes "behind the back" of their administration interface. This system is roughly FreeBSD 7.3.
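
    As far as I can tell, stock FreeBSD does allow a vlan(4) interface to carry a smaller MTU than its parent (it only refuses a larger one), so on an unmanaged box the experiment would be roughly the following; whether the vendor's management layer tolerates or reverts it is a separate question:

        ifconfig lagg0 mtu 9000          # jumbo frames on the untagged side
        ifconfig vlan0 mtu 1500          # the tagged vlan keeps the standard MTU
        ifconfig lagg0; ifconfig vlan0   # verify

    On some releases the MTU may also need to be raised on the lagg members (em0/em1) for the 9000-byte setting to stick.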


  • nginx with ssl: I get a 403 and log "directory index of '...dir...' is forbidden" log message. works fine with unencrypted connection

    - by user72464
    As mentioned in the title, I had nginx working fine with my rails app until I tried to add the ssl server. The unencrypted connection still works, but ssl always returns a 403 page with the following line in the error log: directory index of "/home/user/rails/" is forbidden, client: [my ip], server: _, request: "GET / HTTP/1.1", host: "[server ip]" Below is my nginx.conf server block: server { listen 80; listen 443 ssl; ssl_certificate /etc/ssl/server.crt; ssl_certificate_key /etc/ssl/server.key; client_max_body_size 4G; keepalive_timeout 5; root /home/user/rails; try_files $uri/index.html $uri.html $uri @app; location @app { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://0.0.0.0:8080; } error_page 500 502 503 504 /500.html; location = /500.html { root /home/user/rails; } } The /home/user/rails directory and its parent are readable by everyone, and they belong to the user nginx. The certificate and key files have the following permissions: -rw-r--r-- 1 nginx root 830 Nov 8 09:09 server.crt -rw--w---- 1 nginx root 887 Nov 8 09:09 server.key Any clue?
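
    An educated guess rather than a confirmed diagnosis: that error message means nginx resolved the request to the directory itself and, with no index file and no autoindex, refused it; the bare $uri element in try_files matches /home/user/rails/ for "/", so the request never falls through to @app. Why the plain-HTTP side behaves differently is not obvious from the snippet (another server block may be catching port 80). A sketch of the more usual Rails-style layout, assuming static files live in public/ and the app listens on localhost (both assumptions):

        server {
            listen 80;
            listen 443 ssl;
            ssl_certificate     /etc/ssl/server.crt;
            ssl_certificate_key /etc/ssl/server.key;

            root /home/user/rails/public;   # assumption: serve only the public/ directory
            try_files $uri @app;            # anything that is not a real file goes to the app

            location @app {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://127.0.0.1:8080;   # 127.0.0.1 assumed; the original used 0.0.0.0
            }
        }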


  • Where to download replacement "Explorerframe.DLL" Files for x64 Windows 7 Pro?

    - by Ben Franchuk
    After posting this question, I did some research to reveal what the problem likely was, and found what I need to fix. Following this is the original post, then my updated question. A few months ago I ended up requiring to change my computer's SID to fix a problem it had been having- Instead of fixing the problem, though, it screwed up my at-the-time current install of windows, to the point of me needing to do a fresh install. As I am in possession of an OEM copy of Windows 7 Pro 64 Bit, I successfully reinstalled over the dead copy with that (all the files that were on the computer previous to this windows install were put in a Windows.old folder). Everything installed and worked absolutely fine, except for one thing. The problem I am experiencing is that, in some Windows Explorer windows, the explorer pane doesn't show. Instead, it simply shows a white area where the pane would show. This makes some software not usable, I recently realized; Software such as Cubase, which use just the explorer pane to select file save locations, cannot save at all as the pane itself is... not operational. Below is a screenshot of this problem as it occurs in cubase; ...and again as it shows in UTorrent in the save location selector window. The highlighted area is where the sidebar would NORMALLY be. Pardon my scribbling over some of the things in the window- I would personally rather the internet did not get a glimpse of my files. I have yet to find a common reason why the pane works in some applications when they pull explorer, and others not. I have yet to see it go away, and the software affected by it has been affected since I reinstalled my copy of windows. Initially, I was able to live with it as I can type out save locations in the file name bar to navigate, but with software like Cubase, I do not have this option. Reinstalling windows again is NOT an option. Here's the updated question. After posting this question originally, I did some research on the problem in question, and it turns out that this is extremely easily fixable via replacing the file "ExplorerFrame.DLL" which is located in the System32 and SystemWOW64 Folders, in the windows folder, on the C:\ drive. As I quite frequently customize my computer, this is a normal thing for me to do and I know exactly how to safely and properly replace this file. The only problem is that I cannot for the life of me find a download of this file that actually works with my computer. I tried a couple from some different sites but they all caused explorer to not restart (I was given an error when starting the application from Task Manager) and was forced to revert to the broken .DLL files. Since there are 2 separate "ExplorerFrame.DLL" files; one for 64 bit and the other for 32 bit, I am assuming that I will need to download 2 separate versions to replace the corrupted ones. Where can I acquire these files? I am currently running Windows 7 Professional x64 Bit.
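
    As an alternative to hunting for DLL downloads (downloaded system DLLs are a malware risk and rarely match the installed build), the supported route for a damaged system file on Windows 7 is the System File Checker, which restores both the 64-bit and 32-bit copies from the local component store. A sketch, run from an elevated Command Prompt:

        sfc /scannow

        REM then review what was found and repaired
        findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfcdetails.txt"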


  • Adding 2nd DC to the domain from a different subnet over VPN.

    - by EagerToLearn
    I'm in the process of adding a second DC to our domain and just want to make sure I have all the steps right before proceeding. Info: DC1 is 2008 R2 Standard. DC2 is 2008 R2 Standard. Network1 is 192.168.39.x/24 Network2 is 10.0.0.x/24 VPN is Sonicwall. The 2 DC's will be at two different sites, but the networks are connected by hardware VPN. (Sonicwall). The main DC server will be on the 192.168.39.0/24 network. The 2nd DC will be on 10.0.0.0/24. Here are the steps I plan to take; please let me know if I'm missing anything. Part 1: AD Sites and Services on DC1, create a new site and subnet for DC2. (Or should I create a new one for both?) (Can I use the default IPSiteLink and not change anything in there other than refresh timer?) Part 2: Point the DNS of DC2 to DC1. Run /forestprep and /domainprep (on both, or just DC1?). Dcpromo and select "Additional Domain Controller for Existing Domain". Then continue with normal steps with default locations for databases. EDIT: Didn't realize this was like reddit and required two skipped lines to skip one :P EDIT 2: When DCPromo-ing DC2, do I need to have "Append primary and connection specific DNS" and "Append parent suffixes of the primary DNS suffix" checked?


  • What's wrong with this vcl config for varnish-cache as load balancer?

    - by dabito
    I have the current configurations active on my default.vcl varnish file on the machine that balances the load for other two machines (the other two machines also have varnish active). My intention is to have this server do only the load balancing and the other machines do the processing and also their own caching. My problem is that even with the config testing (not even a stress test or anything, just a few requests a minute) I get the guru meditation error and have to restart varnish. This is the default.vcl for the load balancing server: backend vader { .host = "app1.server.com"; .probe = { .url = "/"; .interval = 10s; .timeout = 4s; .window = 5; .threshold = 3; } } backend malgus { .host = "app2.server.com"; .probe = { .url = "/"; .interval = 10s; .timeout = 4s; .window = 5; .threshold = 3; } } director dooku round-robin { { .backend = vader; } { .backend = malgus; } } sub vcl_recv { if (req.http.host ~ "^balancer.server.com$") { set req.backend = dooku; } } Am I doing something wrong or missing something? EDIT: This is varnishlog's output: 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345839995 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345839998 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345840001 1.0 0 Backend_health - malgus Still sick 4--X--- 0 3 5 0.000000 3.846876 0 Backend_health - vader Still sick 4--X--- 0 3 5 0.000000 3.839194 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345840004 1.0 14 SessionOpen c 10.150.7.151 38272 :80 14 ReqStart c 10.150.7.151 38272 458200540 14 RxRequest c GET 14 RxURL c / 14 RxProtocol c HTTP/1.1 14 RxHeader c Host: dooku-dev.excelsior.com 14 RxHeader c Connection: keep-alive 14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 14 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 14 RxHeader c Accept-Encoding: gzip,deflate,sdch 14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4 14 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c / 14 Hash c dooku-dev.excelsior.com 14 VCL_return c hash 14 VCL_call c pass pass 14 FetchError c no backend connection 14 VCL_call c error deliver 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 503 14 TxResponse c Service Unavailable 14 TxHeader c Server: Varnish 14 TxHeader c Content-Type: text/html; charset=utf-8 14 TxHeader c Retry-After: 5 14 TxHeader c Content-Length: 418 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Fri, 24 Aug 2012 20:26:44 GMT 14 TxHeader c X-Varnish: 458200540 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 418 14 ReqEnd c 458200540 1345840004.916415691 1345840004.965190172 0.020933390 0.048741817 0.000032663 14 SessionClose c error 14 StatSess c 10.150.7.151 38272 0 1 1 0 1 0 256 418 14 SessionOpen c 10.150.7.151 38273 :80 14 ReqStart c 10.150.7.151 38273 458200541 14 RxRequest c GET 14 RxURL c /favicon.ico 14 RxProtocol c HTTP/1.1 14 RxHeader c Host: dooku-dev.excelsior.com 14 RxHeader c Connection: keep-alive 14 RxHeader c Accept: */* 14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 14 RxHeader c Accept-Encoding: gzip,deflate,sdch 14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4 14 RxHeader c Accept-Charset: 
ISO-8859-1,utf-8;q=0.7,*;q=0.3 14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c /favicon.ico 14 Hash c dooku-dev.excelsior.com 14 VCL_return c hash 14 VCL_call c pass pass 14 FetchError c no backend connection 14 VCL_call c error deliver 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 503 14 TxResponse c Service Unavailable 14 TxHeader c Server: Varnish 14 TxHeader c Content-Type: text/html; charset=utf-8 14 TxHeader c Retry-After: 5 14 TxHeader c Content-Length: 418 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Fri, 24 Aug 2012 20:26:45 GMT 14 TxHeader c X-Varnish: 458200541 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 418 14 ReqEnd c 458200541 1345840005.226389885 1345840005.226457834 0.000026941 0.000043154 0.000024796 14 SessionClose c error 14 StatSess c 10.150.7.151 38273 0 1 1 0 1 0 256 418
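
    A hint rather than a confirmed fix: the log shows both backends "Still sick", so every request ends up in vcl_error with "no backend connection"; the guru meditation is a symptom of the health probes failing, not of the traffic itself. If memory serves, a probe defined with .url sends a bare GET without a Host header and only accepts a 200 inside the timeout, so an app that answers "/" with a redirect or a vhost error page will be marked sick even though it works in a browser. A sketch of a more explicit probe (Varnish 3 syntax, hostnames as in the original):

        backend vader {
            .host = "app1.server.com";
            .port = "80";
            .probe = {
                .request =
                    "GET / HTTP/1.1"
                    "Host: app1.server.com"
                    "Connection: close";
                .interval  = 10s;
                .timeout   = 4s;
                .window    = 5;
                .threshold = 3;
            }
        }

    The same applies to the malgus backend; alternatively, probing a cheap known-good URL such as a static health-check page avoids the issue entirely.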


  • Cannot delete a SharePoint web application

    - by Vijay
    What I have? I have normal web application and it has 3 site collections with name, "PDirectory". Other than this I have only Central administration web application in the farm. What I want? I want to delete that web application, "PDirectory". What problem am I facing? I am not able to delete the web application. I get below error when I try to delete it but, the site collections got deleted! Error: An object in the SharePoint administrative framework, "SPWebApplication Name=XXX Parent=SPWebService", could not be deleted because other objects depend on it. Update all of these dependants to point to null or different objects and retry this operation. The dependant objects are as follows: SPFarm Name=SharePoint_Config SPFarm Name=SharePoint_Config at Microsoft.SharePoint.Administration.SPConfigurationDatabase.DeleteObject(Guid id) at Microsoft.SharePoint.Administration.SPConfigurationDatabase.DeleteObject(SPPersistedObject obj) at Microsoft.SharePoint.Administration.SPPersistedObject.Delete() at Microsoft.SharePoint.Administration.SPWebApplication.Delete() at Microsoft.SharePoint.ApplicationPages.DeleteWebApplicationPage.BtnSubmit_Click(Object sender, EventArgs e) at System.Web.UI.WebControls.Button.OnClick(EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) Can somebody tell me how I can delete this web application? Thanks in advance!
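
    If this farm happens to be SharePoint 2010 (the stack trace does not say), one thing worth trying before surgery on the configuration database is removing the web application from the management shell instead of Central Administration; the cmdlet below is real, but the URL is a placeholder and -Confirm will prompt before anything destructive. On 2007 the rough equivalent is stsadm -o unextendvs.

        Remove-SPWebApplication -Identity "http://pdirectory.example.com" -DeleteIISSite -RemoveContentDatabases -Confirm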


  • Encrypted folders and files stay encrypted when copied

    - by user66126
    Hi all, I just tried out Windows 7 feature to encrypt a folder. I found that when I access the encrypted folder from another computer (the parent folder of the encrypted folder is shared) I can see the files there but I could not open it (which is good). But when I copy the file to another folder outside the encrypted folder (regardless it is on the same remote computer or to the computer from where I am accessing the files) then I can open the file without any problem. This might be how it works ... but that's not what I need. My question is: Can I encrypt a folder (and all files inside), access those files (create, edit) seamlessly while I am logged in normally to the computer ... but make the files stays encrypted when they were copied to another directory outside the encrypted folder? Regardless they were copied to the same computer or another computer or uploaded to a remote server. If this is not a feature that Windows natively support, is there third party software that does that? Thank you.


  • Inheriting file ownership on linux

    - by John Hunt
    We have an ongoing problem here at work. We have a lot of websites set up on shared hosts, our cms writes many files to these sites and allows users of the sites to upload files etc.. The problem is that when a user uploads a file on the site the owner of that file becomes the webserver and therefore prevents us being able to change permissions etc via FTP. There are a few work arounds, but really what we need is a way to set a sticky owner if that's possible on new files and directories that are created on the server. Eg, rather than php writing the file as user apache it takes on the owner of the parent directory. I'm not sure if this is possible (I've never seen it done.) Any ideas? We're obviously not going to get a login for apache to the server, and I doubt we could get into the apache group either. Perhaps we need a way of allowing apache to set at least the group of a file, that way we could set the group to our ftp user in php and set 664 and 775 for any files that are written? Cheers, John.
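
    The "sticky owner" part is not possible on Linux (only root may chown a file to another user), but group inheritance is, via the setgid bit on the directory, and a default ACL can take care of the permission bits, which covers the 664/775 goal without touching Apache itself. A sketch with placeholder paths and group names:

        # new files and subdirectories created under uploads/ inherit the directory's group
        chgrp ftpusers /var/www/site/uploads
        chmod 2775 /var/www/site/uploads        # leading 2 sets the setgid bit

        # optionally force group rw on everything created later (filesystem must support ACLs)
        setfacl -d -m g:ftpusers:rwX /var/www/site/uploads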


  • kill a hung mount process

    - by John P
    I have a virtual machine drive that ran out of space, so I shut down the VM and extended the volume using lvextend. After resizing the partition (ext3), I ran e2fsck on it, and it found and corrected errors. Unfortunately, when I ran e2fsck one more time, there were more errors that had to be fixed. I went through 3 rounds of e2fsck before I decided to try mounting it to clean up some space manually. I tried mounting it, but the mount process hung. I tried to "kill -9" the mount process, but that did not kill it. I killed the parent process, but that did not kill it either. Any ideas on how to kill a rogue mount process? Some evidence: ps -l 13292 F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD 4 R 0 13292 1 99 85 0 - 17964 - ? 11:27 mount /dev/mapper/xen7-123p3 /tmp/p3/ lsof -p 13292 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME mount 13292 root cwd DIR 9,2 4096 25264129 /root mount 13292 root rtd DIR 9,2 4096 2 / mount 13292 root txt REG 9,2 61656 2916434 /bin/mount mount 13292 root mem REG 9,2 144776 31457282 /lib64/ld-2.5.so mount 13292 root mem REG 9,2 1718232 31457284 /lib64/libc-2.5.so mount 13292 root mem REG 9,2 23360 31457291 /lib64/libdl-2.5.so mount 13292 root mem REG 9,2 43808 31457783 /lib64/libblkid.so.1.0 mount 13292 root mem REG 9,2 247496 31457331 /lib64/libsepol.so.1 mount 13292 root mem REG 9,2 95464 31457337 /lib64/libselinux.so.1 mount 13292 root mem REG 9,2 154640 31457491 /lib64/libdevmapper.so.1.02 mount 13292 root mem REG 9,2 17936 31457472 /lib64/libuuid.so.1.2 mount 13292 root mem REG 9,2 56438208 12684878 /usr/lib/locale/locale-archive mount 13292 root 0u CHR 136,11 0t0 13 /dev/pts/11 (deleted) mount 13292 root 1u CHR 136,11 0t0 13 /dev/pts/11 (deleted) mount 13292 root 2u CHR 136,11 0t0 13 /dev/pts/11 (deleted) umount -f /tmp/p3/ umount2: Invalid argument umount: /tmp/p3/: not mounted
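
    For what it's worth, a process that is stuck inside the kernel cannot be killed until its system call returns, whatever the signal; the ps output shows the mount in state R burning CPU against a filesystem that e2fsck never fully cleaned, so the signals never get a chance to be delivered. A few commands that may at least show where it is stuck or tidy up the mount table (the PID follows the output above):

        cat /proc/13292/wchan; echo     # kernel function the process is waiting in, if any
        dmesg | tail -50                # ext3 errors usually land here
        umount -l /tmp/p3               # lazy detach, in case a half-completed mount is lingering

    If it stays wedged, another forced fsck pass (e2fsck -f) or a reboot of the host is usually the only way out.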


  • Snapshotting single disk of running Hyper-V VM

    - by modelnine
    I'm currently somewhat at a loss of how to create a snapshot of a single virtual hard-disk of a running Hyper-V VM. Generally, creating a differential disk while a server is shut down is no problem (i.e., call the new-vhd cmdlet and pass a ParentPath, then update the VHD-binding of the respective VM-device), but while the host is running, all I can find is checkpointing the VM as a whole (which creates snapshots of all attached disks), and leaves the VM-state in a form which isn't easily processable by external tools (i.e., it requires reading additional meta-data from the VM). Generally, what'd I'd like to happen for a single-disk snapshot (in my understanding) is: Pause the VM Rename current disk to some other name which specifies it as a base-snapshot Create a new VHD which has the renamed VHD as parent path and is marked as "current" Swap the VHD for the VM for the snapshotted hard-disk to the newly created differential VHD Resume the VM Is there any means to do this programatically? Update: I've seen that this is actually possible with SCSI-disks, i.e. pause the VM, remove the SCSI disk, make the snapshot, reattach the SCSI disk at the same position, resume the VM. And, the VM resumes properly. But: is something similar also possible with G1 machines for the boot disk which is always IDE?
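
    For completeness, here is roughly what those five steps look like with the Hyper-V PowerShell module that ships in 2012 and later; the poster clearly has some New-VHD equivalent on 2008 R2, but the other cmdlet names here are assumptions, the VM name and paths are placeholders, and as noted this only works online for hot-removable SCSI disks:

        Stop-VM -Name MyVM -Save        # or pause; the disk must not be written to during the swap
        Rename-Item 'D:\VHDs\data.vhdx' 'D:\VHDs\data-base.vhdx'
        New-VHD -Differencing -Path 'D:\VHDs\data.vhdx' -ParentPath 'D:\VHDs\data-base.vhdx'
        Set-VMHardDiskDrive -VMName MyVM -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 -Path 'D:\VHDs\data.vhdx'
        Start-VM -Name MyVM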


  • Ubuntu Server, 2 Ethernet Devices, Same Gateway - Want to force internet traffic through 1 device (or at least allow it to work!)

    - by Chris Drumgoole
    I have a Ubuntu 10.04 Server with 2 ethernet devices, eth0 and eth1. eth0 has a static IP of 192.168.1.210 eth1 has a static IP if 192.168.1.211 The DHCP server (which also serves as the internet gateway) sits at 192.168.1.1. The issue I have right now is when I have both plugged in, I can connect to both IPs over SSH internally, but I can't connect to the internet from the server. If I unplug one of the devices (e.g. eth1), then it works, no problem. (Also, I get the same result when I run sudo ifconfig eth1 down). Question, how can I configure it so that I can have both devices eth0 and eth1 play nice on the same network, but allow internet access as well? (I am open to either enforcing all inet traffic going through a single device, or through both, I'm flexible). From my google searching, it seems I could have a unique (or not popular) problem, so haven't been able to find a solution. Is this something that people generally don't do? The reason I want to make use of both ethernet devices is because I want to run different local traffic services on on both to split the load, so to speak... Thanks in advance. UPDATE Contents of /etc/network/interfaces: # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet dhcp # The secondary network interface #auto eth1 #iface eth1 inet dhcp (Note: above, I commented out the last 2 lines because I thought that was causing issues... but it didn't solve it) netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 192.168.1.0 192.168.1.1 255.255.255.0 UG 0 0 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 UPDATE 2 I made a change to the /etc/network/interfaces file as suggested by Kevin. Before I display the file contents and the route table, when I am logged into the server (through SSH), I can not ping an external server, so this is the same issue I was experiencing that led to me posting this question. I ran a /etc/init.d/networking restart after making the file changes. 
Contents of /etc/network/interfaces: # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet dhcp address 192.168.1.210 netmask 255.255.255.0 gateway 192.168.1.1 # The secondary network interface auto eth1 iface eth1 inet dhcp address 192.168.1.211 netmask 255.255.255.0 ifconfig output eth0 Link encap:Ethernet HWaddr 78:2b:cb:4c:02:7f inet addr:192.168.1.210 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::7a2b:cbff:fe4c:27f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6397 errors:0 dropped:0 overruns:0 frame:0 TX packets:683 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:538881 (538.8 KB) TX bytes:85597 (85.5 KB) Interrupt:36 Memory:da000000-da012800 eth1 Link encap:Ethernet HWaddr 78:2b:cb:4c:02:80 inet addr:192.168.1.211 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::7a2b:cbff:fe4c:280/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5799 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:484436 (484.4 KB) TX bytes:1184 (1.1 KB) Interrupt:48 Memory:dc000000-dc012800 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:635 errors:0 dropped:0 overruns:0 frame:0 TX packets:635 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:38154 (38.1 KB) TX bytes:38154 (38.1 KB) netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0
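
    Two things stand out in the follow-up configuration, offered as a diagnosis sketch rather than gospel: the stanzas say "inet dhcp" but then carry static address/netmask/gateway lines (which the dhcp method ignores), and the routing table ends up with two default routes, one per NIC, which is exactly what sends replies out of the wrong interface. Keeping both NICs but only one default route would look roughly like this:

        # /etc/network/interfaces (sketch)
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.1.210
            netmask 255.255.255.0
            gateway 192.168.1.1        # the only default route

        auto eth1
        iface eth1 inet static
            address 192.168.1.211
            netmask 255.255.255.0
            # deliberately no gateway line here

    Two interfaces on the same subnet can still answer from the "wrong" address (traffic to .211 may be replied to via eth0); if that matters, source-based routing with ip rule / ip route is the thorough fix.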


  • Can we do a DNSSEC 101? [closed]

    - by PAStheLoD
    Please share your opinions, FAQs, HOWTOs, best practices (or links to the one you think is the best) and your fears and thoughts about the whole migration (or should I just call it a new piece of tech?). Is DNSSEC just for DNS providers (name server operators)? What ought John Doe to do, who hosts johndoe.com at some random provider (GoDaddy, DreamHost and such)? Also, what if the provider's name server doesn't do automatic signing magic, can John do it manually? In a fire-and-forget way, without touching KSKs and ZSKs rollovers and updating and headaches?) Does it bring any change regarding CERT records? Do browsers support it? How come it became so complex? Why didn't they just merged it with SSL? DKIM is pretty straightforward, IANA/IETF could've opted for something like that. (Yes I know that creating a trust anchor would be still problematic, but browsers are already full of CA certs. So, they could've just let anyone get a cert for a domain for shiny green padlocks, or just generate one for a poor blue lock, put it into a TXT record, encrypt the other records and let the parent zone sign the whole for you with its cert.) Thanks! And for disclosure (it seemed like the customary thing to do around here), I've asked the same on the netsec subreddit.
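
    On the "can John do it manually" sub-question: if you run your own BIND 9 zone, manual signing is only a handful of commands (sketched below with a hypothetical zone); what no tool can do for you is deliver the DS record to the parent zone through your registrar and re-sign before the signatures expire. If the zone lives at a hosting provider's name servers, you are limited to whatever signing support they offer.

        # sketch, BIND 9.7+ tools, zone file db.example.com in the current directory
        dnssec-keygen -a RSASHA256 -b 1024 -n ZONE example.com          # zone-signing key
        dnssec-keygen -a RSASHA256 -b 2048 -f KSK -n ZONE example.com   # key-signing key
        dnssec-signzone -S -K . -o example.com db.example.com           # writes db.example.com.signed plus a dsset- file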


  • Changing the View of a Folder When It Is Opened from Finder

    - by user60044
    When I was running OS 10.4 I had all my folders beautifully organized with position and which ones should be opened in View mode and which one in Icon, etc. When I transitioned to 10.6 I found that OS ignored all that information and imposes an awful consequence of changing the view for some folder and that view moving up to all parent folders which don't have their views locked. The only way I can think of to get this functionality back is to either write a program that given a root directory will enumerate it setting the views based upon a static template I have. The other way I thought I could accomplish this is by Folder Actions. Aside from the fact that I don't know AppleScript it seems folder actions cannot be inherited. Since I have 1000s of folders involved here, all being inherited from a single root, I could not possibly manually add that action to each of those folders. What I would like to have is a folder action that whenever I open the folder from Finder, then if it contains any JPG, GIF, etc. type image files that it will automatically open that folder with a view of Icon with a reasonable size to support the number of images. If the folder contains only folders, then to open that folder in the list view. Does anyone have anything that could do this for me? Thank You, -- Mark


  • How do I configure IIS to allow access to network resources for PHP scripts?

    - by Dereleased
    I am currently working on a PHP front-end that joins together a series of applications running on separate servers; many of these applications generate files that I need access to, but these files (for various reasons) reside on their parent servers. If I, from the command line, issue a bit of script such as: <?php var_dump(glob("\\\\machine-name\\some\\share\\*")); I will get the full contents of that directory, proving that there's no problem programmatically with PHP reading the contents of a UNC share. However, if I try to execute the same script from the web server, I get an empty array -- more specifically, if I use more explicitly functions designed to "open" a directory like it was a file, I get access errors. I believe this to be a permissions issue, but I am not a server/network administrator type, so I'm not sure what I need to do to correct this and get my script running, and the links I've checked out have not been a terrible amount of help, perhaps due to my background, or lack thereof as far as IIS is concerned, coupled with the fact that we are not actually using .NET for this. Relevant Stats: Windows Server 2008 Standard SP2 IIS 7.0 PHP 5.2.9 I will be connecting to two types of servers: a few other nearly-identical Server 2008 machines, and a machine running embedded XP. Links that have not been particularly helpful but maybe I am just misreading: http://support.microsoft.com/?id=306158 http://support.microsoft.com/kb/207671/EN-US/ http://support.microsoft.com/kb/280383/
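
    The usual culprit here is the application pool identity: by default it carries no useful network credentials, so UNC access fails from the web server even though the same script works from your own command line. One common arrangement, sketched with a placeholder domain account (not the only option, and worth running past whoever owns security), is to run the pool as an account that has rights on the shares:

        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" ^
            /processModel.identityType:SpecificUser ^
            /processModel.userName:MYDOMAIN\svc-php ^
            /processModel.password:ChangeMe

    The same account also needs share and NTFS permissions on the remote machines, including the embedded-XP box.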


  • Master does not appear to be a git repository error

    - by EmmyS
    I've inherited a position and instructions for creating a new git repository. Unfortunately I've run into problems and no one here knows what to do. Hoping someone can help me out. Here are the instructions I was left: Create a new repository: For these steps you need to be in the gitosis-admin repository, if you don't have it, in a suitable parent folder do: git clone [email protected]:gitosis-admin.git Edit gitosis.conf file - in gitosis-admin root, under [group base-repo] section, add the name of the new repo to the end of the "writable =" section. Commit change and push back to gitosis-admin master. For the next commands, my_new_project represents the name of your project mkdir my_new_project cd my_new_project git init Copy in any files you want to use to start the repo git commit -a -m "Initializing new repository" git remote add origin [email protected]:my_new_project.git git push master git push master:qa So I did 1 and 2, with no problem. It created a local folder on my machine called gitosis-admin. I edited the gitosis.conf file as indicated. But when I try to do step 3 (which I assume is git push gitosis-admin master) bash tells me that fatal: 'master' does not appear to be a git repository fatal: The remote end hung up unexpectedly What am I doing wrong?
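
    The error itself is mundane: git push treats its first argument as a remote, so "git push master" asks for a remote named master, which does not exist. Assuming the remote added in the previous step is called origin, as in the inherited instructions, the last two steps should read:

        git push origin master
        git push origin master:qa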


  • How to perform this Windows 7 permissions change on many files via GUI or command line

    - by hippietrail
    After using my external hard drive on another Windows 7 computer to tweak photos with Windows Live Photo Gallery then upload them to Facebook I found the modified images were now not visible on the original Windows 7 computer. I'm not sure if the things I tried to get it working subsequently changed anything, but I do know this is the sequence of actions that makes the permissions of the modified files match those of the unmodified files: Right click on broken image file, select "Properties" On the "Security" tab press the "Advanced" button In the "Permissions" tab press the "Continue" button with the shield icon on it Tick the box marked "Include inheritable permissions from this object's parent Click the "Remove" button to remove the only current entry "Type: Allow, Name: Administrators (XYZ\Administrators), Permission: Full control, Inherited From: OK on the "Permissions" tab. OK on the "Security" tab. Now this same procedure does not work at the folder level. It results in "access denied" dialogs. I'm looking for some way to perform this exact modification on all the images I edited on the other computer. I'm happy to use the Windows GUI in Explorer or any other included tools. I'm happy to use the Windows command line. I'd prefer not to use a third-party tool since I'd have to be satisfied it's not doing anything else. I'm not looking for a different way to change permissions to other settings to make an external drive full of photos editable on multiple computers. At least not in this question.
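
    The GUI sequence above boils down to "re-enable inheritance and drop the explicit entries", and icacls can do the same thing recursively in one pass. A sketch, with a placeholder path (try it on a copy of a few files first):

        icacls "E:\Photos\Edited" /reset /T /C

    /reset replaces each file's ACL with what it would inherit from its parent, /T recurses, /C carries on past errors. If the folders themselves refuse access, running the command from an elevated prompt, or taking ownership first with takeown /R, usually clears that.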


  • Permissions Issue with Files Generated by PerfMon

    - by SvrGuy
    We are trying to implement some data logging to CSV files using a Data Collector Set in PerfMon (on a windows Server 2008R2 system). The issue we are running into is that we (seemingly) can't control the permissions being set on the log files created by perfmon. What we want is for the log files created by perfmon to have Everyone:F permissions (Full Control for Everyone). So, we have a directory structure setup where all logs go into a folder: c:\vms\PerfMonLogs\%MACHINENAME% (e.g. c:\vms\PerfMonLogs\EvaluationG2) In the above example, c:\vms\PerfMonLogs\EvaluationG2 has permissions Everyone:F (below is the icacls for this directory) EVALUATIONG2/ Everyone:(OI)(CI)(F) NT AUTHORITY\SYSTEM:(OI)(CI)(F) BUILTIN\Administrators:(OI)(CI)(F) BUILTIN\Performance Log Users:(OI)(R) When the data collector set runs, it creates new sub folders and files within c:\vms\PerfMonLogs\EvaluationG2, e.g. (C:\vms\PerfMonLogs\EVALUATIONG2\M11d26y2012N3) Each of these directories and files has the following permissions: M11d26y2012N3 NT AUTHORITY\SYSTEM:(OI)(CI)(F) BUILTIN\Administrators:(OI)(CI)(F) BUILTIN\Performance Log Users:(OI)(R) So these new folders and not simply inheriting permissions from the parent folder (don't know why). Now, we tried adding Everyone:F using the security tab on the collector set (No dice). Any ideas? How do we control the permissions on the log files generated by perfmon data collector set?
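
    I am not aware of a way to make the collector itself stamp Everyone:F on the folders it creates (as observed, the new subfolders get a fixed ACL rather than inheriting), so the pragmatic workaround is to re-apply the grant after each run, for example from a scheduled task or from the task a Data Collector Set can launch when it stops. A sketch using the paths from the question:

        icacls "C:\vms\PerfMonLogs\%COMPUTERNAME%" /grant "Everyone:(OI)(CI)F" /T /C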


  • Ways of file copy

    - by Tim
    I sometimes found that when using simple right-click and copy-and-paste, some files/directories are not copied completely or not at all, because of various reasons, such as some saved webpage files/directories have some strange characters in their names or their names are too long. For example, in Windows 7, I save this webpage http://www.howtogeek.com/howto/windows-vista/working-around-windows-vistas-shrink-volume-inadequacy-problems/ completely in a very deep directories whose parent directories may have long names, I cannot copy its top ancestry directory, as Windows complains the filename for the saved webpage directory is too long. In Ubuntu, sometimes I can save a file with some special character such as newline under some directory. But when I copy that directory, it will say the file name has some special character and I will have to manually remove the character. Such cases are complained in both Windows and Ubuntu. I was wondering what some better ways to accomplish the copy job in both Windows and Ubuntu. For example, will archiving all to be copied into a single archive help? If yes how to do that? Thanks and regards!
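
    On the archiving idea: yes, it helps, because the archive itself is one short-named file and the awkward names only exist inside it; on Windows, robocopy also tolerates over-long paths far better than an Explorer copy-and-paste. Rough examples of both, with placeholder paths:

        # Ubuntu: pack, move the single archive, unpack at the destination
        tar -czf saved-pages.tar.gz "some deep directory/"
        tar -xzf saved-pages.tar.gz -C /path/to/destination/

        :: Windows 7 (cmd): copy a whole tree without Explorer
        robocopy "C:\deep\source" "D:\backup\source" /E /R:1 /W:1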


  • What is the bash syntax to create a new directory in the directory above?

    - by mozerella
    I aim to make a script for mogrify. The mogrify command will resize images in a directory and put the resized images into a directory on the same directory level, with the same name as the work directory but with a suffix (_a). The new directory will be moved to another collection later on. Something like this: #!/bin/bash mkdir ../n_a for file in *{.JPG|.jpg}; do mogrify -path ../n_a -resize 1200x1200 -quality 96;done I'm guessing ../ denotes the parent dir when working in a child directory, but I need help here. Edit: "n" needs to be replaced with the syntax for the working directory name. Sorry, there was a typo as well: the third script line should have read n, not x. Edit2: This script does exactly what I need and it's silent. #!/bin/bash DEST="../${PWD##*/}_a" mkdir -p $DEST mogrify -path $DEST -resize 1200x1200 -quality 96 *.jpg *.JPG Thanks to vgoff for the correct PWD syntax and cesareriva (http://www.cesareriva.com/archives/722) for showing me the DEST function. Something else: ${PWD##*/}_a does not handle spaces in the directory name and the script fails. An empty dir is created in the same dir as the images. Found it out now: $DEST needs quoting too, so that mkdir creates the dir with a space in the name and mogrify writes the files to the right place, like this #!/bin/bash DEST="../${PWD##*/}_a" mkdir -p "$DEST" mogrify -path "$DEST" -resize 1200x1200 -quality 96 *.jpg *.JPG
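
    One small hardening note on the final version: with nullglob set, a pattern that matches nothing simply disappears instead of being handed to mogrify literally, so a folder with, say, no uppercase .JPG files no longer produces an error about a file called *.JPG. Sketch:

        #!/bin/bash
        shopt -s nullglob
        DEST="../${PWD##*/}_a"
        mkdir -p "$DEST"
        mogrify -path "$DEST" -resize 1200x1200 -quality 96 *.jpg *.JPG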


  • How to automate downloading files?

    - by Damon
    I got a book which had a pass to access digital versions of hi-res scans of much of the artwork in the book. Amazing! Unfortunately, the presentation of all of these is 177 pages of 8 images each, with links to zip files of jpgs. It is extremely tedious to browse, and I would love to be able to get all the files at once rather than sitting and clicking through each one separately. archive_bookname/index.1.htm - archive_bookname/index.177.htm Each of those pages has 8 links to files such as <snip>/downloads/_Q6Q9265.jpg.zip, <snip>/downloads/_Q6Q7069.jpg.zip, <snip>/downloads/_Q6Q5354.jpg.zip, which don't quite go in order. I cannot get a directory listing of the parent /downloads/ folder. Also, the files are behind a login wall, so using a non-browser tool might be difficult without knowing how to recreate the session info. I've looked into wget a little but I'm pretty confused and have no idea if it will help me with this. Any advice on how to tackle this? Can wget do this for me automatically?
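
    wget can do it, provided you can hand it the session: export the site cookies from your browser to a Netscape-format cookies.txt and let wget walk each index page one level deep, keeping only the zip links. A sketch, with a placeholder host (and check the site's terms before bulk-downloading):

        for i in $(seq 1 177); do
            wget --load-cookies cookies.txt -r -l 1 -nd -A '*.jpg.zip' \
                 "http://example.com/archive_bookname/index.$i.htm"
        done

    If the zip links live on a different host, add --span-hosts and a --domains list so the recursion is allowed to follow them.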


  • Read access to Active Directory property (uSNChanged)

    - by Tom Ligda
    I have an issue with read access to the uSNChanged property when doing LDAP searches. If I do an LDAP search with a user that is a member of the Domain Admins group (UserA), I can see the uSNChanged property for every user. The problem is that if I do an LDAP search with a user (UserB) that is not a member of the Domain Admins group, I can see the uSNChanged property for some users (UserGroupA) and not for some users (UserGroupB). When I look at the users in UserGroupA and compare them to the users in UserGroupB, I see a crucial difference in the "Security" tab. The users in UserGroupA have the "Include inheritable permissions from this object's parent" unchecked. The users in UserGroupB have that option checked. I also noticed that the users in UserGroupA are users that were created earlier. The users in UserGroupB are users created recently. It's difficult to quantify, but I estimate the border between creation time between the users in UserGroupA and UserGroupB is about 6 months ago. What can cause the user creation to default to having that security property checked as opposed to unchecked? A while back (maybe around 6 months ago?) I changed the domain functional level from Windows Server 2003 to Windows Server 2008 R2. Would that have had this effect? (I can't exactly downgrade the domain functional level to test it out.) Is this security property actually the cause of the issue with read access to the uSNChanged property on LDAP searches? It seems correlated, but I'm not sure about causation. What I want in the end is for all authenticated users to have read access to the uSNChanged property for all users when doing an LDAP search. I would also be OK if I could grant read access for that property to an AD group. Then I can control access by adding members to the group.
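
    On the "what I want in the end" part: read access to a single property can be granted with dsacls (to a group of your choosing, or to Authenticated Users) and allowed to inherit down to user objects, which sidesteps the per-user inheritance flags entirely. A sketch, where the domain DN and group are placeholders (run it on a DC or a machine with the AD DS tools):

        dsacls "DC=example,DC=com" /I:S /G "EXAMPLE\LDAP-Readers:RP;uSNChanged;user"

    That said, it is still worth working out why the newer accounts are created with inheritance enabled while the older ones were not, since whatever changed there is the real source of the difference.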

