Search Results

Search found 13011 results on 521 pages for 'catch block'.

Page 355/521 | < Previous Page | 351 352 353 354 355 356 357 358 359 360 361 362  | Next Page >

  • Adding a transaction ID to ruby-on-rails logs

    - by Blue Warrior NFB
    We have a RoR app (Rails 3.2.15 right now). As it has been getting busier, the log files it produces are becoming less and less useful for troubleshooting. When requests come in like this, it's not a problem:

        Started GET "/accounts/28088166/kittens/22894/rendered_png?file_id=5d3eaec77954a489b5ddd75143091767&kitten_store_id=9970569bbacf7b6dbeb4eb9295960d69&size=large" for 172.16.202.30 at 2013-11-12 13:45:00 +0000
        Processing by KittenController#rendered_png as HTML
          Parameters: {"file_id"=>"5d3eaec77954a489b5ddd75143091767", "kitten_store_id"=>"9970569bbacf7b6dbeb4eb9295960d69", "size"=>"large", "kitten_cam_id"=>"280941", "id"=>"kjlak357aw479607t"}
        Rendered text template (0.0ms)
        Sent data (1.8ms)
        Completed 200 OK in 1037.4ms (Views: 1.4ms | ActiveRecord: 98.4ms)

    Short request, quickly assembled, all the relevant log lines in one block. However, not all of our code renders in 1037ms. There are a few calls that can exceed several seconds, and during that time several of these quicker ones can come in. When that happens, it's very, very hard to identify which log lines belong to which GET:

        Sent data (4.1ms)
        Completed 200 OK in 767.4ms (Views: 3.2ms | ActiveRecord: 72.2ms)
        Completed 200 OK in 2338.0ms (Views: 0.2ms | ActiveRecord: 0.0ms)

    Ooookaaaay... which goes to what? Is it possible to add something like a transaction ID to these log lines? The log-spam would still be interspersed, but at least grep-magic would give me the unified entries that I need.

    Read the article

  • Backing up 80G hard drive 1G per day

    - by barrycarter
    I want to securely back up my 80G HD, but doing a complete backup takes forever and slows down my machine, so I want to back up just 1G per day. Details:
      - First hurdle: on the first day, I want to back up the "first" 1G of the hard drive. Of course, there really is no "first" 1G on a hard drive.
      - After 80 days, I'll have my whole HD backed up... assuming none of my files ever change, which of course they do. So the backup plan/program must also catch file creations/changes as they come along.
      - The backups must be consistent, in that I can restore my system by restoring the backups sequentially. In other words, "dd if=/harddrive" probably won't work.
      - The backups should encrypt file contents AND names, but I don't see this as a major hurdle.
      - Once the backup has backed up everything (even changed files), it can re-back up the first 1G on my hard drive. Even though this backup is redundant, that's OK, because I always want to be backing up something (e.g., if I'm backing up to optical media, the older media might start going corrupt).
    Is there a magic backup plan/program that does this? In reality, I want to do this for multiple machines with multiple drives each, but I think solving the above will solve the general case.
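
    For reference, a minimal sketch of one tool that covers the incremental and encryption requirements (though not the hard 1G-per-day cap): duplicity does a full run once, then only uploads changed data, and wraps everything in GPG-encrypted volumes so file names as well as contents are hidden. The key ID, source path and destination URL below are placeholders.

        #!/bin/bash
        # Hypothetical nightly cron job; adjust KEYID, SRC and DEST.
        KEYID=0xDEADBEEF
        SRC=/home
        DEST=sftp://backup@backuphost.example//backups/$(hostname -s)
        # --volsize splits the archive into ~1000 MB encrypted volumes;
        # --full-if-older-than forces a fresh full backup every 80 days.
        duplicity --encrypt-key "$KEYID" --volsize 1000 --full-if-older-than 80D "$SRC" "$DEST"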

    Read the article

  • Almost All Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a hard disc crash of one of two hard discs in a software RAID with an LVM on top. The server is running Citrix XenServer. On the hard disk which is still intact, the volume group is detected fine, but only one LV is left. (Some hashes replaced by "x".)

        # lvdisplay
          --- Logical volume ---
          LV Name                /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT
          VG Name                VG_XenStorage-x-x-x-x-408b91acdcae
          LV UUID                x-x-x-x-x-x-vQmZ6C
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                4.00 MiB
          Current LE             1
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:0

        root@rescue ~ # vgdisplay
          --- Volume group ---
          VG Name               VG_XenStorage-x-x-x-x-408b91acdcae
          System ID
          Format                lvm2
          Metadata Areas        1
          Metadata Sequence No  4
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                1
          Open LV               0
          Max PV                0
          Cur PV                1
          Act PV                1
          VG Size               698.62 GiB
          PE Size               4.00 MiB
          Total PE              178848
          Alloc PE / Size       1 / 4.00 MiB
          Free PE / Size        178847 / 698.62 GiB
          VG UUID               x-x-x-x-x-x-53w0kL

    I could understand if a full physical volume were lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes?

    EDIT: We are in a rescue system here. The problem is that the whole server does not boot (GRUB error 22). What we are trying to do is access the root filesystem, but everything was in the LVM. We have only this:

        (parted) print
        Model: ATA SAMSUNG HD753LJ (scsi)
        Disk /dev/sdb: 750GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End    Size   Type     File system  Flags
         1      32.3kB  750GB  750GB  primary               boot, lvm

    And this 750GB LVM volume is exactly what we see on top.
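
    One recovery avenue worth checking from the rescue system, assuming LVM metadata archives survived on the intact disk (the archive filename below is only an example; XenServer setups may keep little or no archive history, so treat this as a sketch):

        # See whether older metadata (still listing the missing LVs) was archived
        vgcfgrestore --list VG_XenStorage-x-x-x-x-408b91acdcae

        # If a suitable archive exists, restore it and re-activate the VG
        vgcfgrestore -f /etc/lvm/archive/VG_XenStorage-x-x-x-x-408b91acdcae_00003-1234567890.vg \
            VG_XenStorage-x-x-x-x-408b91acdcae
        vgchange -ay VG_XenStorage-x-x-x-x-408b91acdcae
        lvs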

    Read the article

  • troubleshooting postfix -> exchange connection issues

    - by Systemspoet
    I have three Linux-based mail routers that run postfix and relay mail to our on-premise Exchange server as well as to outlook.com, splitting the mail based on LDAP attributes. What I've observed sporadically since upgrading this spring from Exchange 2007 to 2010 is that all three of the mail relays will, for about 20 minutes, fail to connect to Exchange. Postfix logs it as "lost connection with exchange.contosso.edu"; this problem almost always hits all three mail relays at the same time, and lasts for slightly under 20 minutes. If I can catch it while it's occurring, and I manually do "telnet exchange.contosso.edu 25" from one mail relay and force a message through (helo, mail from, rcpt to, data, etc.), then it clears that relay up. The Exchange "server" is actually two machines with the HT (Hub Transport) role on them, load balanced via Windows NLB. I've worked pretty hard to figure out what's happening from the postfix side and I can't see any evidence of misbehavior. My question is, how do I attack the problem from the Exchange side? Is there a connection log, or a debug setting, or something I can do to log all of the inbound connections and tell me what's causing Exchange to drop them?

    Read the article

  • trouble executing php scripts with nginx

    - by lovesh
    My nginx config looks like this:

        server {
            listen 80;
            server_name localhost;

            location / {
                root /var/www;
                index index.php index.html;
                autoindex on;
            }

            location /folder1 {
                root /var/www/folder1;
                index index.php index.html index.htm;
                try_files $uri $uri/ index.php?$query_string;
            }

            location /folder2 {
                root /var/www/folder2;
                index index.php index.html index.htm;
                try_files $uri $uri/ index.php?$query_string;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    The problem with the above setup is that I am not able to execute PHP files. Now, as per my understanding of nginx config rules, when I am in my webroot (/), which is /var/www, the value of $document_root becomes /var/www, so when I request localhost/hi.php the fastcgi_param SCRIPT_FILENAME becomes /var/www/hi.php, which is the actual path of the PHP script. Similarly, when I request localhost/folder1/hi.php, $document_root becomes /var/www/folder1 because that is specified as the root in folder1's location block, so again the fastcgi_param SCRIPT_FILENAME becomes /var/www/folder1/hi.php. But since the above configuration does not work, there must be something wrong with my understanding. Please help?

    Read the article

  • Creating a private wiki

    - by Hand-E-Food
    I want to create a simple, private wiki, but am really struggling to find what I need. I require the following features:
      - Private wiki. Only I will read or write it.
      - Some formatting capability: headings, bold, italic, bullets, block quotes.
      - Wiki viewer for Windows 7. If it comes with an editor, I need to be able to hide it.
      - Page editor for Windows 7.
      - Page editor for iPhone.
      - Synchronized via the cloud but available offline in Windows.
    So far, my research has led me to the Markdown language. I can easily edit it as plain text using Notepad++ for Windows and Elements for iPhone. I can sync these files through Dropbox and have them available offline. What I can't find is a suitable viewer for Windows. I'd prefer to steer away from HTML due to its verbose formatting codes. Can anyone recommend a solution for me? If need be, I'll be happy to make a small one-off payment for software.

    Read the article

  • iptables rules for DNS/Transparent proxy with ip exceptions

    - by SlimSCSI
    I am running a router (a Netgear WNDR3700, if that matters) with DD-WRT. For content filtering I am using OpenDNS. I wanted to make sure a user could not bypass OpenDNS by putting in their own name servers, so I have a rule to catch all DNS traffic:

        iptables -t nat -A PREROUTING -i br0 -p all --dport 53 -j DNAT --to $LAN_IP

    I did have one computer on the network that I wanted to allow past the OpenDNS filters. On that machine I manually set the name servers and created another rule to allow it to pass:

        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -j ACCEPT

    This worked well. Today, I installed a transparent proxy (squid) on the router and added these rules:

        iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp --dport 80 -j ACCEPT
        iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 80 -j DNAT --to $PROXY_IP:$PROXY_PORT
        iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
        iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT

    This also works; however, the 192.168.1.2 address does not get routed through squid. How can I have 192.168.1.2 (and maybe others in the future) bypass the port 53 rules, but not the port 80 rules?
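
    One way to do this, sketched on the assumption that the exempt host only needs to escape the DNS redirect: replace its blanket ACCEPT with rules that match port 53 explicitly, so the later port-80 proxy rules still apply to it.

        # Drop the blanket bypass for 192.168.1.2 and re-add it for DNS only
        iptables -t nat -D PREROUTING -i br0 -s 192.168.1.2 -j ACCEPT
        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -p udp --dport 53 -j ACCEPT
        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -p tcp --dport 53 -j ACCEPT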

    Read the article

  • nginx won't serve an error_page in a subdirectory of the document root

    - by Brandan
    (Cross-posted from Stack Overflow; could possibly be migrated from there.) Here's a snippet of my nginx configuration:

        server {
            error_page 500 /errors/500.html;
        }

    When I cause a 500 in my application, Chrome just shows its default 500 page (Firefox and Safari show a blank page) rather than my custom error page. I know the file exists because I can visit http://server/errors/500.html and I see the page. I can also move the file to the document root and change the configuration to this:

        server {
            error_page 500 /500.html;
        }

    and nginx serves the page correctly, so it doesn't seem like something else is misconfigured on the server. I've also tried:

        server {
            error_page 500 $document_root/errors/500.html;
        }

    and:

        server {
            error_page 500 http://$http_host/errors/500.html;
        }

    and:

        server {
            error_page 500 /500.html;
            location = /500.html {
                root /path/to/errors/;
            }
        }

    with no luck. Is this expected behavior? Do error pages have to exist at the document root, or am I missing something obvious?

    Update 1: This also fails:

        server {
            error_page 500 /foo.html;
        }

    when foo.html does indeed exist in the document root. It almost seems like something else is overwriting my configuration, but this block is the only place anywhere in /etc/nginx/* that references the error_page directive. Is there any other place that could set nginx configuration?

    Read the article

  • Skipping nginx PHP cache for certain areas of a site?

    - by DisgruntledGoat
    I have just set up a new server with nginx (which I am new to) and PHP. On my site there are essentially 3 different types of files:
      - static content like CSS, JS, and some images (most images are on an external CDN)
      - the main PHP/MySQL database-driven website, which essentially acts like a static site
      - a dynamic PHP/MySQL forum
    It is my understanding from this question and this page that the static files need no special treatment and will be served as fast as possible. I followed the answer from the above question to set up caching for PHP files, and now I have a config like this:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_cache one;
            fastcgi_cache_key $scheme$host$request_uri;
            fastcgi_cache_valid 200 302 304 30m;
            fastcgi_cache_valid 301 1h;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /srv/www/example$fastcgi_script_name;
            fastcgi_param HTTPS off;
        }

    However, now I want to prevent caching on the forum (either for everyone or only for logged-in users - I haven't checked whether the latter is feasible with the forum software). I've heard that "if is evil" inside location blocks, so I am unsure how to proceed. With the if inside the location block I would probably add this in the middle:

        if ($request_uri ~* "^/forum/") {
            fastcgi_cache_bypass 1;
        }

        # or possibly this, if I'm able to cache pages for anonymous visitors
        if ($request_uri ~* "^/forum/" && $http_cookie ~* "loggedincookie") {
            fastcgi_cache_bypass 1;
        }

    Will that work fine, or is there a better way to achieve this?

    Read the article

  • Preventing back-connects on cPanel servers

    - by Fernando
    We run a cPanel server and someone gained access to almost all accounts using the following steps:
    1) Gained access to a user account due to a weak password. Note: this user didn't have shell access.
    2) With this user account, he accessed cPanel and added a cron task. The cron task was a Perl script that connected to his IP, and he was able to send back shell commands.
    3) Having a non-jailed shell, he was able to change the content of most websites on the server, especially for users who set their folders to 777 (unfortunately a common recommendation, and sometimes a requirement, for some PHP software).
    Is there a way to prevent this? We started by disabling cron in the cPanel interface, but this is not enough; I see a lot of other ways in which a user could run this Perl script. We have a firewall running and blocking uncommon outgoing ports. But he used port 80 and, well, I can't block this port, as a lot of processes use it to access things, even cPanel itself.
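
    One partial mitigation, offered as a general iptables sketch rather than a cPanel-specific fix: the owner match can restrict which local users may open outbound HTTP connections at all, so a reverse-shell script started from a compromised account cannot call home even on port 80. The group name below is hypothetical, and a rule like this will also break legitimate outbound HTTP calls made from user accounts.

        # Allow outbound port 80 only for root and a designated group; reject it for everyone else
        iptables -A OUTPUT -p tcp --dport 80 -m owner --uid-owner root -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 80 -m owner --gid-owner trustedoutbound -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 80 -m state --state NEW -j REJECT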

    Read the article

  • Transfer iptables rules to another server in (almost) real time

    - by MrShunz
    I'm running 2 cPanel servers with the ConfigServer Security & Firewall (CSF) plugin. One of the functions of the plugin is to block via iptables (temporarily and/or permanently) IPs which fail various authentications (POP3/IMAP, SMTP, FTP, webmail, mod_security and such). Now, I'd like to push those IP blocks to the border router to drop packets as soon as possible (and in doing so protect the other machines on the network). Keep in mind that after N failed logins an IP is blocked for 5 minutes, then re-allowed; if multiple bans occur in an hour the IP is blocked permanently and must be unblocked by hand. So I need a near-realtime solution. What I'm looking for is a better way than firing some cron jobs both on the cPanels and on the border router to:
      - dump the rules to a file
      - transfer the file to the border router (via scp/sftp)
      - load the rules from the file on the border router
    I'm aware that I will need some scripts to parse and modify the rules, as the cPanels have one ethernet interface and some aliases while the border router has two ethernet interfaces and some loopbacks. All machines involved run Linux.
    EDIT, as per @pjmorse's comment: The plugin consists of a bunch of Perl and config files. The part I'm interested in is a process (lfd) which scans logfiles and installs iptables rules (and sends an alert email). The thing is, it upgrades quite often (once or twice a week) and is itself 7000 lines of Perl, so I'm not comfortable tampering with it.
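
    A minimal sketch of the cron-driven variant being described, assuming CSF's usual DENYIN chain holds the bans and that the border router speaks plain iptables (the chain names, router hostname and paths are assumptions):

        #!/bin/bash
        # On the cPanel server: dump the inbound ban rules...
        iptables-save -t filter | grep '^-A DENYIN ' > /tmp/csf-bans.rules
        # ...ship them to the border router...
        scp -q /tmp/csf-bans.rules border-router:/tmp/csf-bans.rules
        # ...and rebuild a dedicated chain there (the router's FORWARD chain
        # is assumed to already jump to CPANEL_BANS).
        ssh border-router 'iptables -N CPANEL_BANS 2>/dev/null; iptables -F CPANEL_BANS;
            sed "s/^-A DENYIN/-A CPANEL_BANS/" /tmp/csf-bans.rules |
            while read -r rule; do iptables $rule; done'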

    Read the article

  • Identifying Exchange 2010 regular process that is walking the mailbox database

    - by toongeneral
    I have an Exchange 2010 server running on a SAN-backed platform. The platform does block-level backups on a snapshot/incremental basis, capturing only changed data. I was surprised to see a regular period of time where data changes were happening at a high, sustained rate. Due to the way this system works, that can lead to 1.2TB of stored data per month. The regularity implied a scheduled task, but it is not a fixed interval - it is approximately every 26-32 hours. The disks were performing read operations at ~5MB/s and write operations at ~4.5MB/s, for a period of 3-4 hours. The total written data was ~55-60GB. Reading on TechNet, I am wondering if the following is causing this: http://blogs.technet.com/b/exchange/archive/2011/12/14/database-maintenance-in-exchange-2010.aspx#checksumming The somewhat restrictive thing is that the process only happens at most once every 24 hours. I was able to investigate while it was running, finding the following:
      - the process is store.exe
      - it is working on the mailbox database files
      - while running, it is generating .log files (in the mailbox database folder) consistent with database changes
      - the mailbox database is ~60GB in size, which fits with the total data changes on each iteration
    I have currently switched to a fixed maintenance window, as a test. It's not clear whether this is the cause, as the symptoms fit but are not conclusive. Does anyone have any suggestions for additional troubleshooting?

    Read the article

  • Create DFS replica from a NAS drive

    - by Mark
    We have two offices at two different locations. In one we have a NAS with some shares. We also have a Domain Controller running Windows 2003 R2, and we have set up a second Domain Controller, also Windows 2003 R2, to put in the second office. What we would also like is to replicate the NAS drive onto the second Domain Controller, so the second office has a local copy and their changes are replicated back to the NAS. Is there a way to set up DFS replication to do this? Or will it only work with local folders on each server?

    Update 1 Sept: Based on the answer below, I think I need to add some clarification. The real issue is that the NAS which hosts the shared folder we want to replicate is external to both servers, and we have a particular share mapped to, say, S:. The replication setup doesn't seem to accept network shares external to the server as candidates for replication. I can understand why; I just need confirmation that DFSR will only work with block devices that are local to at least one server. Is this the case?

    Read the article

  • Windows 7 VPN only works if I connect it to itself first

    - by user1799075
    Just so you have some detail: VPN requests are port-forwarded from a Linksys router holding the global static IP (to the world) to the Windows 7 machine. The ports have been added to the OK list. I have the incoming VPN connection set up on Win 7, but the only way it will work from anywhere outside the physical machine is if I connect from itself to itself first. For example, let's say my internal static IP is 10.0.0.50 and the incoming VPN server connection IP is 10.0.0.80 (both on the same machine). I can't connect via VPN from anywhere unless I first VPN from the machine's .50 address back to itself on the .80 address. Once I do that, I can connect from anywhere, even my phone. It's as if once the machine reboots it thinks it should block requests on .80 until .50 connects first. BitDefender antivirus/firewall is loaded (Windows Firewall is off), and I don't see anywhere to exclude ports in the BitDefender control panel. Maybe this initial connection opens the ports and tags them as safe because the initial request came from the same machine? Any thoughts? It's driving me nuts and I'm sick of having to drive halfway across town to the server, try to get building access and do the initial connection. Please help.

    Read the article

  • Read non-blocking from multiple fifos in parallel

    - by Ole Tange
    I sometimes sit with a bunch of output fifos from programs that run in parallel. I would like to merge these fifos. The naïve solution is:

        cat fifo* > output

    But this requires the first fifo to complete before reading the first byte from the second fifo, and this will block the parallel running programs. Another way is:

        (cat fifo1 & cat fifo2 & ... ) > output

    But this may mix the output, leaving half-lines in output. When reading from multiple fifos, there must be some rules for merging the files. Typically doing it on a line-by-line basis is enough for me, so I am looking for something that does:

        parallel_non_blocking_cat fifo* > output

    which will read from all fifos in parallel and merge the output a full line at a time. I can see it is not hard to write that program. All you need to do is:
    1. open all fifos
    2. do a blocking select on all of them
    3. read nonblocking from the fifo which has data into the buffer for that fifo
    4. if the buffer contains a full line (or record) then print out the line
    5. if all fifos are closed/eof: exit
    6. goto 2
    So my question is not: can it be done? My question is: is it done already, and can I just install a tool that does this?
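
    Not the purpose-built tool being asked for, but a sketch of one way to get line-atomic merging with an existing tool, assuming a reasonably recent GNU parallel is installed (its --line-buffer option only allows output to mix on line boundaries):

        # Run one cat per fifo concurrently; --line-buffer forwards output
        # line by line as it arrives instead of waiting for each job to finish.
        parallel -j0 --line-buffer cat {} ::: fifo* > output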

    Read the article

  • Issues with returned mail sent to web-based email domains

    - by Beeder
    My company is having issues with returned mail that we send out to external domains. A few weeks ago we replaced a firewall and changed ISP providers, and subsequently began having issues RECEIVING emails from external sources because we hadn't updated our new IPs in the DNS records. After making the necessary configuration changes and setting up SMTP forwarding over port 25 to our mail server, everything was working fine up until a few days ago, when mail we send out started being returned to us. We aren't having any trouble communicating internally (with recipients on our domain), but it seems we're having trouble with outbound messages to web-based email recipients (@hotmail, @live, @yahoo, @gmail, etc.). Currently we are running Server 2003 SP2 and Exchange 2003. I'm very unfamiliar with configuring Exchange and could really use some help in narrowing down the possibilities. I did some research and am becoming suspicious of Sender ID being the culprit, due to our recent IP address change and the likelihood that Sender ID is identifying us as a fake domain. Am I going in entirely the wrong direction? Any input or guidance would be infinitely appreciated. This is the message that is returned when an outbound message fails; this particular one was sent to my @live.com account for testing purposes:

        Your message did not reach some or all of the intended recipients.
        The following recipient(s) could not be reached:
        [email protected] on 5/17/2012 3:02 PM
        There was a SMTP communication problem with the recipient's email server.
        Please contact your system administrator.
        Unfortunately, messages from xx.x.xx.x weren't sent. Please contact your
        Internet service provider since part of their network is on our block list.

    I tried a reverse DNS lookup and found that we are set up as forward-confirmed reverse DNS. So do I just need to contact my ISP and have them correct their DNS records, or is this something I can solve on our end?
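
    A few quick checks from any Linux box can help confirm or rule out the DNS/reputation angle after an IP change (the hostname and IP below are placeholders for the mail server's public name and new address):

        # Do forward and reverse DNS for the sending IP agree?
        dig +short -x 203.0.113.10           # PTR for the new outbound IP
        dig +short mail.example.com A        # should point back at 203.0.113.10

        # Does the domain's SPF / Sender ID TXT record list the new IP?
        dig +short example.com TXT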

    Read the article

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold:
      - reduce storage utilization
      - reduce the bandwidth needed to sync the library with cloud storage
    Many of these files are of the same music tracks but from different physical media. This means that for the most part they are the same, and usually close to the same size, which makes me think they should benefit from block-level deduplication. However, in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media), zpool list reports a dedup ratio of 1.00. If I copy all of the files (making exact duplicates of the three), the dedup ratio climbs, so I know it is enabled and functioning, but it's not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames), but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering alternate routes (testing other dedupe filesystems as well as some custom code), but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedup for this project - unless it's simply not capable of working well with this sort of data. Any feedback regarding tuning that might improve dedup performance for this sort of dataset, or confirmation that ZFS dedup is not the right tool for this job, is appreciated.
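
    One way to measure instead of guessing, assuming a pool named tank: zdb can simulate deduplication across existing data and report the ratio it would achieve, which makes it easy to test whether aligning headers or changing the recordsize actually helps.

        # Simulate dedup over the pool and print a DDT histogram plus the expected ratio
        zdb -S tank

        # Dedup operates per record, so the recordsize in use matters for alignment
        zfs get recordsize,dedup tank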

    Read the article

  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system, running Windows 7 x64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications - especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require Administrator access (e.g. Seagate Blackarmor) or simply fail without it -- since these programs are sending raw commands to a device, this is to be expected. I would like to be able to add the hashes of these particular programs to a whitelist and have them run as administrator without needing any prompts. Since these are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems. What I'd like to be able to do:
    1) Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users. Even if I put a "Deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, that would lack the deny token, and provide admin access.
    2) Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/), but because it uses a WLNOTIFY hook, it won't work under Vista nor Windows 7.
    Non-solutions:
      - RunAs: requires the administrator password! (but everyone calls it "sudo" anyway)
      - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again"

    Read the article

  • notepad++ select everything between tags

    - by mcflause
    I am doing some language translations of an old HTML website, so I am just pasting the new translations from a Word document into the old files. I have to select everything between the tags (h2, p, li, etc.) and then paste the new text in... from a Word file. To select everything in between p tags I have to click just inside one tag, hold Shift, then click just before the closing tag to highlight everything... my fingers are getting really tired, and I have 40 files (pages) total with 3 languages to do. Is there a shortcut in Notepad++ to select everything between two tags (like how double-clicking a word selects the whole word)?

        <p>This is some English that needs to be translated here. I want to just click in this
        area to select all of this text between these two paragraph tags.</p>

        <p>This would be another block of translation to do</p>

        <ul>
          <li>I want to click here and select everything between the li tags</li>
        </ul>

    Read the article

  • Inexpensive (used) hardware for Xen virtualization test?

    - by Jason Antman
    Virtualization is one of the areas where I could really use some experience. I also run quite a few services (web, mail, DNS, etc.) out of my home. Since most of my hardware is getting a bit old (I'm running on stuff that was surplused years ago...) I decided that it's about time I start renewing some things, and also play around with virtualization a bit more. My plan is to set up a SAN box (simple iSCSI target, relatively inexpensive gigE switch), get a pair (for starters) of new servers, and start building some new stuff with Xen, specifically planning on playing with live migration and full virtualization. Does anyone have recommendations for used, older "servers" (really anything in a rack-mount form factor; I'm not too worried about things like iLO/iLOM for the test nodes) that support VT-x/AMD-V? I'm biased toward HP, but it looks like they didn't make ProLiants with VT-x/Vanderpool processors until G6 (for the DL360) or so, which is way out of my price range. I'm looking in the sub-$300 range (or less, if possible), used, probably eBay. Any recommendations are greatly appreciated.

    Edit: And, to catch this before the comments start coming - these are personal systems. I have first-generation ProLiants still in use (I got them as corporate surplus in '05, they've been running since then, and were probably running since '01 or '02 before being sold). I don't need anything shiny and new - I've got a bunch of old boxes, at least one complete replacement for every model in use, and that's fine for me (and easy on the wallet).
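
    For what it's worth, once a candidate box can boot Linux, checking whether its CPUs support hardware virtualization is a one-liner:

        # Non-empty output means the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
        grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u

        # Number of logical CPUs reporting the flag
        grep -Ec 'vmx|svm' /proc/cpuinfo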

    Read the article

  • Difference between the "bit number" in a filesystem and in an OS (like 32/64 bit)?

    - by learner
    I encountered two terms: "FAT32", a file system, and "Windows Vista 32-bit". I found that the meaning of a 32-bit OS is that the OS deals with data in chunks of minimum size 32 bits. I don't quite understand the depth of that, but from it I figured every file in a system with that OS should have a minimum size of 32 bits. I also read that these 32 bits are used to hold data about files' locations (references) and details. Which of the two is it? I have also read that 4 GB of RAM is the most that is needed if you're on a 32-bit OS, but I don't understand why. If there are 32 bits to hold info about files and their locations, there can be 2^32 possible combinations of it. But I have found in many places that 2^32 is divided by 1024 three times to get 4 GB. Why? Did that 2^32 become equal to 2^32 bytes? And about filesystems, I read a similar explanation for what 32 means in FAT32: it is supposed to mean that 32 bits are used to number file system blocks. Now how is that different from the number for the OS?
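
    To make the "divided by 1024 three times" arithmetic concrete: the assumption is that each of the 2^32 possible addresses refers to one byte, so a 32-bit address space spans 2^32 bytes, and each division by 1024 converts bytes to KiB, then MiB, then GiB:

        echo $(( 2**32 ))                       # 4294967296 bytes
        echo $(( 2**32 / 1024 ))                # 4194304 KiB
        echo $(( 2**32 / 1024 / 1024 ))         # 4096 MiB
        echo $(( 2**32 / 1024 / 1024 / 1024 ))  # 4 GiB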

    Read the article

  • can't save or create files in external hard disk

    - by Rodniko
    I formatted my computer and installed a fresh Win7. I connected my external hard disk (USB connector) and I have some kind of permission problem: I can't save files after opening them, and right-clicking and choosing "new" shows that I can only create folders. What is wrong? Why doesn't the external HD have permission, and how do I change it? At Microsoft they probably thought: "hmmm.... how would I make it difficult for the user to use our product...", "we will have to make the difficulty appear as soon as Windows is installed....", "but how would we guarantee 100% that the user has problems?", "oh yeah! block creating files and saving them, yes!" I'm so tired of those guys... My HD is half a terabyte; changing ownership of all the files inside will take a couple of hours. I need other ideas... if any...

    Read the article

  • Enabling mod_rewrite on Apache, permissions issues

    - by rudolph9
    In attempting to enable mod_rewrite on the Apache2 web server installed with Mac OS X 10.7.4, following these instructions, ultimately using the configuration to host CakePHP applications, I run into permissions issues accessing the site via a web browser when I change the directory block associated with the CakePHP site in /etc/apache2/users/username.conf from:

        <Directory "/Users/username/Sites/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride none
            Order allow,deny
            Allow from all
        </Directory>

    to:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews
            AllowOverride none
            Order allow,deny
            Allow from all
        </Directory>

        <Directory "/Users/username/Sites/cakephp_app/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order allow,deny
            Allow from all
        </Directory>

    The .htaccess files are the CakePHP 2.2.2 defaults, as follows:

    /Users/username/Sites/cakephp_app/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule    ^$ app/webroot/    [L]
            RewriteRule    (.*) app/webroot/$1 [L]
        </IfModule>

    /Users/username/Sites/cakephp_app/app/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule    ^$    webroot/    [L]
            RewriteRule    (.*) webroot/$1    [L]
        </IfModule>

    /Users/username/Sites/cakephp_app/app/webroot/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ index.php [QSA,L]
        </IfModule>

    When performing the request via a web browser to http://0.0.0.0/~username/cakephp_app/index.php, the content of the response is:

        Not Found
        The requested URL /Users/username/Sites/cakephp_app/app/webroot/ was not found on this server.
        Apache/2.2.21 (Unix) DAV/2 PHP/5.3.10 with Suhosin-Patch Server at 0.0.0.0 Port 80

    Upon requests to http://0.0.0.0/~username/ and http://0.0.0.0/~username/cakephp_app/, the following are added to /var/log/apache2/error_log accordingly:

        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/Users, referer: http://0.0.0.0/~username/
        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/favicon.ico

    What is causing the issue? Is there a server program, ideally available via a Homebrew script, that would make hosting CakePHP applications for testing purposes more effective and efficient?
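
    A couple of quick sanity checks on the OS X-bundled Apache are worth running before digging further into the per-directory config (paths are the 10.7 defaults, so adjust if yours differ):

        # Is mod_rewrite actually loaded?
        apachectl -M 2>/dev/null | grep rewrite

        # Does the merged configuration parse cleanly?
        apachectl configtest

        # Where are the per-user .conf files pulled in from?
        grep -rn 'users/\*\.conf' /etc/apache2/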

    Read the article

  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for balancing and keeping persistence on our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520 After playing around with taking members down and up and also reloading HAProxy (with -sf), I have noticed that appsession isn't 100% effective; it would appear that it doesn't always 'request-learn'. I also tried adding a cookie JSESSION prefix to the balance in case request-learn didn't take. Unfortunately that produced scenarios where the prefix listed svr2 but the request was balanced to a different server. I am assuming that's because the appsession table is consulted first and sticks on that before using the cookie parameter. I have not tested using cookie as an inserted option (not a prefix on an existing cookie), but I am thinking it would yield similar results. My question is: which one is checked first, appsession or cookie, and is it an immediate catch after it reads the first one, or a fall-through? Also, as a follow-up: is it not recommended to use both in the same backend? Cookie, as I understand it, takes less memory, is agnostic to reloads and has far better persistence reliability; appsession, I assume, takes less CPU, since it's reading, not writing. (Bonus question: is there a way to inspect the appsession/cookie table map? "socket show table" doesn't show anything except stick-tables.)
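
    On the bonus question, a sketch assuming a stats socket is enabled in the global section (the socket path is a placeholder): the admin socket can dump stick-table contents, but appsession's internal table is not exposed the same way, which is one argument for moving persistence onto stick-tables or cookies.

        # haproxy.cfg (global) is assumed to contain something like:
        #   stats socket /var/run/haproxy.sock mode 600 level admin

        # List known stick-tables, then dump one backend's entries
        echo "show table" | socat stdio unix-connect:/var/run/haproxy.sock
        echo "show table my_backend" | socat stdio unix-connect:/var/run/haproxy.sock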

    Read the article

  • Windows 7 delayed file delete

    - by GregoryM
    I'm stuck with a pretty rare problem that happens on Windows 7 only. Every time I delete a file with a *.exe extension through Explorer, the file doesn't get deleted immediately; I'm forced to wait around one to two minutes before the system will delete it. The main problem is that I cannot develop in this situation, because every time I build my solution the old executable gets 'deleted' but is still there, so the new one cannot be created by Visual Studio. This problem also breaks the Steam update process and a few other installers. A freshly installed Win7 doesn't have this kind of trouble, so I guess it must be some bad registry entries or some service. Browsing the internet for solutions, I found only this: http://www.sevenforums.com/software/72091-several-minute-delay-when-deleting-any-exe-file.html. But the solution the author found is not working (change the userName :)). Any ideas on how to find what causes this to happen? BTW: when I move the file to the Recycle Bin, no delay occurs. When I delete a file with Total Commander - no delay either. Tech details: Windows 7 x64 Ultimate. UPD: maybe some shadow copy or System Restore service (though I have System Restore turned off) is blocking the files? Can't even guess...

    Read the article
