Search Results

Search found 10384 results on 416 pages for 'plan cache'.


  • Redmine does not return the web page

    - by m0skit0
    I migrated a Redmine installation from an Ubuntu machine to a Debian one (both 32-bit), and now, for some reason and for some users, it doesn't return the page but only a 200 OK message. Here is the flow (from Wireshark):
    GET /issues/142 HTTP/1.1
    Host: debian:3000
    User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:17.0) Gecko/20100101 Firefox/17.0
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en-US,en;q=0.5
    Accept-Encoding: gzip, deflate
    Connection: keep-alive
    Cookie: _redmine_session=BAh7DCIQX2NzcmZfdG9rZW4iMStIM1RBNTlNelZVUXlUazgrR1pUNGUvNGdEbytUZzRyMVFSUnBvNGhlSDg9Ihd0aW1lbG9nX2luZGV4X3NvcnQiEnNwZW50X29uOmRlc2MiD3Nlc3Npb25faWQiJThiMDk0MzVhOTEzYTI0MzVjOGEzYTRmNDU0NzcwMTAwIgx1c2VyX2lkaQoiFmlzc3Vlc19pbmRleF9zb3J0IgxpZDpkZXNjIg1wZXJfcGFnZWlpIgpxdWVyeXsHOg9wcm9qZWN0X2lkaQc6B2lkaQo%3D--8588c221c0642a12f396239455fb702aec14c9c9; my_wiki_session=f70ae11e1c533c86f0e039d63cf3f69c; my_wikiUserID=1; my_wikiUserName=Yasin
    Cache-Control: max-age=0
    HTTP/1.1 200 OK
    Connection: Keep-Alive
    Date: Wed, 12 Dec 2012 16:30:16 GMT
    Server: WEBrick/1.3.1 (Ruby/1.8.7/2010-08-16)
    Content-Length: 0
    As you can see, I get nothing from the server. The behavior seems mostly random: some users get this blank page only occasionally, while for other users the page almost never comes back. I'm absolutely lost here. Any idea what the cause could be? Thanks in advance!
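
    A quick way to take the browser out of the loop is to replay the request with curl and watch Redmine's own log at the same time. This is only a diagnostic sketch; the host and port come from the capture above, and the log path assumes a default Redmine layout, so adjust both to the actual installation.
        # Replay the request directly against WEBrick and show response headers and body
        curl -v http://debian:3000/issues/142

        # In another shell, watch Redmine's production log while the request comes in
        # (path assumed; it lives under the Redmine install directory)
        tail -f /path/to/redmine/log/production.log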

    Read the article

  • ubuntu's average load never below "0.00 0.01 0.05"

    - by Karma Fusebox
    I have several Ubuntu 12.04 VMs running on an Ubuntu 12.04 KVM host. The virtual machines that are totally idle, with no services (except syslog and the other "small" standard stuff of a fresh installation), show a constant load of "0.00 0.01 0.05" in top/htop as the 1/5/15-minute averages. When there are "real" applications running, the load averages behave perfectly normally, but they never fall below the values mentioned. While this doesn't affect performance at all and could easily be ignored, it screws up the monitoring graphs in a very annoying way (notice how load15 behaves nicely when the load drops below 0.05 for a short time in the right half of the pic). Unfortunately I don't know what diagnostic outputs might be helpful for you, so here's some default stuff:
    # top
    top - 16:31:01 up 1:05, 1 user, load average: 0.00, 0.01, 0.05
    Tasks: 62 total, 1 running, 61 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.2%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 1019464k total, 73452k used, 946012k free, 6140k buffers
    Swap: 0k total, 0k used, 0k free, 22504k cached
    # free -m
                 total       used       free     shared    buffers     cached
    Mem:           995         72        923          0          6         21
    -/+ buffers/cache:          43        951
    Swap:            0          0          0
    # iostat -x /dev/vda
    Linux 3.2.0-32-virtual (vm3)   11/15/2012   _x86_64_   (2 CPU)
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               0.25    0.00    0.65    0.20    0.24   98.66
    Device:  rrqm/s  wrqm/s   r/s   w/s   rkB/s   wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
    vda        0.14    0.12  0.51  0.22    6.74    1.46     22.50      0.02  23.26    20.64    29.30   7.63   0.56
    Need something else? Has anyone ever seen this behavior? Might this be a bug in KVM/Ubuntu/kernel 3.x in the end? Thanks a lot!
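
    One thing worth ruling out is a task sitting in uninterruptible sleep (state D), since those count toward the load average even when they use no CPU. A quick, hedged check from inside a guest, using only standard procps/procfs tools:
        # List any processes currently in uninterruptible sleep (state D);
        # these are counted in the load average even though the CPU is idle
        ps -eo state,pid,comm | awk '$1 == "D"'

        # Raw load figures exactly as the kernel reports them
        cat /proc/loadavg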

    Read the article

  • Upgrade to Q9550 or i7 920 on a budget?

    - by evan
    I'm planning to upgrade my computer and am torn between maxing out the system I have or investing in the X58 architecture. I'm currently using an E6600 Core 2 Duo with 4GB of RAM (800MHz) on an Asus P5K-E motherboard which I built two years ago. My original plan was that one day I'd upgrade the machine to 8GB (1066MHz, the max the P5K-E allows) and to the Core 2 Quad Q9550 to give the machine a good four years of life. However, that was before the i7 came out. I use my computer mainly for software development, which I do inside virtual machines, and the i7 seems ideal for that because it is no longer limited by the speed of the FSB. And when I looked into it, getting 8GB of DDR3 RAM isn't much more expensive than 8GB of DDR2, and the i7 920 is comparable in price to the Q9550, which doesn't make much sense to me. So the question is: is it worth swapping the motherboard out for around $250 and upgrading all three components, or using that money on an SSD or 10,000 rpm drive for the existing system's OS/Apps/Virtual Machine drive? Or just put the $250 towards a completely new machine in a year or two? Would the i7 really give that much of a boost compared to the Q9550 for what I'd be using it for? Thanks in advance for your input!!!

    Read the article

  • Gaming blew fuse: how to overcome?

    - by George Tomlinson
    I've been gaming for a while now. When playing certain games this PC goes into overdrive: the fans get so busy they start to sound like a jet engine, and I have smelt burning when this has happened. The fuse recently blew on the 4-socket adapter I was using. On the following thread someone said this could be due to the PSU not being strong enough to handle the load, in what seems like it could be a related issue, although the person who posted that question did say that blowing a fan on their PC stopped it crashing in their case: http://www.tomshardware.co.uk/answers/id-2047543/gtx-650-overheating-issue.html. This is exactly what they said: "Your GPU isn't overheating. 70+ before it would shutdown and cause a restart. Make sure your PSU is strong enough to handle your new system at load and possibly run Memtest to check your RAM (although not BSOD'ing and just shutting down points to the PSU)." This (the PSU part) makes more sense to me than it being to do with dust etc., since it seems a more plausible explanation of why the fuse blew. The PC has no problems except when playing certain games, i.e. TERA Rising and WoW with add-ons (I think WoW is OK as long as I don't have more than one add-on (Healers Have To Die)). I'm just wondering if anyone knows or can suggest what I might be able to do to play these games without this problem occurring. The PC's spec is this: Display: NVIDIA GeForce GTX 650; 8GB RAM (6 available); Processor: AMD FX(tm)-8120 Eight-Core Processor, 3.1 GHz, 4 cores, 8 logical processors. I have read on another post that forcing vsync in the NVIDIA Control Panel helped with what seems like it could be a similar problem, so I plan to see if that solves it, God permitting.

    Read the article

  • Thousands of visits a day from untraceable traffic to website - Serious issue

    - by kel
    At the end of January we noticed a spike in traffic to what JetPack stats says was the home/archive page and what Google was classifying as going to /gaming/, which is an archive list in WordPress. This started off as ~3,000 unique visitors and jumped up to 65,000 unique visitors in one day, again all to the "home" page. This happened over the course of a couple of weeks and we thought we were getting attacked. The traffic then dropped off for a few days, but then came back at only about ~15,000 uniques a day, and it has been like that every day since. We came to the conclusion that something wasn't tracking right somewhere, that this was legitimate traffic, and brushed it off. Now here comes the problem: Google AdSense has just disabled our account for "invalid clicks". We are trying to figure out where this traffic is coming from and stop it if it's not legitimate, or figure out a way to track it correctly. Specs for the site: dedicated server running CentOS 6 with nginx, php-fpm and MySQL. The site is built in WordPress and we use CloudFlare and W3 Total Cache. Analytics being used are Google Analytics, Quantcast, Alexa and Compete. Any kind of help would be awesome. UPDATE: I'm finding more people with the same type of problem and there doesn't seem to be a solution. http://netmeg.com/bot-attack/ http://stkywll.com/2012/03/02/annoying-cyborgs-attach-distort-analytics/ After looking at the access logs I noticed they were all CloudFlare IPs. I looked into that and found out CloudFlare acts as a proxy, and there was a way to fix the logs in nginx so they show the real client IP. They are coming from many different ISPs in the US. They are going to /games/ or /gaming/ (/games/ redirects to /gaming/) and all seem to have the same user agent of Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0).
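
    If the raw nginx access log is available, a couple of one-liners make it easy to see how concentrated this traffic really is. This is just a sketch: the log path and the MSIE 9.0 user-agent string are taken from the description above and may need adjusting, and the IP field assumes the default combined log format (with the CloudFlare real-IP fix already in place).
        # Count requests to /gaming/ that carry the suspicious user agent
        grep ' /gaming/' /var/log/nginx/access.log | grep -c 'MSIE 9.0; Windows NT 6.1; Trident/5.0'

        # Top 20 client IPs for those requests
        grep 'MSIE 9.0; Windows NT 6.1; Trident/5.0' /var/log/nginx/access.log \
          | awk '{print $1}' | sort | uniq -c | sort -rn | head -20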

    Read the article

  • Bypass network stack. Which options do we have? Pros and cons of each option [on hold]

    - by javapowered
    I'm writing a trading application. I want to bypass the network stack in Linux but I don't know how this can be done. I'm looking for a complete list of options with the pros and cons of each of them. The only option I know of is to buy a Solarflare network card, which supports OpenOnload. What other options should I consider, and what are the pros and cons of each of them? Well, the question is pretty simple: what is the best way to bypass the network stack? Update: OpenOnload achieves performance improvements in part by performing network processing at user level, bypassing the OS kernel entirely on the data path. Intel DDIO allows Intel® Ethernet Controllers and adapters to talk directly with the processor cache of the Intel® Xeon® processor E5. What's the key difference between these technologies? Do they do roughly the same things? I like Intel DDIO much better because it's much easier to use, while OpenOnload requires a lot of installation and tuning. Is a good OpenOnload application much faster than a good Intel DDIO application?

    Read the article

  • i7-980X at 70% speed

    - by Buxley
    Hi, we bought a nice computer to use to solve optimization problems: an Intel i7-980X @ 3.33 GHz with 12 GB of Team Group 1600 MHz DDR3 RAM. When we use Gurobi, the computer uses all 12 cores at maximum in the beginning of the solve. However, after a while (about 8 hrs) all the cores jump between 65 and 85%. When I solve the same models on an i7 930, all cores stay at a near-100% level even after longer solution times. We first thought that the hard disk was the bottleneck, since Gurobi writes out node files once the memory limit is reached. However, since the new computer has 12 GB of RAM, we set the memory limit to 7 GB so the solver only uses RAM, and we still see the same processor behaviour. Any ideas about the bottleneck? As I said earlier, it works at 100% for the first hours or so. Thanks very much for any answers! Our plan was to overclock it but we can't even get it to work at normal speed yet!
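
    One cheap thing to rule out before blaming Gurobi or the disk is CPU frequency scaling or thermal throttling kicking in after hours of full load. A small diagnostic sketch using standard procfs/sysfs paths (nothing Gurobi-specific assumed; the sensors command requires lm-sensors to be installed):
        # Current clock of every core; compare against the nominal 3.33 GHz after ~8 hrs of solving
        grep "MHz" /proc/cpuinfo

        # Active cpufreq governor; "ondemand" or "powersave" may clock cores down under mixed load
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

        # Core temperatures (assumes lm-sensors is installed)
        sensors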

    Read the article

  • Can applications use all of the memory in Windows 8?

    - by Barleyman
    Windows 7 (and Windows Vista) have a built-in limit of not being able to use the last 25% of RAM. You get a low-memory warning when you get close to the limit, and even if you disable that warning, applications will run out of memory and crash, since the OS will refuse to allocate memory from that last 25%. That was fine when Vista was designed and machines had 1 GB of total memory, but it is pretty daft for today's 8 GB machines. Yes, the system will use that extra 2 GB for cache etc., but you still run out of memory while "merely" 2 GB are left. NB: this has nothing to do with the page file. If you limit the page file to a sensible size like 2 GB, you will still see this behavior: the system will cram the page file to the last byte while refusing to touch that last quarter of the RAM. Does Windows 8 change this behavior? Is there now some fixed minimum free RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low memory limit?

    Read the article

  • What are ways to prevent files with the Right-to-Left Override Unicode character in their name (a malware spoofing method) from being written or read?

    - by galacticninja
    What are ways to avoid or prevent files with the RLO (Right-to-Left Override) Unicode character in their name (a malware method used to spoof filenames) from being written or read on a Windows PC? More info on the RLO Unicode character here: http://www.fileformat.info/info/unicode/char/202e/index.htm http://en.wikipedia.org/wiki/Bi-directional_text Info on the RLO Unicode character when used by malware: http://www.ipa.jp/security/english/virus/press/201110/E_PR201110.html Mirror link: http://webcache.googleusercontent.com/search?q=cache:KasmfOvbVJ8J:www.ipa.jp/security/english/virus/press/201110/E_PR201110.html+&cd=1&hl=en&ct=clnk You can try this RLO character test webpage: http://www.fileformat.info/info/unicode/char/202e/browsertest.htm The RLO character is already pasted into the 'Input Test' field on that webpage. Try typing there and notice that the characters you type come out in reverse order (right-to-left instead of left-to-right). In filenames, the RLO character can be positioned in the filename to spoof or masquerade as having a filename or file extension different from what it actually has. (It will still be hidden even if 'Hide extensions for known filetypes' is unchecked.) The only information I can find on how to prevent files with the RLO character from being run is from the Information Technology Promotion Agency, Japan website: http://www.ipa.jp/security/english/virus/press/201110/E_PR201110.html (mirror link above). They advised using the Local Security Policy settings manager to block files with the RLO character in their name from being run. Can anyone recommend any other good solutions to prevent files with the RLO character in their names from being written or read on the computer, or a way to alert the user if a file with the RLO character is detected? My OS is Windows 7, but I'll also be looking for solutions for Windows XP and Vista, or a solution that will work for all those OSes, to help people using those OSes too.

    Read the article

  • Mac OS X - Time Machine backup fails verification - What can I do to save the history?

    - by usermac75
    Hi, how do I get Time Machine to make a new complete backup without losing older versions of backed-up files? Verbose: I am using Time Machine on OS X (Snow Leopard) to back up the whole computer to an external drive. I especially like the "history", i.e. the feature that allows you to restore an older version of a file. Problem: I had some data corruption on my external backup drive. I repaired it with the system's disk repair tool, which found and fixed some faults, and after that the external drive was OK and I could use Time Machine again. I let Time Machine do one more backup. Then I verified the backup as described at http://superuser.com/questions/47628/verifying-time-machine-backups, namely with sudo diff -qr . $HOME/Desktop 2>&1 | tee $HOME/timemachine-diff.log However: the command above reported several differences and missing files, approx. 200 files in total. While some of the missing files were cache files or excluded directories, the differences do bother me, especially as some of my important documents are listed as differing. How can I make sure that the data on the external drive is synced correctly? Is it possible to have Time Machine do a complete new backup without losing the history? Or to have Time Machine compare all files for differences and re-write all files that are different? Or can I set some flag on the files that do not match to have them copied again (like the archive flag in Windows/DOS)? I'd rather not touch the files themselves because I would like to keep the date of last change/date of creation. Thank you for your thoughts!
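
    One way to get the "compare everything" behaviour without touching Time Machine's history is a checksum-only dry run with rsync against the latest snapshot. This is only a sketch: the backup-drive name, machine name and volume name in the path are placeholders that have to be replaced with the real ones, and reading the backup may require sudo because of its ACLs.
        # Dry run (-n): compare the current home folder against the newest Time Machine snapshot
        # by checksum (-c) and just itemize what differs (-i), without copying anything
        rsync -a -n -c -i \
          "$HOME/" \
          "/Volumes/BackupDrive/Backups.backupdb/MyMac/Latest/Macintosh HD/Users/$USER/"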

    Read the article

  • Trouble Letting Users Get to Certain Sites through Squid Proxy

    - by armani
    We have Squid running on a RHEL server. We want to block users from getting to Facebook, other than a couple of specific sites, like our organization's page. Unfortunately, I can't get those specific pages unblocked without allowing ALL of Facebook through.
    [squid.conf]
    # Local users:
    acl local_c src 192.168.0.0/16
    # HTTP & HTTPS:
    acl Safe_ports port 80 443
    # File containing blocked sites, including Facebook:
    acl blocked dstdom_regex "/etc/squid/blocked_content"
    # Whitelist:
    acl whitelist url_regex "/etc/squid/whitelist"
    # I do know that order matters:
    http_access allow local_c whitelist
    http_access allow local_c !blocked
    http_access deny all
    [blocked_content]
    .porn_site.com
    .porn_site_2.com
    [...]
    facebook.com
    [whitelist]
    facebook.com/pages/Our-Organization/2828242522
    facebook.com/OurOrganization
    facebook.com/media/set/
    facebook.com/photo.php
    www.facebook.com/OurOrganization
    My biggest weakness is regular expressions, so I'm not 100% sure whether this is all correct. If I remove the "!blocked" part of the http_access rule, all of Facebook works. If I remove "facebook.com" from the blocked_content file, all of Facebook works. Right now, visiting facebook.com/OurOrganization gives a "The website declined to show this webpage / HTTP 403" error in Internet Explorer, and "Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED): Unknown error" in Chrome. WhereGoes.com tells me the redirect chain for that URL goes like this: facebook.com/OurOrganization -- [301 Redirect] -- http://www.facebook.com/OurOrganization -- [302 Redirect] -- https://www.facebook.com/OurOrganization I tried turning up the debug traffic out of Squid using "debug_options ALL,6" but I can't narrow anything down in /var/log/access.log and /var/log/cache.log. I know to issue "squid -k reconfigure" whenever I make changes to any files.
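
    For what it's worth, the ERR_TUNNEL_CONNECTION_FAILED from Chrome suggests the failure happens on the HTTPS CONNECT to www.facebook.com, where Squid only ever sees the hostname and port, never the /OurOrganization path, so a path-based url_regex whitelist cannot match there. A quick way to test both cases from a client machine; this is a sketch, and the proxy hostname and port 3128 are assumptions to adjust to the real setup:
        # Plain HTTP through the proxy (Squid sees the full URL, including the path)
        curl -x http://proxy.example.local:3128 -I http://www.facebook.com/OurOrganization

        # HTTPS through the proxy (Squid only sees "CONNECT www.facebook.com:443")
        curl -x http://proxy.example.local:3128 -I https://www.facebook.com/OurOrganization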

    Read the article

  • Second CPU missing of Dual Core

    - by Zardoz
    My Lenovo T61 has a dual-core CPU. I just noticed that under Ubuntu 10.10 only one CPU is recognized. I know that both CPUs once worked; I am not sure since when the second CPU has been missing, maybe since the last kernel update. Currently I am using linux-image-2.6.35-23-generic (for x86_64). What can I do to enable the second CPU again? Here is the output of /proc/cpuinfo:
    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 23
    model name      : Intel(R) Core(TM)2 Duo CPU T8100 @ 2.10GHz
    stepping        : 6
    cpu MHz         : 800.000
    cache size      : 3072 KB
    physical id     : 0
    siblings        : 1
    core id         : 0
    cpu cores       : 1
    apicid          : 0
    initial apicid  : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 10
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm ida dts tpr_shadow vnmi flexpriority
    bogomips        : 4189.99
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 36 bits physical, 48 bits virtual
    power management:
    Any help is welcome. I really need that CPU power for my work here.
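
    A couple of quick checks can narrow down whether the kernel simply brought the second core up offline or never saw it at all. These use standard procfs/sysfs locations on a 2.6.x kernel; nothing Lenovo-specific is assumed.
        # Which logical CPUs the kernel created entries for
        ls -d /sys/devices/system/cpu/cpu[0-9]*

        # Was the kernel booted with something like maxcpus=1 or nosmp?
        cat /proc/cmdline

        # If cpu1 exists but is offline, this shows 0 and it can be switched back on
        cat /sys/devices/system/cpu/cpu1/online
        echo 1 | sudo tee /sys/devices/system/cpu/cpu1/online

        # Any SMP/APIC related messages from boot
        dmesg | grep -iE 'smp|apic' | head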

    Read the article

  • Slow DB Performance. Seems to be memory related.

    - by David
    I am seeing a poorly performing web app with a SQL Server 2005 backend. The DB is on a Windows Server 2003 machine with 4 GB of RAM. When I run perfmon on it I see the following: page life expectancy is low, consistently under 300, while the buffer cache hit ratio is always 99%+. The target server memory is always 1618304 KB and the total server memory is always a number just below that. So it seems that it isn't grabbing enough of the available memory. I have AWE enabled, with the Lock Pages in Memory right granted to the SQL Server service account, and have set a maximum of 2.25 GB... but it doesn't go near that. When I restart the SQL service the page life expectancy goes much higher, 1000+, and the total server memory starts at 0 and slowly works its way back up to the previous limit. Then it hits the limit and the page life expectancy drops massively back below 300. So I'm guessing there is something limiting the amount of memory. Any ideas on what that would be and how I can fix it?

    Read the article

  • Hyper-V and attaching physical disks

    - by Mike Christiansen
    So, I'm looking at rebuilding my home server. My current setup is the following:
    Windows 7 Ultimate
    1TB boot drive (my smallest drive)
    Windows dynamic spanned volume containing 1x 1TB drive and 2x 2TB drives, totalling 5TB.
    I am upgrading to a hardware RAID controller, and I would like to run Hyper-V Server Core. However, I want to retain the ability to join my "file server" to a homegroup, so I must use Windows 7. I know VHDs can only be like 127GB or something, so I obviously need to directly connect disks to my Windows 7 machine. Here is my plan:
    Server Core 2008 R2 (Hyper-V)
    1TB boot drive (storing VHDs for the boot drives of the VMs) - possibly in a RAID 1 with my other 1TB drive
    5x 2TB drives (1x 2TB drive hot spare), totalling 10TB, directly attached to a Windows 7 VM, so the homegroup can be used for this array.
    In the past, I directly attached the Windows dynamic volume to a Windows 7 VM, and performance was abysmal. The question is: with hardware RAID, will it really make that much of a difference? Server specs:
    Intel Core 2 Quad Q9550 2.83GHz
    Asus Maximus II Formula (PCI-E x16)
    8GB DDR2 RAM PC2-6400
    (Yes, I know it's a bit out of date)

    Read the article

  • Exchange 2010 UR3 - customizing OWA logon page

    - by STGdb
    I have an Exchange 2010 UR3 deployment that I need to customize the OWA logon page for. I've created a new LGNTOPL.GIF file to replace the existing one in the folder: “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.158.1\themes\resources” When I bring up OWA, I still get the original “Outlook Web App” logo. I’ve searched and found a couple of other instances of LGNTOPL.GIF in the directories: “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.123.3\themes\resources” “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.146.0\themes\resources” “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\Current\themes\resources” I’ve replaced the LGNTOPL.GIF file in each of the above directories but got the same results. I’ve tried clearing my browser cache and even using multiple browsers from multiple PC’s but the same results. I’ve even tried making my GIF file the same pixel size as the original LGNTOPL.GIF logo but still the same results. I’ve tried restarting IIS on the CAS server and restarting the server but same results. Has something changed with Exchange 2010 UR3 when trying to customize OWA? I don't see anything documented about any change to OWA customization. Thanks

    Read the article

  • What switch should we use for PCoIP?

    - by Jay R.
    We have a small lab space that seats 10 people and has 20 machines. Each machine is set to 1920x1200 resolution because the user apps are best used at that resolution. Currently the machines are all located close enough to monitors that a DisplayPort cable will reach, but the pending lab remodel positions them around 80 feet or more away in racks. Our proposed solution is to use PCoIP. We purchased 10 PCoIP portals and 20 PCoIP host cards. We plan to set up a dedicated network to handle just the PCoIP traffic. After testing just one portal and one host card with a cheap 1G switch from a local office supply store, we were left with a less than good impression of its usefulness in our lab: the frame rates were not spectacular and the mouse seemed jerky. Our concern is that we can't get away with the cheap 1G stuff from the store, because adding more machines to the switch will just make the user experience worse. What switch would be recommended to best support our PCoIP situation? We will need to plug in at least 30 cables based on just those machines. Is there a particular feature to search for that makes a difference? Is there a switch that works best with PCoIP? Added info: the reporting web app on the host card shows maximum bandwidth usage to be 220000 kbps. The average appears to be around 180000 kbps. The reverse direction is much lower, like 15000 kbps.

    Read the article

  • WD Caviar Green Extremely Slow

    - by Steven
    I am encountering a really weird problem with my WD Caviar Green HDD. First of all, I have 2 HDDs in my desktop: one 160GB Seagate holding my Win7 Ultimate x64 install, and the problematic one, a 1.5TB WD Caviar Green used for storage. My problem is kinda weird: when I transfer files from my Seagate (C:) to my WD (D:), the speed is good (50-60MB/s). The problem arises when I transfer too "many" large files; the transfer speed then drops straight down to kilobytes/s. After I cancel the transfer and access D:, even entering a folder takes about 10 seconds to load. The problem doesn't only arise when I am transferring files to D:; it seems like my WD can't handle much activity at all. For instance, last time I installed a game on D: I would get a lot of lag after playing for some time. When the same game is installed on C:, no problem arises. Does anyone know what the problem is? P.S.: There was one temporary workaround I tried. After the "situation" occurs, I access as many folders on D: as I can and let them load; repeating this and giving it some time brings D: back to speedy transfers. However, large transfers cause the situation to happen again. Does it have something to do with the cache?

    Read the article

  • Server 2008 email on Event variables

    - by Jeff Miles
    One of the new features of Server 2008 is the ability to attach a task to a specific event in the event logs. One of the actions available is to send an email through an SMTP server. This is working great; however, it would be ideal if the Event contents could be placed in the message body. I have tried using $eventdescription and %eventdescription%, but those are just shots in the dark. Any amount of googling produces no results. Does anyone know if this is possible? Update: Sparks' suggestion below is a step in the right direction I believe, however that method doesn't seem to work for all values. For example, I can pull the RecordID, Severity and Channel as shown, but I can't use the same method to retrieve the EventID or, most importantly, the description. Here's the raw XML from one event:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="DFSR" />
        <EventID Qualifiers="16384">4412</EventID>
        <Level>4</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2009-05-14T18:18:09.000Z" />
        <EventRecordID>45692</EventRecordID>
        <Channel>DFS Replication</Channel>
        <Computer>servername.domain.com</Computer>
        <Security />
      </System>
      <EventData>
        <Data>9046C3F4-843E-4A53-B941-4B20764072E5</Data>
        <Data>D:\departments\Geomatics\Plan Quality\Data Processing\CG3533017 2009-05-13 KT FIXED</Data>
        <Data>D:\departments</Data>
        <Data>{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199</Data>
        <Data>Departments</Data>
        <Data>domain.ca\files\departments</Data>
        <Data>B8242CE2-F5EB-47DA-BA5B-1DD2F7EE3AB9</Data>
        <Data>DFAA7A54-66CB-4C31-81A0-0F861382C32C</Data>
        <Data>CG3533017 2009-05-13-{26D5F604-E603-4F87-8EC3-DE9A945DA8FD}-v927199</Data>
      </EventData>
    </Event>
    I have tried using a ValueQuery for EventData, but it returns no data.

    Read the article

  • Lost Windows 7 boot after EasyBCD with EFI

    - by drent
    I've got a Lenovo Y580 with a 64GB SSD and a 1TB HDD, set up with GPT and booting via (U)EFI. I was trying to get my Linux Mint installation onto the Windows boot manager using EasyBCD (I didn't realise the system was EFI), but it wiped my boot partition/loader and I cannot seem to get Windows back (and I still can't get a bootable Linux Mint). Using the System Recovery utility, Startup Repair can't "see" Windows (it might be because I'm using a 7 Pro disk to recover Home Premium?). In the command prompt, the bootrec tools don't do anything, and bootsect can't run because it says it's for BIOS only and I've booted with EFI. I can see the EFI data on the 200MB SSD partition using diskpart, but I don't know how to add Windows back onto whatever bootloader I have/need. At the moment the only options I can see are: Do a fresh install of Windows and hope that the setup remains as fast as the default one (the SSD is some kind of cache for Windows, but I can't quite see how it works given that the rest of the SSD is unpartitioned space) - this seems like overkill given that Windows was working fine until EasyBCD deleted it. Try forcing BIOS mode and see if that somehow magically fixes things. Try converting from GPT to MBR to try and use the bootrec/bootsect tools (and maybe back again), which seems like a really bad idea. Anyone have any ideas?

    Read the article

  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around, there is conflicting information on this, with some strongly suggesting one or the other. From my understanding, the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high. People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1 TB SATA II 7200 rpm drives with 32 MB cache), would the minor differences between, say, a Seagate and a Western Digital one (say one has a 128 MB/s read rate and the other a 150 MB/s read rate, plus various other minor differences) actually cause any notable performance loss, i.e. potentially worse than two matched 128 MB/s drives, or does RAID not really care and give you an essentially optimal solution (e.g. up to 278 MB/s total read speed for RAID 0 and 1), and similarly for other RAID levels with more "unmatched" drives (5 and 1+0 come to mind as possibilities)? Also, I couldn't find much information on how this differs between RAID setups, e.g. RAID 0 or RAID 1, software or hardware RAID, etc. I'm assuming such things have an effect, and that it's not all the same for RAID in general?

    Read the article

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our web server. Our situation: we have a web server accessed through SFTP by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan:
    All people who need to upload files will have separate users. But all of those users will belong to two groups: uploaders and webserver. Apache will belong to the group webserver.
    Directories - Permission: 771 - Owner: user:uploaders - Explanation: to access files in the folder, everybody needs to have execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.
    Files within the web root - Permission: 664 - Owner: user:uploaders - Explanation: they will be uploaded and changed by different users, so both owner and group need w+r permissions. The web server only needs to read files, so r permission only.
    Upload directories - Permission: 771 - Owner: user:webserver - Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group to webserver, thus giving Apache sufficient privileges (all uploaders also belong to this group and will have the same permissions), while safeguarding from "others" writing to this folder.
    Uploaded files - Permission: 664 - Owner: user:webserver - Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So there is no need to make the file itself any more accessible than r access for the group.
    Not being an expert on file permissions, my question is whether or not this is the best possible policy for our situation? Any suggestions welcome.
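
    A minimal sketch of what that policy looks like as commands, assuming the web root is /var/www/site, the Apache user is apache (RHEL-style naming), the upload directory is /var/www/site/uploads and 'alice' stands in for one uploader; all of those names and paths are placeholders.
        # Groups and an example uploader account
        groupadd uploaders
        groupadd webserver
        usermod -aG uploaders,webserver alice
        usermod -aG webserver apache          # takes effect when the service is restarted

        # Web root: directories 771, files 664, group uploaders
        chown -R alice:uploaders /var/www/site
        find /var/www/site -type d -exec chmod 771 {} +
        find /var/www/site -type f -exec chmod 664 {} +

        # Upload directory: group webserver so PHP/Apache can write into it
        chown alice:webserver /var/www/site/uploads
        chmod 771 /var/www/site/uploads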

    Read the article

  • DNS caching server config problem

    - by Alex
    I have a Bind DNS caching-only server setup that is working. I am bringing up a new AD domain controller that will also be a DNS server for that AD but I don't want it responding to any DNS queries except those that are AD related. So, my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However it does not seem to be working. So that leaves me to think that my caching server is not forwarding properly. For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server times out (I get a response from the caching server if I query mydomain.local but none of the objects in that domain). Any suggestions? Here is my named.conf file: options { directory "/var/named"; listen-on { 192.168.0.14; 127.0.0.1; }; forwarders { ; ; }; forward first; }; zone "." in { type hint; file "db.cache"; }; zone "0.0.127.in-addr.arpa" in { type master; file "db.127.0.0"; }; //forward zone for mydomain.local zone "mydomain.local" { type forward; forwarders { 192.168.1.21; }; };

    Read the article
