Search Results

Search found 4137 results on 166 pages for 'reports'.


  • Bad Performance when SQL Server hits 99% Memory Usage

    - by user15863
    I've got a server that reports 8 GB of RAM used up at 99%. When I restart SQL Server, usage drops down to about 5%, but gradually builds back up to 99% over about 2 hours. When I look at the sqlserver process, it's reported as only using 100k of RAM, and that number never moves up or down by much. In fact, if I add up all the processes in Task Manager, they barely scratch the surface of my total available memory (yet Task Manager still shows 99% memory usage with "All processes shown"). It appears that SQL Server has a huge memory leak going on, but it's not reporting it. The server has run fine for nearly two years; this only started to manifest itself in the last 3-4 weeks. Has anyone seen this, or have any insight into the problem?

    EDIT: When the server hits 99%, performance goes downhill. All queries to the server, apps, etc. slow to a crawl. Restarting the service makes things zippy again, until 2 hours have passed and the server hits 99% once more.
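
    A note for anyone hitting the same thing: this behaviour matches SQL Server's buffer pool claiming all available memory by design, and when "Lock Pages in Memory" is in effect, Task Manager does not attribute that memory to sqlservr.exe - which would explain the missing gigabytes. The usual remedy is to cap the buffer pool. A minimal sketch, assuming an 8 GB box where roughly 2 GB should be left to the OS (the 6144 figure is an assumption to tune, not a recommendation):

        REM run on the server; -E uses Windows authentication
        sqlcmd -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;"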


  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: What I need is a permanent link to a printer that is normally only accessible through Terminal Services printer redirection, so that Sage Line 50 layouts can see that printer persistently, even after users have disconnected from and reconnected to their Terminal Services session. Although the printer is accessible each time a user connects to the Sage server via Terminal Services, it is given a different session number each time, so the Sage layout sees it as a different printer.

    History behind the question:
    - Users connect via Terminal Services to a Sage server on a different site.
    - The server runs Sage Line 50 v15.
    - Users want to print invoices (Sage layouts) locally.
    - The Sage server cannot see the users' local printers, so users rely on the printer redirection feature of Terminal Services.

    Individual reports can be edited to point to a specific printer by default. The user then just selects an invoice, clicks print, selects the layout/report wanted, and it auto-prints that invoice to the specified default printer. The problem occurs because the layouts are edited to point to the user's local printer "Ricoh 1018d (session#)" - note the "(session#)", as this is the user's local printer being redirected through the Terminal Services session. Users can print using the Sage layouts once the default printer is set up within the layout and saved, but as soon as a user disconnects from the Terminal Services session, then reconnects in the morning and goes to print, the layout has lost its connection to that printer. I understand why it fails: the printer exists on a per-session basis, so the layout cannot hold on to a connection from a previous session. Thanks in advance for any assistance...


  • Ghost Solution Suite: Booting over PXE to WinPE for re-imaging, then booting to installed OS

    - by uberdanzik
    I have 40 networked computers that need to be re-imaged each night over the network via an automatic, unattended process. This resets the computers to a default state and updates them to the latest software loads. I'm using Symantec Ghost Solution Suite 2.5. My process so far is the following:

    1. The client begins powered down, in a Wake-on-LAN accepting state.
    2. A Ghost Console task uses Wake-on-LAN and PXE to boot the client into the WinPE environment.
    3. The client connects to the Ghost Console and re-images itself successfully.
    4. The client closes WinPE and restarts.

    PROBLEM: The client boots into the WinPE environment again, instead of the newly installed OS (Win7). I need it to boot into Win7 once so that I can run a few configuration batch files, then shut down into the Wake-on-LAN state again. The Ghost Console even reports an error on the task: that the client never rebooted into the OS. Right now there seems to be no option to stop it from booting into the PXE server's WinPE image after re-imaging. Even if I set up a PXE boot menu with other boot options, it's pointless, because it will always boot the default option. I would expect the Ghost Console task to be able to influence the PXE boot choice somehow. What do they expect us to do, turn the PXE server on and off manually? How can I get the client to boot to the OS after re-imaging? Thank you.
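
    For what it's worth, the generic PXE-side fix is to make the default boot entry fall through to the local disk, and only serve the WinPE image while a re-image task is scheduled. Ghost's own PXE server is configured differently, so the following pxelinux menu is only an illustrative sketch (the WinPE entry name pxeboot.0 is hypothetical):

        # default to the local disk so a freshly imaged client boots Win7
        DEFAULT local
        TIMEOUT 50

        LABEL local
            LOCALBOOT 0

        LABEL winpe
            KERNEL pxeboot.0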


  • Lion server profile manager, device enrollment doesn't work

    - by user964406
    I am in the process of setting up Lion Server's Profile Manager to manage iPads on our local school network. I don't need to manage them while they are outside the network, and I have successfully had this working on my personal network. The school network is behind a proxy over which we have no control. I can get the iPads to view the mydevices page and install a trust certificate, and I have managed to get an iPad to successfully install the remote management profile. After this, Profile Manager bugs out: it lists the active task 'new device (sending)' but is unable to complete it. If I click on the device in Profile Manager and try any of the actions, they all fail to complete. I am using the auto-generated certificates, and the setup works if I bring the server and iPad outside of the school network. Shortly after device enrollment, the system log on the Lion server reports the following (I've replaced the actual IP address with INTERNALIP):

        Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
        Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
        Jun 4 08:40:53 mini applepushserviced[778]: Got connection error Error Domain=NSPOSIXErrorDomain Code=1 "The operation couldn\u2019t be completed. Operation not permitted" UserInfo=0x7fa483b1a340 {NSErrorFailingURLStringKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS, NSErrorFailingURLKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS}
        Jun 4 08:40:53 mini applepushserviced[778]: Failed to get client cert on attempt 2, will retry in 15 seconds

    Does anyone have any ideas on how to get past this stage? Thanks in advance.
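
    The sandboxd "deny network-outbound INTERNALIP:8080" lines suggest applepushserviced is being forced through the HTTP proxy, which the push service cannot use - APNs and device activation need a direct outbound path. A quick reachability sketch from the server (hostnames are taken from the log above plus Apple's push gateway; the ports are the standard ones):

        nc -z -w 5 albert.apple.com 443 && echo "activation reachable"
        nc -z -w 5 gateway.push.apple.com 2195 && echo "apns reachable"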


  • Error opening hyperlinks in Excel 2003

    - by richardtallent
    When clicking to follow hyperlinks from Excel, I'm now getting this error:

        Unable to open http://blah... Cannot download the information you requested.

    The hyperlinks in the Excel file are created using the HYPERLINK() formula. I use Google Chrome as my default browser. The web site in question uses Basic Authentication, and I've entered correct credentials when prompted (the dialog looked like an IE auth box, not Chrome's, but it's always been that way, even when it was working properly). This hasn't been an issue until recently. I'm guessing our IT department made some lame change to IE's configuration that is causing Office to be unable to open the URLs, despite Chrome being my browser. Things I've checked already:

    - The URLs are good; they work fine when pasted manually into Chrome, IE, or Firefox.
    - IE is not set to Work Offline (I already found that suggestion on Google).
    - I checked Program Access and Defaults and verified that Chrome is selected.
    - Nothing in the URL requires URL encoding, so it's no goofy issue with encoding.

    I've had reports from some other users now and then about the same problem, but this is the first time I've experienced it myself.
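
    One documented workaround for this class of error is worth noting: Office resolves hyperlinks through its own HTTP stack (urlmon/IE) first, regardless of the default browser, and the ForceShellExecute registry value makes it hand the URL straight to the shell instead. A sketch, assuming 32-bit Office on 32-bit Windows (the Office\9.0 path is used by later Office versions too, per Microsoft's KB article on the subject):

        REM force Office to ShellExecute hyperlinks to the default browser
        reg add "HKLM\SOFTWARE\Microsoft\Office\9.0\Common\Internet" /v ForceShellExecute /t REG_DWORD /d 1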


  • Linux Kernel Packet Forwarding Performance

    - by Bob Somers
    I've been using a Linux box as a router for some time now. Nothing too fancy: forwarding enabled in the kernel, masquerading turned on, and iptables set up to poke a few holes in the firewall. Recently a friend of mine pointed out a performance problem: single TCP connections seem to experience very poor performance, and you have to open multiple parallel TCP connections to get decent speed. For example, I have a 10 Mbit internet connection. When I download a file from a known-fast source using something like the DownThemAll! extension for Firefox (which opens multiple parallel TCP connections), I can max out my downstream bandwidth at around 1 MB/s. However, when I download the same file using the built-in download manager in Firefox (which uses only a single TCP connection), it starts fast and then the speed tanks until it tops out around 100 KB/s to 350 KB/s.

    I've checked the internal network and it doesn't seem to have any problems; everything goes through a 100 Mbit switch. I've also run iperf both internally (from the router to my desktop) and externally (from my desktop to a Linux box I own out on the net) and haven't seen any problems - it tops out around 1 MB/s like it should. Speedtest.net also reports 10 Mbit speeds. The load on the Linux machine is around 0.00, 0.00, 0.00 all the time, and it's got plenty of free RAM. It's an older laptop with a Pentium M 1.6 GHz processor and 1 GB of RAM. The internal network is connected to the built-in Intel NIC, and the cable modem is connected to a Netgear FA511 32-bit PCMCIA network card. I think the problem is with the packet forwarding in the router, but I honestly am not sure where the problem could be. Is there anything that would substantially slow down a single TCP stream?
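
    Since iperf through the same path is clean, a first diagnostic pass on the router itself could look for problems a single flow would expose - NIC error counters and connection-tracking headroom (paths vary by kernel; this is a sketch for a 2.6-era box, interface names assumed):

        # errors/drops on the LAN and WAN interfaces
        ifconfig eth0 | grep -E 'errors|dropped'
        ifconfig eth1 | grep -E 'errors|dropped'
        # conntrack table usage while a slow download runs
        cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count 2>/dev/null
        cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max 2>/dev/null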


  • How to list rpm packages/subpackages sorted by total size

    - by smci
    Looking for an easy way to postprocess rpm -q output so it reports the total size of all subpackages matching a regexp, e.g. see the aspell* example below. (Short of scripting it with Python/Perl/awk, which is the next step.)

    (Motivation: I'm trying to remove a few GB of unnecessary packages from a CentOS install, so I'm trying to track down things that are a) large, b) unnecessary, and c) not dependencies of anything useful like GNOME. Ultimately I want to pipe the output through sort -n to see what the space hogs are, before doing rpm -e.)

    My reporting command looks like [1]:

        cat unwanted | xargs rpm -q --qf '%9{size} %{name}\n' > unwanted.size

    and here's just one example where I'd like to see rpm's total for all aspell* subpackages:

        root# rpm -q --qf '%9{size} %{name}\n' `rpm -qa | grep aspell`
          1040974 aspell
         16417158 aspell-es
          4862676 aspell-sv
          4334067 aspell-en
         23329116 aspell-fr
         13075210 aspell-de
         39342410 aspell-it
          8655094 aspell-ca
         62267635 aspell-cs
         16714477 aspell-da
         17579484 aspell-el
         10625591 aspell-no
         60719347 aspell-pl
         12907088 aspell-pt
          8007946 aspell-nl
          9425163 aspell-cy

    Three extra nice-to-have things:

    - List the dependencies/depending packages of each group (so I can figure out the uninstall order).
    - Group them by package group - that would be totally neat.
    - Human-readable size units like 'M'/'G' (like ls -h does). Can be done with a regexp and rounding on the size field.

    Footnote: I'm surprised up2date and yum don't add this sort of intelligence. Ideally you would want to see a tree of group-package-subpackage, with rolled-up sizes.

    Footnote 2: I see yum erase aspell* does actually produce this summary - but not in a query command.

    [1] where unwanted is a text file of unnecessary packages, obtained by diffing the output of:

        yum list installed | sed -e 's/\..*//g' > installed.txt
        diff --suppress-common-lines centos4_minimal.txt installed.txt | grep '>'

    and centos4_minimal.txt came from the Google doc given by that helpful blogger.
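
    As a sketch of the postprocessing step: grouping by the package name with its last hyphenated segment stripped is a crude stand-in for real subpackage grouping (it folds aspell-es into aspell, but will also fold e.g. glibc-common into glibc), with totals in MB and the space hogs sorted last:

        rpm -qa --qf '%{SIZE} %{NAME}\n' \
          | awk '{ p = $2; sub(/-[^-]*$/, "", p); sz[p] += $1 }
                 END { for (g in sz) printf "%8.1f M  %s\n", sz[g]/1048576, g }' \
          | sort -n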


  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and qmail. Our primary domain is hosted through Media Temple. When Plesk and qmail are hosted on a single dedicated virtual server, qmail reads the primary server IP and domain and reports those when sending emails from the system. Our pages are written in PHP, so we are using the mail() function. While our email goes out to everybody, several enterprise email domains reject it because it shows an originating IP (our primary server IP and domain) different from the domain we list in the 'from' address. This is not modifiable. Every domain we own does, of course, have its own IP underneath our primary server IP. I have seen several places online that provide a patch, specifically one which allows domain binding:

        DomainBindings -- For servers that host multiple domains or have multiple
        IP addresses assigned to them, it is sometimes useful (or important) to
        have qmail use a specific IP address for its outgoing mail. By default,
        qmail uses whatever address the OS chooses for all outbound connections.
        With this patch, you can specify which address to use. It uses a control
        file similar to smtproutes to specify the outbound IP address to use,
        based on the sender's domain (local copy) (pyropus.ca)

    (Qmail Link) First off, I do not have netqmail installed, so I'll need to find another source; I am also completely unfamiliar with applying patches to qmail. Will I lose email services if I patch? Is it a simple apply-and-use process? Will my existing email accounts and data be restored after the patch? I am very, very new to Unix/Linux, so this does make me a bit nervous, but I am the only person who can make the change and it is one our company "HAS" to have. Any ideas?
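
    To temper expectations: applying a source patch means rebuilding qmail from source, and Plesk ships its own qmail build, so a hand-built binary replaces Plesk's package and won't be restored automatically by Plesk updates. Mailboxes and control files live under /var/qmail, so back that up first. The mechanics, as a generic sketch (the patch filename is hypothetical):

        cd /usr/local/src/netqmail-1.06
        patch -p1 < domainbindings.patch
        make setup check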


  • My hard drive seems to be overheating... what should I do?

    - by George Edison
    After a cold boot, the hard drive in my notebook jumps to 56°C within an hour or so of idling. Is 56°C a cause for alarm? Notes:

    - The notebook is on a flat desk and none of the vents are obstructed.
    - The video card is currently at 55°C and the CPU at 50°C.
    - It's a Western Digital 250GB hard drive.
    - SMART reports the drive healthy, but does give a warning.

    Edit: this problem had a very surprising ending. I inverted the notebook and unscrewed some of the panels on the back (one covered the hard drive, and one provided access to the memory). I couldn't see any dust, so I simply screwed everything back together and powered it on... and it worked! The temperature is now staying at 46°C, and the machine feels notably cooler to the touch. I can only assume that some internal fan was malfunctioning or something. Whatever the case, it's working now, so I won't complain.

    Edit: I have an SSD now, so temperature isn't as big an issue as it was when I had a mechanical drive.
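
    For monitoring, the drive's own sensor can be read directly from SMART attribute 194 rather than through a GUI - a sketch, assuming the first SATA disk:

        smartctl -A /dev/sda | grep -i -E 'temperature|194'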


  • Need help with some IIS7 web.config compression settings.

    - by Pure.Krome
    Hi folks, I'm trying to configure my IIS7 compression settings in my web.config file. I'm trying to enable HTTP 1.0 requests to be gzipped. MSDN has all the info about it here. Is it possible to have this config info in my own website's web.config file, or do I need to set it at an application level? Currently, I have this code in my web.config:

        <system.webServer>
            <urlCompression doDynamicCompression="true"
                            dynamicCompressionBeforeCache="true" />
            <httpCompression cacheControlHeader="max-age=86400"
                             noCompressionForHttp10="False"
                             noCompressionForProxies="False"
                             sendCacheHeaders="true" />
            ... other stuff snipped ...
        </system.webServer>

    It's not working :( HTTP 1.1 requests are getting compressed, just not 1.0. The MSDN page above says that it can be used in:

    - Machine.config
    - ApplicationHost.config
    - Root application Web.config
    - Application Web.config
    - Directory Web.config

    So, can we set these settings on a per-website basis, programmatically in a web.config file? (This is an Application Web.config file...) What have I done wrong? Cheers :)

    EDIT: I was asked how I know HTTP 1.0 requests are not getting compressed. I'm using the Failed Request Tracing Rules, which report back:

        DYNAMIC_COMPRESSION_START
        DYNAMIC_COMPRESSION_NOT_SUCCESS
            Reason: 3
            Reason: NO_COMPRESSION_10
        DYNAMIC_COMPRESSION_END
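
    One likely explanation, offered as a hunch: the httpCompression section is locked to the server level in IIS7 (its allowDefinition is AppHostOnly), so attributes set in a site's web.config are silently ignored - only urlCompression works there. If that's the case, the setting has to go into applicationHost.config, e.g. via appcmd from an elevated prompt:

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /noCompressionForHttp10:"False" /commit:apphost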


  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives: an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this:

        scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D

    I can store and retrieve files like a charm using tar; both

        tar cf /dev/st0 somedirectory
        tar xf /dev/st0

    work flawlessly. However, what I really want to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like

        dd if=/dev/VG/LV bs=64k of=/dev/st0

    to achieve this, but there seem to be various problems associated with this approach.

    Firstly, I would like to be able to store more than one LV on a single tape. Now, I guess I could seek to concatenate the data on the tape, but I don't think this would work very well in an automated scenario with many different LVs of various sizes.

    Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable, as I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this?

    Thirdly, from googling around it seems wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device:

        mbuffer -t -m 10M -p 80 -f -o $TAPE

    So I've tried this (the added "-d 64k" accounts for the 64k block size of the tape):

        dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0

    However, reading data back from a tape written in this way never seems to yield any useful results - dd has been running for ages now, and has managed to transfer only 361M of data from the tape. What's wrong here?
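
    On the "more than one LV per tape" part, the standard trick is the non-rewinding device node (/dev/nst0): each dd becomes its own tape file, separated by file marks, and the XML descriptor can simply be written as another small tape file between them. Also note that a tape must be read with at least the block size it was written with, which may explain the garbled read-back. A sketch:

        # write two LVs as separate tape files
        mt -f /dev/nst0 rewind
        dd if=/dev/VG/LV1 bs=64k of=/dev/nst0        # tape file 0
        dd if=/dev/VG/LV2 bs=64k of=/dev/nst0        # tape file 1
        mt -f /dev/nst0 rewind

        # restore the second LV: skip one file mark, read with the same block size
        mt -f /dev/nst0 fsf 1
        dd if=/dev/nst0 bs=64k of=/dev/VG/LV2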


  • Outlook 2010, 2007 Sync problems after migration from SMTP to Exchange

    - by kirgy
    Our organization recently switched from an SMTP server to an Exchange server; since then, several users' Outlook installations are not synchronizing their emails with the Exchange server as expected. Our migration consisted of adding the new Exchange account alongside the existing SMTP account, then drag-dropping/copy-pasting folders client-side from the SMTP account to the newly created Exchange account in Outlook's folder pane.

    The problem happens when a user moves an email from their inbox or another folder into a folder: the email disappears client-side. Re-syncing the folder, send/receive, closing/opening Outlook, and even system reboots do not make the email reappear. The Outlook web interface (OWA) reports that the email is in fact in the folder they placed it in, and it is not deleted. Doing a "search all mail items" for the email shows that it is still there, neither deleted nor removed. To add to the confusion, when new folders are created and an email is placed in them, synchronization happens without any issue, both client-side and server-side. As the emails are appearing server-side, we are fairly confident this is a client-side issue.

    We have tried removing and re-adding accounts on one system, which resulted in the same issue - a very long and slow process due to the sheer volume of email (20 GB+ for most users). We have tried reinstalling Outlook and restoring accounts from backups, which has not resolved the issue. We also tried upgrading one system from Outlook 2007 to Outlook 2010, which, again, did not resolve it. We experienced a lot of emails disappearing during the copy-over process, so I'm not convinced it was the best route of migration, but nonetheless we are where we are. Can anyone suggest potential avenues to resolve this issue? Thank you.

    Systems:
    - Windows 7 (10 systems)
    - Windows XP (2 systems)
    - Outlook 2007 (2 systems)
    - Outlook 2010 (7 systems)

    Problem Outlook systems:
    - Windows XP, Outlook 2007 x 1
    - Windows 7, Outlook 2007 x 1
    - Windows 7, Outlook 2010 x 2
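
    One client-side test worth trying on an affected machine, since OWA shows the server copy is fine: force Outlook's cached-mode data file to rebuild from the server. A sketch, assuming the default OST location on Windows 7 (close Outlook first; the path differs on XP):

        cd /d "%LOCALAPPDATA%\Microsoft\Outlook"
        ren *.ost *.bak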


  • High Apache CPU usage, but low nginx - Configured correctly?

    - by Buckers
    We've just moved one of our websites over to a brand-new high-spec Linux server (1x Intel Xeon E3-1230 v2 @ 3.30GHz, 8GB DDR3 ECC, 2x 128GB SATA SSD in RAID1). The server has been configured to use nginx, but we're not sure it's working correctly. The site always loads very fast for us (http://www.onedirection.net), but Plesk often sends us reports that the Apache CPU usage percentage reaches high levels, yet when we look at the nginx percentage it's always very low. We come from a Windows background, so we're very new to Linux - but shouldn't nginx run INSTEAD of Apache? Here's a screenshot from Plesk showing the CPU usage: http://www.pixelkicks.co.uk/_download/plesk.JPG The website gets around 20,000 visitors per day, and we use W3 Total Cache to get it running as fast as possible. MySQL has been optimised well, and memory usage is only 2GB of the 8GB. Does this look right? How can we tell that nginx is doing most of the work? Thanks, Chris.
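
    For context: in Plesk's setup nginx runs in front of Apache as a reverse proxy, so both are supposed to be running - nginx serves static content and forwards the rest to Apache, which still does all the PHP work (hence the Apache CPU figures). A quick way to see who owns which port, assuming Plesk's usual backend ports of 7080/7081 (process names vary by distribution):

        ps aux | egrep '[n]ginx|[h]ttpd|[a]pache2'
        netstat -tlnp | egrep ':80 |:443 |:7080|:7081'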


  • What used the linux memory? Low cache, low buffer, not a VM

    - by Jason
    First of all, yes, I have read LinuxAteMyRAM, which doesn't explain my situation.

        # free -tm
                     total       used       free     shared    buffers     cached
        Mem:         48149      43948       4200          0          4         75
        -/+ buffers/cache:      43868       4280
        Swap:        38287          0      38287
        Total:       86436      43948      42488
        #

    As shown above, the -/+ buffers/cache: line indicates that the used memory rate is very high. However, from the output of top, I don't see any process using more than 100MB of memory. So what used the memory?

        PID   USER  PR  NI  VIRT  RES  SHR  S %CPU %MEM    TIME+  COMMAND
        28078 root  18   0  327m  92m  10m  S    0  0.2  0:25.06  java
        31416 root  16   0  250m  28m  20m  S    0  0.1 25:54.59  ResourceMonitor
        21598 root -98   0 26552  25m 8316  S    0  0.1 80:49.54  had
        24580 root  16   0 24152  10m  760  S    0  0.0  1:25.87  rsyncd
         4956 root  16   0 62588  10m 3132  S    0  0.0 12:36.54  vxconfigd
        26703 root  16   0  139m 7120 2900  S    1  0.0  4359:39  hrmonitor
        21873 root  15   0 18764 4684 2152  S    0  0.0 30:07.56  MountAgent
        21883 root  15   0 13736 4280 2172  S    0  0.0 25:25.09  SybaseAgent
        21878 root  15   0 18548 4172 2000  S    0  0.0 52:33.46  NICAgent
        21887 root  15   0 12660 4056 2168  S    0  0.0 25:07.80  SybaseBkAgent
        17798 root  25   0 10652 4048 1160  S    0  0.0  0:00.04  vxconfigbackupd

    This is an x86_64 machine (not a common-brand server) running x86_64 Linux, not a container in a virtual machine. Kernel (uname -a):

        Linux 2.6.16.60-0.99.1-smp #1 SMP Fri Oct 12 14:24:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    Content of /proc/meminfo:

        MemTotal:     49304856 kB
        MemFree:       4066708 kB
        Buffers:         35688 kB
        Cached:         132588 kB
        SwapCached:          0 kB
        Active:       26536644 kB
        Inactive:     17296272 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     49304856 kB
        LowFree:       4066708 kB
        SwapTotal:    39206624 kB
        SwapFree:     39206528 kB
        Dirty:             200 kB
        Writeback:           0 kB
        AnonPages:      249592 kB
        Mapped:          52712 kB
        Slab:          1049464 kB
        CommitLimit:  63859052 kB
        Committed_AS:   659384 kB
        PageTables:       3412 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    478420 kB
        VmallocChunk: 34359259695 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

    df reports no large consumption of memory from tmpfs filesystems.
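
    Two quick cross-checks that help localize this kind of gap: first confirm top's impression by summing RSS across all processes, then look at kernel-side consumers, since memory used by the kernel (slab, driver allocations - e.g. from the Veritas stack visible in that process list) never shows up under any process:

        # total resident memory across all processes, in MB
        ps -eo rss= | awk '{ s += $1 } END { printf "total RSS: %.1f MB\n", s/1024 }'
        # largest kernel slab caches
        slabtop -o | head -n 15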


  • Project and Business Document Organization

    - by dassouki
    How do you organize, maintain edits and revisions, and track the relationships between:

    - Proposals
    - Contracts
    - Change Orders
    - Deliverables
    - Projects

    How do you organize your projects for re-usability? For example, is there a way to add tags to projects to make them more accessible? What's a good data structure for dumping all my files onto an internet server for easy access? Presently, my work folder is set up as follows:

        /work/
            /projects
                /project_a
                    /final              (all final documents)
                        /contracts
                        /rfp_rfq
                        /change_orders
                        /communications (logs all emails, faxes, and meeting notes and minutes)
                        /financial
                            /paid
                            /unpaid
                        /reports
                    /old                (all documents that didn't make it into project_a/final/)
                /project_b
                    ... same as above ...
            /references
                /technical_references
                /gov_regulations
                /data_sources
                /books
                /topic_based            (each area of my expertise has a folder with references in it)
            /business_contacts
                /contacts.xls           (contains all my contacts)
            /banking
                /banking.xls            (a list of all paid and unpaid invoices, plus some cool stats)
                /quicken                (to do my taxes and yada yada)
                    /year
            /education                  (courses I've taken)
                /webinars
                /seminars
                /online_courses
            /publications               (the publications I've made)
                /publication_id

    We're mostly 5 people working together part-time on this thing. Since this is a very structured approach, I find it really difficult to remember what I've done on previous projects and to go back and forth easily. What are your suggestions for improving my processes? I'm open to closed- and open-source software (as long as the price isn't too high). I also want to implement a system where I can save most of the projects online, to increase collaboration and efficiency and reduce bandwidth, especially on document editing. Imagine emailing a document back and forth 5-10 times a day.


  • Is my dns server being attacked? And what should I do about it?

    - by Mnebuerquo
    I've been having some intermittent DNS problems with a web server: certain ISPs' DNS servers don't have my hostnames in cache and fail to look them up, while queries to OpenDNS for the same hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to pin the problem down when someone reports connectivity problems with my site. In trying to figure this out, I've been looking at my logs for any errors I should know about, and found thousands of messages like the following, from different IPs but all requesting similar DNS records:

        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#36141: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#29075: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#47924: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#4727: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#16153: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#40267: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63507: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63721: query (cache) 'burningpianos.org/MX/IN' denied
        May 12 11:43:36 localhost named[26399]: client 82.209.240.241#3537: query (cache) 'burningpianos.com/MX/IN' denied

    I've read about Dan Kaminsky's DNS cache poisoning vulnerability, and I'm wondering if these log records are an attempt by some evildoer to attack my DNS server. There are thousands of records in my logs, all requesting "burningpianos" (some for com, some for org, most looking for an MX record), from multiple IPs, with each IP requesting hundreds of times per day. This smells to me like an attack. What is the defense against this?
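
    For what it's worth, the "denied" in those lines means named is already refusing to answer these cache queries, so nothing is being poisoned here; the pattern (the same MX name from many sources, hundreds of times a day) looks like spoofed-source traffic trying to use the server as a reflector. The usual hardening, sketched for an authoritative-only BIND:

        // answer anyone for hosted zones, do no recursion for the world
        options {
            recursion no;
            allow-query { any; };
            allow-query-cache { none; };
        };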


  • Is TrueCrypt truly safe?

    - by Alfred
    Hi. I have been using TrueCrypt for a long time now. However, someone pointed me to a link describing problems with its license. IANAL, so it really didn't make much sense to me; however, I want my encryption software to be open source - not because I could hack into it, but because I can trust it. Some of the issues I have noticed:

    - There is no VCS for the source code. Is this OK?
    - There are no change logs.
    - The forums are a bad place to be; they ban you even if you ask a genuine question.
    - Who really owns TrueCrypt?
    - There were some reports of tinkering with the MD5 checksums.

    To be honest, the only reason I used TrueCrypt was that it is open source. But some things are just not right. Has anyone ever validated the security of TrueCrypt? Should I really be worried? Yes, I am paranoid: if I use encryption software, I trust it with my life. If all my concerns are genuine, is there any other open source alternative to TrueCrypt?


  • HA Proxy won't load balance my web requests. What have I done wrong?

    - by Josh Smeaton
    I've finally got HA Proxy set up and running in the way I think I want. However, it is not load balancing the web requests it receives: all requests are currently being forwarded to the first server in the cluster. I'm going to paste my configuration below - if anyone can see where I may have gone wrong, I'd appreciate it. This is my first stab at configuring web servers in a *nix environment.

    First up, HA Proxy is running on the same host as the first server in the Apache cluster. We are moving these servers to virtual later on, and they will have different virtual hosts, but I wanted to get this running now. Both web servers are receiving their health checks and are reporting back correctly; the haproxy?stats page correctly reports servers that are up and down. I've tested this by altering the name of the file that is checked. I haven't put any load onto these servers yet; I've just opened up the URLs in several tabs (private browsing) and had several co-workers hit the URL too. All of the traffic goes to WEB1. Am I balancing incorrectly?

        global
            maxconn 10000
            nbproc 8
            pidfile /var/run/haproxy.pid
            log 127.0.0.1 local0 debug
            daemon

        defaults
            log global
            mode http
            retries 3
            option redispatch
            maxconn 5000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen WEBHAEXT :80,:8443
            mode http
            cookie sessionbalance insert indirect nocache
            balance roundrobin
            option httpclose
            option forwardfor except 127.0.0.1
            option httpchk HEAD health_check.txt
            stats enable
            stats auth rah:rah
            server WEB1 10.90.2.131:81 cookie WEB_1 check
            server WEB2 10.90.2.130:80 cookie WEB_2 check
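
    A hunch worth testing: "cookie sessionbalance insert indirect" is persistence - any browser that already holds a sessionbalance=WEB_1 cookie will be pinned to WEB1 on every later request, so browser-based tests (even across tabs) can look completely unbalanced while roundrobin is actually working. Cookie-less requests should alternate between the backends; a sketch (substitute your own URL):

        for i in 1 2 3 4; do
            curl -s -o /dev/null -D - http://your-site/ | grep -i '^set-cookie'
        done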


  • Intermittent PHP error: Undefined function <core function>

    - by Daniel
    In the last week I've been coming across an incredibly annoying error on one of my Slicehost slices. Every now and then, PHP fails with a fatal error saying a certain function is undefined. The function changes, but it is always a core PHP function, e.g. defined(), version_compare(), etc. The problem has occurred while using several different PHP applications - phpMyAdmin, my own custom-built apps, etc. - leading me to believe it is not specific to the running code. Here are some details:

    - Debian Lenny
    - Apache 2.2.9
    - PHP 5.2.6-1+lenny4 with Suhosin-Patch (running eAccelerator 0.9.6)

    Apache and PHP are installed from Debian packages. The error logs show nothing out of the ordinary. I thought memory might be an issue, but free -m reports upwards of 100MB free almost all the time. Another thing I'm trying to investigate is whether the problem might be related to eAccelerator, but testing this theory is incredibly hard because the issue doesn't appear very often, and I've been using eAccelerator for months on this install without any problems up until now. Has anyone ever come across anything like this? Why would PHP report undefined core functions?
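
    Intermittent "undefined" errors on core functions are a classic symptom of a corrupt opcode cache, so eAccelerator is a reasonable suspect even after months of good behaviour. A way to rule it out without uninstalling, sketched with typical Debian Lenny paths (the cache directory is an assumption - check eaccelerator.cache_dir in your config first):

        rm -rf /var/cache/eaccelerator/*
        sed -i 's/^eaccelerator.enable.*/eaccelerator.enable = 0/' /etc/php5/conf.d/eaccelerator.ini
        /etc/init.d/apache2 restart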


  • LAMP server VM issues

    - by nullArray
    After getting a recommendation to salvage a wiki by installing a LAMP server, I went on the prowl for a good virtualized one and used the VMware Player version. Since the Windows box has Bonjour, I can, for example, go to http://lamp.local. and see the web client. The problem is, I can't ssh in to scp the files I need, can't mount a USB thumbdrive (usbfs is unsupported), and can't get Samba working. I can't even update the Ubuntu installation; it fails. I've tried bridged, NAT, and host-only networking settings in VMware Player. Bridged gives me an undefined IP, while the other two each have different IPs. All three settings allow me to access the web config, but none of them give me Samba access - Windows usually freezes, then reports that it cannot connect. I'd rather not wipe a box to do a dedicated install. Is there a way I can get this VM working, or are there better LAMP VMs out there? This one came already working and set up with VMware Player, so I thought it would be perfect... Thanks,
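
    One low-effort way out, since the web server itself works: many turnkey LAMP appliances simply ship without an SSH daemon, so installing it from the VM's own console (not over the network) sidesteps the file-copy problem entirely. A sketch, assuming an Ubuntu-based appliance with working NAT networking:

        sudo apt-get update
        sudo apt-get install openssh-server samba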


  • Computer loses all installed programs and appears to return to an OS-only state

    - by Jake
    This is a story about 3 laptops of different brands and models. On separate occasions, I configured each of these Windows 7 / Vista computers with the necessary configuration and applications (which are supposedly the same on each), e.g. joined them to the office domain, applied the same Windows updates, installed Microsoft Office, etc. The machines were configured in our office in Singapore and then taken to India for use.

    One day in India, a laptop would boot up fine until it reached the login screen, where it was no longer possible to log in with domain credentials. Logging into the laptop's local admin account would lead to the discovery that the machine had returned to an "OS-only state": all the configuration and applications were gone. The actual user profiles are still on the C: drive, so files can still be retrieved, but under Control Panel > Uninstall Programs it is evident that at least the registry is corrupted. The above scenario happened to the first 2 laptops. For the third, the system reports "Operating System Not Found" on boot.

    I cannot think of any reason except to suspect a power fluctuation issue. Question is, can a power issue create this behaviour? What else can cause this issue?


  • Add Route for machine in same DC

    - by gary
    My routing table on my machine, with an IP of 46.84.121.243, currently looks like this:

        Network Destination        Netmask          Gateway        Interface      Metric
        0.0.0.0                    0.0.0.0          46.84.121.225  46.84.121.243      21
        46.84.121.224              255.255.255.224  On-link        46.84.121.243     276
        46.84.121.239              255.255.255.255  On-link        46.84.121.243      21
        46.84.121.243              255.255.255.255  On-link        46.84.121.243     276
        46.84.121.255              255.255.255.255  On-link        46.84.121.243     276

    I'm trying to access 46.84.121.239, which is my other machine in the same DC, but my guess is the first rule is blocking it, as traffic is trying to go via the gateway and failing:

        Tracing route to [46.84.121.239]
        over a maximum of 30 hops:
          1  OWNEROR-9O83HBL [46.84.121.243]  reports: Destination host unreachable.
        Trace complete.

    I'm doing all this via RDP, and I already tried changing the metric on the persistent rule, with devastating consequences! Here's the persistent rule (working):

        Persistent Routes:
          Network Address    Netmask    Gateway Address    Metric
          0.0.0.0            0.0.0.0    46.84.121.225           1

    Any help getting access to 46.84.121.239 would be very much appreciated, thanks very much.
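
    One observation: .239 falls inside the on-link 46.84.121.224/27 route, so packets to it should never touch the gateway at all - and "Destination host unreachable" reported by your own address means ARP for .239 is going unanswered (some hosting providers isolate hosts at layer 2, which would produce exactly this). A quick check, as a sketch:

        ping -n 1 46.84.121.239
        arp -a | findstr 46.84.121.239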


  • Where is my free space?

    - by Andrey
    A week ago I got a low disk space warning on my Vista x64 Ultimate box - 60 MB free on disk C:. I cleaned up some downloaded MSDN images and freed up about 20 GB. Three days ago I got another notification; it looked suspicious, but I didn't have time to deal with it and just moved some heavy stuff to another drive to free up about 17 GB... This morning: 53 MB left on drive C:, again! Now it looks really suspicious, so I downloaded TreeSize to see what's taking up the space, only to see it report 121 GB used out of 200 GB - in other words, I should have about 79 GB free. Then I went to Folder Options, enabled viewing of system and hidden files, and reran the tool, which added another 5 GB (which is expected). Then I opened disk C: in Windows Explorer, selected everything, and right-clicked Properties, to see it report the same amount of files - 126 GB. But when I look at drive C:'s properties, it reports that 200 GB of 200 GB are taken. I just scanned the drive with two different antiviruses - Symantec and AVG - and found no viruses... I'm a little confused at this point; any ideas on where my free space went would be highly appreciated! Thank you! Andrey
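
    On Vista, the usual culprit when Explorer's per-file totals and the drive's used figure disagree by this much is shadow copy storage (System Restore points), which no file-level scan can see. Worth checking, and capping if it is the cause - a sketch, from an elevated prompt:

        vssadmin list shadowstorage
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=10GB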


  • DFS replication and the SYSTEM user (NTFS permissions)

    - by HopelessN00b
    A question for which I'm having trouble finding an answer on Google or TechNet... Does granting the SYSTEM user permissions to DFS-shared files and folders have any effect on DFS replication? (And while we're at it, is there any good reason not to let SYSTEM have permissions to DFS-shared files?)

    It comes up because I have a collection of DFS namespaces and folders that I'm not able to make someone else's problem, and while troubleshooting a problem where one DFS replica just wasn't replicating with another for no discernible reason, I noticed that SYSTEM didn't have any permissions granted on any of the files or folders in the folder in question. So I gave SYSTEM full control and propagated it down, and our DFS health diagnostic reports went from showing a backlog of ~80 files to a backlog of ~100,000... and things started replicating, including a number of files that had been missing on one server or the other (so more than just the permissions changes started replicating). Naturally, this made me curious as to whether DFS needs the SYSTEM account to have permissions to do its work, or whether it was just any change to the folder tree in question that prompted DFS to jump into action.

    If it matters, our DFS namespaces were set up under 2000/2003, and I have just recently finished upgrading all the servers to 2008 R2 or 2012 (with UAC enabled, blech), but have not yet gotten around to raising the DFS namespace functional levels to Server 2008. (And bonus points if anyone has an official Microsoft article on NTFS file permissions and the SYSTEM account as it pertains to DFS or network files.)
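
    For the record, the DFS Replication service runs as Local System, so it is at least plausible that SYSTEM genuinely needs NTFS access to the replicated content. Re-granting it across a replicated tree can be scripted - a sketch with an illustrative path:

        icacls "D:\DfsShares\Finance" /grant "SYSTEM:(OI)(CI)F" /T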


  • Linksys WAP54G v3.1 no access, power and link LED solid

    - by user142113
    I'm managing the network of a small enterprise. A Linksys WAP54G v3.1 used to provide the WiFi network. I was called in because the device no longer provides a WiFi network. First of all I tried to ping the device via LAN, but there was no reaction. I've frequently reconnected the AP to the mains, and the POWER and LINK LEDs always stay solid, even if no network cable is connected. What I've done so far:

    - Reset as documented: pressed the RESET button for 10 seconds. After that I tried to access the AP over a direct cable connection from my computer (set to a static IP of 192.168.1.240), but got no ping response on the default IP 192.168.1.245. Furthermore, ipconfig reports "media disconnected".
    - The more complex reset method described at http://bruceshankle.blogspot.de/2005/12/how-to-reset-linksys-wap54g.html had no effect either.
    - Also tried to ping 192.168.1.1, without success.
    - Tried the method at http://www.daniweb.com/hardware-and-software/networking/threads/142437/linksys-wireless-access-point-problem#post680245, but there was no ping response when powering up, and the tftp transfer timed out.
    - Finally, tried shorting pins 15 and 16 of the flash chip on the bottom side of the AP mainboard while booting, to provoke a checksum error. This should make the AP stop booting and wait for a firmware upload via tftp on 192.168.1.1, but I've had no success. I also put pins 15 and 16 to ground while booting, again without effect.

    After all that, I still can't ping the AP; ipconfig still tells me "media disconnected". The POWER and LINK LEDs are solid. I would appreciate your answers.

