Search Results

Search found 17686 results on 708 pages for 'high level'.


  • When does a cached website refresh?

    - by user142485
    If I go to example.com [placeholder for a different website], the browser creates numerous cache entries for different items on the page and two entries for example.com itself: one expires in 1.5 hours and the other has 'No expiration time.' What I am wondering is: when re-visiting the site, when does the browser display the cached page and when does it fetch the newest version from the server? What do the two different expiration times for the top-level domain mean? A web page I visited was temporarily redirecting to a different page. After the original page was back up, I was still being redirected to the temporary page. Would this have eventually resolved itself based on the expiration times, or does the cache need to be cleared?
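
    A quick way to see which of those two example.com entries is in play is to look at the caching headers the server sends; the browser reuses its copy until Cache-Control: max-age or Expires runs out, then re-validates with the server, which is also why a cached redirect can outlive the temporary page. The commands below are a sketch; the asset path is a made-up example.

      # Headers that control how long the browser may reuse the cached page
      curl -sI http://example.com/ | grep -iE 'cache-control|expires|etag|last-modified'

      # Compare with an individual asset on the page (path is hypothetical)
      curl -sI http://example.com/logo.png | grep -iE 'cache-control|expires|etag'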

    Read the article

  • Specific apache + mysql settings for a light-weight site

    - by Good Person
    I have a small website with a Joomla and a Moodle installation, and both of them seem very slow. The server (CentOS release 5.5 (Final)) is a virtual dedicated server with about 2GB of RAM. I don't expect to ever get more than 10-15 people on at the same time (and even that is high). What settings could I change in Apache, MySQL, or even the OS to increase the performance of my site? I'm not concerned about running out of resources if I get too many visitors. If you need more specific data, leave a comment and I'll edit the question.
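
    Before touching any settings it usually pays to see where the 2GB is actually going; a rough sketch with standard CentOS tools (package names and paths may differ on your install):

      # Overall memory picture and the largest resident processes
      free -m
      ps -eo rss,comm --sort=-rss | head -15

      # Confirm which Apache MPM is compiled in before lowering MaxClients /
      # ServerLimit in httpd.conf (each prefork child needs its own memory)
      httpd -V | grep -iE 'mpm|server version'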

    Read the article

  • How can I tell if a host is bridged and acting as a router

    - by makerofthings7
    I would like to scan my DMZ for hosts that are bridged between subnets and have routing enabled. Since I have everything from VMware servers to load balancers on the DMZ, I'm unsure whether every host is configured correctly. What IP, ICMP, or SNMP (etc.) tricks can I use to poll the hosts and determine whether a host is acting as a router? I'm assuming this test would presume I know the target IP, but in a large network with many subnets I'd have to test many different combinations of networks and see whether I get a response. Here is one example using ping: for each IP in the DMZ, ARP for the host's MAC, then send an ICMP message to that host directed at an online host on each subnet. I think there is a more optimal way to get this information, namely from within ICMP/IP itself, but I'm not sure what low-level bits to look for. I would also be interested to know whether it's possible to determine "router" status without knowing the subnets the host may be connected to. This would be useful when improving our security posture.
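
    For hosts that expose SNMP, the standard ipForwarding object answers exactly this question without needing to know the attached subnets (1 = the host forwards packets, 2 = it does not); a sketch, with a placeholder address and community string, plus the local sysctl equivalent for Linux hosts you can log into:

      # ipForwarding.0 from IP-MIB (RFC 1213): 1 = acting as a router, 2 = host only
      snmpget -v2c -c public 10.0.0.5 1.3.6.1.2.1.4.1.0

      # On a Linux box with shell access, the same flag is a sysctl
      sysctl net.ipv4.ip_forward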

    Read the article

  • openVAS - Microsoft RDP Server Private Key Information Disclosure Vulnerability - false Alarm?

    - by huebkov
    I performed an OpenVAS scan on a Windows Server 2008 R2 machine and got a report for a high-threat-level vulnerability called Microsoft RDP Server Private Key Information Disclosure Vulnerability. A remote attacker could perform a man-in-the-middle attack to gain access to an RDP session. The affected software is Microsoft RDP 5.2 and below. My server uses RDP 7.1; is this a false alarm? The security advisory pages say: Solution Status: Unpatched, no remedy... References: http://secunia.com/advisories/15605/ http://xforce.iss.net/xforce/xfdb/21954/ http://www.oxid.it/downloads/rdp-gbu.pdf CVE: CVE-2005-1794 BID: 13818

    Read the article

  • Amazon EC2 as load balanced/failover solution

    - by sugiggs
    Hi all, I'm considering an idea but am not sure of its pros and cons. At the moment we are hosting our website on a dedicated server. As a failover/load-balanced solution, I'm thinking of using Amazon EC2+EBS. The files can be rsynced and MySQL can be set up with master-master replication. When the load is high, I can bring up the machine, give it some time to sync, and load-balance the traffic there. Is this doable? Is there any link where I can read more on this?
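
    The file side of that is straightforward to sketch: a periodic rsync over SSH from the dedicated server to the EC2 instance (hostnames, paths and the key file below are placeholders), with the MySQL data left to replication rather than rsync:

      # One-way sync of the web root to the standby EC2 instance, run from cron
      rsync -az --delete -e "ssh -i /root/.ssh/ec2-key.pem" \
        /var/www/ ec2-user@standby.example.com:/var/www/

      # Exclude the database files if they live under the synced tree;
      # master-master replication should be the only thing touching them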

    Read the article

  • What needs updating when moving a bootable Windows 7 (or Vista) partition?

    - by SuperTempel
    When I move a bootable NTFS partition with Windows on it to a different block offset, what needs updating to make it bootable again? In particular, here's what I tried: I have a disk with several partitions, one of which is the NTFS partition with Windows on it, and the disk uses the plain old MBR in block 0 for the partition layout (no more than 4 partitions). Now I format and partition a new, larger disk, make room on it for the NTFS partition, copy the contents of the old disk's NTFS Windows partition into it, and mark the partition "active". However, when I try to boot from this disk, I get a read error immediately and booting stops; the exact text is: "A disk read error occurred. Press Ctrl+Alt+Del to restart". I verified that both disks have the same boot sector code in block 0, so it seems to me that something else needs updating. I guess that somewhere there's an absolute block reference I need to update, probably pointing to the next-level loader or to the NT kernel.
    Update: I found this article, which goes into exactly the kind of depth I want. However, it says to modify boot.ini, and I have Windows 7 installed here, where such things appear to have changed: there is no boot.ini, but there is a folder called System Volume Information with GUIDs and other data in it that sounds related to my problem. Going to keep digging...
    Update 2: Thanks to the terrible-looking but very informative website by starman, I was able to figure out the first step: the NTFS boot sector has a field for "hidden" sectors, and this field has to contain the sector number of the boot sector. That solves the "read error" message. Now, however, I get a "BOOTMGR is missing" error instead. It looks like there's another place where a block number has to be adjusted, but I can't find anything in the code listing about this. I do find a lot of help sites suggesting Windows tools for fixing the "BOOTMGR is missing" problem, but none of them seem to know what goes on behind the scenes; it's a bit like suggesting a Windows reinstall for every little problem. At least those fixes seem to work, mostly involving the Bcdedit and Bootrec tools. Now, who knows what those actually do, especially the latter, with regard to a moved partition?
    Update 3: After lots of trial-and-error attempts, I now believe the solution lies in the BCD-Template registry file, which usually resides inside \Windows\System32\config. If I get this updated using the "bcdboot" command, Windows starts up from it. I am now in the middle of figuring out what information this registry contains that is relevant to the question above. Any pointers to the contents of this registry are welcome.
    Update 4: It turns out that while the BCD-Template file gets rewritten and has different binary contents than its predecessor, the values inside do not change. So it must be something else that bcdboot.exe writes. I had previously checked whether it changes the first 32 boot blocks of the partition, but they appear to remain unchanged. The partition map doesn't get changed either. So what is it that bcdboot modifies besides the BCD registry? Any tips on how I can trace that? Are there low-level tools that show me what files a program writes to?
    Update 5: The answer seems to be: c:\Boot\BCD is also changed, and that appears to be the key file for the boot manager's process. I'll investigate this later...
    Update 6: An important detail seems to be that I originally had two partitions created when I installed Windows 7: a small partition of 204800 sectors, which appears to be a bootstrap partition, followed by the actual, large partition containing the Windows system (drive C:). When I tried to transfer this installation to a new, larger disk, I kept the same two partitions intact on the new drive, although they ended up at a different offset. This alone led to the "BOOTMGR is missing" message. Since then, I've used bcdboot.exe only on the Windows partition, which added the \Boot\BCD file to that partition; that file (and folder) originally existed only on the smaller partition. Hence my problem may be more complicated, as one partition (the bootstrap one) refers to another partition (the one containing the OS), whereas other people may only have to deal with a single partition containing both, where the solution might be simpler.
    Update 7: Found one more detail: the \Boot\BCD file records the MBR's serial number. If that number doesn't match, the system won't boot. Next I'll test whether there's also an absolute block reference stored in there.

    Read the article

  • Hosting solution for sensitive client data

    - by Mark
    Hello, we are developing a web application that will deal with clients' highly sensitive (financial) data (the audience is medium to large-sized businesses). Clients will be under scrutiny from regulators and auditors and, as such, we will be too. More importantly, to give clients a level of comfort, our application and the related hosting arrangement should instill a lot of confidence in them. We are looking into using a cloud-based service like Linode, Amazon EC2, etc. To allow for maximum flexibility, we are keen on putting everything on virtual servers and avoiding having to buy our own hardware. Does a cloud-based service make sense for our particular scenario? If not, what type of hosting should we consider? If so, what should we look out for? Thanks!

    Read the article

  • Hardware recommendations for building an Ubuntu encrypted file server

    - by Robert Mashlan
    I would like to build a file server for my home network using Ubuntu. It will serve files from RAID1-configured disks, with the mirroring done either in the OS or in hardware. It will be connected to a Gigabit Ethernet LAN. The disks will use an encrypted file system, and it will serve Samba shares. I would like a recommendation on the kind of processing power and memory I would need to build a box that can sustain the full capacity of the Gigabit Ethernet connection in a single-connection file transfer, with the overhead of serving from an encrypted disk. I'm not looking to build a dream server; I just want enough processing capacity for high-performance (and reliable) file sharing while spending as little as possible. This may be tangential, but what kind of hardware would I need for the server to reliably go into a low-power mode when no requests are being made of it?
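
    A saturated gigabit link needs roughly 110-120 MB/s of payload, so one way to size the CPU is to measure in-memory encryption throughput on a candidate box and leave generous headroom; a sketch with standard tools:

      # dm-crypt/LUKS cipher throughput on this CPU, no disks involved
      cryptsetup benchmark

      # Raw AES throughput via OpenSSL as a second data point
      openssl speed -evp aes-128-cbc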

    Read the article

  • Started an application through SSH, command line now gone, what happens next?

    - by Chris Dutrow
    Context: this is a very basic question. I am using PuTTY and SSH for the first time to do some serious server setup and have run into a situation where I have started a process that I do not want to stop. The process is the Gunicorn WSGI HTTP server (running on CentOS 6.3). The command I used to start it is (as per their Quick Start): gunicorn -w 4 myapp:app. At this point in the work session I have lost the command prompt. This must be such a non-issue that it doesn't even enter an experienced user's consciousness, but at my level of experience I am left with several fundamental questions: Does the fact that I have lost the command prompt mean that the process is still running? How do I get back to the command prompt without killing the process? How do I come back and monitor the process later? How do I eventually kill the process? Any help is appreciated, thanks so much!
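
    For reference, a typical sequence for exactly this situation looks like the sketch below (standard bash job control and coreutils; the gunicorn invocation is the one from the question):

      # The prompt is gone because gunicorn is running in the foreground.
      #   Ctrl+Z   suspends it
      #   bg       resumes it in the background
      #   disown   detaches it from this shell so it survives logout

      # Find and monitor the master and worker processes later
      ps aux | grep '[g]unicorn'

      # Stop it gracefully (use the master PID from ps above)
      kill <master_pid>

      # Next time, start it detached from the terminal in the first place
      nohup gunicorn -w 4 myapp:app &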

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions: dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links to the original. However, this only works if all the tools you use delete and recreate files rather than editing them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many file systems use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
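
    On a filesystem with reflink support (btrfs, or XFS created with reflinks enabled), coreutils can already do per-file copy-on-write copies; a minimal sketch, assuming the tree lives on such a filesystem:

      # Reflink copy: new inodes, shared data blocks, real copy-on-write on edit.
      # cp fails cleanly if the filesystem can't do it, so you can fall back to a full copy.
      cp -r --reflink=always source-tree/ work-copy/

      # The hard-link approach from the question, for contrast; it breaks as soon
      # as a tool edits a file in place instead of replacing it
      cp -rl source-tree/ hardlink-copy/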

    Read the article

  • Replacing explorer.exe under Windows 7

    - by Whitey
    A bit of an odd request, I gather, but for reasons too deep to go into here I need to replace explorer.exe in the C:\Windows directory. I have tried doing it myself through the GUI and the command prompt (run as administrator), but I get access denied. It seems that being an admin on your machine is not the highest permission level after all, and only TrustedInstaller can modify the file. Does anybody know a way that works? I was about to boot into safe mode and try it, but wanted to get some feedback before I do anything in-depth. Thanks.

    Read the article

  • Our company has 100,000s+ photos, how to store and browse/find these efficiently?

    - by tobefound
    We currently store our photos in a structure like this: folder\1\10000 - 19999.JPG|ORF|TIF (10,000 files), folder\2\20000 - 29999.JPG|ORF|TIF (10,000 files), etc. They are stored on 4 different 2TB D-Link NASes attached and shared on our office network (\\nas1, \\nas2, and so on). Problems: 1) When a client (Windows only, Vista and 7) wishes to browse, say, the \\nas1\folder\1\ folder, performance is quite poor: the list takes a long time to generate in the Explorer window, even with icons turned off. 2) Initial access to the NAS itself is sometimes slow. SAN disks are too expensive for us, even with iSCSI interface/switch technology. I've read a lot of tech pages saying that storing 100,000+ files in a single folder shouldn't be a problem, but we don't dare go there now that we are experiencing problems at the 10K level. All input greatly appreciated, /T
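
    Before restructuring the folders, it can help to separate "the NAS is slow to enumerate the directory" from "Explorer is slow because it stats every file over SMB"; a rough diagnostic sketch from a Linux box with the share mounted (the mount point is a placeholder):

      # Enumerate directory entries only: no per-file stat, no sorting
      time ls -f /mnt/nas1/folder/1 | wc -l

      # Enumerate and stat every entry, which is closer to what Explorer does
      time ls -l /mnt/nas1/folder/1 > /dev/null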

    Read the article

  • If a SQL Server Replication Distributor and Subscriber are on the same server, should a PUSH or PULL subscription be used?

    - by userx
    Thanks in advance for any help. I'm setting up a new Microsoft SQL Server replication and I have the Distributor and Subscriber running on the same server. The Publisher is on a remote server (as it is a production database and MS recommends that for high volumes the Distributor should be remote). I don't know much about the inner workings of PUSH vs PULL subscriptions, but my gut tells me that a PUSH subscription would be less resource intensive because (1) the Distributor is already remote, so this shouldn't negatively affect the Publisher, and (2) pushing the transactions from the Distributor to the Subscriber is more efficient than the Subscriber polling the distribution database. Does anyone have any resources or insight into PUSH vs PULL that would recommend one over the other? Is there really going to be that big of a difference in performance / reliability / security?

    Read the article

  • SSL encryption standards by browser

    - by hfidgen
    Hiya, does anyone have a table of the default encryption levels that the various browsers out there support? For instance, I know that IE5 and lower struggle to cope with even 40-bit encryption, while the latest browsers easily do 256-bit and beyond. The reason I ask is that I'm looking to get a wildcard certificate for my domain, and the price difference is huge between a server-gated certificate (which enforces a minimum of 128-bit) and a non-gated certificate (where the browser sets the encryption level). Obviously I like the idea of paying £300 less for the non-gated certificate, but only if I can be sure that the majority of my users (FF3 / Opera / Chrome / IE7+) are going to get good encryption.
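
    One way to sanity-check the non-gated option is to see what your server negotiates when the client is restricted to a given cipher strength; a sketch with openssl s_client (the hostname is a placeholder, and the LOW/EXPORT cipher groups only exist in older OpenSSL builds):

      # A client limited to strong ciphers: should negotiate 128-bit or better
      openssl s_client -connect www.example.com:443 -cipher 'HIGH:!aNULL' < /dev/null \
        | grep -E 'Protocol|Cipher'

      # A client limited to weak ciphers: if the handshake fails, old browsers
      # cannot silently fall back to weak encryption on this server
      openssl s_client -connect www.example.com:443 -cipher 'LOW:EXPORT' < /dev/null \
        | grep -E 'Protocol|Cipher'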

    Read the article

  • tomcat processParameters complains about "invalid chunk ignored"

    - by cgicgi
    I am hosting a software system running under Tomcat for quite a number of customers. Some of them send invalid URLs as requests. These URLs may contain "&=" or "&&", which is not within the HTTP specs. Now my Tomcat complains with the following: "08.09.2010 12:36:04 org.apache.tomcat.util.http.Parameters processParameters WARNING: Parameters: Invalid chunk '' ignored." It is not a problem in itself, as it doesn't affect operation in any way. The only problem is that tomcat/logs/catalina.out grows with every single request. On the net you can find suggestions like:
    - Fix your URLs (which I can't, as it is the customers who send them)
    - Raise Tomcat's log level to ERROR (which I don't want to do, as it would suppress INFO messages like "INFO: Reloading context [/ContextName]" and other things you want to know about)
    - Redirect the log to the application log (which won't solve the problem, as the message will just flood another log)
    Does anyone know how to solve the problem at its root, i.e. tell Tomcat not to complain about invalid request parameters any longer?
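
    Tomcat's JULI logging is configured per category, so the level can be raised for just the class that emits this warning (the category name is visible in the log line itself) while everything else stays at INFO; a sketch, assuming a standard conf/logging.properties:

      # Silence only the parameter-parsing warnings
      echo 'org.apache.tomcat.util.http.Parameters.level = SEVERE' \
        >> "$CATALINA_HOME/conf/logging.properties"

      # Restart Tomcat so the change takes effect
      "$CATALINA_HOME/bin/shutdown.sh" && "$CATALINA_HOME/bin/startup.sh"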

    Read the article

  • Auto restart server if virtual memory is too low

    - by Sukhjinder Singh
    There is quite a lot of software running on my server: httpd, Varnish, MySQL, memcache, Java... Each of them uses part of the virtual memory, and Varnish is configured to be allocated 3GB of memory. Due to a high traffic load of around 100K, our server ran out of memory, the OOM killer was invoked, and we had to reboot the server. We have 8GB of virtual memory and for various reasons we cannot add more. My question is: is there an automated script that will monitor how much virtual memory is left and, based on certain criteria (let's say only 500MB remaining), restart the server automatically? I know this is not the proper solution, but we have to do it; otherwise we don't know when the server will hit OOM, and by the time we find out and restart it, we have lost our visiting users.
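
    As a blunt stopgap, a small cron-driven script can watch available memory and trigger the restart below a threshold; a minimal sketch (the 500MB threshold and the full reboot are taken from the question, though restarting the worst offender, such as varnish, would be gentler):

      #!/bin/bash
      # low-mem-guard.sh - run from cron every minute
      THRESHOLD_MB=500

      # free + buffers + cached; column positions vary between procps versions,
      # so check `free -m` output on your system and adjust the awk fields
      avail_mb=$(free -m | awk '/^Mem:/ {print $4 + $6 + $7}')

      if [ "$avail_mb" -lt "$THRESHOLD_MB" ]; then
          logger "low-mem-guard: only ${avail_mb}MB available, rebooting"
          /sbin/shutdown -r now
      fi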

    Read the article

  • MySQL keeps crashing due to bug

    - by mike
    So about a week ago I finally figured out what was causing my server to continually crash. Reviewing my mysqld.log, I keep seeing this same warning:
    101210 5:04:32 [Warning] option 'max_join_size': unsigned value 18446744073709551615 adjusted to 4294967295
    Here is a link to the bug report: http://bugs.mysql.com/bug.php?id=35346. Someone recommended setting the max_join_size value in my.cnf to 4M, and I did. I assumed this fixed the issue, and it worked for about a week with no problems until today... I checked MySQL and the same warning is now back:
    101216 06:35:25 mysqld restarted
    101216 6:38:15 [Warning] option 'max_join_size': unsigned value 18446744073709551615 adjusted to 4294967295
    101216 6:38:15 [Warning] option 'max_join_size': unsigned value 18446744073709551615 adjusted to 4294967295
    101216 06:40:42 mysqld ended
    Does anyone know how I can really fix this? I can't keep having MySQL crash like this. EDIT: I forgot to mention that every time this happens I get an email from Linode saying I have a high disk I/O rate: "Your Linode has exceeded the notification threshold (1000) for disk io rate by averaging 2483.68 for the last 2 hours."
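
    One thing worth double-checking is that the value in my.cnf is actually being picked up by mysqld at runtime (the warning reappears whenever the server falls back to the out-of-range built-in default); a small sketch, with paths that may differ on your distribution:

      # What the running server is actually using
      mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'max_join_size';"

      # Make sure the setting sits under the [mysqld] section, not [mysql] or [client]
      grep -n 'max_join_size' /etc/my.cnf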

    Read the article

  • How to effectively secure a dedicated server for intranet use?

    - by Mark
    I need to secure a dedicated server for intranet use. The server is managed, so it will have software-based security, but what else should be considered for enterprise-level security? The intranet hosts an ECM (Alfresco) managing and storing sensitive documents. As the information is sensitive, we are trying to make it as secure as reasonably possible (a requirement under UK law). We plan to encrypt the data in the database, and it will be connected to via SSL. Should we consider a hardware firewall, or a private LAN between the application server and the database server?

    Read the article

  • How do you assign resources and keep begin, end and duration of a task intact?

    - by Random
    I have problems assigning more than one resource to a group of tasks. The idea is simple: my tasks are in one group and are manually scheduled with particular begin and end dates. I want to assign more than one resource while keeping the task duration and dates (fixed duration) and increasing the work. For top-level tasks it works fine, but as soon as tasks are grouped, the duration of each is extended to reach the group's end date while the work stays the same. For the problematic tasks, the Gantt chart looks like this:
    One resource attached (good):
    ( Task 1.1 ) ( Task 1.2 ) (Task 1.3)
    More than one resource attached (wrong):
    ( Task 1.1 )....................... ( Task 1.2 ).......... (Task 1.3)
    So for tasks like that, I want to keep a fixed schedule and just increase the work by adding resources that work at the same time, but sometimes MS Project does leveling and makes the resources work sequentially.

    Read the article

  • Apache Bench length failures

    - by Laurens
    I am running Apache Bench against a Ruby on Rails XML-RPC web service that is running on Passenger via mod_passenger. All is fine when I run 1000 requests without concurrency: Bench indicates that all requests complete successfully with no failures. When I run Bench again with a concurrency level of 2, however, requests start to fail due to content length. I am seeing failure rates of 70-80% when using concurrency. This should not happen: the requests I am sending to the web service should always result in the same response, and I have used cURL to verify that this is in fact the case. My Rails log is not showing any errors either, so I am curious to see what content Bench actually received and interpreted as a failure. Is there any way to print these failures?
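
    ab flags a response as a length failure when its Content-Length differs from the first response it saw, so dynamic bodies trip it even when nothing is wrong. Two things that help are raising verbosity to see per-request headers and, on newer ab builds, accepting variable lengths; a sketch against a placeholder endpoint and POST file:

      # -v 4 prints per-request header information, so the differing
      # Content-Length values become visible
      ab -n 20 -c 2 -v 4 -p request.xml -T 'text/xml' http://localhost/xmlrpc/ > ab-debug.log

      # ab from Apache 2.4.7+ has -l, which stops variable-length responses
      # from being counted as failures
      ab -n 1000 -c 2 -l -p request.xml -T 'text/xml' http://localhost/xmlrpc/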

    Read the article

  • Utility to record IO statistics (random/sequential, block sizes, read/write ratio) in Unix

    - by Michael Pearson
    As part of provisioning our new server (see other SF) I'd like to find out the following: the ratio of random to sequential reads and writes, and the amount of data read and written at a time (preferably in histogram form). I can already figure out our reads/writes at a per-operation and overall data level using iostat and dstat, but I'd like to know more. For example, I'd like to know that we're doing mostly random 16kB reads, or a lot of sequential 64kB reads with random writes. We're (currently) on an Ubuntu 10.04 VM. Is there a utility I can run that will record and present this information for me?
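
    blktrace and blkparse record every block request with its offset, size, and read/write flag, which is enough to derive the random-vs-sequential ratio and a block-size histogram afterwards; a sketch, assuming the data disk is /dev/sdb (inside a VM the trace reflects the virtual device, not the host's spindles):

      # Capture ten minutes of block-level traffic on the data device
      blktrace -d /dev/sdb -o mytrace -w 600

      # One line per event with offset, size and an R/W flag
      blkparse -i mytrace > mytrace.txt

      # Rough read/write breakdown from completed requests
      # (field positions depend on the blkparse output format)
      awk '$6 == "C" {print $7}' mytrace.txt | sort | uniq -c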

    Read the article

  • Rejecting new HTTP requests when server reaches a certain throughput

    - by user56221
    I have a requirement to run an HTTP server that rejects new HTTP requests (with a 503, or similar) when the global transfer rate of current HTTP responses exceeds a certain level. For example, if the web server is transferring at 98Mbps, and a new HTTP request arrives, we would want to reject this (as we couldn't guarantee a good speed). I've had a look at mod_cband for Apache, limit_req for nginx, and lighttpd's rate limiting features, but none of them seem to handle my (rather contrived, granted) use case. I should add that I'm open to using pretty much any web server, and am open to implementing this in iptables rules if someone can craft such a rule! (Refusing the TCP connection is fine, it doesn't have to respond with an HTTP 503). Any suggestions?
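
    One approach that fits the "refusing the TCP connection is fine" constraint is a small sidecar loop that watches the interface's transmit counter and inserts or removes an iptables reject rule as the rate crosses the threshold; a rough sketch, assuming eth0 and port 80 (iptables -C needs a reasonably recent iptables):

      #!/bin/bash
      # throughput-gate.sh - refuse new HTTP connections while egress > LIMIT
      LIMIT_BPS=$((98 * 1000 * 1000 / 8))   # ~98 Mbit/s expressed as bytes per second
      IFACE=eth0
      RULE="-p tcp --dport 80 --syn -j REJECT --reject-with tcp-reset"

      while true; do
          tx1=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
          sleep 1
          tx2=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
          if [ $((tx2 - tx1)) -gt "$LIMIT_BPS" ]; then
              # Reject new SYNs; established transfers keep going
              iptables -C INPUT $RULE 2>/dev/null || iptables -I INPUT $RULE
          else
              iptables -D INPUT $RULE 2>/dev/null
          fi
      done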

    Read the article

  • How to stop the screens from turning off in Windows 8.1

    - by Warpzit
    Recently my computer started turning my monitors off after a while without activity; if I move the mouse or hit the keyboard they turn back on. I just want them to stay on. My system is Windows 8.1 64-bit. I have 2 screens connected via Mini DisplayPort and 1 television connected via HDMI, which is often in regular television mode when this issue happens. Before you jump to conclusions, this is what I've tried: disabled the screen saver and set the power plan to High performance with the following settings:
    Turn off the display: Never
    Put the computer to sleep: Never
    Turn off hard disk after: Never
    Sleep: Sleep after: Never
    Display: Turn off display after: Never
    Display: Enable adaptive brightness: Off
    Multimedia settings: When sharing media: Prevent idling to sleep
    Any pointers in the right direction will be greatly appreciated.

    Read the article

  • How to get a windows domain server to recognize a linux machine by its name?

    - by CaCl
    At my company I ran into an issue with a Linux machine that serves up a Subversion repository. It's hooked up to Active Directory via LDAP. We got an account set up for an application, and the limited-workstations restriction was configured on it so it didn't have full access to the network. The problem is that even though the hostname of our machine resolves correctly for me, the credentials for the application account come back as not allowed based on the machine name (the error was related to authorized workstations). I don't have access to any of the domain servers, but it might be helpful to approach management or the high-level techs with some ideas; they don't seem to have a solution besides allowing all workstations for the user. Does anyone have any idea how to get my Linux machine to properly identify itself to the domain server by name?

    Read the article

  • Does "I securely erased my drive" really work with Truecrypt partitions?

    - by TheLQ
    When you look at TrueCrypt's Plausible Deniability page, it says that one explanation for a partition containing solely random data is that you securely erased your drive. But what about the partition table with full-disk encryption? How do you explain why the partition table says there's a partition of an unknown type (with my limited knowledge of partition tables, I think they store each partition's filesystem type) that contains solely random data? It seems that if you were going to securely erase the drive, you would destroy everything, including the partition table. And even if you just wiped the partition, the partition table would still say that the partition was originally NTFS, which it isn't anymore. Does the "I securely erased my drive" excuse still work here? (Note: I know there are hidden TrueCrypt volumes, but I'm avoiding them due to the high risk of data loss.)

    Read the article
