Search Results

Search found 51183 results on 2048 pages for 'local files'.

  • Difference in performance: local machine VS amazon medium instance

    - by user644745
    I see a drastic difference in performance metrics when I run ApacheBench (ab) against my local machine versus production hosted on an Amazon m1.medium instance. The same concurrency level (5) and the same total number of requests (111) were used against both. The Amazon instance has more memory than my local machine, but my local machine has 2 CPUs versus 1 CPU on the m1.medium. My internet connection is very slow at the moment; I am getting a transfer rate of 25.29 KB/s. How can I improve the performance? I also do not know how to interpret Connect, Processing, Waiting and Total in the ab output. Here is localhost:

        Server Hostname:        localhost
        Server Port:            9999
        Document Path:          /
        Document Length:        7631 bytes
        Concurrency Level:      5
        Time taken for tests:   1.424 seconds
        Complete requests:      111
        Failed requests:        102
           (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
        Write errors:           0
        Total transferred:      860808 bytes
        HTML transferred:       847155 bytes
        Requests per second:    77.95 [#/sec] (mean)
        Time per request:       64.148 [ms] (mean)
        Time per request:       12.830 [ms] (mean, across all concurrent requests)
        Transfer rate:          590.30 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.5      0       1
        Processing:    14   63  99.9     43     562
        Waiting:       14   60  96.7     39     560
        Total:         14   63  99.9     43     563

    And this is production:

        Document Path:          /
        Document Length:        7783 bytes
        Concurrency Level:      5
        Time taken for tests:   33.883 seconds
        Complete requests:      111
        Failed requests:        0
        Write errors:           0
        Total transferred:      877566 bytes
        HTML transferred:       863913 bytes
        Requests per second:    3.28 [#/sec] (mean)
        Time per request:       1526.258 [ms] (mean)
        Time per request:       305.252 [ms] (mean, across all concurrent requests)
        Transfer rate:          25.29 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      290  297  14.0    293     413
        Processing:   897 1178  63.4   1176    1391
        Waiting:      296  606 135.6    588    1171
        Total:       1191 1475  66.0   1471    1684
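
    For reference, an ab invocation matching the numbers above would look like the sketch below (only the request count, concurrency, host and port are taken from the question):

        ab -n 111 -c 5 http://localhost:9999/

    As for reading the Connection Times table: Connect is the TCP handshake time, Waiting is the time from the end of the request to the first byte of the response, Processing is the whole request/response exchange once the connection is up, and Total is Connect plus Processing. The ~290 ms minimum Connect time against production is pure network round-trip latency, which the localhost test does not pay at all, so the two runs largely measure different things.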

  • Windows 7 offline files - work temporarily offline even if network connection works

    - by Robert
    Sometimes I am connected via VPN to a network containing the server where files are stored that are cached by the Windows offline files feature. Sometimes the connection works well and working this way is not a problem - at other times it is quite a pain because of high latency when working with the files in Windows Explorer. Is there an interactive way for a user (with admin permissions) to temporarily suspend online usage of offline files? I have already activated the "Transparent caching" group policy feature (Computer Configuration > Policies > Administrative Templates > Network > Offline Files) with a network latency of 200 ms, but in my experience, even when ping times to the file server are below 40 ms, online usage is quite tenacious. Setting low latency values here causes the offline files to toggle often, which causes problems for applications that work with several files and require them to be consistent (like an SVN client).

  • How to make FileZilla open all the required files with one click

    - by Omar Tariq
    Is there any way of configuring FileZilla so that I can open all the files on a server that I need to edit with just one click? For example, if the files are like this:

        /home/abc/def/one.txt
        /home/abc/def/yet/another/directory/two.txt
        /home/abc/def/ghi/yet/another/directory/three.txt

    then it is very time-consuming to navigate into each directory and open the required files. These are only 3 files, but what about 10 to 20 files? Yes, copying the paths of the directories is one option. But if there were something built in - a button like "open all the required files of this connection" that opens all the files in the editor (as set in FileZilla's preferences) - that would be great!
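
    FileZilla itself has no such bulk-open action as far as I know, but the same result can be scripted outside of it. A minimal sketch using OpenSSH's sftp in batch mode - the host name, user name and editor command are placeholders, only the paths come from the question:

        # fetch.batch - one get per file to edit
        get /home/abc/def/one.txt
        get /home/abc/def/yet/another/directory/two.txt
        get /home/abc/def/ghi/yet/another/directory/three.txt

        sftp -b fetch.batch user@server && editor one.txt two.txt three.txt

    This assumes the server accepts SFTP with the same credentials FileZilla uses; files edited this way have to be uploaded back afterwards.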

  • Local DNS and Apache Server Configuration Interfering - example.com / www.example.com

    - by nicorellius
    I have a domain for my site: example.com. I am also running local DNS with these lines:

        www  IN CNAME  server.<host_provider>.com.
        dev  IN CNAME  server.<host_provider>.com.

    So www.example.com and dev.example.com go to the production and development sites, respectively, which are hosted by a hosting company. In the Apache configuration for the main site, I'm running a rewrite rule like this:

        RewriteEngine ON
        RewriteCond %{HTTP_HOST} ^example\.com$|!dev\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www\.%{HTTP_HOST}/$1 [R=302,L,NE]

    This rule seems to work: when you are off the network and go to example.com in the browser, you get redirected to www.example.com. The problem is that when I'm on the network and go to example.com, I get an error page saying the page can't be found. No server errors; just "page can't be found", as if the local DNS causes the lookup to stop at that point. I'm also using Nettica for DNS service and have this A record in place:

        example.com    Host (A)    Default    xxx.xx.xxx.xx

    This handles the external DNS, but my problem seems to be related to my internal DNS. For example, inside my network I can reach servers with addresses like this:

        server.example.com
        server1.example.com
        server2.example.com

    These are configured in my local DNS. I'm just not sure how to get past the "empty" subdomain and reach example.com itself. Adding to this, since it might not be clear: if I'm outside the example.com network, on another network like example123.com, then when I go to example.com I'm redirected to www.example.com as expected, i.e. the Apache rewrite rule is working. Thanks in advance for any information.
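
    If the internal zone for example.com only has records for www, dev and the individual servers, then an internal lookup of the bare name example.com returns nothing, which would explain the symptom exactly. A hedged sketch of the missing record, reusing the zone-file style above (the address is the same placeholder as in the Nettica A record):

        @    IN A      xxx.xx.xxx.xx
        www  IN CNAME  server.<host_provider>.com.
        dev  IN CNAME  server.<host_provider>.com.

    In a BIND-style zone file, @ stands for the zone apex, i.e. example.com itself, which is what the "empty" subdomain resolves to.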

  • Redirect local, not internal, requests using SuSEfirewall2 or an iptables rule

    - by James
    I have a server running a web application deployed on Tomcat, sitting in a test network. We're running SuSE 11 SP1 and have some redirection rules for incoming requests. For example, we don't bind port 80 in Tomcat's server.xml file; instead we listen on port 9640 and have a configuration line in SuSEfirewall2 to redirect port 80 to 9640. This is because Tomcat doesn't run as root and can't open port 80. My web application needs to be able to make requests to port 80, since that is the port it will use when deployed. What rule can I add so that local requests also get redirected by iptables? I tried looking at this question: "How do I redirect one port to another on a local computer using iptables?", but the suggestions there didn't help me. I ran tcpdump on eth0 and then connected to my local IP address (not 127.0.0.1, but the actual address), and I didn't see any activity; I did see activity when I connected from an external machine. Then I ran tcpdump on lo, tried to connect again, and this time I saw activity. So this leads me to believe that requests made to my own IP address locally aren't being handled by iptables. For reference, here is what my NAT table looks like now:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:http redir ports 9640
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:xfer redir ports 9640
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:https redir ports 8443

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source    destination
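
    That observation matches how netfilter works: locally generated packets never traverse PREROUTING, only the OUTPUT chain, and the OUTPUT chain above is empty. A sketch of the missing rule, with the server's own address as a placeholder:

        iptables -t nat -A OUTPUT -p tcp -d <server-ip> --dport 80 -j REDIRECT --to-ports 9640

    With a matching REDIRECT in both PREROUTING (for external clients) and OUTPUT (for local processes), port 80 should reach Tomcat on 9640 from either side.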

  • SQL Server Files Local or NAS or SAN?

    - by Jedi Master Spooky
    I have to install a new server with SQL Server 2008. What do you recommend: one server with RAID 10, or the files on a NAS? What about iSCSI - should I use it? What about a SAN? The server has 4 GB of RAM and the database file is about 2 GB. To make myself clear: today the server has no RAID, and I have to implement some kind of strategy so that if something happens my files are safe. So what should I choose: local files, NAS, or SAN? Which option has the best performance, and which is the most secure?

  • What is stored in %Windir%\System32\LogFiles\WMI\RtBackup?

    - by Helge Klein
    I occasionally notice, in Resource Monitor, hard disk activity related to ETL files in the folder C:\Windows\System32\LogFiles\WMI\RtBackup. Which process or service creates these ETL files, and what is their purpose? Resource Monitor shows "System" as the process, which is correct since ETW traces (that is what ETL files are) are written by the kernel - but I am interested in the process that causes the traces to be created. This happens on Windows 7, by the way.
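
    A way to narrow this down, as a sketch: list the active event tracing sessions and match their names against the ETL file names in that folder.

        logman query -ets

    The RtBackup folder appears to hold backing files for real-time ETW sessions, so the session whose name matches the file is the one to investigate further.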

  • Local references to old server name remain after Windows 2003 server rename

    - by imagodei
    I have a standalone Windows 2003 server with Windows SharePoint Services (WSS3) running on it. I had to rename the server, and I had a bunch of problems as a result. Note that the server is not in an AD environment. The most obvious problems were with SharePoint, which didn't work. I was somewhat naive to think it would work in the first place, but OK - I've solved this using steps 1 & 3 from this site (thanks!). Other curious behavior/problems remain. Most disturbing is that SharePoint isn't able to send email notifications to participants. I noticed there are several references to the old server name everywhere I look: in the registry, and in the Windows Internal Database (MICROSOFT##SSEE). I also see instances of the old server name in SharePoint Central Administration - Operations - Servers in Farm, where there are references to these servers:

        oldname.domain.local
        oldname.local

    On one of those servers there is also a Windows SharePoint Services Outgoing E-Mail Service (stopped). Also, when I telnet locally to the mail server (the Simple Mail Transfer Protocol (SMTP) service), I get this response:

        220 oldname.domain.local Microsoft ESMTP MAIL Service, Version: 6.0.3790.4675 ready at Tue, 15 Jun 2010 13:56:19 +0200

    IMO these strange naming problems are also the reason why email notifications from within SharePoint don't work. Can anyone tell me how to correct or replace those references to the old server name? Why does the email service insist on the old name? Of course, I would like to fix this without reinstalling the server. Thanks!
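
    Two hedged pointers rather than a verified fix: WSS 3.0 has a dedicated rename operation that updates the farm's own references to the machine name - a sketch, with the names as placeholders:

        stsadm -o renameserver -oldservername OLDNAME -newservername NEWNAME

    And the SMTP banner does not come from the machine name at all: the IIS 6.0 SMTP virtual server carries its own fully-qualified domain name setting (virtual server Properties > Delivery > Advanced), which a rename leaves untouched, so it keeps announcing oldname.domain.local until changed there.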

  • How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.?

    - by Rob
    I've backed up some audio files in 2 places and added ID3 tags to one backup but not the other. Since time has passed, my memory has faded on whether the backups are actually the same; but now one has ID3 data and the other doesn't, so a basic binary compare will fail and manual inspection will be cumbersome. Is there a tool to compare just the audio data (not the header, ID3) in mp3s, flac files, and other files that use header data such as ID3? I started a thread on Beyond Compare here: http://www.scootersoftware.com/vbulletin/showthread.php?t=7413 - but I would consider other comparison software that does this task.
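
    One hedged approach, assuming ffmpeg is available: hash only the decoded audio stream, which ignores tags and other container metadata entirely, then compare the hashes (file names are placeholders):

        ffmpeg -i backup1/song.mp3 -map 0:a -f md5 -
        ffmpeg -i backup2/song.mp3 -map 0:a -f md5 -

    Identical MD5 lines mean the decoded audio is bit-identical even though the files differ on disk, so a tagged MP3 matches its untagged twin. It will not (and should not) match an MP3 against a FLAC of the same recording, since the decoded samples differ.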

  • Accessing local files through an http:// address

    - by RexE
    I would like to access a folder of mp3 files on my local Windows machine through http:// addresses. For example, typing http://localhost:9999/songs/test.mp3 into my browser would play test.mp3, which sits in a specified folder on my C: drive. What is the very simplest way to do this? (Background: a program I'm using wants me to enter the URLs of these files, but assumes they are remote and accessed over http. It doesn't accept URLs of the form file://C/Users.... So I'd like to give these local files addresses that make them "look" remote.)
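
    Probably the lightest-weight sketch, assuming Python is installed (the folder path is a placeholder): serve the folder with Python's built-in web server.

        cd C:\path\to\songs
        python -m http.server 9999

    (On Python 2 the equivalent is python -m SimpleHTTPServer 9999.) After that, http://localhost:9999/test.mp3 serves the file over plain HTTP; for a /songs/ prefix as in the question, start the server one directory up from a folder named songs.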

  • Manual NAT on Checkpoint (Redirect all http requests to a local web server)

    - by B. Kulakli
    We have a proxy server in our internal network, and I want to redirect all internet http requests to a web server on the local network. It'll be like a network billboard that says "No direct connection is available. Set up your proxy, etc." For example:

        1. A user starts the computer
        2. Opens the browser
        3. Tries to open www.google.com
        4. Should see the web server output on the local network
        5. Tries another web site on the internet
        6. Should see the web server output on the local network
        7. Sets up the proxy
        8. Tries to connect to a web site
        9. The web site should be loaded

    I have added a simple manual NAT rule for address translation in the Checkpoint firewall, but it simply does not work. Here is my address translation rule:

        Source  Destination  Service  T.Source  T.Destination  T.Service
        MY_PC   A_GOOGLE_IP  ALL      ORIGINAL  INT_WEB_SRV    ORIGINAL

    When I ping A_GOOGLE_IP, replies come from INT_WEB_SRV, as I expected. However, when I try to connect to A_GOOGLE_IP from a browser (http://A_GOOGLE_IP), no replies come back: the connection stays in SYN_SENT and times out. When I look at the firewall log of INT_WEB_SRV, I can see that the incoming connection requests from MY_PC are accepted, with no denies. By the way, there is no problem viewing INT_WEB_SRV (http://INT_WEB_SRV) from a browser. My understanding is that my NAT rule on the Checkpoint NGX R60 does not cover the return packets. I definitely need some help.
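
    A hedged thing to check (not verified against NGX R60 specifics): with destination-only translation, INT_WEB_SRV sees MY_PC's real address and replies to it directly; if both hosts sit on the same internal network, the reply bypasses the firewall, arrives with INT_WEB_SRV's address instead of A_GOOGLE_IP as its source, and the client discards it - which kills the TCP handshake while leaving ping looking half-plausible. Translating the source as well forces replies back through the gateway, where the NAT can be undone; roughly, in the same rule format:

        Source  Destination  Service  T.Source       T.Destination  T.Service
        MY_PC   A_GOOGLE_IP  http     HIDE(GW_ADDR)  INT_WEB_SRV    ORIGINAL

    GW_ADDR is a placeholder for the firewall's internal address; the web server log would then show connections from the gateway rather than from MY_PC.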

  • Ubuntu 10.04 bind9 local zone include files and apparmor

    - by Gilgongo
    Rather than putting all my zones in one named.conf.local file, I'd like to have them in groups that I can manage as separate files. So I've tried putting the following into named.conf.local:

        include "/home/zones/group1.conf";
        include "/home/zones/group2.conf";
        include "/home/zones/group3.conf";

    However, when I restart named, I see "permission denied" errors in the logs. Ubuntu uses AppArmor for bind, so I also added the following in /etc/apparmor.d/usr.sbin.named:

        /home/zones/group1.conf r,
        /home/zones/group1.conf r,
        /home/zones/group1.conf r,

    Now, when I restart named, all appears to be well, and the zones are loaded (I think). However, a day or two later I see my secondary name server complaining that the primary is telling it that it's not authoritative for those domains. I then have to put all the domains back into the named.conf.local file again. How can I get bind9 to use include files in this way? I don't know much about AppArmor, so that may or may not be the issue here, but I've used include files this way on Debian without problems.
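
    Note that all three AppArmor lines above name group1.conf. If that is not just a copy-paste slip in the post, named is only ever allowed to read the first include, and the zones in group2 and group3 would silently fail to load - which would fit the secondary's "not authoritative" complaint. The intended profile addition would presumably be:

        /home/zones/group1.conf r,
        /home/zones/group2.conf r,
        /home/zones/group3.conf r,

    followed by reloading the profile (e.g. apparmor_parser -r /etc/apparmor.d/usr.sbin.named) before restarting named.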

  • Why can't gif images copy at a reasonable speed on this dell laptop with XP?

    - by alt234
    I've got a somewhat old Dell Latitude D810. Strangest thing: if I try to copy anything that has GIF files in it, the GIF files take forever - a few minutes per GIF, regardless of size. Everything else copies fine. I notice this when copying files off our network, when copying off multiple external drives, and even when files are copied during an installation process. I'm on Windows XP Pro, Service Pack 3. I've never seen anything like this before. Anyone else?

  • Linux: don't use file system cache under a directory

    - by GetFree
    For a PHP website I'm monitoring, I need to see which files are used each time the browser makes a request. I thought of using find . -type f -amin 1; with that I get all files which were read in the last minute (it's a development server, so only I am using the website). I took care of removing the noatime attribute from the mount point. However, there must be something else preventing the kernel from reading the actual files on disk, because the access time is not updated when I read a file. I guess it must be the file-system cache serving the files from memory. Is there a way to disable file caching under a specific directory? (public_html in my case.) I also read somewhere that there is a nobh mount attribute which apparently disables file caching under that mount point, but I'm not sure.
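
    For the underlying goal - seeing which files each request touches - a sketch that side-steps atime entirely, assuming the inotify-tools package is installed:

        inotifywait -m -r -e open,access public_html/

    inotify events fire on every open()/read() call, whether the data comes from disk or from the page cache, so this reports per-request file usage directly. (There is no per-directory switch for the page cache itself; the blunt global instrument is echo 3 > /proc/sys/vm/drop_caches, run as root, to empty the caches before a test request.)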

  • Transparently cache files from a network drive in Linux

    - by Vadim
    We have a Linux server that reads files from a network drive and processes them. In a common scenario, a user will log in and access the same files over and over again. The size of the files varies, but the larger ones can be around 50+ MB. The files seldom change. I was wondering if it's somehow possible to transparently cache the files. I don't want to (and can't) change the program that reads the files, nor do I control the protocol by which the files are accessed. I just want something to detect that I access a certain path, copy the file locally (if needed) and then read the file from the local drive. I've read about bcache but can't figure out whether it's what I need. Do you have any suggestions? Thanks, Vadim.
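
    If the network drive happens to be NFS, the kernel's FS-Cache facility with the cachefilesd daemon does more or less exactly this: reads are transparently backed by a local on-disk cache, with no change to the reading program. A sketch, with the server and export as placeholders:

        # /etc/cachefilesd.conf names the local cache directory
        service cachefilesd start
        mount -t nfs -o fsc server:/export /mnt/share

    The -o fsc mount option is what opts the NFS client into the cache. (bcache, by contrast, caches one block device with another, so it cannot sit in front of a network file system.)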

  • rsync remote to local automatic backup

    - by Mark Molina
    Because all my work is stored on a remote server, I would like to back up my server automatically, monthly and weekly. My server is running CentOS 5.5, and while searching the web I found a tool named rsync. I did my first update manually, using this command in a terminal:

        sudo rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    I am then prompted for that user's password, and Bob's your uncle. This backs up the necessary files from my remote server to my local device, but does somebody know how I can automate this - like running this script automatically every Sunday?

    EDIT: I forgot to mention that I let DirectAdmin back up the files I need and then copy those files from the remote server to a local server.
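
    A sketch of the usual approach: give the local user passwordless key-based SSH login (cron cannot type a password), then schedule the same rsync in a crontab entry. The placeholders are the same as above:

        ssh-keygen -t rsa                  # accept defaults, empty passphrase
        ssh-copy-id USERNAME@IPADDRESS     # install the public key on the server
        crontab -e                         # then add the line:
        0 3 * * 0 rsync -chavz --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    The 0 3 * * 0 schedule means 03:00 every Sunday; -P was dropped because progress output is pointless in a cron job. A second entry with a day-of-month field (e.g. 0 3 1 * *) covers the monthly run.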

  • IIS7 doesn't monitor changes across symlinks

    - by Matt Hensley
    I've used the mklink utility to create a symlink to a directory of web content. IIS7 doesn't "see" changes to any classic ASP files in this linked directory without an iisreset. I've disabled caching, and file changes are picked up for other, static files (such as .html), but .asp files are ignored.

  • Where and how does Kindle Cloud Reader store downloaded books, on a Windows 7 system?

    - by einpoklum
    I use Firefox, and sometimes Chrome, on Windows 7. Amazon's in-browser Kindle Cloud Reader lets you "download" books for local/offline viewing. Where are these stored, given my OS and browser combination? I've searched the Users subdirectory for my user and could not find a relevant (separate) file in there - specifically not in the Firefox and Chrome profile directories. To clarify: the files are obviously not downloaded as-is, and are stored in some potentially obfuscated format, possibly in the browser's local store and possibly elsewhere. The question is: where and how, exactly? (This came up in an earlier question of mine, but wasn't answered there, since it was not the main focus of that question.)
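
    A hedged pointer rather than a verified answer: Kindle Cloud Reader is an HTML5 application, so the "downloaded" books live inside the browser's site storage (Web SQL / IndexedDB) rather than as separate book files, which is why nothing shows up as an .azw or .mobi. For the browsers in the question, that storage would sit under paths roughly like these:

        %LocalAppData%\Google\Chrome\User Data\Default\databases\
        %LocalAppData%\Google\Chrome\User Data\Default\IndexedDB\
        %AppData%\Mozilla\Firefox\Profiles\<profile>\

    The book content inside is in the site's own obfuscated database format, not directly readable.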

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following, and do a /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet):

        /usr/local/nginx/logs/*.log {
            # rotate the logfile(s) daily
            daily
            # add an extension like YYYYMMDD instead of simply adding a number
            dateext
            # if a log file is missing, go on to the next one without issuing an error message
            missingok
            # keep logfiles for the last 49 days
            rotate 49
            # old versions of log files are compressed with gzip
            compress
            # postpone compression of the previous log file to the next rotation cycle
            delaycompress
            # do not rotate the log if it is empty
            notifempty
            # create mode owner group
            create 644 nginx nginx
            # after the logfile is rotated and nginx.pid exists, send the USR1 signal
            postrotate
                [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
            endscript
        }

    I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone tell me if "dateext" is correct? I want the rotated filename to be something like "access.log-2010-12-04". One more thing: can I run the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.
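
    Two hedged notes on the open questions: plain dateext yields -YYYYMMDD (access.log-20101204); getting dashes into the name takes the dateformat directive, whose availability and allowed % specifiers depend on the logrotate version (check man logrotate). And logrotate has no clock of its own - it normally runs from /etc/cron.daily - so a fixed time of day means invoking it from a crontab entry instead:

        # inside the stanza, for access.log-2010-12-04 style names
        dateformat -%Y-%m-%d

        # crontab entry: run logrotate daily at 23:00
        0 23 * * * /usr/sbin/logrotate /etc/logrotate.conf

    (Restarting syslog is not needed, by the way: logrotate is a cron job, not a daemon.)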

  • Moving files with batch files from one PC to a server, then to another PC - worried about disk corruption

    - by AnchientAnt
    I use a scheduled task that calls a batch file, which calls more batch files, to move about three files from a PC to a server, and then on to multiple other PCs. It all happens very quickly, as they are small files. Are there any pitfalls in how fast these transfers happen? I'm just mildly concerned about somehow causing disk corruption. I use logic like:

        1. Call MapToPc
           if files exist, then move files to the folder on the server; disconnect
        2. Call SendToPCs
           if files exist (the files just moved to the server), then MapToPCs; move all files; disconnect

    All of this happens in about 2 seconds or less.

    EDIT: this is on Windows 7, Server 2003 and XP, respectively.
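
    For what it's worth, speed by itself does not corrupt anything; the realistic risk is two tasks touching the same file at the same time (e.g. the second hop starting while the first is mid-move). A hedged sketch of one hop with that guarded - the share and paths are placeholders, only the if exist / move pattern comes from the question:

        @echo off
        net use Z: \\server\dropfolder
        rem move only when the source files are actually there; /Y overwrites silently
        if exist C:\outbox\*.txt move /Y C:\outbox\*.txt Z:\
        net use Z: /delete

    Since a move across drives is a copy-then-delete, staggering the scheduled tasks by a few seconds is the simplest way to keep the hops from overlapping.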

  • Recover deleted files on windows 2008 file server

    - by aniga
    We have recently been hit by a weird virus which marked all files and folders as system files/folders, and also hid all files and folders except some weird ones it created, including:

        ..exe
        porn.exe
        secret.exe
        password.exe

    We have managed to restore the files with the attrib command, unhiding them and unmarking them as system files. However, we have noticed that we are missing some 4 to 5 folders, of which (with my luck) 2 belong to the two most important clients we have. I am not sure whether these folders were deleted by the worm/virus or by my colleagues, who are not owning up to it, but the files are now gone. Worst of all, we have no backup whatsoever. (Yes, I know we should not have let that happen; it is a lesson learned, and since last night we have two backup systems, one to an external device and one in the cloud - but I doubt any of that will help us now.) We have 1 Windows 2008 file server and 4 client computers running Windows 7. I would be grateful if anyone can help us recover from this disaster, which could potentially put us out of business.
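
    Before reaching for undelete tools, it is worth checking whether Shadow Copies were ever enabled on that volume - Server 2008 supports them, and if snapshots exist the folders can be pulled straight from the Previous Versions tab. A quick sketch:

        vssadmin list shadows

    If snapshots are listed, right-click the parent folder, Properties, Previous Versions, and restore from a snapshot that predates the infection. If not, stop writing to that volume immediately and run a file-recovery tool from another disk, since deleted data only survives until it is overwritten.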

  • Choosing local versus public domain name for Active Directory

    - by DSO
    What are the pros and cons of choosing a local domain name such as mycompany.local versus a publicly registered domain name such as mycompany.com (assuming that your org has registered the public name)? When would you choose one over the other?

    UPDATE: Thanks to Zoredache and Jay for pointing me to this question, which had the most useful responses. That also led me to find this Microsoft TechNet article, which states:

        It is best to use DNS names that are registered with an Internet authority in the Active Directory namespace. Only registered names are guaranteed to be globally unique. If another organization later registers the same DNS domain name, or if your organization merges with, acquires, or is acquired by another company that uses the same DNS names, then the two infrastructures cannot interact with one another. Note: Using single label names or unregistered suffixes, such as .local, is not recommended.

    Combining this with mrdenny's advice, I think the right approach is to use either:

        - a registered domain name that will never be used publicly (e.g. mycompany.org, mycompany.info, etc.), or
        - a subdomain of an existing public domain name which will never be used publicly (e.g. corp.mycompany.com).

    The "never used publicly" part is a business decision, so it's probably best to get sign-off from those in the company authorized to reserve domain names and subdomains. E.g. you don't want to use a registered name or subdomain that the marketing dept later wants to use for some public marketing campaign.

  • Allied Telesis router: IP filtering for the LOCAL interface

    - by syneticon-dj
    Given an Allied Telesis router with AlliedWare OS (2.9.1), I would like to disable access to all management services of the router except from a number of subnets (or, alternatively, to have what other manufacturers' switch and router models call a "management VLAN"). What I have tried so far:

        - Creating a new VLAN and an appropriate IP interface, setting the LOCAL IP into this subnet, creating an IP filter for the IP interface and specifying my exclusion subnets: it simply does not work as intended, as I can still access the LOCAL IP from any of the other VLAN interfaces - the traffic apparently does not pass my defined filter set at all.
        - Creating a new IP filter set and binding it to the LOCAL IP interface: this seems not to affect any kind of traffic at all; the counters for the filter set remain at zero packets.
        - Setting the Remote Security Officer Level IP address range: this only restricts the ability of a user with the Security Officer privilege level to log in from any but the specified address ranges/subnets. Unfortunately, it does not limit service availability (and thus DoS exposure) or the ability to log in as a less privileged user (e.g. a "manager").
        - Calling technical support: unfortunately no solution so far.

    What I have not tried: creating a filter set for each and every IP interface defined on the router and excluding access to the router's management IP. I would like to avoid the overhead induced by IP filters, as the router is already CPU-constrained at times; setting up filters on every IP interface would mean that each and every packet would have to pass the filters, consuming CPU cycles. If by any means possible, I would like to find a different solution.

  • Can't access Apache from outside my local network

    - by valter
    UPDATE: Now, when I type my external IP like xxx.xxx.xxx.xxx:8079, I can access the XAMPP default page. But the strange thing is that when someone else, from outside my network, tries to access it using the same IP, it doesn't work. I think it should, because it's the external IP. I'm going crazy.

    I have tried for hours to access the XAMPP default page from outside my local network. My ISP blocks ports 80 and 8080, so I changed Apache to listen on port 8079:

        Listen 8079

    My local computer's IP is 10.1.1.2, and I can access the web server from any computer on my local network by typing http://10.1.1.2:8079. I also opened port 8079 on my modem (I think I did it right). When Apache is running on my computer, if I test port 8079 at http://canyouseeme.org/ I get the message "Success: I can see your service on xxx.xxx.xxx.xxx on port (8079). Your ISP is not blocking port 8079." If Apache is not running, I get "Error: I could not see your service on xxx.xxx.xxx.xxx on port (8079). Reason: Connection refused." So it's clear that port 8079 is open. But when I type xxx.xxx.xxx.xxx:8079 into Google Chrome, for example, I get "Oops! Google Chrome could not connect to xxx.xxx.xxx.xxx:8079". What can I do to solve this and let Apache serve the pages? I don't know what else I should configure. Please help me. Thanks.
