Search Results

Search found 2130 results on 86 pages for 'serve u'.

Page 63/86 | < Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >

  • overriding default scheduler for blkio requests in cgroups

    - by Aamir Mushtaq
    I am trying to optimize a set of servers that have to reside on a single machine, i.e. I can have multiple application servers, a DB server and of course a Samba server as well in the same instance. I looked into the several optimization options available to me and, in my quest, did my tuning of the network stack. For the CPU, memory and blkio tweaks I am using cgroups. The problem I am facing is that, given the nature of the applications I am running, the CFQ scheduler implemented for the blkio subsystem is not optimal for performance; a deadline scheduler would serve my purpose better. My question is whether it is possible to change the scheduler for blkio to deadline in the kernel compilation itself, and whether that will be reflected in my usage of cgroup hierarchies, since when running the cgconf service a new fs is mounted and I don't want it to revert to the CFQ scheduler. I also welcome any suggestions that would give me more control over my resources.
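
    For reference, the I/O scheduler can usually be changed per block device at runtime, or made the default at boot, without recompiling the kernel. A minimal sketch (the device name sda is an assumption; repeat for each disk):

        # list the available schedulers; the active one is shown in brackets
        cat /sys/block/sda/queue/scheduler

        # switch this disk to the deadline scheduler at runtime
        echo deadline > /sys/block/sda/queue/scheduler

        # or make deadline the default for all disks by booting with the
        # kernel parameter: elevator=deadline

    Note that the blkio proportional-weight controller is tied to CFQ, so moving to deadline trades away some cgroup blkio features; whether that trade is acceptable depends on the workload.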

    Read the article

  • Very slow connection to xserve via afp or smb

    - by Mhoffman13
    Help. File transfer and connection speed to our Xserve are painfully slow from newly purchased iMacs. The Xserve is only used as a file server and is running 10.4.11. The problem seems to happen only on brand new iMacs running 10.6.3: when connected over either AFP or SMB, copying files is many times slower than usual. Other machines on the network running either 10.4 or 10.5 have normal connection speeds. To try to rule out OS incompatibility I connected the new iMac running 10.6 to another computer running 10.4 over the network; the file transfer speed was as fast as normal. So it seems the problem lies with the Xserve (maybe). The AFP logs, both access and error, don't show anything unusual. One thing that did look different: when the iMac was connected to the Xserve, the user had its ID listed as its IP address, whereas the other connected machines had the ID broadcasthost. I also noticed that when connected from the new iMac I can only see one of the mirrors; when any other computer connects, both mirrors are shown. I tried a restart of the Xserve but the problem persists. Thanks in advance for any advice.

    Read the article

  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content; no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know if the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used to serve dynamic content needs some good tweaking for optimized performance, and given that nothing beats Apache when it comes to documentation, resources and support on the web, I think I will go with Apache for dynamic content. That apart, the confusion begins when it comes to choosing web servers for static content (including streaming videos). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux stuff here). So, which to choose? (1) I know one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd). (2) Lighttpd's development has evidently gotten slower than it was a while ago. (3) The documentation is pretty good for all three, and hopefully so are the resources (knowledge of these among the users of the Stack Overflow/Server Fault sites, the web, etc.). Precisely, and noting points (2) and (3), if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify this: is Cherokee just as fast (MB/s), performant (connections/s) and reliable (think downtime/restarting the server) as Nginx for serving static content and load balancing, for small, medium to large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.) If the answer to that is a big "hell, yes!", then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee as it has a graphical admin user interface plus really good documentation. Yes? I could be wrong, I could be right. I put down what I know so that you can offer the most relevant advice. Pardon if anything I've said is offensive.

    Read the article

  • Connect to Apache times out randomly

    - by Amadan
    We are trying to set up an Apache server on a remote machine, but we are seeing strange behaviour. Checking with telnet remote.machine 80, one of these things happens at random:
    - connects and serves content normally (no delay)
    - connects after a long pause
    - connects normally, then times out without a response
    - times out on connect
    Once connected, the request seems to be processed normally. These things do not occur if I connect from that machine directly to localhost 80. The Apache is dedicated, as is the server it runs on (it runs only this one application; no-one else is using it for anything else). I am not an administrator of the remote site and I do not know the network architecture over there, but apparently it is firewalled (the HTTP port is open, the SSH port is IP-restricted, most others are closed). If there were any one pattern I might have some ideas, but this variety of symptoms baffles me. Any ideas as to what could be causing this? Apache is 2.2; the server version is: Linux version 2.6.9-22.ELsmp ([email protected]) (gcc version 3.4.4 20050721 (Red Hat 3.4.4-2)) #1 SMP Mon Sep 19 18:32:14 EDT 2005

    Read the article

  • Configuring network route between two routers on home network

    - by Paul
    I have a home network. The main router connected to the internet (with wifi) is a Netopia box; connected to it is a Linksys router. Everything currently works: I can connect via the wireless network and get to the internet, and machines connected to the Linksys can connect with each other and reach the internet. Both routers are configured to serve addresses via DHCP (Netopia: 192.168.1.1-192.168.1.99, Linksys: 192.168.0.1-192.168.0.100). Here's how they are connected:
        Internet <-> Netopia w/wifi (192.168.1.254) <-> Linksys (192.168.0.1)
    I decided I really need to allow wireless connections to also communicate with machines behind the Linksys router. Currently the Linksys is configured to obtain an IP address via DHCP. I thought this would be straightforward. I configured the Linksys to have a static IP address:
        IP: 192.168.1.100
        Mask: 255.255.255.0
        GW: 192.168.1.254
    Then I configured a static route on the Netopia:
        Network: 192.168.0.0
        Mask: 255.255.255.0
        GW: 192.168.1.100
    So it should now look like this:
        Internet <-> Netopia w/wifi (192.168.1.254) <-> (192.168.1.100) Linksys (192.168.0.1)
    I reset both routers. I cannot ping the Netopia (192.168.1.254) from inside the Linksys network, and if I attempt to ping 192.168.0.1 from a wifi connection I get a "Destination host not available" error. Obviously I'm missing something, but I'm not sure where. Any ideas on what I'm missing?

    Read the article

  • Simplest DNS solution for remote offices

    - by dunxd
    I look after a bunch of remote offices that connect via VPN; a Cisco ASA 5505 in each office acts as the firewall and VPN endpoint. Beyond that we keep things as simple as possible in the offices to minimise the support burden: we don't have any kind of server except in offices large enough to justify having someone dedicated to IT. Basically there is the ASA, some computers, a network printer and a switch. One of the problems I am seeing in a lot of offices is that DNS requests looking up hosts inside our network often fail. I'm assuming timeouts, due to the offices' internet connections (they are all in developing-world countries) having some sub-optimal qualities (e.g. high latency caused by VSAT segments, or packet loss). The obvious solution is to have some sort of local DNS service that can answer local requests, so I think it would need to do zone transfers from our Microsoft Windows 2008 R2 DNS servers at HQ. However, simply installing Windows servers in each office is both expensive and creates a support burden. This got me thinking about pfSense/m0n0wall on embedded devices: those can act as a DNS server, and could be configured at HQ and sent out as just something that needs to be plugged into the network and can then be forgotten about by the staff locally. Maybe there are some alternatives to the ASA 5505 that include some DNS functionality. Has anyone here dealt with this problem, either using some kind of embedded device, or with some other solution? Any gotchas or reasons to avoid what I have suggested?
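
    For reference, the configuration such a lightweight box would carry is essentially a slave (secondary) zone that transfers from the HQ servers and answers locally. A minimal BIND-style sketch, with the zone name and HQ server address made up for illustration:

        zone "corp.example.local" {
            type slave;
            masters { 10.0.0.10; };          // HQ Windows 2008 R2 DNS server
            file "slaves/corp.example.local";
        };

    dnsmasq-class forwarders on small appliances take a different route (conditional forwarding of the internal domain to HQ plus local caching), which avoids zone transfers entirely but still depends on the WAN link being up for uncached names.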

    Read the article

  • Apache proxy: Why is one vhost returning Forbidden while the other one works?

    - by Stefan Majewsky
    I have a Java application that needs to talk to another intranet website using HTTPS in both directions. After fighting with Java's SSL implementations for some time, I gave up on that, and have now set up an Apache that's supposed to act as a bidirectional reverse proxy:
        external app ---(HTTPS request)---> Apache ---(local HTTP request)---> Java app
    This direction works just fine; however, the other direction does not:
        Java app ---(local HTTP request)---> Apache ---(HTTPS request)---> external app
    This is the configuration for the vhost implementing the second proxy:
        Listen 127.0.0.1:8081
        <VirtualHost appgateway:8081>
            ServerName appgateway.local
            SSLProxyEngine on
            ProxyPass / https://externalapp.corp:443/
            ProxyPassReverse / https://externalapp.corp:443/
            ProxyRequests Off
            AllowEncodedSlashes On
            # we do not need to apply any more restrictions here, because we listened on
            # local connections only in the first place (see the Listen directive above)
            <Proxy https://externalapp.corp:443/*>
                Order deny,allow
                Allow from all
            </Proxy>
        </VirtualHost>
    A curl http://127.0.0.1:8081/ should serve the equivalent of https://externalapp.corp, but instead results in 403 Forbidden, with the following message in the Apache error log:
        [Wed Jun 04 08:57:19 2014] [error] [client 127.0.0.1] Directory index forbidden by Options directive: /srv/www/htdocs/
    This message completely puzzles me: yes, I have not set up any permissions on the DocumentRoot of this vhost, but everything works fine for the other proxy direction, where I haven't either. For reference, here's the other vhost:
        Listen this_vm_hostname:443
        <VirtualHost javaapp:443>
            ServerName javaapp.corp
            SSLEngine on
            SSLProxyEngine on
            # not shown: SSLCipherSuite, SSLCertificateFile, SSLCertificateKeyFile
            SSLOptions +StdEnvVars
            ProxyPass / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
            ProxyRequests Off
            AllowEncodedSlashes On
            # Local reverse proxy authorization override
            <Proxy http://localhost:8080/*>
                Order deny,allow
                Allow from all
            </Proxy>
        </VirtualHost>

    Read the article

  • SELinux - Allow multiple services access to same /home/dir

    - by Mike Purcell
    I currently have SELinux enabled and have been able to configure Apache to allow access to /home/src/web with a chcon command granting the httpd_sys_content_t type. But now I am trying to serve the rsyslogd.conf file from the same directory, and every time I start rsyslogd I see an entry in my audit log saying that rsyslogd was denied access. My question is: is it possible to grant two applications the ability to access the same directory while still keeping SELinux enabled? Current perms on /home/src:
        drwxr-xr-x. src src unconfined_u:object_r:httpd_sys_content_t:s0 src
    Audit log message:
        type=AVC msg=audit(1349113476.272:1154): avc: denied { search } for pid=9975 comm="rsyslogd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349113476.272:1154): arch=c000003e syscall=2 success=no exit=-13 a0=7f9ef0c027f5 a1=0 a2=1b6 a3=0 items=0 ppid=9974 pid=9975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="rsyslogd" exe="/sbin/rsyslogd" subj=unconfined_u:system_r:syslogd_t:s0 key=(null)
    -- Edit --
    I came across a post describing roughly what I am trying to accomplish. However, when I viewed the list of allowed sebool params, the only one relating to syslog was syslogd_disable_trans (SELinux Service Protection). It seems like I can keep the current SELinux type on the /home/src/ dir but set the syslogd_disable_trans bool to false. I wonder if there is a better approach?
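
    One approach sometimes suggested for directories that several confined services must read is to label them with a shared type such as public_content_t, and to make the label persistent with semanage instead of a one-off chcon. This is a sketch only; whether syslogd_t may read that type under a given policy still has to be confirmed against the audit log:

        # record a persistent file-context rule for the tree
        semanage fcontext -a -t public_content_t "/home/src(/.*)?"
        # relabel the existing files according to the recorded rule
        restorecon -Rv /home/src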

    Read the article

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping so much as looking for some guidance on good-idea/bad-idea strategies; I'm sure I'm not in the "best practices" budget range. Currently I have 3 Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server serving about 6TB. One is the primary, the other two are rsync targets for backup. The 6TB is stored on their respective local storage disks as an LVM of 3x2TB virtual disks. The file server VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows VMs, and stuff like that. Several of those VMs' disks reside on a QNAP NAS to play with live migration. These VMs are often clients of the primary file server (all the mail, web content and user files are stored on the file server, not on the mail, web and Samba VMs). This all works fine, and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the file server, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, put the VM images on a SAN or NAS, and have 2 or more NAS boxes act as NFS-serving file appliances? Or a hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a file server with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat; we've already skinned it a few ways. What makes sense?

    Read the article

  • Apache DAV at `/` with normal hosting at `/foo` - how?

    - by mandrake
    Should I not be able to have a configuration where I serve SVN repos with SVNParentPath at <Location /> and then override DAV and host normal files using another location <Location /foo>? I wish to host my XSLT files on the same subdomain and still host repos at root. Of course, if I was to have a repo called foo, that would not be accessible, and that's ok.
        <VirtualHost *:80>
            ...
            # Host XSLT files here
            <Location /foo>
                DAV Off
            </Location>
            # Host my repos relative to root, such as /my_repo/
            <Location />
                DAV svn
                SVNParentPath "myrepos"
                SVNListParentPath on
                SVNIndexXSLT "/foo/my.xsl"
                ...
            </Location>
        </VirtualHost>
    But DAV SVN still looks for a repo:
        <?xml version="1.0" encoding="utf-8"?>
        <D:error xmlns:D="DAV:" xmlns:m="http://apache.org/dav/xmlns" xmlns:C="svn:">
            <C:error/>
            <m:human-readable errcode="720003">
                Could not open the requested SVN filesystem
            </m:human-readable>
        </D:error>

    Read the article

  • How to host multiple FLEX applications in IIS7

    - by Devtron
    Hello, I manually deploy a FLEX application to my web server (IIS 7). There are two virtual directories: 1) Default, 2) myFlexApp1. myFlexApp1 is where my working FLEX application resides. I now need to deploy a different FLEX application (let's call it myFlexApp2) to the same web server. I set up a virtual directory for [myFlexApp2] and it complains about the "bindings" using port 80, which is already used by [myFlexApp1]. I have tried to give them separate host names in their binding properties, for example myFlexApp1.mydomain.com and myFlexApp2.mydomain.com, but I can never get [myFlexApp2] to show from an external browser. I was able to get only one or the other to display, but never could run both. Here is what I need:
        myFlexApp1.mydomain.com -- myFlexApp1
        calendar.mydomain.com   -- myFlexApp2
        test.mydomain.com       -- myFlexApp1
    where test.mydomain.com is the default URL. Is this possible? What am I doing wrong? I even tried to edit the hosts file in [C:\Windows\System32\drivers\etc] but that didn't work either. How can I serve up two FLEX applications on IIS 7? It shouldn't be this hard!
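
    For reference, several sites can share port 80 on IIS 7 as long as each has its own host-header binding; a sketch of adding those bindings from an elevated command prompt with appcmd (the site names and host names below are taken from the question and may need adjusting to the actual site names in IIS Manager):

        %windir%\system32\inetsrv\appcmd set site "myFlexApp1" /+bindings.[protocol='http',bindingInformation='*:80:test.mydomain.com']
        %windir%\system32\inetsrv\appcmd set site "myFlexApp1" /+bindings.[protocol='http',bindingInformation='*:80:myflexapp1.mydomain.com']
        %windir%\system32\inetsrv\appcmd set site "myFlexApp2" /+bindings.[protocol='http',bindingInformation='*:80:calendar.mydomain.com']

    The external DNS records for those host names still have to point at the server; editing the local hosts file only affects name resolution on the machine where it is edited.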

    Read the article

  • Can varnish cache files without specific extension or residing in specific directory

    - by pataroulis
    I have a Varnish installation to cache (MANY) images that my service serves. It is about 200 images of around 4k per second, and Varnish happily serves them according to the following rule:
        if (req.request == "GET" && req.url ~ "\.(css|gif|jpg|jpeg|bmp|png|ico|img|tga|wmf)$") {
            remove req.http.cookie;
            return(lookup);
        }
    Now, the thing is that I recently added another service on the same server that creates thumbnails to serve, but it does not add a specific extension. The files follow this filename pattern:
        http://www.example.com/thumbnails/date-of-thumbnail/xxxxxxxxx.xx
    where xx are numbers, so xxxxxxxxx.xx could be 6482364283.73 (two numbers at the end; actually this is the timestamp, so I can keep extra info in the filename). That has the side effect that Varnish does not cache them and I see them constantly being served by Apache itself. Even though I can change the format from now on to create thumbs ending in .jpg, is there a way to change the VCL file of my Varnish daemon to either cache everything under a directory (the thumbnails directory) or everything with two numbers as its extension? Let me know if I can provide any additional info! Thanks!
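
    For illustration, a rule scoped to the thumbnails path instead of to file extensions could look like this in the same VCL dialect as the rule above (the /thumbnails/ prefix and the two-digit ending are taken from the question):

        if (req.request == "GET" && (req.url ~ "^/thumbnails/" || req.url ~ "\.[0-9][0-9]$")) {
            remove req.http.cookie;
            return(lookup);
        }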

    Read the article

  • Links break in IE9 when using Wordpress plugins in non Wordpress Page

    - by mouli
    I have a site that uses SEF URLs and htaccess RewriteRules to serve up the pages. This has worked fine for several years, until the arrival of IE9. Now it appears that the links are not being rewritten and the site is dead in the water. I have tried different compatibility modes, to no avail, and I've played with the RewriteRules over and over, tried different doctypes and a few other browser settings. I agree that it cannot, in theory, be a browser-specific problem if the problem is with the htaccess file, but this site works in IE8, Firefox and Chrome. I have run the RewriteRule through a validator and it looks fine. Any ideas would be appreciated as I am running out of them. The site is www.marlboroughsounds.co.nz, a sample link is http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw, and the RewriteRule that's not working looks like this:
        RewriteRule ^walking/.*/([a-z0-9_]*)/?$ /walking.php?act_code=$1 [L]
    The link fails and it serves up a browser 404 page, not even the custom 404 I have for the site. Any ideas would be much appreciated as I am stumped.

    Read the article

  • Process PHP files from a network share in a vmware virtual machine

    - by nhinkle
    As a testing environment, I have set up a VMware virtual machine running Windows Server 2008 R2, with Apache and PHP installed (as part of the XAMPP package). I am doing the development outside of the VM, and so want Apache to serve PHP files from a VM shared folder (which appears as a network share in the VM). I have done this by creating an NTFS symbolic link in Apache's htdocs directory. I can access this directory from the browser, and plain-text files are readable. However, PHP fails to process files, instead returning the following error:
        Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0
        Fatal error: Unknown: Failed opening required 'C:/xampplite/htdocs/path/to/file.php' (include_path='.;C:\xampplite\php\PEAR') in Unknown on line 0
    It appears to be a permissions issue: PHP doesn't seem to be allowed to read the file in order to process it, although Apache has no problem opening files in the directory. I cannot figure out how to give PHP the necessary permissions to process the file. Does anybody know of a way to make this work, or else another solution for getting the files into the VM automatically while I develop on the host machine?
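
    One workaround occasionally suggested for this kind of setup is to skip the NTFS symlink and point Apache straight at the share's UNC path, so PHP receives a path it can open directly. This is only a sketch; //host/share/htdocs is a placeholder for the actual shared-folder UNC, and the account Apache runs under still needs read access to the share:

        # httpd.conf sketch (Apache 2.2 access-control syntax)
        Alias /dev "//host/share/htdocs"
        <Directory "//host/share/htdocs">
            Order allow,deny
            Allow from all
        </Directory>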

    Read the article

  • IIS7 can't read web.config on shared Mac filesystem

    - by RobG
    I'm running a virtualized Windows 2008 Server under VirtualBox on my Mac, and just finished setting it up today. On it I have SQL Server 2008, IIS and ColdFusion 9. I want to serve websites from my Mac filesystem (for development purposes), so I created a new website in IIS and pointed it at the appropriate path using a UNC path, \\vboxsvr\rob\Sites\testsite, which contains the ColdFusion code and a web.config file. When I attempt to modify the file at all, or view the site in a web browser, I get an error:
        HTTP 500.19 - Internal Server Error
        The requested page cannot be accessed because the related configuration data for the page is invalid.
    I did some Googling and found several similar problems, but nothing exactly like what I have. The closest one seemed to point at permissions, so I recreated the site and set it up to allow the Administrator (in Windows) to access the content. That didn't help. I can read and modify the files just fine from within Windows, but IIS itself can't seem to do it. What do I need to do to fix this? Thanks!

    Read the article

  • Requests are making it to my app server, but not into node.js -- why?

    - by Zane Claes
    I detailed in a question on Stack Overflow how some random requests are not making it from the client to my Node.js app server, resulting in a gateway timeout. In summary, identical requests are, at random, not even making it far enough to trigger a console.log() in my first line of Express middleware. I need to narrow down the problem to find out WHERE the traffic is being lost, and it was suggested that I try a packet sniffer on my app servers. Here's my setup:
        2x load balancers (m1.large)
        2x Node.js servers (also m1.large)
    Here's what's interesting/unusual: the Node.js servers started as PHP servers with an Apache stack and continue to serve PHP files for my domain (streamified.me). However, I use a little httpd.conf magic on the app servers so that requests to api.streamified.me get routed over port 8888 to the Node.js server:
        RewriteCond %{HTTP_HOST} ^api.streamified.me
        RewriteRule ^(.*) http://localhost:8888$1 [P]
    So the request hits the load balancer -> goes to an app server -> gets routed to port 8888 if it's intended for the API -> gets handled by Node.js. In the same httpd.conf file I turned on RewriteLogLevel 5, and then created a simple PHP+cURL script on my localhost to hit api.streamified.me with a random URL (which should cause Node.js to trigger a simple "not found" response) until it resulted in a gateway timeout. Here, you can see that it has happened, and the rewrite log shows that the request was definitely received by the app server and forwarded to port 8888... but it was never received by Node.js (or, at least, the first line of code in the first line of middleware never gets it...)
        Image link: http://i.stack.imgur.com/3OQxS.png
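
    Since a packet sniffer was the suggestion, a minimal capture on the app server itself would show whether the proxied request ever reaches port 8888. The interface names are assumptions (the loopback interface matters here because the RewriteRule targets localhost):

        # watch the traffic Apache proxies to the Node.js port
        sudo tcpdump -i lo -nn port 8888
        # in a second terminal, watch what arrives from the load balancer
        sudo tcpdump -i eth0 -nn port 80

    If the request shows up on port 80 but never on port 8888, the loss is inside Apache/mod_proxy; if it shows up on both, the problem is on the Node.js side.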

    Read the article

  • How do I prevent my ASP .NET site from continually prompting for user credentials?

    - by gilles27
    I'm trying to get an ASP.NET website up and running on IIS6. The site will run in its own application pool and uses Windows authentication, with anonymous access turned off. When I run the app pool under NETWORK SERVICE, everything works fine. However, we need the app pool to run under a different account, because this account needs some extra privileges (we are printing Word documents). This new account is a member of the local Users group and the IIS_WPG group, and has also been granted the "Log on as a service" right. When I browse to the site I am prompted for credentials, not once but several times, and when the page finally loads it looks wrong because the style sheets have not been applied. My suspicion is that I am being prompted once for each file the browser requests (e.g. all the images, styles and script files), and that for some reason the website is unable to validate those credentials in order to serve the files back. If I allow anonymous access the page loads fine; we don't want to allow it, but I mention it in case it offers any further clues. My theory is that perhaps the account the app pool runs under needs permissions to validate domain credentials? If that is so, how do I enable this?
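
    A common culprit with a custom app pool identity plus Integrated Windows authentication is Kerberos: the HTTP service principal name has to be registered against the account the pool runs as, which can otherwise make requests fail authentication and re-prompt. A sketch of that registration, with placeholder host and account names:

        setspn -A HTTP/webserver MYDOMAIN\AppPoolAccount
        setspn -A HTTP/webserver.mydomain.local MYDOMAIN\AppPoolAccount

    If Kerberos can't be made to work, forcing NTLM for the site is the other route people usually take; either way, checking which provider is actually being negotiated is a good first step.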

    Read the article

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the URL http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: now there are two possible URLs to the same resource:
        http://www.example.com/foo/bar/ (slash URL)
        http://www.example.com/foo/bar/index.html (index.html URL)
    By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in Nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this:
        location ~ /index\.html$ {
            return 404;
        }
    But it seems that Nginx does some internal rewrite of the slash URL to the index.html URL, and then matches this location, so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just a redirect to the slash URL.
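
    A hedged sketch of one way around that internal rewrite: test the URI the client actually sent ($request_uri is not affected by internal rewrites such as the index redirect), so only explicit .../index.html requests are rejected. Untested here; it would need merging into the real server block:

        location / {
            if ($request_uri ~ "/index\.html$") {
                return 404;
            }
            index index.html;
        }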

    Read the article

  • Intermittently uncommunicative subnets

    - by mhd
    Last week proved me a veritable Cassandra: I've always said that it's a bad idea to have only one firewall/router, without a backup or failover. And thus our Cisco PIX went haywire, refusing to route properly. And of course, the only one available here on short notice is me, and while I'm quite grounded in Linux, I'm really a developer, not a sysadmin (the fact that this hit me on sysadmin appreciation day is a bit ironic). Anyway, this weekend I tried to hack up a temporary solution: I used an old server with enough NICs (two built-in, four on a card) to serve as a gateway and firewall. Due to some problems with the RAID controller, I got only two router distros running, and between Untangle and eBox I decided on the latter. Now everything is quite okay. I've got all the different subnets we have here (all with separate switches) talking to each other and even to the internet (Cisco 2800 router, T1 lines). But from time to time (at 20-60 minute intervals) I get a total routing failure: our main office subnet can't talk to our server subnet and can't connect to the internet. This is not the end of a gradual slowdown; either everything's working perfectly or I get a total lack of communication, for about two minutes each time. Now I'm a bit at wits' end about what to check. At least with the default eBox setup, nothing in /var/log shows anything weird, and it doesn't exactly have lots of built-in monitoring tools. So I'm hoping someone here could give me some pointers about what to look out for. I did change the ethernet cable from the office switch to the firewall, with no results. I might change switches, although within the switch it seems to work OK enough. Edit: I'm not sure whether this is the sole cause of the problem, but after I noticed a few DHCP entries just before the last drop of connectivity, I tried to reproduce that. And alas, whenever I renew a DHCP lease, I can't access other subnets anymore. Running ISC DHCPD 3.0.6.

    Read the article

  • Configure linux machine as bridge/switch and end device

    - by leemes
    At my home I have two desktop PCs in two rooms. The router / DSL modem is in one of these rooms. Now I want to configure a home server (having 2 LAN ports, running 24/7) in the corridor between the two rooms, using only one LAN cable at each door. This gives the following physical configuration:
        [collapsed ASCII diagram: PC1 --(door cable)-- Server --(door cable)-- FritzBox DSL router, with PC2 in the room where the FritzBox is]
    As just said, the server has 2 network interfaces and is running Ubuntu. What I need now is a network configuration which enables both the server and PC1 to connect to the router. I think the server needs to act as a bridge or switch. Currently, all computers are configured with static IP addresses. If I'm understanding it correctly, a bridge/switch doesn't have its own IP address, but since the server also needs to act as an end device in its own right, it needs to have one. My first question is: do I have to configure both interfaces separately, giving both the same static IP address? My next question is: how do I bridge the two physical networks into one? I have a basic understanding of bridges and switches (though I'm confused again and again), but I don't know how to configure this in software; I only know that it's possible :) The third question is: is it possible to configure this in such a way that network packets between PC1 and the router only go through hardware, or only consume little CPU on the server? Can you help me? Thanks in advance!
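
    For reference, the usual way to do this on Ubuntu is to enslave both NICs to a single software bridge and give the bridge itself the server's static address, so the box forwards frames between the two rooms and is still reachable as a host. A sketch for /etc/network/interfaces, assuming the bridge-utils package, interface names eth0/eth1 and the 192.168.178.x addressing a FritzBox typically uses; adjust to the real network:

        auto br0
        iface br0 inet static
            address 192.168.178.10
            netmask 255.255.255.0
            gateway 192.168.178.1
            bridge_ports eth0 eth1

    One note on the third question: a Linux bridge forwards frames in the kernel, not in hardware, so traffic between PC1 and the router does pass through the server's CPU; at home-network rates the load is normally negligible.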

    Read the article

  • a brand new FS based on a database without using fuse

    - by Devrim
    Hi all. To serve millions of files out of a single directory, to be able to connect to a drive from hundreds of endpoints, and for some other reasons (to avoid GlusterFS/NFS/all FS-based networking solutions), I want to evaluate the possibility of making a filesystem that's based on MongoDB (or any other database). Basically it works like a FUSE filesystem: every single file is kept in Mongo GridFS. In theory, I do
        mount mongodbfs /mountPoint mongodb://localhost
    and then when I say touch /mountPoint/test.txt this file is inserted into MongoDB. This FS will also store uid/gid and perms with the file; we can throw hundreds of servers at it, and no useradd will be necessary. I'm not thinking of including all the features of a FS, just the ones we need. My question is: how do I start my quest in finding resources, books, links, people, developers who'd help me implement this? At least a proof of concept. Is it feasible? What should I expect as a timeline for such an undertaking? Please only think about a gazillion small files and folders.

    Read the article

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of a HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this figure published for any HDDs. Is it sometimes called something else? Take this Seagate HDD for instance: nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD". Is that the same thing? Just so you know where I'm getting this info (book: "Information Storage and Management: Storing, Managing..."):
        Consider an example with the following specifications provided for a disk:
        - The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms.
        - Disk rotation speed of 15,000 revolutions per minute, or 250 revolutions per second, from which the rotational latency (L) can be determined; this is one-half of the time taken for a full rotation, or L = (0.5/250 rps, expressed in ms).
        - 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O; for example, for an I/O with a block size of 32 KB, X = 32 KB / 40 MB/s.
        Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is TS = 5 ms + (0.5/250) + 32 KB/40 MB = 7.8 ms. Therefore, the maximum number of I/Os serviced per second, or IOPS, is 1/TS = 1/(7.8 × 10^-3) = 128 IOPS.

    Read the article

  • Windows 7: L10N mechanics

    - by John Sonderson
    I have a localized version of Windows 7, and I can't figure out where Windows gets the names for files and directories on the system. For instance, consider the following (default) files:
        > cd C:\Users\Public\Pictures\Sample Pictures
        > dir
        Chrysanthemum.jpg
        Desert.jpg
        ...
    When I view these files in the default file explorer I see these names:
        Crisantemo.jpg
        Deserto.jpg
        ...
    This seems to imply that each file can somehow be assigned a localized name somewhere. However, I cannot figure out how. Would appreciate it if someone could shed some light on this issue. Thanks.
    UPDATE EDIT: The desktop.ini file in the folder containing Chrysanthemum.jpg contains the following entries. The .dll files used to translate the various resources are unfortunately not human-readable, and I have no clue as to how they could be generated for other, user-created files to be translated, but they serve the purpose and solve the mystery which led to this post. Thanks.
        [LocalizedFileNames]
        Chrysanthemum.jpg=@%systemroot%\system32\SampleRes.dll,-101
        Desert.jpg=@%systemroot%\system32\SampleRes.dll,-102
        Hydrangeas.jpg=@%systemroot%\system32\SampleRes.dll,-103
        Jellyfish.jpg=@%systemroot%\system32\SampleRes.dll,-104
        Koala.jpg=@%systemroot%\system32\SampleRes.dll,-105
        Tulips.jpg=@%systemroot%\system32\SampleRes.dll,-106
        Lighthouse.jpg=@%systemroot%\system32\SampleRes.dll,-107
        Penguins.jpg=@%systemroot%\system32\SampleRes.dll,-108
        [.ShellClassInfo]
        LocalizedResourceName=@%SystemRoot%\system32\shell32.dll,-21805

    Read the article

  • BIND: forward 1st level zone

    - by raven
    First of all: sorry for the language, English is not my primary language. I have a star-like DNS structure with many filials (more than 2):
        filialNS_1.filial_1.city.local <---- ns.main.city.local <---- filialNS_2.filial_2.city.local (more filials connect to ns.main.city.local in the same way)
    ns.main.city.local is a slave of all filial zones; filialNS_1 is the master of filial_1.city.local, filialNS_2 is the master of filial_2.city.local, and filialNS_N is the master of filial_N.city.local. I want to:
    - serve DNS queries for xxx.filial_N.city.local with filialNS_N.filial_N.city.local
    - forward all queries for xxx.xxx.xxx.local from filialNS_N to ns.main.city.local
    - forward other queries to our provider's DNS at the filial (or Google Public DNS or anything else)
    FILIAL CONFIG, named.conf:
        zone "filial_1.city.local" {
            type master;
            file "/etc/namedb/dynamic/filial_1.city.local";
            allow-update { key DHCP_UPDATER; };
            allow-transfer { <ns.main.city.local IP address> };
        };
        zone "2.76.10.in-addr.arpa" {
            type master;
            file "/etc/namedb/dynamic/2.76.10.in-addr.arpa";
            allow-update { key DHCP_UPDATER; };
            allow-transfer { <ns.main.city.local IP address> };
        };
        zone "local." {
            type forward;
            forward only;
            forwarders { <ns.main.city.local IP address> };
        };
    nslookup server.filial_1.city.local works fine, but nslookup server.main.city.local gives:
        Server:   127.0.0.1
        Address:  127.0.0.1#53
        ** server can't find server.main.city.local: NXDOMAIN
    Where am I going wrong?

    Read the article

  • Amazon EC2: Instances, IPs and a wordpress blog (LAMP)

    - by JustinXXVII
    I had a link to my blog posted on Reddit yesterday and MySQL crashed on my EC2 micro instance. I know I didn't have that many visitors, because I used a marketing link that tracks hits: the link got 167 hits over the course of the last 18 hours, and MySQL crashed twice. Anyway, 167 visits is not a lot, so I've done some short-term optimizations like restricting the number of Apache threads to limit the MySQL calls, and I also set up WP Super Cache to serve static content. Soon I'm going to offload all of my images to S3 or CloudFront. So this leads me to my question: if this doesn't seem to help, and I have another traffic "spike", how do AMIs work when you have a MySQL database? I think I understand that if you have more than one instance and assign the same Elastic IP to both of them, the incoming traffic gets distributed among both. But what happens when the MySQL database gets updated on one of the instances? I just need to wrap my mind around what happens when I create an AMI and then launch a new instance to help with traffic. Thanks for your suggestions.

    Read the article
