Search Results

Search found 3635 results on 146 pages for 'concurrent collections'.


  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties.

    What we need: HA; a scalable storage backend; offline/disconnected operation on the client to account for network outages; cross-platform access (client-side access from at least Windows, probably XP upwards, and possibly Linux); a backend that integrates with AD/LDAP for permission management (user/group management, ...); something that works reasonably well over slow WAN links. Another problem is that we don't really know all possible use cases here, i.e. whether people need concurrent access to shared files or will only be accessing their own files, so a possible solution needs to account for concurrent access and for what conflict management would look like from a user's point of view. This two-year-old blog post sums up the impression I have been getting during the last couple of days of research: there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions, but none of them supports disconnected operation nicely and natively. I am hoping that we have missed an obvious solution.

    What we have tried:

    OpenAFS. We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week, but there are several problems with it: it is a real pain to set up; there are no official RHEL/CentOS packages; the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, which is an absolute no-go; and Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8, and the current client for the 1.7 branch does, but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7. Suffice to say, we couldn't get it working, and the whole setup has been so unstable and complicated to set up that it is just not an option for production.

    Samba + Unison. Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation, we went for a simpler idea: syncing files against a Samba server using Unison. This has the following advantages: Samba integrates with AD (it's a pain, but it can be done), and Samba solves the problem of remotely accessing the storage from Windows. However, it introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we would need an HA Samba setup on top of it to maintain HA, which probably adds a lot of additional complexity; I vaguely remember trying to implement redundancy with Samba before and not being able to fail over between servers silently. There are further drawbacks: even when online, you are working with local files, which results in more conflicts than would be necessary if a local cache were only touched when disconnected; and it is not automatic. We cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler (a sketch of such a scheduled sync appears after the summary below), but you cannot really do it in a satisfactory way. On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well, or even at all.

    Samba + "Offline Files". After that we became a little desperate and gave Windows "Offline Files" a chance. We figured that something built into the OS would reduce administrative effort, would help with blaming someone else when it's not working properly, and should just work, since people have been using it for years. Right? Wrong. We really wanted it to work, but it just doesn't. Thirty minutes of copying files around and unplugging network cables/disabling network interfaces left us with undeletable files on the server (!) and conflicts that should not even be conflicts, and all of it silently: there is only a tiny notification in the Windows Explorer status bar, which doesn't even open Sync Center if you click on it. In the end we had one successful sync of a tiny text file; everything else just exploded horribly. Beyond that, there are other problems: Microsoft admits that "Offline Files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all, which would mean those files become unavailable if the connection drops; and in Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions.

    Summary: Unless there is another fault-tolerant DFS that supports Windows natively, I assume that stacking an HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?
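    For reference, a scheduled Unison sync of the kind described above could look conceptually like the following. This is only a sketch: the paths are made-up examples, and it assumes the Samba share is already mounted or mapped locally on the client.

        # Run non-interactively (e.g. from a scheduled task); prefer the newer copy on conflicts.
        # Paths are placeholders: local working copy vs. the mounted/mapped Samba share.
        unison /home/alice/work /mnt/samba/alice -batch -auto -times -prefer newer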

    Read the article

  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step #4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them. Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct?

    Step #1 - Create the certificate to be used. I've configured a certificate to use with RD Web Access. The certificate is stored within the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. By letting RD Web Access generate its own certificate, I found that the following properties are required (they may not be strictly required, but the self-signed certificate includes them): Enhanced Key Usage (Server Authentication, Client Authentication), Key Usage (Digital Signature, Key Agreement), and Subject Alternative Name (DNS Name=domain.com).

    Detour about self-signed certificate generation. As a quick detour, I was able to work around a problem with creating self-signed certificates using PowerShell. The documentation for the New-RDCertificate cmdlet gives the following example:

        PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force
        PS C:\> New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx"

    Typing this into the shell will result in an error message claiming that a function, Get-Server, cannot be found. Prior to using New-RDCertificate, you must import the RemoteDesktop module with Import-Module RemoteDesktop.

    Step #2 - Observe the out-of-box behavior. The first time you visit the Deployment Properties dialog box (by navigating to Server Manager - Remote Desktop Services - Collections and selecting "Edit Deployment Properties" from the "TASKS" dropdown list in the "COLLECTIONS" grouping), the Level field is listed as "Not Configured", which is misleading. If I understand correctly, all three of the role services are using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website; the certificate being used also appears in the Certificates MMC.

    Step #3 - Assign the new certificate. The Deployment Properties dialog box will allow me to select my existing certificate. The certificate must be placed within the local computer's Certificates MMC in the "Personal" certificate store. The private key needs to be exportable, and you will need to provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done the GUI indicates that it is ready to accept the new configuration, and once I click the "Apply" button, the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time: there is no certificate error.

    Step #4 - The GUI fails to maintain its state. If the GUI is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used, and I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate, the window updates to an "Untrusted" level, and that setting is then maintained through the opening and closing of the Deployment Properties dialog box. Is there anything I can do to have my settings stick? I feel like something is wrong when the GUI claims I haven't fully configured certificates.

    Read the article

  • Differences between Remote Desktop and Terminal services

    - by Uwe
    What is the difference between Remote Desktop and Terminal Services? We run a Windows 2008 R2 server, and there are several administrators who need to access it. Windows 2008 allows only two concurrent sessions with different users, so I thought of installing Terminal Services. But I wonder what will happen to the server if I do so. What will be installed additionally? Will there be more features, open ports, or issues with the server?

    Read the article

  • Why are there tons of PHP processes open on my server?

    - by fiftyeight
    Today I saw that a website of mine wasn't working, so I ssh'd into the server and executed ps -eF. I saw about 200 PHP processes that had all been running for 4 hours. Apache is built with the event MPM and mod_fcgid. I killed all the PHP processes and now it's running fine. Why does this happen? Is this expected behavior? I don't really understand how Apache keeps track of the number of PHP processes and their process IDs, so it would be nice if someone could also point me to a reference where I can read about this. Also, I used the "ab" command (Apache Benchmark) to see if this happens all the time: I ran it about 4-5 times with 30 concurrent requests, and again there were about 150 PHP processes running. When I keep running "ab" now it doesn't spawn more processes and the website is still working. Please shed some light on this! Thank you :)
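    If the number of PHP FastCGI processes keeps growing, the mod_fcgid process-manager limits are worth reviewing. The sketch below shows the relevant directives with purely illustrative values (assumptions, not recommendations for this server); within these bounds mod_fcgid decides how many PHP processes to spawn and how long to keep them alive.

        # httpd.conf / fcgid.conf (illustrative values only)
        FcgidMaxProcesses          64    # global cap on FastCGI processes
        FcgidMaxProcessesPerClass  32    # cap per wrapper class (roughly per vhost/script)
        FcgidIdleTimeout          120    # terminate processes idle longer than this (seconds)
        FcgidProcessLifeTime     3600    # recycle processes older than this (seconds)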

    Read the article

  • How to avoid maximum Workgroup Manager connections in Mac OS X Server 10.6

    - by Stephan
    Is there a limit in the Mac OS X Server (10.6) Workgroup Manager with respect to concurrent connections to a server? I have an OS X server up and running and Open Directory configured, but I am not able to log in remotely: I get a message that the maximum number of connections for Workgroup Manager has already been reached and that I should wait for a user to disconnect. Even after a restart I get this message remotely. However, locally on the server I can start Workgroup Manager without any issues; it always lets me connect. Any advice on what I need to do to make Workgroup Manager work from a remote location? I could not find any max-connection setting in Server Admin and nothing in the slapd log files. The server license says unlimited, so I am quite sure this should not be a regular error message telling me to upgrade.

    Read the article

  • Configuring CentOS for Heavy-ish Traffic

    - by Jonathan Sampson
    New sysadmin here, managing a CentOS server. This past week has been one of the most exciting weeks of my career, full of learning all sorts of new things. Today I have another task, though, and that is to make sure our server can handle more than what our old shared hosting was capable of. We were originally limited to 200 concurrent connections on our GoDaddy shared hosting. Eventually we outgrew this (usually during campaigns/marketing events) and moved on to a virtual dedicated server. I'm assuming the number of connections is handled by Apache. What configuration settings should I be mindful of in order to allow more traffic in?
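    Assuming Apache 2.2 with the prefork MPM (the CentOS default), the directives that cap concurrent connections live in the prefork block of httpd.conf. The values below are placeholders to show the shape of the configuration, not sizing advice for this server:

        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         256
            MaxClients          256     # maximum concurrent connections Apache will serve
            MaxRequestsPerChild 4000
        </IfModule>

    MaxClients (together with ServerLimit) is the knob that corresponds most directly to the 200-connection ceiling on the old shared hosting.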

    Read the article

  • How to speed up apache

    - by Zen_silence
    We have a server with 8 cores, 16 GB of RAM, and RAID 0 SAS 10K drives. Our goal is to use it to serve a fairly simple PHP application quickly. We have tested all the other components and we think we have narrowed the bottleneck down to Apache. I am no Apache guru; I have done some research and tested a couple of things, but when I test with JMeter, launching 100 concurrent connections against the server, the first 10-20 requests come back quickly (30-100 ms) but the rest take between 1000 ms and 3000 ms. Does anyone have any ideas on what to change in our Apache config to make this faster? Right now it's a vanilla install of Apache.
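    One quick way to check whether Apache's connection handling or the PHP work itself is the bottleneck (a diagnostic sketch; the URLs are placeholders) is to benchmark a small static file and the PHP page separately at the same concurrency:

        # Hypothetical URLs: substitute a real static asset and your PHP entry point.
        ab -n 10000 -c 100 http://yourserver/static-test.html
        ab -n 10000 -c 100 http://yourserver/index.php

    If the static file stays fast while the PHP page shows the same 1-3 second tail, the delay is in the PHP/backend work rather than in Apache itself; if both are slow, the MPM and KeepAlive settings in the Apache config are the first place to look.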

    Read the article

  • Rules to choose hardware for OLTP systems (sql server)

    - by Roman Pokrovskij
    OK. We know the database size, the number of concurrent users, and the number of transactions per minute; we should choose the number of processors, RAID, RAM, mirroring, and clustering. There is no exact rule... but maybe there are no rules at all? In my practice, in every case I have a "legacy" system, and after some inspection and interviews I can form an opinion on how the hardware and design can be improved. But every time I meet an "absolutely" new system (I guess there are no truly new systems, but such tasks do come up), I can't say anything trustworthy. So I'm interested in how people deal with such tasks: do they map the task onto their experience, or do they have some base formulas?

    Read the article

  • Shared block device file system (cluster file system without networking)

    - by fungs
    Is there any file system that can be mounted multiple times and supports concurrent file access on Linux? Basically I want something like a cluster file system, but without the need for a running network for a distributed lock manager. That would be very handy in connection with virtual machines that can share data with the host or with another VM without the need to create a network link. I want to avoid such a link to keep the network architecture secure (the virtual machine sits in a DMZ) while still sharing large files. There is no need to scale it up; just two machines mounting the same block device. Shouldn't it be possible to keep the file-locking information right on the disk?

    Read the article

  • How do I install and run Tomcat on port 80 as my only web server? (Rooted Ubuntu box)

    - by gav
    Hi All, tl;dr: I have a rooted Linux box that I want to run Tomcat on as the only web server (no Apache HTTP Server in front). How would you set this up while avoiding common security pitfalls? I've written a Grails app that I want to run on a VPS I rent. The VPS has very little memory and I am using it for the sole purpose of running this application, so I don't need the Apache web server. This is my first venture into server administration and I'm sure to fall into some well-known traps. Should I use iptables to redirect requests from port 80 to 8080? Should I run Tomcat as root or as its own user? What configuration settings would be good for a low-memory system expecting fewer than 10 concurrent users? Hopefully an easy one for you! Anyone who could link to a tutorial would be a personal hero destined for great things, no doubt. Gav
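    A minimal sketch of the iptables redirect approach mentioned above, assuming Tomcat runs as an unprivileged user and listens on 8080 (the port numbers are assumptions to adjust):

        # Redirect incoming HTTP traffic from port 80 to Tomcat on 8080.
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
        # Optionally redirect locally generated traffic too (e.g. curl on the box itself).
        iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080

    This keeps Tomcat off port 80 so it never needs to run as root; the rules must be persisted (for example via the distribution's iptables save mechanism) to survive a reboot.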

    Read the article

  • static index.html file nginx

    - by Guntis
    We are using nginx with php-fpm. We plan to make the first page static (generate an HTML file). If we have 100 concurrent connections, how can we handle file regeneration? Basically we need to generate a new file index_new.html, then delete index.html, and then move index_new.html to index.html. What happens while index.html is deleted? Does the user get a 404 error, or does nginx serve the file from the OS cache? One idea is to tell nginx that the 404 error page is index_new.html and then to copy index_new.html to index.html instead of moving it, but I don't like the idea of relying on 404 errors. Thanks.
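    A sketch of a regeneration step that avoids the delete-then-move window entirely (the generator script name and paths are hypothetical): write the new page to a temporary file on the same filesystem and then rename it over the old one. rename() is atomic on the same filesystem, so nginx always sees either the complete old file or the complete new one, never a missing index.html.

        # Hypothetical generator and docroot; adjust to the real setup.
        php /var/www/tools/generate_index.php > /var/www/site/index_new.html
        mv -f /var/www/site/index_new.html /var/www/site/index.html

    With this in place there is no need for the 404 workaround, because there is no moment at which index.html does not exist.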

    Read the article

  • Providing internet access to users in an Enterprise (MPLS ) Network

    - by Vivek Bernard
    Scenario: I'm planning to set up a typical head office / branch office network. There will be 25 branch offices in India, of which two will be overseas (one in the US and the other in the UK). All of these will be connected via MPLS. Additional details: the number of concurrent users in each office is going to be 25, translating to 650 users in total. The requirement is to provide "proxied" internet connectivity to the branch offices. How should I go about doing it? Plan A: buy an internet leased line at the head office and distribute it through an internal proxy server to all the branches. Plan B: buy separate internet lines for all the branches and set up individual proxies in each branch office.

    Read the article

  • apache and ajp performance

    - by user12145
    I have Apache sitting in front of two Tomcat app servers (one on the same physical machine, the other on a different one) that do time-consuming work (0.5 to 10 seconds per request). The Apache HTTP server is getting killed by an average of only 1 to 2 concurrent requests per second. Both servers have about 2 GB of RAM. Is there a way to optimize Apache to handle the load? Any advice is welcome.

        BalancerMember ajp://localhost:8009/xxxxxx
        BalancerMember ajp://XXX.XX.XXX.XX:8009/xxxxxx

    I keep getting the following in the Apache 2.2 log:

        [Mon Dec 28 00:31:02 2009] [error] ajp_read_header: ajp_ilink_receive failed
        [Mon Dec 28 00:31:02 2009] [error] (120006)APR does not understand this error code: proxy: read response failed from 127.0.0.1:8009 (localhost)

    Read the article

  • All browsers crash when file uploading or downloading dialog opens up

    - by Mitulát báti
    I pretty much summarized the problem in the title. I tried to find some solutions. All I found was to check whether any conflicting or erroneous applications installed on my computer are interfering with Chrome (by typing chrome://conflicts), but "unfortunately" it said that there are no perceived conflicts. At first I thought the problem was only with Chrome, but I soon saw that it isn't: all internet browsers are affected. I noticed the problem after I installed Fruity Loops, but uninstalling it didn't solve anything, so maybe Fruity Loops is not the culprit after all. Have any of you met this problem before? What should I do? Thank you. UPDATE: Sorry, I forgot to mention that this is under Windows 8.1.

    Read the article

  • Weblogic Threads Usage

    - by Hila
    I have an application deployed on WebLogic 10.3 which exhibits strange behavior. I am running a constant (not too high) load against it: 20 concurrent users running a light activity. The response time is reasonable (well below 100 ms after the application stabilizes). Memory consumption seems fine (my application creates a lot of short-lived objects, but they are garbage collected, so overall memory consumption stays under 500 MB). Thread stats seem healthy as well. And yet, after I leave the test running for a while, more and more execute threads ("[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'") are created, until eventually the application crashes. The test hasn't been running for long (all the new threads that you don't see in the first screenshot were created while I was writing this question), and I've seen many more threads being created. Any idea why these threads are being created?

    Read the article

  • DDoS attacks to PBX

    - by user316687
    I'm wondering whether DDoS attacks on PBXs or telecommunications systems are really possible. According to these links, they are: http://threatpost.com/en_us/blogs/firm-sees-more-ddos-attacks-aimed-telecom-systems-073112 and http://news.softpedia.com/news/DDOS-Attacks-Against-Telecom-Systems-Cost-as-Little-as-20-16-Per-Day-284875.shtml. There are DDoS attacks on web servers, which mostly hit them with so much concurrent load or so many connections that the service becomes unavailable. Many government or non-profit organizations that suffered this kind of attack could eventually choose to shut down their web server and simply wait for the attack to end. For a DDoS attack on a PBX, I imagine it would result in telephones being busy or ringing all the time, unstoppably. This kind of attack could really damage any kind of organization. Is it possible to do this, or are we just seeing the beginnings of it?

    Read the article

  • How to know if my nginx is in good health?

    - by Howard
    I am running nginx on EC2 (m1.small) for SSL termination. I am using 2 workers on Ubuntu with the latest nginx (stable); the network throughput is around 2 Mbps and the system load average is around 2 to 3. I am wondering if this system is in good health for now, e.g. what the queue length is (I know nginx can handle a lot of concurrent requests, but I mean: before a request is being served, how many of them need to wait?) and what the average queue time is for a given request to be served. I want to know because if my nginx is CPU-bound (e.g. due to SSL), I will need to upgrade to a faster instance. My current nginx status:

        Active connections: 4076
        server accepts handled requests
         90664283 90664283 104117012
        Reading: 525 Writing: 81 Waiting: 3470

    Read the article

  • Memory limit on PHP + Apache + Windows 32 bit?

    - by thkala
    I am considering using 32-bit Apache for a Moodle installation on a Windows 2008 R2 64-bit / 16 GB server. Since the available memory affects the number of concurrent users that can be served, I was wondering how the 2 GB memory limit on 32-bit Windows processes affects Apache+PHP. Is it a collective limit for the whole server, or is it applied separately to each Apache child process/thread? If it is applied separately, how many of those children are launched on Windows? One per request? One per processor core? Something in between? Is this somehow configurable?

    Read the article

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help with analyzing a log from Apache Bench:

        Benchmarking texteli.com (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:
        Server Hostname:        texteli.com
        Server Port:            80

        Document Path:          /4f84b59c557eb79321000dfa
        Document Length:        13400 bytes

        Concurrency Level:      200
        Time taken for tests:   37.030 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      13524000 bytes
        HTML transferred:       13400000 bytes
        Requests per second:    27.01 [#/sec] (mean)
        Time per request:       7406.024 [ms] (mean)
        Time per request:       37.030 [ms] (mean, across all concurrent requests)
        Transfer rate:          356.66 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       27   37   19.5     34    319
        Processing:    80 6273 1673.7   6907   8987
        Waiting:       47 3436 2085.2   3345   8856
        Total:        115 6310 1675.8   6940   9022

        Percentage of the requests served within a certain time (ms)
          50%   6940
          66%   6968
          75%   6988
          80%   7007
          90%   7025
          95%   7078
          98%   8410
          99%   8876
         100%   9022 (longest request)

    What can these results tell me? Isn't 27 requests per second too slow?

    Read the article

  • How to decide the optimal number of ruby thin/mongrel instances for a server, number of cores?

    - by Amala
    We are trying to deploy mongrel instances on a machine. What is the optimal number of mongrel instances for a server? Since an instance can handle concurrent connections, I do not see any benefit in starting more than 1 per core. Any more than that and the threads will just fight for CPU. Our predecessors have assigned 10 instances for 4 cores, but I think it will just cause CPU contention. Any definitive answers / opinions? I have seen this question: How many mongrel instances? But it is really not specific enough.

    Read the article

  • How to reliably run a batch job every 5 seconds?

    - by Benjamin
    I'm building an application where the sending of all notifications (email, SMS, fax) will be asynchronous. The application will write the notifications to the database, and a batch job will read these notifications and send them with the appropriate transport. I first looked at ways to run cron more often than once a minute, and realized this was a bad idea. The batch scripts are written in PHP, and I guess that writing a proper daemon would be quite an overhead (though I'm open to any suggestion, as PHP can run indefinitely as well). What I have in mind is a solution that would: run the PHP script every 5 seconds; check that the previous run has finished, or abort (never 2 concurrent batches running); kill the script if it has been alive for more than x minutes (a safety measure in case it hangs); start with the system (if a reboot occurs). Any idea how to do this?
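    One common pattern (a sketch under assumptions: the script paths, the lock file, the user, and the 5-minute watchdog are all placeholders) is to let cron start a small wrapper every minute, use flock so that only one wrapper can ever run, and loop inside the wrapper with a per-run timeout:

        # /etc/cron.d/notifications: start the wrapper once a minute.
        # flock -n aborts immediately if a previous wrapper is still running,
        # and cron itself takes care of starting again after a reboot.
        * * * * * appuser /usr/bin/flock -n /var/lock/notify.lock /usr/local/bin/notify-wrapper.sh

        # /usr/local/bin/notify-wrapper.sh (hypothetical): run the PHP batch
        # every 5 seconds for roughly one minute, killing any single run after 5 minutes.
        #!/bin/sh
        for i in 1 2 3 4 5 6 7 8 9 10 11 12; do
            timeout 300 php /var/www/app/send_notifications.php
            sleep 5
        done

    Between flock (no overlapping batches), timeout (hung runs get killed), and cron (restarts after reboot), this covers the four requirements without running cron itself faster than once a minute.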

    Read the article

  • Managing service passwords with Puppet

    - by Jeff Ferland
    I'm setting up my Bacula configuration in Puppet. One thing I want to do is ensure that each password field is different. My current thought is to hash the hostname with a secret value, which would ensure that each file daemon has a unique password, and that password can be written to both the director configuration and the file server. I definitely don't want to use one universal password, as that would permit anybody who might compromise one machine to get access to any machine through Bacula. Is there another way to do this other than using a hash function to generate the passwords? Clarification: This is NOT about user accounts for services. This is about the authentication tokens (to use another term) in the client/server files. Example snippet:

        Director {                            # define myself
          Name = <%= hostname %>-dir
          QueryFile = "/etc/bacula/scripts/query.sql"
          WorkingDirectory = "/var/lib/bacula"
          PidDirectory = "/var/run/bacula"
          Maximum Concurrent Jobs = 3
          Password = "<%= somePasswordFunction %>"   # Console password
          Messages = Daemon
        }

    Read the article

  • Determine display or VNC session based on PID

    - by Daniel Kessler
    I frequently VNC into a server where we run many concurrent, computationally intensive Matlab processes. Sometimes one of my processes misbehaves, which I can see from top, but I have a hard time figuring out which VNC session it is running in, or more specifically, which display it is running on. Suppose I see that PID 8536 looks like a resource hog and I want to investigate. Because it's a Matlab session, I know there is likely an IDE open somewhere, and I want to check whether anything important is happening before I kill it. We've solved this somewhat awkwardly in the past by identifying which PTY 8536 was launched from, then looking at a process tree to figure out what was launched in that context, scrolling up, and finding the VNC initialization. It seems like there must be a better way to go from a PID to an X display (or VNC session).
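    A shortcut that often works (assuming a Linux /proc filesystem and that the process inherited its environment from the VNC session it was launched in): the DISPLAY variable is usually still present in the process's environment, so for the example PID 8536:

        # Entries in /proc/<pid>/environ are NUL-separated; convert to lines and pick out DISPLAY.
        tr '\0' '\n' < /proc/8536/environ | grep '^DISPLAY='

    The display number can then be matched against the running Xvnc processes (each Xvnc instance carries its display, e.g. ":1", on its command line) to identify the VNC session.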

    Read the article

  • Key modifiers affect remote VNC sessions in OS X

    - by Michael
    I have two concurrent users of my MacBook: one local (with local peripherals) and one remote (connecting via VNC to a user kept logged in with fast user switching), as described here: http://macosx.com/forums/howto-faqs/52547-howto-simultaneous-user-environments-via-vnc.html. That's working fine, except that when I hit modifier keys (e.g. Shift, Option, ...), I also affect the remote user. For example, if I hold down Shift, the remote user's keystrokes are capitalized, and if I hold down Option, they get strange glyphs instead of the normal letters. Does anyone have any idea what could be causing this, or how to fix it?

    Read the article

  • Load testing nginx inside AWS

    - by andy
    I'm trying to load test nginx running on AWS. I need to try to optimise it to handle 1 Gbps of inbound traffic. Currently I've got it to peak at 85 Mbit/s by running nginx on an m1.large with 4 other machines hitting it using ab with -i (for HEAD requests), -k (keepalives), -r (ignore failed requests), -n 500000, and -c 20000. I'm struggling to generate more than 85 Mbit/s of traffic from 4 machines, yet when I scp a large file I get nearly 0.25 Gbit/s of traffic going over the network. Are there any tools or approaches that I could use to load test nginx that might generate more load? I'm only interested in inbound traffic, so perhaps a DoS tool could help if it throws away responses? I'm hitting a very small (40 byte) static asset, and have peaked at handling 50K concurrent connections and 25k reqs/s when using just a single load-generator machine.

    Read the article
