Search Results

Search found 1977 results on 80 pages for 'concurrent modification'.

Page 37 of 80

  • Project management software that syncs with smartphone?

    - by overtherainbow
    Hello, I went through the archives here and the Wikipedia page on project management software, but didn't find information on this type of application to manage projects: a native Windows application (I don't like web-based solutions and prefer native to cross-platform); free or affordable, i.e. not an enterprise solution; scalable from one to a few concurrent users. Like MS Project et al., a project consists of tasks which can be further divided into sub-tasks, and the whole thing is displayed in a tree list. An item that has a date set (either start or due) must be displayed in a Calendar view, so it's easy to know what work must be done each day. The Calendar view must somehow sync with smartphones (at least BlackBerry). At this point, the apps I know either don't provide a Calendar at all, or provide one that can't sync with smartphones, which forces me to copy/paste scheduled items into Outlook so they are synced with my BlackBerry :-/ Thank you for any help.


  • Increasing FreeBSD threads

    - by sh-beta
    For network apps that create one thread per connection (like Pound), the thread count can become a bottleneck on the number of concurrent connections you can serve. I'm running FreeBSD 8 x64:

        $ sysctl kern.maxproc
        kern.maxproc: 6164
        $ sysctl kern.threads.max_threads_per_proc
        kern.threads.max_threads_per_proc: 1500
        $ limits
        Resource limits (current):
          cputime          infinity secs
          filesize         infinity kB
          datasize         33554432 kB
          stacksize        524288 kB
          coredumpsize     infinity kB
          memoryuse        infinity kB
          memorylocked     infinity kB
          maxprocesses     5547
          openfiles        200000
          sbsize           infinity bytes
          vmemoryuse       infinity kB
          pseudo-terminals infinity
          swapuse          infinity kB

    I want to increase kern.threads.max_threads_per_proc to 4096. Assuming each thread starts with a stack size of 512k, what else do I need to change to ensure that I don't hose my machine?
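
    A minimal sketch of the arithmetic and the sysctl change involved (assuming the 512 kB per-thread stack stated above; verify against your own kernel):

        # Rough memory math: 4096 threads x 512 kB stack = 2 GB of address
        # space reserved for stacks if every thread gets spawned.
        echo "$((4096 * 512 / 1024)) MB"

        # Raise the per-process thread cap at runtime (reverts on reboot):
        sysctl kern.threads.max_threads_per_proc=4096

        # Persist it across reboots:
        echo 'kern.threads.max_threads_per_proc=4096' >> /etc/sysctl.conf

    Note that vmemoryuse is already infinity in the limits output above, so the stacks are mostly an address-space question rather than a hard-limit one.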


  • IIS 8 Random 503 service unavailable

    - by Ivo
    We migrated a busy website to Windows Server 2012 with IIS 8. The website randomly gives the error "503 Service Unavailable"; after a user presses F5 the error is gone again. The website is built in ASP.NET MVC 3 and runs on one application pool with default settings. There are around 500 to 900 concurrent users on the website during the day, and the error happens more often when there are more than 650 users. CPU and memory use on the server are stable. There is nothing about the 503 errors in the application log, the IIS log, or the event log. Does anyone have any clue what the problem could be, or how we can trace it?


  • How much processor speed and cores do I need for these tasks?

    - by ajay
    I am planning to buy a new laptop as I find my current one very slow. My question here is specifically about RAM size and CPU power. I will mostly be doing development (not many games). I would be dabbling in distributed computing, and in multithreaded, data-intensive parallelizable tasks on multiple cores. For example, I want to do concurrent programming in Scala/Java/Clojure etc. and actually see the parallelization. Furthermore, I would want the RAM to be enough. From a developer machine standpoint, do you think 4GB RAM and a 2.53GHz dual-core processor would be enough? I'm basically looking at this model: http://store.apple.com/us/configure/MC118LL/A?mco=MTM3NDcyODk (link dead)


  • Finding out whether files are added, changed or deleted on an FTP server

    - by futureelite7
    I've recently been given the task of migrating about 200GB of data from one dedicated server to another. As this will take a week or more, I've been taking a snapshot of the current files on the FTP server using wget's mirror feature. However, since other users will probably be uploading/changing things in the meantime, the snapshot I have made will not include the most recent changes. Since I only have FTP access to this server, I'm planning to write a script that will recursively do an FTP stat on all files in the FTP folder and compare the directory listing against the snapshot I have locally. If there are differences in the number of files, then I know files have been added or deleted. If the modification dates have changed, then I know the files have been changed and should redownload those files specifically. Am I missing anything in my approach, or are there any possible improvements to it?
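
    For what it's worth, a hedged sketch of the local side of that comparison (paths are hypothetical; wget's --mirror already skips files whose size and timestamp are unchanged, which does most of the work):

        # Snapshot the mirror's file list (path, mtime, size), re-mirror, diff:
        find /srv/mirror -type f -printf '%p\t%TY-%Tm-%Td %TH:%TM\t%s\n' | sort > before.txt
        wget --mirror --no-verbose --directory-prefix=/srv/mirror ftp://user@host/
        find /srv/mirror -type f -printf '%p\t%TY-%Tm-%Td %TH:%TM\t%s\n' | sort > after.txt
        diff before.txt after.txt   # '<' only: deleted/changed, '>' only: added/changed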


  • How do I objectively measure an application's load on a server

    - by Joe
    All, I'm not even sure where to begin looking for resources to answer my question, and I realize that speculation about this kind of thing is highly subjective. I need help determining what class of server I should purchase to host a MS Silverlight application with an MSSQL back-end on a Windows Server 2008 platform. It's an interactive program, so I can't simply generate a list of URLs to test against and run it with 1000 simultaneous users. What tools are out there to help me determine what kind of load the application will put on a server at varying levels of concurrent users? Would you suggest separating the SQL server from the web server, to better differentiate the generated load on the different parts of the stack?


  • Squid Proxy Antivirus - Recommendations / Performance

    - by Jon Rhoades
    Due to our users' increasing expertise at downloading viruses and the like, we are investigating adding antivirus to our Squid proxy. A casual Google reveals several free options and one paid: HAVP, squid-vscan, Viralator, and Safe Squid (commercial). None of the free ones have reached v1 yet, nor do any inspire huge confidence from their websites (although of course I would rather they spent their time on the app than on their website!). Does anybody have experience with any of these (or any other similar apps)? If so, are they suitable for a production network with 400-ish concurrent users, and what sort of CPU/RAM requirements do they have?


  • What are the "must-have" security tools for small organizations?

    - by Berkay
    One of my friends has started a company, a small-scale business with about 40 workers. Two people there are responsible for security and IT-related issues: managing the LAN, the company's web page, e-mail configuration, printer hardware, application deployment, etc. The goal is to provide security measures including access control, authentication, web server security, and so on. Which tools do you use for securing, monitoring, and controlling your systems? Are you paying for these tools, or are they open source? This question stems from the security administrator's request to my friend: he proposed acquiring some tools for the company, and my friend hesitates to pay that much for them (or so he told me).


  • Properly Hosting Multiple Sites on VDS

    - by Aristotle
    I'm going to be moving about 7-10 websites (5-8 with MySQL databases) onto our new virtual private server. I'm curious what the best way to host many sites on a single server is, though. Do I create a directory for each site immediately within my root directory and then point the domain names for each site to http://123.123.123.123/siteDirectory, or is there a more appropriate way to do this? I'm very interested in maintaining control over how many concurrent connections each site can have at any given time. Would I be able to do that at the directory level, or am I required to limit the concurrent connections to the VPS itself?
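
    Rather than pointing domains at subdirectories of one root, the usual approach is name-based virtual hosts; a minimal sketch in Apache syntax (the domains and paths are hypothetical placeholders):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName site1.example.com
            DocumentRoot /var/www/site1
            # Per-vhost connection throttling needs a third-party module such
            # as mod_qos or mod_bw; stock Apache only caps clients
            # server-wide (MaxClients).
        </VirtualHost>

        <VirtualHost *:80>
            ServerName site2.example.com
            DocumentRoot /var/www/site2
        </VirtualHost>

    With this layout each site keeps its own document root and the VPS serves all of them on one IP, so no URL ever needs a /siteDirectory suffix.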


  • Dealing with different usernames when mounting removable media in Linux

    - by dimatura
    I have a laptop on which my username is, say, "foo". I have an external drive, formatted with ext4, on which all files are owned by "foo" (at the filesystem level). Now, I have a desktop on which my username is, say, "bar". If I mount the external drive on this computer, the files are considered not to be owned by "bar". This makes sense, but it is annoying because their mode bits are set so that only the owner can modify/delete them. What's the cleanest way to deal with this? Create a group containing "foo" and "bar" and add group modification permissions?
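
    The shared-group idea does work, provided the group has the same GID on both machines (ext4 stores numeric IDs, not names). A sketch, with the GID 1010 and the mount point as arbitrary assumptions:

        sudo groupadd -g 1010 extdrive        # run on both machines, same GID
        sudo usermod -aG extdrive foo         # on the laptop
        sudo usermod -aG extdrive bar         # on the desktop
        sudo chgrp -R extdrive /mnt/external
        sudo chmod -R g+rwX /mnt/external     # group rw; x only on dirs/executables
        sudo find /mnt/external -type d -exec chmod g+s {} +   # new files inherit the group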


  • Is it possible to set an SRV record with xname.org

    - by Emilien
    According to XName's (not up-to-date) changelog:

        Thu Oct 19 2006 Yann Hirou ([email protected])
            Adding SRV records - including modification of dns_records

    it appears to be possible to set an SRV record for one of your zones. However, I just can't find a way to do this in the UI. I've contacted Yann Hirou but didn't receive any answer (he is either swamped with email or only responds to "paying supporters"). It might be that the feature is available in the source code, but that the instance running on XName has not been updated (since 2006?). Has anyone using XName been able to set up such records? Otherwise I might be forced to switch to another free DNS service...
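
    For reference, if the backend does support it, an SRV record in standard zone-file syntax looks like this (a hypothetical SIP service is shown):

        ; _service._proto.name.    TTL   class type priority weight port target
        _sip._tcp.example.com.     3600  IN    SRV  10       60     5060 sipserver.example.com.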


  • postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12GB of memory, most of which (at least 10GB) can be allocated solely to Postgres. The system also has a 6-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. First, a single-threaded, single-connection process will do many inserts into 2-4 tables linked by foreign keys. Second, many different complex queries are run against the resulting data, using GROUP BY extensively; this part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, so there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What configuration changes to the default Postgres config would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers, etc. Thanks!
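
    As a starting point only, hedged postgresql.conf values for a dedicated 12GB box with ~5 connections (illustrative numbers, not tuned to this workload; parameter names match the 8.x-era config):

        shared_buffers = 2GB            # PostgreSQL's own cache
        work_mem = 256MB                # per sort/hash op; only safe because connections are few
        maintenance_work_mem = 1GB      # index builds, VACUUM
        effective_cache_size = 8GB      # planner hint: shared_buffers + OS page cache
        checkpoint_segments = 32        # spread checkpoint I/O under heavy inserts
        wal_buffers = 16MB

    The high work_mem is what usually helps GROUP BY-heavy queries most, since it lets sorts and hash aggregates stay in memory instead of spilling to disk.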


  • Apache, modifying response codes from 404 to 301

    - by user72539
    Hi, I'm running a Magento installation on an Apache server. There are many pages indexed in Google and linked to from external sites. I can't use 301 redirects in a .htaccess file, as I can't be sure I will catch all the links. At the moment all requests are rewritten through Magento, and if a request isn't found Magento returns a 404 Not Found. Is there a way of using one of the Apache modules to filter the response from Magento and, if a 404 Not Found is being sent back, replace the response with a standard 301 redirect to the home page? E.g.: a request comes in, Apache rewrites it to Magento's index.php, and the page is processed; if the request exists, return the results (200); if it doesn't, Magento returns 404, an Apache filter changes the response, and a 301 redirect to / goes out instead. I appreciate any help. Thanks, Jon. (As far as I am aware, mod_rewrite is only used to rewrite requests and doesn't allow modification of responses.)


  • What paths are guaranteed to exist on Windows Server 2008 R2?

    - by jpmc26
    What paths are guaranteed to exist on a Windows Server 2008 R2 instance? A client requires that some instructions specify exact paths in all cases. (The person executing said instructions is not supposed to have to decide on any path themselves, even when the path makes absolutely no difference.) So I need to know which paths I can rely on being there. It's fine with me if they involve environment variables, but they need to be variables guaranteed to hold an existing path (that is, with no possibility of them pointing to a path that doesn't exist). Or are there no guaranteed paths?


  • SQL Server Performance & Latching

    - by Colin
    I have a SQL Server 2000 instance which runs several concurrent SELECT statements against a group of 4 or 5 tables. Often the performance of the server during these queries becomes extremely diminished. The queries can take up to 10x as long as other runs of the same query, and it gets to the point where simple operations like getting the table list in Object Explorer or running sp_who can take several minutes. I've done my best to identify the cause of these issues, and the only performance metric I've found to be off base is average latch wait time. I've read that over 1 second of wait time is bad, and mine ranges anywhere from 20 to 75 seconds under heavy use. So my question is: what could be the issue? Shouldn't SQL Server be able to handle multiple SELECTs on a single table without losing so much performance? Can anyone suggest somewhere to go from here to investigate this problem? Thanks for the help.


  • postfix destination concurrency

    - by Hamlet
    Hi, I have a website sending email using PHPMailer (CentOS, Postfix, PHP, MySQL, etc.). Emails are getting delivered to all hosts correctly except one:

        Mar 30 14:38:22 server postfix/qmgr[15467]: 7237D218852D: to=, relay=none, delay=0.04, delays=0.04/0.01/0/0, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mail06.indamail.hu[91.83.45.46] while sending MAIL FROM)

    They told me to limit concurrent connections to their server to 2. I did, but they still say I connect more than twice. main.cf:

        default_destination_concurrency_limit = 1
        fallback_relay =
        smtp_destination_concurrency_limit = 1
        initial_destination_concurrency = 1

    Any ideas? Thanks, Hamlet
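
    Those settings are global; to throttle only that one destination, the usual sketch is a dedicated transport (the name "slow" is an arbitrary assumption; run postmap on the transport file and reload Postfix afterwards, and note that the _destination_rate_delay parameter requires Postfix 2.5+):

        # main.cf
        transport_maps = hash:/etc/postfix/transport
        slow_destination_concurrency_limit = 1
        slow_destination_rate_delay = 1s

        # /etc/postfix/transport   (then: postmap /etc/postfix/transport)
        indamail.hu    slow:

        # master.cf -- a clone of the smtp service with maxproc capped at 1
        slow      unix  -       -       n       -       1       smtp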


  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties.

    What we need: HA; a scalable storage backend; offline/disconnected operation on the client to account for network outages; cross-platform access, meaning client-side access certainly from Windows (probably XP upwards) and possibly Linux; a backend that integrates with AD/LDAP for permission management (user/group management, ...); and something that works reasonably well over slow WAN links. Another problem is that we don't really know all the possible use cases here, e.g. whether people need concurrent access to shared files or will only be accessing their own files, so a possible solution needs to account for concurrent access and for what conflict management would look like from a user's point of view. A two-year-old blog post sums up the impression I have been getting during the last couple of days of research: there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions, but none that supports disconnected operation nicely and natively. Still, I am hoping that we have missed an obvious solution.

    What we have tried:

    OpenAFS. We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week, but there are several problems with it: it's a real pain to set up; there are no official RHEL/CentOS packages; the package of the current stable version 1.6.5.1 from elrepo randomly kernel-panics on fresh installs, which is an absolute no-go; and Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8; the current client for 1.7 does, but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7. Suffice to say, we couldn't get it working, and the whole setup has been so unstable and complicated that it's just not an option for production.

    Samba + Unison. Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation, we went for a simpler idea: syncing files against a Samba server using Unison. Samba integrates with AD (it's a pain, but can be done) and solves the problem of remotely accessing the storage from Windows, but it introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we would need an HA Samba setup on top of it, which probably adds a lot of additional complexity; I vaguely remember trying to implement redundancy with Samba before, and I could not silently fail over between servers. Even when online, you are working with local files, which will result in more conflicts than would be necessary if a local cache were only touched when disconnected. And it's not automatic: we cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows Task Scheduler, but you cannot really do it in a satisfactory way. On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid it just doesn't scale very well, or even at all.

    Samba + "Offline Files". After that we became a little desperate and gave Windows "Offline Files" a chance. We figured that having something built into the OS would reduce administrative effort, would help with blaming someone else when it's not working properly, and should just work, since people have been using it for years. Right? Wrong. We really wanted it to work, but it just doesn't. Thirty minutes of copying files around and unplugging network cables/disabling network interfaces left us with undeletable files on the server (!) and conflicts that should not even be conflicts, all silently: there is only a tiny notification in the Windows Explorer status bar, which doesn't even open Sync Center if you click on it! In the end, we had one successful sync of a tiny text file; everything else just exploded horribly. Beyond that, there are other problems: Microsoft admits that "Offline Files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all, which would mean those files become unavailable if the connection drops; and in Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions.

    Summary: unless there is another fault-tolerant DFS that supports Windows natively, I assume that stacking an HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?
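
    For reference, the Unison side of that experiment can be driven unattended by a profile along these lines (a sketch only; the paths and profile name are hypothetical):

        # ~/.unison/work.prf  -- run with: unison work
        root = /home/user/work
        root = /mnt/samba/work      # the mounted Samba share
        auto = true                 # accept non-conflicting changes without prompting
        batch = true                # no interactive questions at all
        prefer = newer              # resolve conflicts toward the newer copy
        times = true                # preserve modification times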


  • apache and ajp performance

    - by user12145
    I have an Apache server sitting in front of two Tomcat app servers (one on the same physical machine, the other on a different one) that do time-consuming work (0.5 sec to 10 sec per request). The Apache HTTP server is getting killed by an average of 1 to 2 concurrent requests per second. Both servers have about 2GB of RAM. Is there a way to optimize Apache to handle the load? Any advice is welcome.

        BalancerMember ajp://localhost:8009/whoisserver
        BalancerMember ajp://XXX.XX.XXX.XX:8009/whoisserver

    I keep getting the following in the Apache 2.2 log:

        [Mon Dec 28 00:31:02 2009] [error] ajp_read_header: ajp_ilink_receive failed
        [Mon Dec 28 00:31:02 2009] [error] (120006)APR does not understand this error code: proxy: read response failed from 127.0.0.1:8009 (localhost)
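
    The ajp_ilink_receive failures typically mean the Tomcat side dropped the AJP connection, often because its connector ran out of threads or timed out mid-request; a hedged Apache-side sketch (mod_proxy 2.2 directives; the balancer name is an assumption, and the matching fix on the Tomcat side is usually raising maxThreads on the AJP connector in server.xml):

        <Proxy balancer://whoiscluster>
            # retry: seconds a failed member sits in error state before reuse
            BalancerMember ajp://localhost:8009/whoisserver retry=30
            BalancerMember ajp://XXX.XX.XXX.XX:8009/whoisserver retry=30
        </Proxy>
        # Allow for the 10 s worst-case request plus queueing before giving up:
        ProxyTimeout 60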


  • Universal Windows XP With Ghost and Sysprep

    - by RobertPitt
    Hey, I have an idea, but I'm not sure whether it is possible, and I'm looking for advice on how to accomplish it. If I were to deploy a sysprepped Windows XP image from a Lenovo ThinkCentre 6073-CTO onto a Dell Optiplex, for instance, I would get a BSOD due to the hardware change. What I'm looking to do is create a few images like so: Windows XP x86 SP3 (sysprepped), Windows XP x64 SP3 (sysprepped), ... so that we can use them throughout the organization without having to create individual images per computer type. Is there any way to accomplish this, such as modifying files pre-Ghost or removing certain drivers pre-Sysprep? Any advice is appreciated.
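
    The classic recipe for hardware-independent XP images is pre-populating mass-storage drivers plus extra PnP driver paths in sysprep.inf; a sketch (the driver folder names are hypothetical, and the drivers must be copied onto the image first):

        ; sysprep.inf
        [Unattended]
        OemPnPDriversPath=Drivers\Chipset;Drivers\NIC;Drivers\Video
        UpdateInstalledDrivers=Yes

        [SysprepMassStorage]
        ; leave empty and run "sysprep -bmsd" to auto-populate this section;
        ; it is what prevents the 0x7B STOP (BSOD) when the disk controller
        ; changes between machines.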


  • How to limit disk performance?

    - by DrakeES
    I am load-testing a web application and studying the impact of some config tweaks (related to disk I/O) on overall app performance, i.e. the number of users that can be handled simultaneously. But the problem is that I hit 100% CPU before I can see any effect from the disk-related config settings. I am therefore wondering if there is a way to deliberately limit disk performance so that it becomes the bottleneck and the tweaks I am playing with actually start affecting performance. Should I just keep the hard disk busy with something else? What would serve this purpose best? More details (probably irrelevant, but anyway): PHP/Magento/Apache, studying the impact of apc.stat. Setting it to 0 makes APC not check PHP scripts for modification, which should increase performance where the disk is the bottleneck. Using JMeter for benchmarking.
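
    On Linux, one way to make the disk the bottleneck artificially is cgroup blkio throttling rather than generating competing load; a sketch assuming cgroup v1 and /dev/sda (device numbers 8:0, which you can confirm with ls -l /dev/sda):

        mkdir /sys/fs/cgroup/blkio/slowdisk
        # Cap reads and writes at 1 MB/s for anything placed in this group:
        echo "8:0 1048576" > /sys/fs/cgroup/blkio/slowdisk/blkio.throttle.read_bps_device
        echo "8:0 1048576" > /sys/fs/cgroup/blkio/slowdisk/blkio.throttle.write_bps_device
        # Move the web server worker PIDs into the group (repeat per PID):
        echo "$APACHE_PID" > /sys/fs/cgroup/blkio/slowdisk/tasks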


  • How to get both PIPESTATUS and output in bash script

    - by Mustafa Serdar Sanli
    I'm trying to get the last modification date of a file with this command:

        TM_LOCAL=`ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'`

    TM_LOCAL has a value like "2012-05-16 23:18" after this line executes. I'd also like to check PIPESTATUS to see whether there was an error; for example, if the file does not exist, ls returns 2. But $? has the value 0, as it holds the return value of awk. If I run the command alone, I can check the return value of ls by looking at ${PIPESTATUS[0]}:

        ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'

    But $PIPESTATUS does not work as I expected when I assign the output to a variable, as in the first example. In that case, the $PIPESTATUS array has only one element, which is the same as $?. So the question is: how can I get both $PIPESTATUS and the output assigned to a variable at the same time?
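
    One idiom that addresses exactly this: the pipeline runs in the $( ) subshell, so its PIPESTATUS is invisible to the parent and would be clobbered by the next command anyway; instead, re-export the first pipe stage's status as the subshell's exit code (a sketch):

        TM_LOCAL=$(ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'; exit ${PIPESTATUS[0]})
        LS_STATUS=$?                 # now holds ls's status, not awk's
        if [ "$LS_STATUS" -ne 0 ]; then
            echo "ls failed with status $LS_STATUS" >&2
        fi
        echo "$TM_LOCAL"

    If the status of any stage failing is enough, set -o pipefail is the simpler alternative: it makes the whole pipeline (and thus the assignment) return the rightmost non-zero status.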


  • Differences between Remote Desktop and Terminal services

    - by Uwe
    What is the difference between Remote Desktop and Terminal Services? We run a Windows 2008 R2 server. There are several administrators who need to access this server, and Windows 2008 allows only two concurrent sessions with different users, so I thought of installing Terminal Services. But I wonder what will happen to the server if I do so. What will be installed additionally? Will there be more features, ports, or issues with the server?


  • Why are there tons of PHP processes open on my server?

    - by fiftyeight
    Today I saw that one of my websites wasn't working, so I SSH'd to the server and executed ps -eF. I saw about 200 PHP processes that had all been running for 4 hours. Apache is built with the event MPM and mod_fcgid. I killed all the PHP processes and now it's running fine. Why does this happen? Is this expected behavior? I don't really understand how Apache keeps track of the number of PHP processes and their process IDs, so it would be nice if someone could also point me to a reference where I can read about this. Also, I used the "ab" command (Apache Benchmark) to see if this happens all the time: I ran it about 4-5 times with 30 concurrent requests, and again there were around 150 PHP processes running. When I kept running "ab" it didn't spawn more processes, and the website was still working. Please shed some light on this! Thank you :)
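
    mod_fcgid spawns and retires the PHP processes itself; the ceilings and idle timeouts are controlled by directives like these (a hedged sketch with illustrative values; directive names are from mod_fcgid 2.3+, and defaults vary by version):

        FcgidMaxProcesses 50            # global cap on FastCGI processes
        FcgidMaxProcessesPerClass 20    # cap per wrapper/vhost class
        FcgidIdleTimeout 300            # reap processes idle longer than 5 min
        FcgidProcessLifeTime 3600       # recycle workers after an hour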


  • How to sync a folder on a remote computer to a server on a domain

    - by Pierre-Alain Vigeant
    We have a small remote office that often shares data with us. I learned that the data is shared as e-mail attachments, but that obviously leads to versioning hell and overwriting. I am looking for a way for them to synchronize a folder directly with our main office's domain controller. I personally use Live Mesh, but I would like a tool that synchronizes to our server directly, without a third party hosting the data, since we already have an online backup service taking care of offsite backups. What enterprise-class tool would let us synchronize a folder from a remote computer outside our domain into the file server of our domain? The synchronization has to be two-way, e.g.: someone from the remote office creates an invoice; someone from our office reviews it and makes modifications to it; the remote office needs to see the change. Our server is on Windows 2003.


  • How to avoid maximum Workgroup Manager connections in Mac OS X Server 10.6

    - by Stephan
    Is there a limit in Mac OS X Server (10.6) Workgroup Manager on concurrent connections to a server? I have an OS X server up and running and Open Directory configured, but I am not able to log in remotely: I get a message that the maximum number of connections for Workgroup Manager has already been reached and that I should wait for a user to disconnect. Even after a restart I get this message remotely. However, locally on the server I can start Workgroup Manager without any issues; it always lets me connect. Any advice on what I need to do to make Workgroup Manager work from a remote location? I could not find any max-connection setting in Server Admin and nothing in the slapd log files. The server license says unlimited, so I am quite sure this should not be a regular error message telling me to upgrade.

