Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 84/457

  • Filesystems for a webserver with SATA and solid state disks

    - by Jorisslob
    We have just ordered a new webserver with a 120 GB solid state disk and a SATA disk. I am trying to plan ahead for what sort of filesystem to use. This system will be running Linux and Apache/Tomcat to host Java services. The main service is a system where people can upload reasonably large files (on the order of 100 MB: images, image stacks and video), which people will be able to annotate and which will be sent to a database server when annotation is complete. Thus far, I plan to put most of the utility programs of the operating system on the SSD and put the large media files there. The SATA disks will hold the less volatile data like Apache, Tomcat and the servlets. For filesystems I have considered going for the stable EXT3 because I hear that it is best supported. The downside seems to be that it is not the ideal choice for large files. That is why I am leaning towards using XFS for the SSD and EXT3 for the SATA. My questions are: 1) Does this sound like a reasonable setup? 2) What filesystems would you recommend for the SSD and for the SATA? Thanks
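
As a concrete starting point, a minimal /etc/fstab sketch for the layout the asker is leaning towards (XFS on the SSD, EXT3 on the SATA disk). The device names and mount points are assumptions, and options like noatime/discard are tuning suggestions to benchmark rather than defaults:

```
# /etc/fstab sketch -- adjust devices, mount points and options to taste
# SSD: XFS for the large media files; noatime avoids extra metadata writes,
# discard enables TRIM if the kernel and SSD support it
/dev/sda1   /srv/media   xfs    noatime,discard    0  2
# SATA: EXT3 for the OS/application data (Apache, Tomcat, servlets)
/dev/sdb1   /srv/app     ext3   defaults,noatime   0  2
```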

    Read the article

  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. Two of the biggest tables contain over 800k rows each. The majority of rows are less than 10 KB in size, though roughly 1 in 100 rows will be between 1 MB and 4 MB. So out of the 1.6 million rows, about 16,000 of them will be these large rows. The reason they are this big is because we're storing zip files as binary blobs in the database, but I'm digressing. We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph above, these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot. I've run the SQL Server profiler, and there is nothing that jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload). I'm not overly worried, as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike. Update: After investigating this some more, the CPU and disk usage spike was down to SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. We can see this demonstrated in the following graph. As described on MSDN, checkpoints occur when the transaction log becomes 70% full and the simple recovery model is in use. This has been enlightening and I've definitely learned something!
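
To line the once-a-minute spikes up against the checkpoint explanation, a hedged Python sketch that polls log-space usage via pyodbc; the connection string and database name are placeholders, and DBCC SQLPERF(LOGSPACE) is used only because it reports the percentage of the transaction log currently in use:

```python
# Sketch: poll SQL Server log-space usage so the periodic CPU/disk spikes
# can be correlated with the log approaching the ~70% checkpoint threshold.
# DBCC SQLPERF(LOGSPACE) returns, per database: name, log size (MB),
# log space used (%), status. Connection string and DB name are placeholders.
import time
import pyodbc

CONN_STR = ("DRIVER={SQL Server};SERVER=myserver;"
            "DATABASE=master;Trusted_Connection=yes")

def poll_log_space(database, interval=10, samples=30):
    conn = pyodbc.connect(CONN_STR)
    cursor = conn.cursor()
    for _ in range(samples):
        cursor.execute("DBCC SQLPERF(LOGSPACE)")
        for name, log_size_mb, pct_used, status in cursor.fetchall():
            if name == database:
                print(f"{time.strftime('%H:%M:%S')}  "
                      f"log={log_size_mb:.1f} MB  used={pct_used:.1f}%")
        time.sleep(interval)

if __name__ == "__main__":
    poll_log_space("MyAppDb")
```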

    Read the article

  • "sorry, the maximum allowed clients from your host (10) are already connected" FTP error

    - by Sejanus
    Hello, I keep getting the "sorry, the maximum allowed clients from your host (10) are already connected" error whenever I try to transfer a large number of files. At first I thought it was a FileZilla bug, but I get the same error with basically every FTP client I've tried, including Total Commander under Wine. I do not get that error when using Windows. I tried limiting the maximum allowed connections for FileZilla, both in the server settings and in the global settings, and it didn't change anything. I tried switching between passive and active modes (not sure if that's related at all, just a last desperate attempt), and it didn't change anything either. When I try to use the native FTP client (not sure what it's called, the one in Places - Connect to a server) I get an abstract "connection refused" error every time I transfer a large number of files. The connection is refused for particular files; if I click "Ignore" each time, the rest of the files are transferred perfectly well, so I assume it's the very same error. Is there anything I can do? This really drives me mad; transferring large numbers of files is part of my everyday job... P.S. This happens with many different FTP servers, and I don't get this error in Windows, so I assume it's not a server problem. P.P.S. I am aware of a similar question here; the answer provided just didn't solve it for me.
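
Since the message means the server is capping simultaneous connections per client IP, one workaround is to push the whole batch over a single connection. A rough Python sketch using the standard-library ftplib (host, credentials and directories are placeholders):

```python
# Sketch: upload many files over ONE FTP connection instead of letting a
# GUI client open ten parallel transfers and hit the per-host limit.
import os
from ftplib import FTP

HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"   # placeholders
LOCAL_DIR, REMOTE_DIR = "to_upload", "/upload"               # placeholders

def upload_all():
    ftp = FTP(HOST)
    ftp.login(USER, PASSWORD)
    ftp.cwd(REMOTE_DIR)
    for name in sorted(os.listdir(LOCAL_DIR)):
        path = os.path.join(LOCAL_DIR, name)
        if os.path.isfile(path):
            with open(path, "rb") as fh:
                ftp.storbinary(f"STOR {name}", fh)
            print("uploaded", name)
    ftp.quit()

if __name__ == "__main__":
    upload_all()
```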

    Read the article

  • Batch deletion of smaller files from group of files via unix command line

    - by artlung
    I have a large number (more than 400) of directories full of photos. What I want to do is keep the larger sizes of these photos. Each directory has 31 to 66 files in it. Each directory has thumbnails and larger versions, plus a file called example.jpg. I dispatched the example.jpg files easily with: rm */example.jpg I initially thought that it would be easy to delete the thumbnails, but the problem is they are not consistently named. The typical pattern was photo1.jpg and photo1s.jpg. I did rm */photo*s.jpg but it turned out that some of the files named photoXs.jpg were actually larger, not smaller. Argh. So what I want to do is scan each directory for file size and delete (or move) the thumbnails. I initially thought I'd just ls -R every file, extract the size of each file and flag those under a threshold. The problem? In one directory the large version will be 1.1 MB and the thumb 200 KB. In another the large is 200 KB and the small 30 KB. Even worse, the files really are mostly named photo1.jpg, so simply putting them all in the same folder, sorting by size, and deleting in groups would not work without renaming them first, and if possible I'd prefer to keep them in their folders. I had almost resolved to just do this all manually, but then thought I'd ask here. How would you do this task?
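
One way to script this without a global size threshold is to compare sizes pair-by-pair within each directory, assuming the thumbnail and the full-size image share a name up to a trailing "s". A cautious Python sketch that moves (rather than deletes) the smaller file of each pair, with a dry-run mode; the root path and the pairing rule are assumptions:

```python
# Sketch: for each directory, pair photoN.jpg with photoNs.jpg and move the
# SMALLER of the two into a "thumbs" subfolder, regardless of which name it
# carries. Run with dry_run=True first and check the output.
import shutil
from pathlib import Path

ROOT = Path("photos")          # parent of the ~400 directories (placeholder)

def prune_thumbnails(root=ROOT, dry_run=True):
    for directory in (d for d in root.iterdir() if d.is_dir()):
        for small in directory.glob("*s.jpg"):
            big = directory / (small.stem[:-1] + ".jpg")   # photo1s -> photo1
            if not big.exists():
                continue
            # decide by size, not by name: keep the larger, move the smaller
            if big.stat().st_size >= small.stat().st_size:
                keep, to_move = big, small
            else:
                keep, to_move = small, big
            target = directory / "thumbs"
            print(f"{'DRY RUN: ' if dry_run else ''}"
                  f"keep {keep.name}, move {to_move.name} -> {target}/")
            if not dry_run:
                target.mkdir(exist_ok=True)
                shutil.move(str(to_move), str(target / to_move.name))

if __name__ == "__main__":
    prune_thumbnails(dry_run=True)   # flip to False once the plan looks right
```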

    Read the article

  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (PDF files). From these PDF files' pages a preview image is made on the fly and presented to the web app's users. Some PDFs might be on the large side; most will be under 50 MB, but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image of a large PDF is acceptable, but no more than a minute, let's say. Everything is running on one server for now, but soon the app will hit the server's limit on both storage and processing power. My idea to solve the problem: to deal with this situation I had the idea of having one or more PDF processing servers as needed, and one or more file storage servers. Both types of servers are mounted via NFS on the server on which the actual app runs. The app could then use Gearman to delegate PDF processing tasks to the processing servers. A processing server can mount the storage server, read the file stored there, process it and write its output back to that server. The servers I'm talking about will be Amazon EC2 instances. The web app returns a link to the resulting PDF preview image on the storage server that was used, which can then be used on the front end to show the image to the user. My questions: I have zero experience with apps that use multiple servers; is this idea viable, or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?
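
To illustrate the delegation part only: a rough sketch of a processing-node worker using the python-gearman library (the real app is PHP, so this only shows the worker pattern; the library API, queue name, NFS path and the ImageMagick call are all assumptions to adapt):

```python
# Sketch of a Gearman worker running on a processing server: it receives the
# NFS path of an uploaded PDF, renders a preview of page 1, and writes the
# result back to the shared storage mount. The job-submission side would
# live in the PHP app; names and paths here are placeholders.
import subprocess
import gearman                      # pip install gearman (python-gearman)

STORAGE_MOUNT = "/mnt/storage"      # NFS mount from the storage server

def make_preview(worker, job):
    pdf_path = job.data.decode() if isinstance(job.data, bytes) else job.data
    out_path = pdf_path.rsplit(".", 1)[0] + "_preview.png"
    # Render the first page with ImageMagick; any renderer would do here.
    subprocess.check_call(
        ["convert", "-density", "150", pdf_path + "[0]", out_path])
    return out_path                 # returned to the submitting web app

if __name__ == "__main__":
    w = gearman.GearmanWorker(["gearman.internal:4730"])   # placeholder host
    w.register_task("make_preview", make_preview)
    w.work()                        # block and process jobs forever
```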

    Read the article

  • Creating a Jenkins build farm in a hands-off manner?

    - by user183394
    My colleague and I have set up and run Jenkins on a KVM guest running Ubuntu 12.04 with good results for a while now. We are thinking about deploying a cluster of Jenkins CI hosts in a master/slave configuration, with the libvirt slave plugin to keep our hardware count low. Our environment is strictly Linux (CentOS, Scientific Linux, Fedora, and Ubuntu). Both of us are competent in setting up large clusters. We typically use tools like Cobbler plus a configuration management tool (Puppet, Chef, and the like) to set up a large number of machines (physical and/or virtual) hands-off (hundreds of nodes in less than an hour is typical). We would like to do the same for nodes running Jenkins, but the step-by-step guide doesn't give us any clues in this regard. I did see a multi-slave config plugin, but, being used to dealing with hundreds or more machines completely hands-off, clicking through the UI for many machines just doesn't feel right. Can someone point us to a reference that talks about how to set up a large cluster of Jenkins CI hosts in a more hands-off way?
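
One hands-off pattern is to let the configuration-management tool register slaves through Jenkins' remote API rather than the UI. A hedged sketch using the python-jenkins library; the create_node() parameters, launcher settings, URLs and credentials shown are assumptions to check against the installed version:

```python
# Sketch: register Jenkins slaves from a host list instead of clicking the
# UI for every machine. The master URL, credentials, host naming scheme and
# SSH launcher parameters are placeholders.
import jenkins                       # pip install python-jenkins

MASTER = "http://jenkins.internal:8080"
HOSTS = ["build-%03d.internal" % i for i in range(1, 101)]   # from cobbler, etc.

def register_slaves():
    server = jenkins.Jenkins(MASTER, username="admin", password="api-token")
    for host in HOSTS:
        if server.node_exists(host):
            continue                 # idempotent: skip already-registered nodes
        server.create_node(
            host,
            numExecutors=2,
            nodeDescription="auto-registered build slave",
            remoteFS="/var/lib/jenkins",
            labels="linux centos",
            launcher=jenkins.LAUNCHER_SSH,
            launcher_params={"host": host, "credentialsId": "jenkins-ssh"},
        )
        print("registered", host)

if __name__ == "__main__":
    register_slaves()
```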

    Read the article

  • Array on servers which receive several hundred GB of data a day

    - by Matthew
    This is hopefully a simple question. Right now we are deploying servers which will serve as data warehouses. I know that with RAID 5 the best practice is six disks per RAID 5 array. However, our plan is to use RAID 10 (both for performance and safety). We have a total of 14 disks (16 actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: several smaller RAID 1+0 arrays, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has any opinions I haven't thought of. Please note: this system was designed for RAID 1+0, so losing half of the raw storage capacity is not an issue; sorry I hadn't mentioned that initially. The concern is more whether we want one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0 arrays striped together using LVM. I know the best practice for higher RAID levels is to never use more than six disks in an array.

    Read the article

  • McAfee ePolicy-Orchestrator (ePO) - policy ownership by groups?

    - by bkr
    Is there a way to grant ownership of an ePO policy to a group? Alternatively, is there a permission that can be set that would allow owners of an ePO policy to add other owners to that policy without making them ePO admins? In the case I'm looking at, ePO is deployed within a large heterogeneous organization with a large amount of delegation, in the form of create/modify policy rights, to allow multiple IT departments to customize policies for their sections of the system tree. The problem is that each policy is owned by its creator. This causes problems when they leave (staff turnover) or when other people on their teams need the ability to modify the existing policy. Unfortunately, as far as I can see, only an ePO admin can change the owners; even the owner of the policy cannot add other owners (unless they are also an ePO admin). Ideally, I should be able to assign ownership of a policy to a group, since that would be easier to manage than me or another admin having to continually fix policy ownership or remove orphaned policies. Even just allowing the owners of the policies to add other owners would be sufficient. How are other people handling policy ownership when dealing with a large amount of delegated control of policies? Is there a way to delegate this out without making users full ePO admins?

    Read the article

  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine is performing any large disk operation (copying 10 GB files from one drive to another, copying similar files over the network, merging Hyper-V snapshots, compressing large files), the performance of the whole machine slows down terribly and everything becomes unresponsive. This is noticeable in any situation where the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfers if that would give me more responsiveness. System details: Dell OptiPlex 960, Core 2 Quad Q9650, 8 GB RAM, 2 SATA drives - 320 GB (ST3320418AS) and 1 TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this, and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken. Update: I guess I shouldn't have mentioned Hyper-V at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines running, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and forget about it; is there any way to manage IO priorities on Windows?

    Read the article

  • Decimal data type in Visual Basic 6.0

    - by Appu
    I need to do calculations (division or multiplication) with very large numbers. Currently I am using Double and running into round-off problems. I can do the same calculations accurately in C# using the Decimal type. I am looking for a way to do accurate calculations in VB 6.0, but I couldn't find a Decimal type there. What data type should be used for doing arithmetic calculations with large values without floating-point round-off problems? Thanks

    Read the article

  • Slow sync of files from iPad to webserver?

    - by MikeN
    I'm building a recording iPad application that will take some moderately large recordings on the iPad (5-10 minutes of full audio, roughly 5-10 MB in size). How can I sync such large files to my web server for use? I want the syncing to occur asynchronously in the background. Is there an existing library/utility to sync files in the megabyte range from an iPhone/iPad to a server in small chunks?

    Read the article

  • Distance Between GIS Points

    - by Paul
    I have a large number of GIS (latitude, longitude) coordinates, and I'd like to get the distance between them. Is there a service that will calculate the shortest path for me? I know about Google Maps, but I'd like something I can use from Python, and that can handle a large batch of requests at once. I'm looking for the driving distance, so a straight-line distance won't do. Thanks
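
A rough Python sketch of batching such lookups against a routing service; the public OSRM demo endpoint and its JSON layout are assumptions to verify (any routing API with a route or distance-matrix call would do), and for a genuinely large batch you would self-host the router or throttle the requests:

```python
# Sketch: compute driving distances for many coordinate pairs by calling a
# routing service. The OSRM demo server URL and response fields below are
# assumptions to double-check before relying on them.
import requests

OSRM = "http://router.project-osrm.org/route/v1/driving/{},{};{},{}"

def driving_distance_km(origin, destination):
    """origin/destination are (latitude, longitude) tuples."""
    lat1, lon1 = origin
    lat2, lon2 = destination
    url = OSRM.format(lon1, lat1, lon2, lat2)       # OSRM expects lon,lat order
    reply = requests.get(url, params={"overview": "false"}, timeout=10).json()
    return reply["routes"][0]["distance"] / 1000.0  # metres -> kilometres

if __name__ == "__main__":
    pairs = [((52.5200, 13.4050), (48.1351, 11.5820))]   # Berlin -> Munich
    for a, b in pairs:
        print(a, "->", b, f"{driving_distance_km(a, b):.1f} km")
```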

    Read the article

  • Change dropdown size for .NET touchscreen application

    - by calico-cat
    I'm trying to create a .NET touchscreen application and would like to be able to create a large dropdown. When I increase the font size, the dropdown stays the same width but increases in height, so it ends up tall and narrow. Is there a way to 'scale up' .NET controls so they are large enough for touchscreen applications?

    Read the article

  • Tooltip text in iReport

    - by Arun Singla
    Is there any functionality in iReport that lets us use tooltip text? I have a field named 'description' which contains very long text. Instead of displaying the whole text, I want to show 2-3 lines and the rest of the text as a tooltip. How could I achieve this in iReport?

    Read the article

  • How can I trace changes made to the DOM by JavaScript?

    - by Denis Hoctor
    I have a large website in development with a large amount of JS spread across different files. I have come across an issue where something is removing a class from the DOM: I can see the class when I view the source, but not in Firebug. Normally I would place some alert/console.log calls around the hasClass value, but because I have no idea where to start, I wanted to know if I can somehow trace the change back to where it occurs. Denis

    Read the article

  • Big numbers in C

    - by teehoo
    I need help working with very big numbers. According to Windows Calculator, 174^55 = 1.6990597648061509725749329578093e+123. How would I store this using C (C99 standard)? int main(){ long long int x = 174^55; //result is 153, because ^ is bitwise XOR in C, not exponentiation printf("%lld\n", x); } For those curious, it is for a school project where we are implementing the RSA cryptographic algorithm, which deals with raising large numbers to large powers for encryption/decryption.
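
In C the usual answer is an arbitrary-precision library such as GMP, since 174^55 is far larger than any built-in integer type. For comparison, a short Python sketch (Python integers are arbitrary precision) showing the exact value and the modular exponentiation that RSA actually needs; the toy key below is made up for illustration:

```python
# Sketch: Python integers are arbitrary precision, so the exact value is
# directly computable; pow(base, exp, mod) is the modular exponentiation
# that RSA encryption/decryption actually relies on.
def main():
    x = 174 ** 55
    print(x)                      # the full exact integer
    print(len(str(x)), "digits")  # 124 digits, matching 1.699...e+123

    n = 3233                      # toy RSA modulus (61 * 53), for illustration
    e, d = 17, 2753               # matching toy public/private exponents
    message = 174
    cipher = pow(message, e, n)   # modular exponentiation, no huge intermediates
    assert pow(cipher, d, n) == message

if __name__ == "__main__":
    main()
```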

    Read the article

  • How do I get .NET to garbage collect aggressively?

    - by mmr
    I have an application that is used in image processing, and I find myself typically allocating arrays in the 4000x4000 ushort size, as well as the occasional float and the like. Currently, the .NET framework tends to crash in this app apparently randomly, almost always with an out of memory error. 32mb is not a huge declaration, but if .NET is fragmenting memory, then it's very possible that such large continuous allocations aren't behaving as expected. Is there a way to tell the garbage collector to be more aggressive, or to defrag memory (if that's the problem)? I realize that there's the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling dll routines that use native code a lot, but I'm not sure. I've gone over that C++ code, and make sure that any memory I declare I delete, but still I get these C# crashes, so I'm pretty sure it's not there. I wonder if the C++ calls could be interfering with the GC, making it leave behind memory because it once interacted with a native call-- is that possible? If so, can I turn that functionality off? EDIT: Here is some very specific code that will cause the crash. According to this SO question, I do not need to be disposing of the BitmapSource objects here. Here is the naive version, no GC.Collects in it. It generally crashes on iteration 4 to 10 of the undo procedure. This code replaces the constructor in a blank WPF project, since I'm using WPF. I do the wackiness with the bitmapsource because of the limitations I explained in my answer to @dthorpe below as well as the requirements listed in this SO question. public partial class Window1 : Window { public Window1() { InitializeComponent(); //Attempts to create an OOM crash //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops int theRows = 4000, currRows; int theColumns = 4000, currCols; int theMaxChange = 30; int i; List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack byte[] displayBuffer = null;//the buffer used as a bitmap source BitmapSource theSource = null; for (i = 0; i < theMaxChange; i++) { currRows = theRows - i; currCols = theColumns - i; theList.Add(new ushort[(theRows - i) * (theColumns - i)]); displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create(currCols, currRows, 96, 96, PixelFormats.Gray8, null, displayBuffer, (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8); System.Console.WriteLine("Got to change " + i.ToString()); System.Threading.Thread.Sleep(100); } //should get here. If not, then theMaxChange is too large. //Now, go back up the undo stack. for (i = theMaxChange - 1; i >= 0; i--) { displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create((theColumns - i), (theRows - i), 96, 96, PixelFormats.Gray8, null, displayBuffer, ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8); System.Console.WriteLine("Got to undo change " + i.ToString()); System.Threading.Thread.Sleep(100); } } } Now, if I'm explicit in calling the garbage collector, I have to wrap the entire code in an outer loop to cause the OOM crash. 
For me, this tends to happen around x = 50 or so: public partial class Window1 : Window { public Window1() { InitializeComponent(); //Attempts to create an OOM crash //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops for (int x = 0; x < 1000; x++){ int theRows = 4000, currRows; int theColumns = 4000, currCols; int theMaxChange = 30; int i; List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack byte[] displayBuffer = null;//the buffer used as a bitmap source BitmapSource theSource = null; for (i = 0; i < theMaxChange; i++) { currRows = theRows - i; currCols = theColumns - i; theList.Add(new ushort[(theRows - i) * (theColumns - i)]); displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create(currCols, currRows, 96, 96, PixelFormats.Gray8, null, displayBuffer, (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8); } //should get here. If not, then theMaxChange is too large. //Now, go back up the undo stack. for (i = theMaxChange - 1; i >= 0; i--) { displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create((theColumns - i), (theRows - i), 96, 96, PixelFormats.Gray8, null, displayBuffer, ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8); GC.WaitForPendingFinalizers();//force gc to collect, because we're in scenario 2, lots of large random changes GC.Collect(); } System.Console.WriteLine("Got to changelist " + x.ToString()); System.Threading.Thread.Sleep(100); } } } If I'm mishandling memory in either scenario, if there's something I should spot with a profiler, let me know. That's a pretty simple routine there. Unfortunately, it looks like @Kevin's answer is right-- this is a bug in .NET and how .NET handles objects larger than 85k. This situation strikes me as exceedingly strange; could Powerpoint be rewritten in .NET with this kind of limitation, or any of the other Office suite applications? 85k does not seem to me to be a whole lot of space, and I'd also think that any program that uses so-called 'large' allocations frequently would become unstable within a matter of days to weeks when using .NET. EDIT: It looks like Kevin is right, this is a limitation of .NET's GC. For those who don't want to follow the entire thread, .NET has four GC heaps: gen0, gen1, gen2, and LOH (Large Object Heap). Everything that's 85k or smaller goes on one of the first three heaps, depending on creation time (moved from gen0 to gen1 to gen2, etc). Objects larger than 85k get placed on the LOH. The LOH is never compacted, so eventually, allocations of the type I'm doing will eventually cause an OOM error as objects get scattered about that memory space. We've found that moving to .NET 4.0 does help the problem somewhat, delaying the exception, but not preventing it. To be honest, this feels a bit like the 640k barrier-- 85k ought to be enough for any user application (to paraphrase this video of a discussion of the GC in .NET). For the record, Java does not exhibit this behavior with its GC.

    Read the article

  • I want to create an adjacency matrix using Python

    - by A A
    I have a very large data set: almost 450,000 lines with two columns. I want to compute the adjacency matrix using Python; I previously tried to do it in MATLAB, but it gave a memory error because of the size of the data. My data values also start from 100 and go up to 450,000. Can anyone help me with this issue, as I am new to Python? I have to first import the file into Python (from an Excel sheet or a plain text file) and then compute the adjacency matrix.
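
A minimal sketch, assuming the data is a plain-text edge list with two whitespace-separated node IDs per line (IDs between 100 and 450000, as described); building a SciPy sparse matrix avoids the dense 450000 x 450000 allocation that caused the MATLAB memory error. The file name is a placeholder:

```python
# Sketch: build a sparse adjacency matrix from a two-column edge list.
# "edges.txt" is a placeholder; each line is assumed to hold two
# whitespace-separated node IDs (values between 100 and 450000).
import numpy as np
from scipy.sparse import coo_matrix

def load_adjacency(path="edges.txt"):
    edges = np.loadtxt(path, dtype=np.int64)        # shape (n_edges, 2)
    src, dst = edges[:, 0], edges[:, 1]
    n = int(edges.max()) + 1                        # index nodes by their ID
    data = np.ones(len(src), dtype=np.int8)
    return coo_matrix((data, (src, dst)), shape=(n, n)).tocsr()

if __name__ == "__main__":
    adj = load_adjacency()
    print(adj.shape, "non-zeros:", adj.nnz)
    # a dense 450000 x 450000 matrix would need ~200 GB; the sparse format
    # only stores the ~450,000 edges actually present
```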

    Read the article

  • Open source smoothed particle hydrodynamics

    - by user325181
    Does anyone know of any open source libraries for particle-based, large-scale smoothed particle hydrodynamics (SPH)? I am looking for an easier way of simulating large-scale planetary body impacts with rotation. I was also wondering if you had any ideas on how to visualize the output from such a simulation. I have tried using IBM graphviz, but it is very difficult to work with. Any pointers would be appreciated. Thanks!

    Read the article

  • What other libraries or tools would you add to a Spring/Hibernate stack for improved rapid application development?

    - by CaptainAwesomePants
    My team at work maintains a fairly large webapp written on top of Spring and Hibernate. We're about to start making some fairly large-scale changes to the site, and we're enamored with the rapid application development speed allowed by some other frameworks, like Rails. We haven't really changed our stack much in the last year or two, and I'm wondering what new tools, approaches, and libraries might be out there to help speed up webapp development.

    Read the article

  • Image auto-resize to fit a div container

    - by Khou
    How do you auto-resize a large image to fit a smaller-width div container? An example can be seen here on stackoverflow.com: when an image inserted into the editor panel is too large for display, it is automatically resized.

    Read the article

  • C# function normal return value VS out or ref argument

    - by misha-r
    Hi people, I've got a method in C# that needs to return a very large array (or any other large data structure for that matter). Is there a performance gain in using a ref or out parameter instead of the standard return value? I.e., is there any performance or other gain in using void function(sometype input, ref largearray) over largearray function(sometype input)? Thanks in advance!

    Read the article

  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi I am working on a web project that is Spring+Hibernate+MySQL based. I am stuck at a point where I have to store images uploaded by a user into the database. Although I have written some code that works well for now, but I believe that things will mess up when the project would go live. Here's my domain class that carries the image bytes: @Entity public class Picture implements java.io.Serializable{ long id; byte[] data; ... // getters and setters } And here's my controller that saves the file on submit: public class PictureUploadFormController extends AbstractBaseFormController{ ... protected ModelAndView onSubmit(HttpServletRequest request, HttpServletResponse response, Object command, BindException errors) throws Exception{ MutlipartFile file; // getting MultipartFile from the command object ... // beginning hibernate transaction ... Picture p=new Picture(); p.setData(file.getBytes()); pictureDAO.makePersistent(p); // this method simply calls getSession().saveOrUpdate(p) // committing hiernate transaction ... } ... } Obviously a bad piece of code. Is there anyway I could use InputStream or Blob to save the data, instead of first loading all the bytes from the user into the memory and then pushing them into the database? I did some research on hibernate's support for Blob, and found this in Hibernate In Action book: java.sql.Blob and java.sql.Clob are the most efficient way to handle large objects in Java. Unfortunately, an instance of Blob or Clob is only useable until the JDBC transaction completes. So if your persistent class defines a property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you’ll be restricted in how instances of the class may be used. In particular, you won’t be able to use instances of that class as detached objects. Furthermore, many JDBC drivers don’t feature working support for java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large objects using the binary or text mapping type, assuming retrieval of the entire large object into memory isn’t a performance killer. Note you can find up-to-date design patterns and tips for large object usage on the Hibernate website, with tricks for particular platforms. Now apparently the Blob cannot be used, as it is not a good idea anyway, what else could be used to improve the performance? I couldn't find any up-to-date design pattern or any useful information on Hibernate website. So any help/recommendations from stackoverflowers will be much appreciated. Thanks

    Read the article
