Search Results

Search found 38088 results on 1524 pages for 'large scale project'.

  • Fast distributed filesystem for large amounts of data, with metadata in a database

    - by undefined hero
    My project uses several processing machines and one storage machine. Storage is currently organized as an MSSQL FileTable shared folder, and every file in storage has some metadata in the database. Processing machines execute tasks for which they need files from storage along with their metadata. After completing a task, a processing machine puts the resulting data back into storage; from there it is picked up by another processing machine, which also generates a file and puts it back into storage, and so on. Everything was fine, but as the number of processing machines increased, I found myself bottlenecked by the storage machine's hard drive performance. So I want the processing machines to put files into a distributed FS, to lift load from the storage machine, so that they can take data from each other and not only from the storage machine. Can you suggest a particular distributed FS that meets my needs? Or is there another way to solve this problem without one? The amount of data in the FS at any one time is on the order of several terabytes (storage can handle this, but the processors cannot). Data consistency is critical. The read/write policy is: once a file is written, it is constant and may only be removed, never modified. My current platform is Windows, but I'm ready to switch it if there is a substantially more convenient solution on another one.

  • Windows 2003 Storage Server Hanging on Large File Transfers

    - by user25272
    In one of our offices we have a Dell PowerVault 745N NAS device which acts as the main file server. It's running 32-bit Windows 2003 Storage Server SP2 with 3 GB RAM. The server holds around 60 users' HOME folders, which are mapped via AD. The office clients are a mix of XP SP3, Vista and Windows 7.

    Occasionally the server will completely hang when transferring large files. When the hang happens, the console becomes unresponsive, with only the mouse active and blank wallpaper. Sometimes stopping the copy frees the server, sometimes not. The hang can last around 20 minutes. During this time other servers also become unresponsive, with blank wallpaper at the console. If you do manage to get onto another server, the taskbar and run commands are unresponsive. This sometimes extends to the client computers as well, with explorer crashing; I'm guessing this is due to the HOME folder mapping. Eventually the NAS server will free up and everything will be back to normal.

    The server is configured as follows:
      PERC 4/DC:
        DATA 2 - 12 SCSI HDD - RAID5
        SHADOWCOPY 2 SCSI HDD - RAID1
      CERC SATA:
        DATA 11 4 SATA HDD - RAID5
        OS 4 SATA HDD - RAID5

    All the drivers and firmware are up to date. I've been through all the diagnostics with Dell and the hardware has come up clean, including full HDD tests on the arrays. The server has NOD32 installed as the AV, but the hanging still happens with it uninstalled. There are no errors in the event log when this happens, and we don't have any errors logged on any of our ProCurve switches. DNS is fine on the domain, and AD, from what I can tell, is running happily. There are no DFS or NFS shares set up either; all the shares are standard Windows shares. So far I have:
      - Unchecked the "allow the computer to turn off this device to save power" box under Power Management on the NIC.
      - Set Link Speed and Duplex to Auto-negotiate 1000.
      - Increased the Receive Descriptors buffer from 256 to 352 (reserves more CPU resource for handling data).

    I've run network traces using Network Monitor and have found the following:

      417 8.078125 {SMB:192, NbtSS:25, TCP:24, IPv4:23} 192.168.2.244 192.168.5.35 SMB
      SMB:R; Nt Create Andx - NT Status: System - Error, Code = (52) STATUS_OBJECT_NAME_NOT_FOUND

    I've tried different cabling, NICs and switch ports, all with the same result. Transferring files from other servers on the domain is fine. The only thing I haven't done is run CHKDSK on the drives to look for file system errors. On the Vista clients I have also run "netsh interface tcp set global autotuning=disabled" with no result. Could it be that the server has a faulty drive, or that the I/O is too much for it to handle? Any ideas why the hang would cause issues with the other servers on the LAN? Many thanks.

  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount (about 50 GB) of data over my network from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface.

    To do the copy, I log on to the machine with the data and run the following commands (call the remote machine "destination.host"):

      $ cd /path/to/data
      $ tar -cvf - datafolder/ | ssh [email protected] "cat > ~/data.tar"

    This runs for a while and then suddenly stops after transferring somewhere from 2-6 GB. The terminal on the "source.host" machine displays a "Write failed: broken pipe" error. The odd part is this: after this occurs, the "source.host" machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from the "source.host" machine, and I cannot ping the "source.host" machine from any other host on the network. I am equally unable to access any of the web services hosted on "source.host". Running ifconfig on "source.host" shows the network adapter to be up and running as usual, with the correct IP address and everything.

    I tried restarting the networking service with

      $ /etc/init.d/networking restart

    but the problem does not go away. Restarting the machine makes it capable of talking to the network again -- it can ping and be pinged by other hosts, and the web services are also accessible and usable as normal -- but attempting the copy operation again results in the same failure, requiring another restart. As an experiment, I tried replacing the tar/ssh pipeline above with a straight scp:

      $ scp -r datafolder/ [email protected]:~

    but to no avail. Thinking that the issue might have to do with the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with

      # echo 12582911 > /proc/sys/net/core/wmem_max

    but this also had no effect. I'm guessing at this point that it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!

  • Finding co-maintainers for open source projects

    - by Mike Samuel
    I have a number of open-source projects that have gotten some significant usage and would like to find co-maintainers so that I am not a bottleneck when it comes to maintenance and support requests and to get other perspectives on how the project should evolve. Where should I look for co-maintainers, what should I look for in a co-maintainer, and how should I go about bringing them up to speed on the code and maintainer responsibilities?

  • When to delete a branch in Git

    - by Jo-Herman Haugholt
    I have a script project I've been managing with Git. Besides two main branches, several minor branches have been introduced over time to cover minor features, tweaks or temporary changes. Some of these branches are nearing end-of-life, and I won't be updating them any more. What are the different philosophies for handling branches like this? Should they be removed, or left in the repository unmaintained? If they're left, won't I end up with a cluttered repository?
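
    For concreteness, the usual options look something like the following sketch ("feature/old-tweak" is a made-up branch name):

      $ git branch -d feature/old-tweak             # delete a merged branch (refuses if unmerged)
      $ git branch -D feature/old-tweak             # force-delete an abandoned, unmerged branch
      $ git tag archive/old-tweak feature/old-tweak # keep a named pointer before deleting
      $ git push origin --delete feature/old-tweak  # remove the branch on the remote too

    Note that deleting a merged branch removes only the label; its commits stay reachable through the merge history.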

  • When Is It Acceptable to NOT Fix Broken Windows?

    - by Bullines
    In reference to broken windows... Are there times when refactoring is best left for a future activity? For example, suppose a project to add some new features to an existing internal system is assigned to a team that has not worked with the system until now, and is given a short timeline to work with. Can it ever be justifiable to defer major refactorings of the existing code for the sake of making the deadline in this scenario?

  • How do great enterprises estimate software development efforts?

    - by Ed Pichler
    I was learning about how to estimate software development effort, and I would like to know how successful enterprises estimate their projects. How do they work out how long a system will take to develop? What are the modern techniques for doing this, and which techniques do these enterprises actually use? Articles and interviews with employees of those enterprises would be interesting. I asked this on the Project Management Stack Exchange site too.

  • INI files or Registry or personal files

    - by Shirish11
    I want to save the configuration of my project, which includes:
      - Screen size
      - Screen position
      - Folder paths
      - User settings
    and so on. The standard places where you can save these configuration values are:
      - the Registry
      - INI files
      - personal files (like *.cfg)
    I would like to know how you choose between these places. Also, are there any pros and cons to using any of them?
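
    For illustration, a minimal sketch of the INI-file option using Python's standard configparser module (file name, sections and keys are all made up):

      import configparser

      config = configparser.ConfigParser()
      config["screen"] = {"width": "1280", "height": "720", "x": "100", "y": "80"}
      config["paths"] = {"data_folder": "C:/ProjectData"}

      # write the settings out
      with open("settings.ini", "w") as f:
          config.write(f)

      # read them back later
      config.read("settings.ini")
      width = config["screen"].getint("width")

    One practical difference between the options: INI and *.cfg files can be copied along with the project or edited by hand, while Registry values need export/import tooling.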

  • What to do when you inherit an unmaintainable codebase?

    - by GordonM
    I'm currently working at a company with two other PHP developers aside from me, and one junior developer. The senior developer who originally built the system we're all working on has resigned and will only be here for a matter of weeks. The other developer, who is the only other person who knows anything about the system, is unhappy here and is looking for a new job. I'm in very real danger of being left behind as the only experienced developer on this codebase.

    Since I joined this company I've tried to push for better coding standards, project documentation, etc., and I do think I've made some headway, but the vast majority of the code is simply unmaintainable and uncommented. A lot of this has to do with the need to get things done fast at points in the project before I joined, but now the technical debt is enormous, even with the two developers who do understand the system on board. Without them, it will simply be impossible to do anything with it. The senior developer is trying to at least comment all his code before he leaves, but I think the codebase is simply too vast to properly document in the remaining time. Besides, even when he does comment, it still doesn't make things as clear as it could.

    If the system were better organized and documented, I could probably start refactoring it incrementally, but the whole thing is so tightly coupled that it's very difficult to make any changes in one module without unintended knock-on effects in other modules. Naturally, there are no unit tests either, and I honestly don't think this codebase could be unit tested anyway, given how it's implemented. There also never seems to be enough time to get things done, even with three developers and one junior. With one developer and one junior, neither of whom had significant input into the early design of the system, I don't see how we could possibly keep the current system working, implement new features as needed, and develop a better-organized replacement for the current codebase all at once.

    Is there an approach I can take to cope with this situation, or should I be getting my own CV in order as well at this point? If it were just me and the junior developer who would be left, I'd go for the latter option almost without question. However, there's a team of front-end developers and content managers as well, and I'm worried about what would become of them if I left and put them in a position where there were no developers at all. The department might be closed down altogether under such circumstances, and then I'd have their unemployment on my conscience as well!

  • What is an “implementation plan”?

    - by Abe Miessler
    I was recently given the task of creating an implementation plan document. When I asked for an example I could look at, I was told to look at the Project Plan that had already been created and use that as a base. I'm still a bit confused about what I should be creating. Can anyone point me to a good example, or to something that explains what an implementation plan is and, more importantly, what it should contain?

  • How can teams collaborate on Unity 3D projects?

    - by nosferat
    With a friend of mine, we are planning to develop a small game to get the hang of game development and teamwork. But since Unity 3D barely supports version control (or at least the free version lacks it), we have no idea how to efficiently manage teamwork. Sharing out tasks in a small project also seems like a challenge for us. I would also appreciate any advice related to teamwork that could be useful for beginner indie developers. :)
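
    One common workaround is to keep the project folder in an external VCS such as Git and exclude Unity's generated folders; a minimal sketch of such a .gitignore (the exact folder set is illustrative):

      # illustrative .gitignore for a Unity project folder
      Library/
      Temp/
      obj/

    Since scene and asset files are stored as binary in the free version and don't merge, small teams usually also agree that only one person edits a given scene at a time.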

  • What are some known approaches to collaborative schema design?

    - by Omega
    If a project has multiple developers, each with useful knowledge and experience that can aid in the design of a schema, what are some known processes for collaboratively planning that schema out? Are there any types of meetings that are useful for this purpose? This would be in contrast to circumstances where projects are started and models are developed unilaterally and by coincidence, rather than as part of a structured understanding of the domain.

  • Find the "largest" dense sub matrix in a large sparse matrix

    - by BCS
    Given a large sparse matrix (say 10k+ by 1M+), I need to find a subset, not necessarily contiguous, of the rows and columns that forms a dense matrix (all non-zero elements). I want this submatrix to be as large as possible (not the largest sum, but the largest number of elements) within some aspect ratio constraints. Are there any known exact or approximate solutions to this problem? A quick scan on Google seems to give a lot of close-but-not-exactly results. What terms should I be looking for?

    Edit: Just to clarify, the submatrix need not be contiguous. In fact, the row and column order is completely arbitrary, so adjacency is completely irrelevant.

    A thought based on Chad Okere's idea:
      1. Order the rows from largest count to smallest count (not necessary but might help perf)
      2. Select two rows that have a "large" overlap
      3. Add all other rows that won't reduce the overlap
      4. Record that set
      5. Add whatever row reduces the overlap by the least
      6. Repeat at #3 until the result gets too small
      7. Start over at #2 with a different starting pair
      8. Continue until you decide the result is good enough
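
    A rough Python sketch of steps 1-4 of that heuristic (the relax-and-retry loop in steps 5-7 is omitted), assuming the matrix arrives as a list of sets of non-zero column indices; all names are illustrative:

      # rows: list of sets, each holding the non-zero column indices of one row
      def dense_submatrix(rows, min_cols=2):
          # step 1: sort row indices by non-zero count, densest first
          order = sorted(range(len(rows)), key=lambda i: -len(rows[i]))
          best_rows, best_cols, best_size = [], set(), 0
          for a in range(len(order)):
              for b in range(a + 1, len(order)):
                  i, j = order[a], order[b]
                  cols = rows[i] & rows[j]          # step 2: shared non-zero columns
                  if len(cols) < min_cols:
                      continue
                  chosen = [i, j]
                  # step 3: add every other row that keeps the overlap intact
                  for k in order:
                      if k not in chosen and cols <= rows[k]:
                          chosen.append(k)
                  size = len(chosen) * len(cols)    # element count of the dense block
                  if size > best_size:              # step 4: record the best set seen
                      best_rows, best_cols, best_size = chosen, cols, size
          return best_rows, best_cols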

  • Maintaining a large list in Python

    - by Oren
    I need to maintain a large list of Python-pickleable objects that will quickly execute the following algorithm. The list has 4 pointers into it and must support:
      - Reading/writing each of the 4 pointed nodes
      - Inserting new nodes before the pointed nodes
      - Increasing a pointer by 1
      - Popping nodes from the beginning of the list (without overrunning the pointers)
      - Adding nodes at the end of the list

    The list can be very large (300-400 MB), so it shouldn't be held entirely in RAM. The nodes are small (100-200 bytes) but can contain various types of data. The most efficient approach, in my opinion, is to implement some paging mechanism: on the hard drive there would be a database holding a linked list of pages, of which at most 5 are loaded in memory at any moment. On insertion, if some max page size is reached, the page is split into 2 smaller pages; on pointer increment, if a pointer moves beyond its loaded page, the following page is loaded instead. This is a good solution, but I don't have the time to write it, especially if I want to make it generic and implement all the Python list features.

    Using SQL tables is not good either, since I'd need to implement a complex index-key mechanism. I tried ZODB and zc.blist, which implement a BTree-based list that can be stored in a ZODB database file, but I don't know how to configure them so the above features will run in reasonable time. I don't need all the multi-threading/transaction features; no one else will touch the database file except my single-threaded program. Can anyone explain how to configure ZODB/zc.blist so the above features will run fast, or show me a different large-list implementation?
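
    To make the paging idea concrete, here is a minimal sketch of the page-split mechanism described above, with pages stored via Python's standard shelve module; the class name, sizes and eviction policy are all made up, and the pointer bookkeeping and final flush are left out:

      import shelve

      MAX_PAGE = 1000      # nodes per page before a split
      CACHE_LIMIT = 5      # pages kept in RAM at once

      class PagedList:
          def __init__(self, path):
              self.db = shelve.open(path)   # page id (str) -> list of nodes
              self.db.setdefault("0", [])
              self.page_ids = ["0"]         # ordered page ids
              self.cache = {}               # page id -> page loaded in RAM
              self.next_id = 1

          def _load(self, pid):
              if pid not in self.cache:
                  if len(self.cache) >= CACHE_LIMIT:
                      old_pid, old_page = self.cache.popitem()
                      self.db[old_pid] = old_page        # write evicted page to disk
                  self.cache[pid] = self.db[pid]
              return self.cache[pid]

          def insert(self, pid, pos, node):
              page = self._load(pid)
              page.insert(pos, node)
              if len(page) > MAX_PAGE:                   # split an oversized page in two
                  mid = len(page) // 2
                  new_pid = str(self.next_id)
                  self.next_id += 1
                  self.cache[new_pid] = page[mid:]
                  del page[mid:]
                  self.page_ids.insert(self.page_ids.index(pid) + 1, new_pid)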

  • Google Code Jam 2010 Large DataSets Take Too Long to Submit

    - by Travis
    Hey guys, I'm participating in the 2010 Code Jam, and I solved two of the problems for the small data sets, but I'm not even close to solving the large data sets in the 8-minute time frame. I'm wondering if anyone out there has solved the large data set: What hardware were you running on? What language were you using? What performance tuning techniques did you apply to make your code run as fast as possible? I'm writing the solutions in Ruby, which is not my day-to-day language, and executing them on my MacBook Pro. My solutions for problem A and problem C are on GitHub at http://github.com/tjboudreaux/codejam2010. I'd appreciate any suggestions you may have. FWIW, I have a lot of experience in C++ from college, my primary language is PHP, and my "sandbox" language is Ruby. Was I just a bit ambitious in taking a shot at this in Ruby, not knowing where the language struggles for performance, or does anyone see anything that's a red flag as to why I can't complete the large dataset in time to submit?

  • WCF, IIS6.0 (413) Request Entity Too Large.

    - by Andrew Kalashnikov
    Hello, guys. I've got an annoying problem. I've got a WCF service (basicHttpBinding with Transport security over HTTPS). The service implements a contract consisting of two methods: LoadData and GetData. GetData works fine: my client receives packages of around 2 MB without problems. But when I try to load data via

      bool LoadData(Stream data);  // signature of the method

    I get (413) Request Entity Too Large. Stack trace:

      Server stack trace:
        System.ServiceModel.Channels.HttpChannelUtilities.ValidateRequestReplyResponse(HttpWebRequest request, HttpWebResponse response, HttpChannelFactory factory, WebException responseException, ChannelBinding channelBinding)
        System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
        System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
        System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)

    I tried the steps at http://blogs.msdn.com/jiruss/archive/2007/04/13/http-413-request-entity-too-large-can-t-upload-large-files-using-iis6.aspx, but they don't work. My server is Windows 2003 with IIS 6.0. Please help.
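
    For reference, the binding-side quota that usually governs this error looks like the following sketch (the binding name and size are illustrative, not a confirmed fix for this setup); since the service runs over HTTPS, the IIS 6 UploadReadAheadSize metabase setting covered in the linked post may need raising as well:

      <basicHttpBinding>
        <binding name="largeUpload"
                 transferMode="Streamed"
                 maxReceivedMessageSize="67108864"> <!-- 64 MB; the default is 64 KB -->
          <security mode="Transport" />
        </binding>
      </basicHttpBinding>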

  • Copy Small Bitmaps onto a Large Bitmap with Transparency Blend: What is faster than graphics.DrawImage?

    - by Glenn
    I have identified this call as a bottleneck in a high-pressure function:

      graphics.DrawImage(smallBitmap, x, y);

    Is there a faster way to blend small semi-transparent bitmaps into a larger semi-transparent one? Example usage:

      XY[] locations = GetLocs();
      Bitmap[] bitmaps = GetBmps(); // small images; sizes vary, approx 30px x 30px
      using (Bitmap large = new Bitmap(500, 500, PixelFormat.Format32bppPArgb))
      using (Graphics largeGraphics = Graphics.FromImage(large))
      {
          for (var i = 0; i < largeNumber; i++)
          {
              // this is the bottleneck
              largeGraphics.DrawImage(bitmaps[i], locations[i].x, locations[i].y);
          }
          var done = new MemoryStream();
          large.Save(done, ImageFormat.Png); // save before the using block disposes the bitmap
          done.Position = 0;
          return (done);
      }

    The DrawImage calls take small 32bppPArgb bitmaps and copy them into a larger bitmap at locations that vary, and the small bitmaps might only partially overlap the larger bitmap's visible area. Both images have semi-transparent contents that get blended by DrawImage in a way that is important to the output. I've done some testing with BitBlt but haven't seen significant speed improvement, and the alpha blending didn't come out the same in my tests. I'm open to just about any method, including a better call to BitBlt or unsafe C# code.

  • Sorting a very large text file in Java

    - by Alice
    Hi, I have a large text file I need to sort in Java. The format is:

      word [tab] frequency [new line]

    The algorithm for sorting is:
      1. Read some of the file, filtering for purely alphabetic words. Once you have X number of alphabetic words, call Collections.sort and write the result to a file. Repeat until you have finished reading the file.
      2. Start reading two sorted files, comparing line by line for the word with the higher frequency, and writing at the same time to a new file, so as not to load much into memory.
      3. Repeat until all files are merged into one large file.

    Right now I've divided the large file into smaller ones (sorted by descending frequency) with 10,000 lines each. I know I need to somehow merge these files back together, but I'm not sure how to go about this. I've created a LinkedList to keep track of all the files created. The algorithm says to compare each line in the two files, but I've tried a case where, say, file1 = 8,6,5,3,1 and file2 = 9,8,8,8,8. If I compare them line by line, I get file3 = 9,8,8,6,8,5,8,3,8,1, which is incorrectly sorted (they should be in decreasing order). I think I'm misunderstanding some part of the algorithm. If someone could point out what I should do instead, I'd greatly appreciate it. Thanks.

    Edit: Yes, this is an assignment. We aren't allowed to increase memory, unfortunately :(
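
    For reference, the usual fix is to advance only the file whose line was just written out, rather than consuming a line from each file on every comparison. A sketch of that k-way merge in Python (the same shape applies in Java; names are illustrative), for files already sorted by descending frequency:

      import heapq

      def merge_sorted_files(paths, out_path):
          files = [open(p) for p in paths]
          heap = []
          for idx, f in enumerate(files):
              line = f.readline()
              if line:
                  freq = int(line.split("\t")[1])
                  heapq.heappush(heap, (-freq, idx, line))  # negate: heapq is a min-heap
          with open(out_path, "w") as out:
              while heap:
                  _, idx, line = heapq.heappop(heap)
                  out.write(line)                # emit the current highest frequency
                  nxt = files[idx].readline()    # refill from that one file only
                  if nxt:
                      freq = int(nxt.split("\t")[1])
                      heapq.heappush(heap, (-freq, idx, nxt))
          for f in files:
              f.close()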

  • What, if anything, to do about bow-shaped burndowns?

    - by Karl Bielefeldt
    I've started to notice a recurring pattern to our team's burndown charts, which I call a "bowstring" pattern. The ideal line is the "string" and the actual line starts out relatively flat, then curves down to meet the target like a bow. My theory on why they look like this is that toward the beginning of the story, we are doing a lot of debugging or exploratory work that is difficult to estimate remaining work for. Sometimes it even goes up a little as we discover a task is more difficult once we get into it. Then we get into implementation and test which is more predictable, hence the curving down graph. Note I'm not talking about a big scale like BDUF, just the natural short-term constraint that you have to find the bug before you can fix it, coupled with the fact that stories are most likely to start toward the beginning of a two-week iteration. Is this a common occurrence among scrum teams? Do people see it as a problem? If so, what is the root cause and some techniques to deal with it?

  • Is there a good web-based project management app with scheduling?

    - by Andykiteman
    Ideally something as intuitive as Basecamp, with good usability and accessibility. The best I've seen is huddle.net, but it's still weak in several areas. Must have:
      - Projects: ability to add people and tasks, and to schedule tasks to people
      - Calendar: showing when people are busy or available
      - Role-based access: admins and non-admins
      - History: ability to look back at all history
    Has anyone seen a product that's worth a look?

    Clarification: It must be hosted, i.e. not require my own hardware or IT staff. I'm looking for an app to schedule people with specific tasks at specific times and monitor the outcomes. I'm already using Mingle (for stories), Basecamp (to run the business) and Exceptional (to track bugs). I'm not looking for a bug-tracking system or a story management application (I already looked at VersionOne, but chose Mingle due to its nicer UI).

    My response to the answer being auto-selected: I still don't feel the answer (chosen for me) is the correct one. It's a useful list but little more, and doesn't provide the solution I was seeking.

  • MSI not running the current version of the code in .NET Setup project.

    - by Swami
    I'm not sure what happened, but my installer is in a weird state: when I install my MSI, it doesn't seem to be running the current version of the code (I'm using custom actions). I verified this by placing some MessageBox.Show calls, and sure enough, they are not popping up. It's possible I had a few unsuccessful installs previously which may have put the installer in a bad state... but how do I go about resolving this? Thanks.
