Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • How to map an Entity Data Model conceptual model property to a storage model column using the "Serialized LOB" pattern?

    - by codekaizen
    I have a conceptual model in EDM where one of the entities has a property which is essentially a big value object whose properties aren't really useful as columns in the data model. I'd like to apply the Serialized LOB pattern to it so that it fits into a 192-byte binary column. How do I map this in EDM v4? Is it even possible at this time? Actually, is it possible in any ORM?
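
    On the last question: yes, at least some ORMs support this directly. A minimal sketch of the Serialized LOB pattern in SQLAlchemy (Python), where a custom type serializes a value object into a binary column; the class and column names are invented, and pickle stands in for whatever compact encoding actually fits in 192 bytes (SQLAlchemy also ships a built-in PickleType that does the same job):

        # Serialized LOB sketch: a value object pickled into a binary column.
        import pickle
        from sqlalchemy import Column, Integer, LargeBinary, TypeDecorator
        from sqlalchemy.orm import declarative_base

        Base = declarative_base()

        class PickledValue(TypeDecorator):
            """Serializes any picklable value object to/from a binary column."""
            impl = LargeBinary
            cache_ok = True

            def process_bind_param(self, value, dialect):
                return None if value is None else pickle.dumps(value)

            def process_result_value(self, value, dialect):
                return None if value is None else pickle.loads(value)

        class Entity(Base):
            __tablename__ = 'entity'          # hypothetical table
            id = Column(Integer, primary_key=True)
            blob = Column(PickledValue(192))  # 192-byte binary column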

    Read the article

  • What changes when your input is giga/terabyte-sized?

    - by Wang
    I took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny. I write Python, so I've spent the last few hours reading about HDF5, NumPy, and PyTables, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. For example, someone pointed out that with larger data sets it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind. What other assumptions have I been relying on in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python-specific.)
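
    One habit that changes at this scale is iterating over slices of a dataset instead of reading arrays whole. A minimal sketch of the pattern with h5py (the file name data.h5 and dataset name haplotypes are invented for illustration):

        # Process an HDF5 dataset in row chunks so memory use stays bounded.
        # 'data.h5' and the dataset name 'haplotypes' are hypothetical.
        import h5py

        CHUNK_ROWS = 1000

        with h5py.File('data.h5', 'r') as f:
            dset = f['haplotypes']       # just a handle; nothing is read yet
            total = 0.0
            for start in range(0, dset.shape[0], CHUNK_ROWS):
                block = dset[start:start + CHUNK_ROWS, :]  # only this slice is loaded
                total += block.sum()     # any per-chunk computation goes here
        print(total)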

    Read the article

  • What's the best (most efficient) way in ASP.NET to return a whole page into tabbed content?

    - by ijjo
    What I want to do is: every time I click on a tab, the content area is replaced by pretty much a whole new page. I don't want a full page load, so I want to do it with AJAX, but I'm used to sending back small JSON data via page methods. I'm not sure how I would construct a whole new page and return that via AJAX; I would like to simply assign the whole returned content to a div and be done with it. What's the best way to do this with the least amount of overhead (I know there are some inefficient ways the ScriptManager does AJAX)? Or is it better to load the tabbed content in an iframe? FYI, I'm already using jQuery to call lightweight page methods on my ASP.NET page, and that works great.

    Read the article

  • Load big XML files into a MySQL database (PHP)

    - by Kees
    Hello there. For a new project I need to load big XML files (200MB+) into a MySQL database. There are +- 20 feeds I need to match with that (not all fields are the same). Now when I try to read the XML I get this error: Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 171296569 bytes) in E:\UsbWebserver\Root\****\application\libraries\MY_xml.php on line 21. Is there an easy solution for this? It's not possible to get the feed in parts of a few MB each. Thank you very much! P.S. Does somebody have an idea for matching XML feeds easily?
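
    The usual fix is to stream the feed instead of loading it whole; in PHP that means the XMLReader class rather than SimpleXML/DOM. The shape of the approach, sketched here with Python's iterparse for brevity (the file name and element name are invented):

        # Stream a huge XML feed element-by-element; memory stays flat because
        # each parsed element is discarded after use. Names are placeholders.
        import xml.etree.ElementTree as ET

        for event, elem in ET.iterparse('feed.xml', events=('end',)):
            if elem.tag == 'item':
                row = {child.tag: child.text for child in elem}
                # ... INSERT `row` into MySQL here, ideally in batches ...
                elem.clear()   # drop the parsed subtree to free its memory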

    Read the article

  • How can I better implement the A* algorithm with a very large set of nodes?

    - by Stephen
    I'm making a game with nodejs in which many enemies must converge on the player as the player moves around a relatively open space (right now it is an open field with few obstacles, but eventually there may be some small buildings in the field with 1 or 2 rooms). It's a multiplayer game using websockets, so the server needs to keep track of enemies and players. I found this javascript A* library which I've modified to be used on the server as a nodejs module. The library utilizes a Binary Heap to track the nodes for the algorithm, so it should be pretty fast (and indeed, with a small grid, say 100x100 it is lightning fast). The problem is that my game is not really tile-based. As the player moves around the map, he is moving on a more or less 1-to-1 per-pixel coordinate system (the player can move in 8 directions, 1 or 2 pixels at a time). In preliminary tests, on an 800x600 field, the path-finding can take anywhere from 400 to 1000 ms. Multiply that by 10 enemies and the game starts to get pretty choppy. I have already set it up so that each enemy will only do a path-finding call once per second or even as slow as once every 2 seconds (they have to keep updating their path because the players can move freely). But even with this long interval, there are noticeable lag spikes or chops every couple of seconds as the enemies update their paths. I'm willing to approach the problem of path-finding differently, if there's another option. I'm assuming that the real problem is the enormous grid (800x600). It also occurs to me that maybe the large arrays are to blame, as I've read that V8 has trouble with large arrays.
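
    One option that keeps the existing library: run A* on a much coarser navigation grid than the pixel grid, then move enemies in straight lines between the returned waypoints. With 20 px cells, an 800x600 field becomes a 40x30 grid (1,200 nodes instead of 480,000). A rough sketch of the coordinate mapping, in Python for brevity (the cell size and the find_path callback are assumptions, not part of the library):

        # Map pixel coordinates onto a coarse cell grid for path-finding.
        CELL = 20  # arbitrary cell size; tune to the smallest obstacle

        def to_cell(x, y):
            return x // CELL, y // CELL

        def to_pixel(cx, cy):
            return cx * CELL + CELL // 2, cy * CELL + CELL // 2  # cell centre

        def coarse_path(start_xy, goal_xy, find_path):
            # find_path is whatever A* routine you already use, run on the
            # small grid; the waypoints come back in pixel coordinates.
            cells = find_path(to_cell(*start_xy), to_cell(*goal_xy))
            return [to_pixel(cx, cy) for cx, cy in cells]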

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX, or DbContext if we go code-first). I can think of a few strategies right off the bat:

    Split by schema. We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables/views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward.

    Split by intent. Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or do those go in a separate model that is used by every project?

    Don't split at all - one giant model. This is obviously simple from a development perspective, but from my research and my intuition it seems like it could perform terribly at design time, at compile time, and possibly at run time.

    What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs, such as NHibernate? If so, have they come up with any better solutions than EF?

    Read the article

  • How can I use Pixel Bender (PBJ) in ActionScript 3 on large Vectors for fast calculations?

    - by Arthur Wulf White
    Remember my old question: 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing? I figured I could use a Pixel Bender shader to do the computation for any large group of elements in a game to save on processing time. At least it's a theory worth checking. I also read this question: Pass large array to pixel shader, which I'm guessing is about accomplishing the same thing in a different language. I read this tutorial: http://unitzeroone.com/blog/2009/03/18/flash-10-massive-amounts-of-3d-particles-with-alchemy-source-included/ I am attempting to do some tests. Here is some of the code:

        private const SIZE : int = Math.pow(10, 5);
        private var testVectorNum : Vector.<Number>;

        private function testShader() : void
        {
            shader.data.ab.value = [1.0, 8.0];
            shader.data.src.input = testVectorNum;
            shader.data.src.width = SIZE / 400;
            shader.data.src.height = 100;
            shaderJob = new ShaderJob(shader, testVectorNum, SIZE / 4, 1);
            var time : int = getTimer(), i : int = 0;
            shaderJob.start(true);
            trace("TEST1 : ", getTimer() - time);
        }

    The problem is that I keep getting an error saying:

        [Fault] exception, information=Error: Error #1000: The system is out of memory.

    Update: I managed to partially work around the problem by converting the vector into BitmapData. (Using this technique I still get a speed boost of 3x using Pixel Bender.)

        private function testShader() : void
        {
            shader.data.ab.value = [1.0, 8.0];
            var time : int = getTimer(), i : int = 0;
            testBitmapData.setVector(testBitmapData.rect, testVectorInt);
            shader.data.src.input = testBitmapData;
            shaderJob = new ShaderJob(shader, testBitmapData);
            shaderJob.start(true);
            testVectorInt = testBitmapData.getVector(testBitmapData.rect);
            trace("TEST1 : ", getTimer() - time);
        }

    Read the article

  • Does a large number of internal broken links affect SEO?

    - by TheBigK
    We have a WordPress blog and had the Disqus plugin installed for several months. Around late August this year, the plugin created a ton of URLs that linked to non-existent locations on our website. For example, for the correct URL domain.com/correct-URL/, Disqus created:

        domain.com/correct-URL/344322/ - throws 404
        domain.com/correct-URL/433466/ - throws 404

    So essentially, Google found a LARGE number of broken links that pointed to unknown locations on our own domain. As the count of those errors (404s) rose, our site suffered a massive drop in traffic, and the crawl rate dropped to 10% of what it was earlier. I wish to know: can a large number of internal broken links (we have over 99k of them) cause rankings to drop? I've fixed the issue in one go by creating 301 redirects from each bad URL to the correct URL and removing Disqus. Google, however, only drops the count by ~1000 daily as I mark errors as 'fixed' in Google Webmaster Tools. Is there any way to speed this up? Should I set a custom crawl rate of 'Fast' in GWT to make Google crawl our website faster? I'd appreciate your inputs and experience sharing.
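
    Since every bad URL here is just the good one plus a numeric suffix, the per-URL redirects can usually be collapsed into a single pattern rule. A hedged example for Apache's mod_alias (the pattern is inferred from the sample URLs above, so verify it against real traffic before deploying):

        RedirectMatch 301 ^/([^/]+)/[0-9]+/$ /$1/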

    Read the article

  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering whether a framework is necessary, or a must-have, if I plan to make a large website. Large website could mean a lot of things: in other words, multiple dynamic web pages (40-50 dynamic pages, MySQL content) and a lot of visitors (+- a million hits per month). The site will be hosted in a dedicated server environment. I know that a framework could simplify coding for a developer team, that it includes libraries, and that it has a lot of advantages. But I just feel that I don't need that. I think that learning it, managing it and installing it would take more time, and I could use that time to code. I write PHP the simplest way I can (with performance in mind), I try to reuse my code/functions/classes most of the time, and I make sure that if another developer joins the team, he won't be lost in the code. I am also planning to use Memcached or another cache for PHP. As I said, the site will be hosted in a dedicated server environment but will be entirely managed by the hosting company. I am pretty sure the control panel for me to control the basic stuff will be cPanel. For a developer like me who only knows PHP, JavaScript, HTML, CSS, MySQL and really basic server management, it seems too complicated to have a framework. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.

    Read the article

  • What is a good way to refactor a large, terribly written code base by myself? [closed]

    - by AgentKC
    Possible Duplicate: Techniques to re-factor garbage and maintain sanity? I have a fairly large PHP code base that I have been writing for the past 3 years. The problem is, I wrote this code when I was a terrible programmer and now it's tens of thousands of lines of conditionals and random MySQL queries everywhere. As you can imagine, there are a ton of bugs and they are extremely hard to find and fix. So I would like a good method to refactor this code so that it is much more manageable. The source code is quite bad; I did not even use classes or functions when I originally wrote it. At this point, I am considering rewriting the whole thing. I am the only developer and my time is pretty limited. I would like to get this done as quickly as possible, so I can get back to writing new features. Since rewriting the code would take a long time, I am looking for some methods that I can use to clean up the code as quickly as possible without leaving more bad architecture that will come back to haunt me later. So this is the basic question: What is a good way for a single developer to take a fairly large code base that has no architecture and refactor it into something with reasonable architecture that is not a nightmare to maintain and expand?

    Read the article

  • HTML5 - Does it have the power to handle a large 2D game with a huge world?

    - by user15858
    I have been using XNA Game Studio, but for private reasons (as well as the ability to publish anywhere, and my heavy interest in the Isogenic Engine), I would like to switch to HTML5. However, I have very high 2D graphics demands for my game. The game itself will have an on-disk size of anywhere from 6 GB (minimum) to 12 GB (maximum), which would be a full game deployed offline. The individual images aren't significantly large, so streaming would be entirely possible if only the assets required were streamed as needed; the game has a massive file size because of the sheer amount of content. Some images or spritesheets would be quite massive (e.g. a very large dragon, which if animated in a spritesheet would be split into two 4096x4096 sheets or one 8192x8192 sheet). Most assets would be very small - about 7 MB for a full character with 15 animations in every direction (not all animations are required immediately), so only a few hundred KB has to be downloaded before the game loads. My question, however, is whether the graphical power of HTML5 is enough to animate several characters on screen at once, when it flips through frames quite rapidly. All my sprites have about 25 frames per animation and 5 directions (a spritesheet for each direction and animation), and run at 30 fps. On changing direction or animation, or when a new character enters, spritesheets would change and be constantly loading/unloading. If I pack all directions into a single sheet, it would be about 2048x2048 per sheet. Most frameworks have no problem with this, but I am afraid from what I read that HTML5's graphical capabilities will limit me. Since it takes significant time simply to animate characters in any language, I'd like a quick answer.
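
    For scale (my arithmetic, not the poster's): a decoded 8192x8192 RGBA sheet costs 8192 x 8192 x 4 bytes, roughly 268 MB of raw pixel data, and older GPUs commonly cap texture dimensions at 4096 or below; the split 4096x4096 sheets (about 67 MB each) or the per-direction 2048x2048 sheets (about 16.8 MB each) are therefore the safer choice regardless of framework.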

    Read the article

  • What is the best way to manage large 3D worlds (i.e. Minecraft-style)?

    - by SomeXnaChump
    After playing Minecraft I was marvelling a bit at its large worlds, but at the same time finding them extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume it's fairly slow because: A) It's written in Java, and as most of the actual spatial partitioning and other memory-management activities happen in there, it would be slower than a native C++ version. B) They are not partitioning their world very well. I could be wrong on both assumptions; however, it got me thinking about the best way to manage large worlds. As it is more of a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e. BlockType.Empty = 0, BlockType.Dirt = 1, etc.). Now I am assuming that to make this sort of world performant you would need to: a) Use a tree of some variety (oct/kd/BSP) to split all the cubes out; it seems like an oct/kd-tree would be the better option, as you can partition on a per-cube level rather than a per-triangle level. b) Use some algorithm to work out whether the blocks within the scene can currently be seen, as blocks closer to the user could occlude the blocks behind them, making it pointless to render them. c) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees. I guess there is no right answer to this, but I would be interested to see people's opinions on the subject.
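
    For (a), a common alternative to one giant [x][y][z] array is sparse, fixed-size chunks that can be loaded, culled and rebuilt independently, with untouched regions never stored at all. A minimal sketch of the idea, in Python for brevity (the chunk size and block codes are arbitrary choices):

        # Sparse chunked block storage: absent chunks are implicitly all empty.
        CHUNK = 16
        EMPTY, DIRT = 0, 1  # example block-type codes

        chunks = {}  # (cx, cy, cz) -> bytearray of CHUNK**3 block types

        def _index(x, y, z):
            return ((x % CHUNK) * CHUNK + (y % CHUNK)) * CHUNK + (z % CHUNK)

        def set_block(x, y, z, block_type):
            key = (x // CHUNK, y // CHUNK, z // CHUNK)
            chunk = chunks.setdefault(key, bytearray(CHUNK ** 3))
            chunk[_index(x, y, z)] = block_type

        def get_block(x, y, z):
            chunk = chunks.get((x // CHUNK, y // CHUNK, z // CHUNK))
            return chunk[_index(x, y, z)] if chunk else EMPTY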

    Read the article

  • What is the best practice for reading a large number of custom settings from a text file?

    - by jawilmont
    So I have been looking through some code I wrote a few years ago for an economic simulation program. Each simulation has a large number of settings that can be saved to a file and later loaded back into the program to re-run the same/similar simulation. Some of the settings are optional or depend on what is being simulated. The code to read back the parameters is basically one very large switch statement (with a few nested switch statements). I was wondering if there is a better way to handle this situation. One line of the settings file might look like this:

        #RA:1,MT:DiscriminatoryPriceKDoubleAuction,OF:Demo Output.csv,QM:100,NT:5000,KP:0.5 // continues...

    And some of the code that would read that line:

        switch( Character.toUpperCase( s.charAt(0) ) ) {
            case 'R':
                randSeed = Integer.valueOf( s.substring(3).trim() );
                break;
            case 'M':
                marketType = s.substring(3).trim();
                System.err.println("MarketType: " + marketType);
                break;
            case 'O':
                outputFileName = s.substring(3).trim();
                break;
            case 'Q':
                quantityOfMarkets = Integer.valueOf( s.substring(3).trim() );
                break;
            case 'N':
                maxTradesPerRound = Integer.valueOf( s.substring(3).trim() );
                break;
            case 'K':
                kParameter = Float.valueOf( s.substring(3).trim() );
                break;
            // continues...
        }
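
    One common alternative to the big switch is a table-driven parser: a single map from key to (field, converter), so each new setting is one entry rather than a new case. Sketched in Python for brevity (a Java version would use a Map the same way; the dict of settings stands in for the simulation's fields, and the keys mirror the sample line above):

        # Table-driven settings parser; keys mirror the sample line above.
        CONVERTERS = {
            'RA': ('randSeed', int),
            'MT': ('marketType', str),
            'OF': ('outputFileName', str),
            'QM': ('quantityOfMarkets', int),
            'NT': ('maxTradesPerRound', int),
            'KP': ('kParameter', float),
        }

        def parse_line(line):
            settings = {}
            for field in line.lstrip('#').split(','):
                key, _, value = field.partition(':')
                name, convert = CONVERTERS[key.strip().upper()]
                settings[name] = convert(value.strip())
            return settings

        print(parse_line('#RA:1,MT:DiscriminatoryPriceKDoubleAuction,KP:0.5'))
        # {'randSeed': 1, 'marketType': 'DiscriminatoryPriceKDoubleAuction', 'kParameter': 0.5}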

    Read the article

  • What electronic scrum/kanban board do you use and recommend for distributed teams?

    - by Derick Bailey
    I have a coworker on a team that is fairly distributed and fairly large (for our company), and that wants to take advantage of visual management tools like scrum/kanban boards. Since it is a somewhat distributed team, though, all of the issue management / work management must be done via an electronic tool (we currently use Trac). What issue / work management tools, with a visualization of a scrum/kanban board, do you use for your distributed scrum/kanban teams? Would you recommend them, and if so, why?

    Read the article

  • What are the duties of a software configuration management (SCM) engineer in a large company?

    - by Alex. S.
    I'm curious about the canonical responsibilities of such a specialized role. Normally I would expect this to be part of an ordinary developer's tasks, but I know that in large companies this role is filled by a dedicated engineer. In my current company there is the possibility of a new opening for an SCM position, so I could apply, but first I would like to hear what, in your experience, best characterizes this role.

    Read the article

  • Is there a more efficient way to filter large arrays than preg_match()?

    - by hozza
    I have a log that our web application builds. Each month it contains around 16,000 entries, each a string with about an average sentence's worth of text. To filter/search through these in our admin panel, the script uses preg_match(), but this seems to take ages and times out at the 30-second limit. I have isolated that it is indeed the preg_match() that causes the timeout. Is there a more efficient way to search through values in a large array for a user's input?
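
    If the search term is a literal string rather than a pattern, a plain case-insensitive substring test is far cheaper than running a regex over every entry; in PHP that means stripos() instead of preg_match(). The shape of the change, sketched in Python (the sample entries are invented):

        # Filter log entries by literal substring instead of a regex.
        def filter_entries(entries, needle):
            needle = needle.lower()
            return [e for e in entries if needle in e.lower()]

        entries = ['user 42 logged in', 'payment failed', 'user 42 logged out']
        print(filter_entries(entries, 'USER 42'))  # first and last entries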

    Read the article

  • What's the fastest way to store/access large files?

    - by philfreo
    I do a lot of video editing on my Mac and need a way to store very large (30 GB) files, and I don't have room on my HD. A USB/FireWire external hard drive would work, but it seems way too slow for consistently working with such large files. I've also considered buying another computer with a large hard drive and putting it on the same network with a shared folder. What's the fastest / most efficient way to do this? Please consider USB 2.0 speeds, hard drive read times, Ethernet speeds, etc. Are there other options I should consider?

    Read the article

  • Which packages are installed in Solaris 11 Zones?

    - by Homma
    In Solaris 11, the package groups installed by default differ between the Global Zone and Non-global Zones.

    Packages installed in the Global Zone: a default Solaris 11 installation puts the solaris-large-server group package into the Global Zone. The Oracle documentation notes that when an AI manifest does not specify any packages, solaris-large-server is installed by default:

    http://docs.oracle.com/cd/E26924_01/html/E25785/gihfn.html#gkkqw

    To list the packages that the solaris-large-server group pulls in:

        # pkg search -o fmri -H '*/solaris-large-server:depend:group:'
        archiver/gnu-tar
        compress/bzip2
        compress/gzip
        compress/p7zip
        compress/unzip
        compress/zip
        crypto/pwgen
        developer/build/gnu-make
        ...

    Packages installed in a Non-global Zone: Solaris 11 Non-global Zones are installed with the solaris-small-server group package instead, as the default zone AI manifest (zone_default.xml) shows:

        # grep pkg /usr/share/auto_install/manifest/zone_default.xml
        <name>pkg:/group/system/solaris-small-server</name>

    To list the packages that the solaris-small-server group pulls in:

        # pkg search -o fmri -H '*/solaris-small-server:depend:group:'
        compress/bzip2
        compress/gzip
        compress/p7zip
        compress/unzip
        compress/zip
        developer/debug/mdb
        ...

    Installing solaris-large-server in a Non-global Zone: to get the solaris-large-server package group inside an existing Non-global Zone, install it explicitly:

        # pkg install solaris-large-server

    When creating a new zone, solaris-large-server can also be installed at zone install time by specifying it in a custom AI manifest; see:

    http://docs.oracle.com/cd/E26924_01/html/E25829/z.pkginst.ov-14.html#glxbn

    Read the article

  • Transferring at a large software company: ask boss before or after?

    - by ZNZ
    Out of college I joined a large, well-known software company's consulting arm. I've gained lots of experience with some of the company's products, but I would like to start doing more development work rather than implementation work (traveling less would be nice, too!). The company has some product development positions that I think I could be a good fit for, but I'm unsure of how to bring it up with my manager. After a bit of Googling, there are many conflicting answers on how best to handle transferring departments. When should I tell my current manager that I am interested in pursuing other positions in the company? Before or after applying?

    Read the article

  • Has anyone used RemObjects' Hydra to mix a large Delphi project with new C# additions?

    - by robsoft
    (Hopefully this is deemed suitable for Programmers, not StackOverflow - I could imagine it getting closed at SO because there's no obvious 'right' answer.) We have a large Delphi 2007 VCL project that uses things like DBXpress, Report Builder, DevExpress and TMS components (both visual and non-visual) etc. For reasons I won't bore you with, the company would like to start adding new modules to the program using .Net (via C# in particular). Rewriting from scratch isn't an option and given the heavy use of Report Builder and various other bits of Delphi-specific 3rd party code, I suspect that using something like TurnSharp to regenerate a C# project wouldn't work well either. Ideally we want to keep our Win32 VCL Delphi code but add new modules (plug-ins, sections of contained functionality like wizards etc) via C#. So we're considering RemObjects' Hydra, and in the next few weeks will probably have a go at evaluating it on a smaller-but-representative project first. I wondered if anyone had experience of doing this kind of thing with Hydra...?

    Read the article

  • Why won't Ubuntu copy large files to FAT32 flash drives?

    - by yurividal
    Since I installed 11.10, I am unable to copy large files (say 1 GB or more) to ANY USB drive that is formatted as FAT. The file starts copying, but soon an error appears, saying "Unable to Copy": "Error splicing file: Input/output error". I am able to do it via the terminal, using the cp command. I use GNOME 3, but the same error has happened in Unity as well. Apparently it works if I format the USB drive as NTFS, ext3 or ext4, but for many appliances FAT is necessary. The problem is also not with the USB port, because it works under Windows. It did not happen before, when I had 10.04 installed.

    Read the article
