Search Results

Search found 14122 results on 565 pages for 'cable management'.

Page 167/565 | < Previous Page | 163 164 165 166 167 168 169 170 171 172 173 174  | Next Page >

  • Any ideas for developing a RISC-processor-friendly string allocator?

    - by Richard Fabian
    I'm working on some tools to enable high-throughput data-oriented development, and one thing I've not got an immediate answer for is how you go about allocating strings quickly. On RISC processors you have the additional implementation problem that the CPU doesn't like branching, which is what I'm trying to minimise or avoid. Also, cache coherence is important on most CPUs, so that has to be influential in the design too. So, how would you go about reducing the overhead of a generic string allocator? Sometimes it's easier to solve a more explicit problem, so any ideas for string sizes of 5-30 characters?
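
    One branch-light approach is a fixed-slot pool: carve a contiguous arena into equal slots big enough for the 5-30 byte range and keep an intrusive free list, so allocation and deallocation are each a couple of pointer moves and the slots stay contiguous for cache friendliness. A minimal C++ sketch (class name, slot size, and the fallback behaviour are illustrative assumptions, not from the question):

        // Fixed-slot pool for short strings (up to 31 chars + NUL).
        // One free list, no size classes, so the hot path has a single branch.
        #include <cstddef>
        #include <cstring>
        #include <vector>

        class ShortStringPool {
        public:
            explicit ShortStringPool(std::size_t slotCount)
                : arena_(slotCount * kSlot), freeHead_(nullptr) {
                // Thread every slot onto the free list, back to front.
                for (std::size_t i = slotCount; i-- > 0; ) {
                    char* slot = &arena_[i * kSlot];
                    *reinterpret_cast<char**>(slot) = freeHead_;
                    freeHead_ = slot;
                }
            }
            char* allocate(const char* src, std::size_t len) {
                if (freeHead_ == nullptr || len >= kSlot)
                    return nullptr;                            // caller falls back to the general allocator
                char* slot = freeHead_;
                freeHead_ = *reinterpret_cast<char**>(slot);   // pop: one load, one store
                std::memcpy(slot, src, len);
                slot[len] = '\0';
                return slot;
            }
            void deallocate(char* slot) {
                *reinterpret_cast<char**>(slot) = freeHead_;   // push back onto the list
                freeHead_ = slot;
            }
        private:
            static const std::size_t kSlot = 32;               // covers the 5-30 byte case
            std::vector<char> arena_;                          // one contiguous block
            char* freeHead_;
        };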

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. Just among the few I'm involved with, quite a few have 10 billion records, with upwards of 100k new records a day and who knows how many updates. A couple of others and I need to come up with a response explaining why we shouldn't do this. Most of our stuff is ASP.NET, with some legacy ASP. We're thinking of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and against SQL Server over the network: large inserts, searches, updates, etc., along with things like random network disconnects. That would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:
    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files
    - NTFS doesn't handle tons of files in a directory well
    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
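
    One concrete thing the demo could show, under the "Concurrent access" item above: two uncoordinated writers doing read-modify-write on the same flat file silently lose updates, which a database resolves with transactions and locking. A minimal C++ sketch of that race (file name and iteration counts are made up for illustration; the same failure shows up in any language):

        // Two threads each increment a counter stored in a flat file,
        // with no locking -- exactly what naive flat-file code does.
        #include <fstream>
        #include <iostream>
        #include <thread>

        void bumpCounter(const char* path, int times) {
            for (int i = 0; i < times; ++i) {
                long value = 0;
                { std::ifstream in(path); in >> value; }                         // read
                { std::ofstream out(path, std::ios::trunc); out << value + 1; }  // modify, write back
            }
        }

        int main() {
            const char* path = "counter.txt";
            { std::ofstream out(path); out << 0; }          // start at zero
            std::thread a(bumpCounter, path, 5000);
            std::thread b(bumpCounter, path, 5000);
            a.join();
            b.join();
            long result = 0;
            { std::ifstream in(path); in >> result; }
            // Expected 10000; with two uncoordinated writers it is almost
            // always lower, and the file contents can end up mangled.
            std::cout << "counter = " << result << "\n";
            return 0;
        }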

    Read the article

  • How do you keep track of your programming TODOs?

    - by Simucal
    I'm one of those people who can't get anything done without a to-do list. If it isn't on the list, it doesn't exist.
    Notepad Method: When I'm programming I keep Notepad open with a list of to-dos for my current project. I constantly re-arrange these based on priority, and I cross items off and move them to the completed section when I'm finished with that particular task.
    Code Comments: Some programmers pepper their project's source code with:
        // TODO: Fix this completely atrocious code before anyone sees it
    Plus, I know that there are tools that will show you a list of all the TODOs in your code as well.
    Website Task Trackers: Remember The Milk, Backpack, Manymoon, Voo2do, Gmail Tasks, TeuxDeux, TodoDodo, Ta-da Lists ... and many more.
    What have you found to be the best method of keeping track of your to-do lists for multiple projects?
    Related Question: What can someone do to get organized around here?
    Related Question: Getting Organized; the to-do list.

    Read the article

  • Add fields to the Site information section on Drupal 6.12

    - by Shadi Almosri
    Hiya, I've been sifting through the Drupal documentation and forums, but it's all a little daunting. If anyone has a simple or straightforward method for adding fields to the Site information page in the administration section, I'd really appreciate it. As background, I'm just trying to add user-customizable, site-wide fields/values.

    Read the article

  • Confluence experiences?

    - by Nickolay
    We're considering moving from Trac to Atlassian Confluence as our knowledge base solution. ("We" are a growing IT consulting company, so most users are somewhat technical.) There are many reasons we're looking for an alternative to Trac: ACLs; better wiki refactoring tools (move/rename, delete while retaining history, "what links here", change tracking for attachments, an easier way to track changes, i.e. a better "timeline" interface, etc.); "templates"/"macros"; and WYSIWYG/Word integration. Confluence looks very nice from the feature-list point of view. But toying with it for a little while, I had the impression that its interface may seem somewhat confusing to new users (and I recently saw a comment to that effect on SO). I hope that can be fixed by installing or creating a simpler skin, though. If you've used Confluence, what are your thoughts on it? Do you agree it's not as easy to use as other software, and can anything be done about it? What are its other shortcomings? Would you choose a different wiki if you had another chance? [edit] my own experiences

    Read the article

  • Large tables of static data with DBGhost

    - by Paulo Manuel Santos
    We are thinking of restructuring our database development and deployment processes by using DBGhost; we want to move away from the central development database and bring the database under source control. One of the problems we have is a big table of static data (containing translated language strings) with close to 200K rows. I know our best solution is to move these strings into resource files, but until we implement that, will DBGhost be able to maintain all this static data and generate our development and deployment databases in a short time? And if not, is there a good alternative for populating this table whenever we need to?

    Read the article

  • Programming time schedule for porting a program.

    - by Lothar
    I'm working on a large program which has an abstracted GUI API. It is very GUI-heavy: many dialogs, and a few nasty features which rely heavily on the message flow of the GUI (correct sequences of focus/mouse/activation handling, etc.) - not easy to port. I now want to port it from the currently used FOX Toolkit to native Cocoa/MFC. I've given myself a timeframe until the end of the year, but my main work will be to continue development with the existing toolkit, and there is no planned release for end customers before both tasks are done. My question is: how should I spend my time?
    1. Stop working on the main program and do a 90% port (about 3 months) of the GUI first.
    2. Split everything into smaller sessions of one month each, assigning Monday/Tuesday to the GUI project and the rest of the week to the app.
    3. Finish the app first, then port.
    I think there are three factors I need to balance:
    - Motivation: I want to see something going on in both projects.
    - Brain input overflow: both tasks require a lot of detail to be held in my head, and sometimes enough is just enough.
    - The porting is interwoven with ongoing work, so porting would also require a lot of changes to the existing code and to the new code written in the meantime.

    Read the article

  • malloc and delete in C++, opinions

    - by Alexander
    In C++, using delete to free memory obtained with malloc() doesn't necessarily cause a program to blow up. Do you guys think a warning, or perhaps even an assertion failure, should be produced if delete is used to free memory obtained with malloc()? Why do you think Stroustrup did not include this feature in C++?
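
    For what it's worth, this kind of check can be prototyped today in a debug build by replacing the global allocation functions: route operator new through malloc(), remember every pointer it hands out, and assert in operator delete that it recognises the pointer. A rough, single-threaded C++ sketch (table size, names, and the linear lookup are just for illustration; a real replacement would also need std::bad_alloc handling, the array forms, and thread safety):

        // Debug-only tracker: delete on memory that did not come from new
        // (e.g. from malloc, or already freed) trips the assertion.
        #include <cassert>
        #include <cstdlib>

        namespace {
            const std::size_t kMax = 1 << 16;
            void* g_live[kMax];               // crude pointer table, fine for a sketch
            std::size_t g_count = 0;

            void remember(void* p) { if (g_count < kMax) g_live[g_count++] = p; }
            bool forget(void* p) {
                for (std::size_t i = 0; i < g_count; ++i)
                    if (g_live[i] == p) { g_live[i] = g_live[--g_count]; return true; }
                return false;
            }
        }

        void* operator new(std::size_t size) {
            void* p = std::malloc(size);
            remember(p);
            return p;
        }

        void operator delete(void* p) noexcept {
            if (p == nullptr) return;
            assert(forget(p) && "delete on memory not allocated with new");
            std::free(p);
        }

        int main() {
            int* ok = new int(1);
            delete ok;                                            // fine
            int* bad = static_cast<int*>(std::malloc(sizeof(int)));
            delete bad;                                           // assertion fires
        }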

    Read the article

  • Java memory mapped files and swap

    - by MarkS
    I'm looking at some memory-mapped files in Java. Let's say I have a heap size set to 2GB, and I memory-map a file that is 50GB - far more than the physical memory on the machine. The OS will cache parts of that 50GB file in the OS file cache, and the Java process will have 2GB of heap space. What I'm curious about is how the OS decides how much of the 50GB file to cache. For instance, if I have another Java process, also with a 2GB heap, will that 2GB be swapped out to allow the OS to cache parts of the memory-mapped file? Will parts of the heap space of the first process be swapped out to allow the OS to cache? Is there any way to tell the OS not to swap out heap space for OS caching? If the OS doesn't swap out main processes, how does it determine how big its file cache should be?
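
    For context, Java's FileChannel.map()/MappedByteBuffer sits on top of the platform's mmap machinery (on Linux at least), so the knobs that exist are OS-level ones. A small C/C++ sketch of those knobs, shown here only to illustrate what the kernel is working with; the file name is a placeholder:

        // A mapping reserves address space only; physical pages live in the
        // kernel page cache and come and go on demand, competing with the
        // heaps of every process under the kernel's usual reclaim policy.
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            const char* path = "big.dat";                 // stand-in for the 50GB file
            int fd = open(path, O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            fstat(fd, &st);

            void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
            if (base == MAP_FAILED) { perror("mmap"); return 1; }

            // Hints, not guarantees: sequential access lets the kernel read
            // ahead and drop pages behind the reader more aggressively.
            madvise(base, st.st_size, MADV_SEQUENTIAL);

            // mlock() is the "don't page this out" knob, but it applies to
            // ranges your own process maps; there is no portable way to pin
            // another process's heap from here (for the JVM heap itself the
            // options are OS- and JVM-specific).
            // mlock(base, st.st_size);

            munmap(base, st.st_size);
            close(fd);
            return 0;
        }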

    Read the article

  • Alternatives to MS Project Server

    - by Ajaxx
    I manage a small group, and I keep my work breakdown in MS Project. However, it's difficult to give my team an adequate view into the project and the ability to report on their progress. I looked at MS Project Server (the SharePoint web part), but it's an expensive proposition. Has anyone had any experience with another tool (commercial is fine) that helps a team view and report on their work as managed by MS Project? FWIW, I have looked at OpenProj, and it appears to be a decent solution for viewing project files on the desktop. Web-based is preferred, keeping in mind that I'd like people to report on their work, not just view it.

    Read the article

  • MySQL script to delete data in chunks until everything above a given id has been deleted

    - by Chriswede
    I need a MySQL script which does the following: delete chunks of the database until it has deleted all link_ids greater than 10000. Example:
        x = 10000
        DELETE FROM pligg_links WHERE link_id > x AND link_id < x + 10000
        x = x + 10000
        ...
    So it would first run
        DELETE FROM pligg_links WHERE link_id > 10000 AND link_id < 20000
    then
        DELETE FROM pligg_links WHERE link_id > 20000 AND link_id < 30000
    and so on, until all ids greater than 10000 have been removed. I need this because the database is very, very big (more than a gig). Thanks in advance.

    Read the article

  • Sprint velocity calculations

    - by jase
    Need some advice on working out the team velocity for a sprint. Our team normally consists of about 4 developers and 2 testers. The scrum master insists that every team member should contribute equally to the velocity calculation, i.e. we should not distinguish between developers and testers when working out how much we can do in a sprint. This is correct according to Scrum, but here's the problem. Despite suggestions to the contrary, testers never help with non-test tasks and developers never help with non-dev tasks, so we are not cross-functional team members at all. Also, despite various suggestions, testers normally spend the first few days of each sprint waiting for something to test. The end result is that we typically take on far more dev work than we actually have capacity for in the sprint. For example, the developers might contribute 20 days to the velocity calculation and the testers 10 days, but if you add up the tasks after sprint planning, dev tasks come to 25 days and test tasks come to 5 days. How do you guys deal with this sort of situation?

    Read the article

  • Any foundation to administrate an Android open source application?

    - by Nicolas Raoul
    Our open source application is quite popular, and there are many of us developers. The app is published under my Android Market account, and I have shared the keys with one other developer. But if both of us disappear, the application's Market account will be lost and all users left stranded. Giving the keys to all developers is not a solution either, for security reasons. Is there a foundation (along the lines of the Mozilla Foundation or the Apache Foundation) that would agree to hold our Android Market account and release new versions in accordance with its own guidelines and our community consensus? There are quite a lot of open source foundations, but I could not find any that tackles this particular aspect of Android applications.

    Read the article

  • Parsing multiple files at a time in Perl

    - by sfactor
    I have a large data set (around 90GB) to work with. There are tab-delimited data files for each hour of each day, and I need to perform operations over the entire data set - for example, getting the share of each OS listed in one of the columns. I tried merging all the files into one huge file and performing a simple count, but it was simply too big for the server's memory. So I guess I need to process one file at a time and then add up the results at the end. I am new to Perl and especially naive about performance issues. How do I do such operations in a case like this? As an example, two columns of the file are:
        ID  OS
        1   Windows
        2   Linux
        3   Windows
        4   Windows
    Let's do something simple: counting the share of each OS in the data set. Each .txt file has millions of these lines, and there are many such files. What would be the most efficient way to operate on all the files?
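
    A sketch of the per-file streaming approach the question describes: read line by line, keep only a small OS-to-count map, and keep adding into the same map across files. It's written in C++ here purely to show the shape of the loop (the same structure maps directly onto Perl's while-loop over a filehandle and a hash of counts); the file-handling details are assumptions:

        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>
        #include <unordered_map>

        int main(int argc, char** argv) {
            std::unordered_map<std::string, long long> counts;
            long long total = 0;

            // Pass the hourly files on the command line; each is processed
            // in constant memory, so the 90GB never has to fit in RAM.
            for (int i = 1; i < argc; ++i) {
                std::ifstream in(argv[i]);
                std::string line;
                while (std::getline(in, line)) {
                    std::istringstream fields(line);
                    std::string id, os;
                    if (std::getline(fields, id, '\t') && std::getline(fields, os, '\t')) {
                        if (id == "ID") continue;      // skip a header row if present
                        ++counts[os];
                        ++total;
                    }
                }
            }

            for (const auto& entry : counts)
                std::cout << entry.first << "\t"
                          << 100.0 * entry.second / total << "%\n";
            return 0;
        }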

    Read the article

  • Displaying/scrolling through heaps of pictures in the browser

    - by user347256
    I want to be able to browse through heaps of images in the browser, fast. The easy way (just load 2,000 images and scroll) slows scrolling down a lot, presumably because there are too many images to keep in memory. I'd love to hear thoughts on strategies for quickly scrolling through tens of thousands of images in the browser (as if you were on your desktop). What would the expected bottlenecks be? How would you address them? How can you fake things so that the user experience is still good? Are there examples in the wild?

    Read the article

  • release vs setting-to-nil to free memory

    - by Dan Ray
    In my root view controller, in my didReceiveMemoryWarning method, I go through a couple of data structures (which I keep in a global singleton called DataManager) and ditch the heaviest things I've got - one or maybe two images associated with possibly twenty or thirty or more data records. Right now I'm going through and setting those to nil. I'm also setting myself a boolean flag so that various view controllers that need this data can easily know to reload. Thusly:
        DataManager *data = [DataManager sharedDataManager];
        for (Event *event in data.eventList) {
            event.image = nil;
            event.thumbnail = nil;
        }
        for (WondrMark *mark in data.wondrMarks) {
            mark.image = nil;
        }
        [DataManager sharedDataManager].cleanedMemory = YES;
    Today I'm wondering, though, whether all that allocated memory is really being freed when I do that. Should I instead release those images, and maybe hit them with a new alloc and init when I need them again later?

    Read the article

  • How do I manage dependencies for automated builds on my build server?

    - by Tom Pickles
    I'm trying to implement continuous integration into our day-to-day work. In our team, we're moving from just building our code in Visual Studio on our workstations and deploying, to using MSBuild.exe and automating on our build server (which is Jenkins) without the use of Visual Studio. Our projects have external dependencies on references such as Automap. Because the Automap DLL (for example) isn't on the build server, the MSBuild execution fails, for obvious reasons. There are other DLLs which need to be part of the build; I'm just using Automap as an example. So what's the best way to get any dependencies onto the build server as part of the automated build? I've seen references to using a 'lib' folder, but I don't really understand where I should be putting it (in my project, on the filesystem, in SVN ...?) and how the build server will get to it. I've also read that NuGet can do something with dependencies, but my build server isn't connected to the internet, and I don't understand how I can get my build to pull a NuGet package I may have created, and how it all works together. Edit: I'm using Subversion, and we cannot use TeamCity as we would have to buy it and there's zero chance of funding.

    Read the article

  • Universal Content Manager

    - by ankur
    I found one limitation in Oracle UCM. Well, it might not be a limitation, but I haven't been able to figure it out yet: I can't find a mapping between metadata and content types. What if I wish to associate a different set of metadata with each content type, which is the likely case? Thanks.

    Read the article

  • Basic question on retain/release semantics from Apple's reference library

    - by davetron5000
    I have done Objective-C way back when, and have recently (i.e. just now) read the documentation on Apple's site regarding the use of retain and release. However, there is a bit of code in their Creating an iPhone Application page that has me a bit confused:
        - (void)setUpPlacardView {
            // Create the placard view -- it calculates its own frame based on its image.
            PlacardView *aPlacardView = [[PlacardView alloc] init];
            self.placardView = aPlacardView;
            [aPlacardView release]; // What effect does this have on self.placardView?!
            placardView.center = self.center;
            [self addSubview:placardView];
        }
    Without seeing the entire class, it seems that self.placardView is also a PlacardView *, and assigning aPlacardView to it doesn't obviously retain a reference to it. So it appears to me that the line I've commented ([aPlacardView release];) could result in aPlacardView having a retain count of 0 and thus being deallocated. Since self.placardView points to it, wouldn't that now point at deallocated memory and cause a problem?

    Read the article

  • When to release the UIImage?

    - by ragnarius
    I use the following code to draw a subimage:
        UIImage* subIm = getSubImage(large, rect);
        [subIm drawInRect:self.bounds];
    where getSubImage is defined as follows:
        UIImage* getSubImage(UIImage* uim, CGRect rc) {
            CGImageRef imref = CGImageCreateWithImageInRect(uim.CGImage, rc);
            UIImage* sub = [UIImage imageWithCGImage:imref];
            CGImageRelease(imref);
            NSLog(@"subimage retainCount=%d", [sub retainCount]); // is 1
            return sub;
        } // getSubImage
    Is the code correct? Is it safe to CGImageRelease imref? Has sub "CGImageRetained" imref? Should I release subIm (I get an error if I do)? Is subIm contained in the autorelease pool, and if so, how do I know this? In general, can one check whether an object is contained in the autorelease pool (for debugging purposes)?

    Read the article

  • Design: How to declare a specialized memory handler class

    - by Michael Dorgan
    On an embedded-type system, I have created a small object allocator that piggybacks on top of a standard memory allocation system. The allocator is a boost::simple_segregated_storage<> class, and it does exactly what I need - O(1) alloc/dealloc time on small objects at the cost of a touch of internal fragmentation. My question is how best to declare it. Right now it's declared file-static in our memory code module, which is probably fine, but it feels a bit exposed there and is also now linked to that module forever. Normally I'd declare it as a monostate or a singleton, but that uses the dynamic memory allocator (which is where this lives). Furthermore, our dynamic memory allocator is initialized and used before static object initialization occurs on our system (again, the memory manager is pretty much the most fundamental component of an engine). To get around this catch-22, I added an extra "does the small object allocator exist yet?" check, and that check now must run on every small object allocation. In the scheme of things this is nearly negligible, but it still bothers me. So the question is: is there a better way to declare this portion of the memory manager that helps decouple it from the memory module, and perhaps avoids the cost of that extra isinitialized() if statement? If the method uses dynamic memory, please explain how to get around the lack of initialization of the small object portion of the manager.
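
    One pattern that addresses the decoupling half of this (a sketch with placeholder names and sizes, not the original code): give the pool its own accessor class, back it with a static arena rather than the dynamic allocator, and rely on construct-on-first-use for ordering. Note this moves the "is it constructed yet?" check into the compiler-generated guard of the function-local static rather than eliminating it, and it assumes C++11 magic statics (or a single-threaded init path) for thread safety:

        #include <boost/pool/simple_segregated_storage.hpp>
        #include <cstddef>

        class SmallObjectHeap {
        public:
            static SmallObjectHeap& instance() {
                // Built the first time anyone asks, even if that happens
                // before other static objects are initialized.
                static SmallObjectHeap heap;
                return heap;
            }

            void* allocate()          { return storage_.malloc(); }
            void  deallocate(void* p) { storage_.free(p); }

        private:
            SmallObjectHeap() {
                // Segregate a static arena into fixed-size chunks; the
                // dynamic allocator is never involved.
                storage_.add_block(arena_, sizeof(arena_), kChunkSize);
            }

            static const std::size_t kChunkSize = 32;
            static const std::size_t kArenaSize = 64 * 1024;

            char arena_[kArenaSize];
            boost::simple_segregated_storage<std::size_t> storage_;
        };

    Nothing in the memory module has to know this exists, and since the arena is static storage it is usable as soon as the accessor is first called.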

    Read the article

  • Trying to calculate large numbers in Python with gmpy. Python keeps crashing?

    - by Ryan Peschel
    I was recommended to use gmpy to assist with calculating large numbers efficiently. Before, I was just using plain Python, and my script ran for a day or two and then ran out of memory (not sure how that happened, because my program's memory usage should be basically constant throughout... maybe a memory leak?). Anyway, I keep getting this weird error after running my program for a couple of seconds:
        mp_allocate< 545275904->545275904 >
        Fatal Python error: mp_allocate failure
        This application has requested the Runtime to terminate it in an unusual way.
        Please contact the application's support team for more information.
    Also, Python crashes, and Windows 7 gives me the generic "python.exe has stopped working" dialog. This wasn't happening when I was using standard Python integers. Now that I've switched to gmpy I get this error just seconds into running my script. I thought gmpy was specialized for dealing with large-number arithmetic? For reference, here is a sample program that produces the error:
        import gmpy2
        p = gmpy2.xmpz(3000000000)
        s = gmpy2.xmpz(2)
        M = s**p
        for x in range(p):
            s = (s * s) % M
    I have 10 gigs of RAM, and without gmpy this script ran for days without running out of memory (still not sure how that happened, considering s never really gets larger). Anyone have any ideas? EDIT: Forgot to mention I am using Python 3.2.

    Read the article
