Search Results

Search found 35784 results on 1432 pages for 'number format'.


  • Handbrake-powered VidCoder gets a native 64-bit version

    A while back, VidCoder -- the Windows video disc ripping program -- added support for Blu-ray discs. With Handbrake's engine under the hood, VidCoder offers an easy-to-use interface and simple batch processing of your video files. With the release of version 0.8, there's also now a native 64-bit version for those of you running Windows x64. A number of stability tweaks have also been introduced. As Baz pointed out in our comments last time, VidCoder is particularly useful on netbooks. If you've got a 1024x600 screen, Handbrake may not even launch for you -- but VidCoder will fire up just fine. Take the new 64-bit version for a spin, and share your thoughts in the comments. Download VidCoder


  • How can I quantify the amount of technical debt that exists in a project?

    - by Erik Dietrich
    Does anyone know if there is some kind of tool to put a number on the technical debt of a code base, as a kind of code metric? If not, is anyone aware of an algorithm or set of heuristics for it? If neither of those things exists so far, I'd be interested in ideas for how to get started with such a thing. That is, how can I quantify the technical debt incurred by a method, a class, a namespace, an assembly, etc.? I'm most interested in analyzing and assessing a C# code base, but please feel free to chime in for other languages as well, particularly if the concepts are language-transcendent.
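
    There is no single agreed-upon number for this; commercial analyzers such as NDepend or SonarQube estimate debt from weighted rule violations. As one possible starting point, here is a deliberately crude sketch in Python of that same idea applied to a C# tree -- the signals and weights are invented for illustration, not an established metric:

        import pathlib
        import re

        # Illustrative debt signals with made-up weights -- tune for your code base.
        SIGNALS = [
            (re.compile(r"\b(TODO|FIXME|HACK)\b"), 5),     # admitted shortcuts
            (re.compile(r"catch\s*\(\s*Exception\b"), 3),  # swallow-all handlers
        ]

        def debt_scores(root, pattern="**/*.cs"):
            """Return (path, score) pairs, highest 'debt' first."""
            scores = []
            for path in pathlib.Path(root).glob(pattern):
                text = path.read_text(errors="ignore")
                score = sum(w * len(rx.findall(text)) for rx, w in SIGNALS)
                # Penalize oversized files: 1 point per 50 lines beyond 500.
                score += max(0, len(text.splitlines()) - 500) // 50
                scores.append((str(path), score))
            return sorted(scores, key=lambda kv: kv[1], reverse=True)

    Rolling scores up from files to namespaces or assemblies is then just aggregation over the same data.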


  • Buy vs. Build - FTP Service

    - by Joel Martinez
    We have a need to FTP files that are generated by our system, so we're trying to decide whether we should spend the time to build something that meets our criteria (relatively easy -- .NET has FTP functionality built in, among other more advanced libs from third parties), or whether we should buy something off the shelf. Our requirements are roughly:
    - Must be able to trigger a file send programmatically
    - Must retry N times (configurable)
    - Queryable status of FTP requests
    - Callback on completion or failure of an FTP request
    I don't need to be sold on the relative simplicity of building something like that myself. However, I do want to do the due diligence of seeing what products are available... because if something does exist that matches the requirements above, I wouldn't mind paying for it :-) Any thoughts or links would be greatly appreciated. Thanks!
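
    The retry/callback portion of this is small in any stack. Here is a minimal sketch of the pattern using Python's standard-library ftplib (the post is .NET-focused; the helper name and defaults are invented for illustration):

        import ftplib
        import time

        def send_file(host, user, password, local_path, remote_name,
                      retries=3, delay=10, on_done=None, on_fail=None):
            """Upload one file, retrying up to `retries` times."""
            for attempt in range(1, retries + 1):
                try:
                    with ftplib.FTP(host, user, password) as ftp, \
                         open(local_path, "rb") as f:
                        ftp.storbinary(f"STOR {remote_name}", f)
                    if on_done:
                        on_done(local_path)           # completion callback
                    return True
                except ftplib.all_errors as exc:
                    if attempt == retries:
                        if on_fail:
                            on_fail(local_path, exc)  # failure callback
                        return False
                    time.sleep(delay)                 # back off, then retry

    Queryable status would sit on top of this, e.g. a table of request rows that the callbacks update.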


  • How to install an older version of Java

    - by Alex Spurling
    I updated my installation of the sun-java6-jdk package today to version 6.24-1build0.10.10.1 after being prompted by the update manager. However, this now causes some compilation failures, so I'd like to revert to the previous version I had installed. I've tried using Synaptic, but the 'Force Version' menu command is disabled. I've tried the following command to install the previous version:
    sudo apt-get install sun-java6-jdk=6.22-0ubuntu1~10.10
    But I'm not sure that I have the correct version:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Version ‘6.22-0ubuntu1~10.10’ for ‘sun-java6-jdk’ was not found
    I've taken this version number from this changelog: https://launchpad.net/ubuntu/+source/sun-java6/+changelog
    Is this the correct way to install a previous version of a package? Have I got the correct version from the sun-java6 changelog?


  • Adding a PPA repo and getting the key signed - no valid OpenPGP data - proxy issue?

    - by groovehunter
    I want to get a PPA key signed. I tried:
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A258828C
    Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv-keys A258828C
    gpg: requesting key A258828C from hkp server keyserver.ubuntu.com
    gpgkeys: HTTP fetch error 7: couldn't connect to host
    gpg: no valid OpenPGP data found.
    gpg: Total number processed: 0
    and:
    wget -q http://ppa.launchpad.net/panda3d/ppa/ubuntu/dists/lucid/Release.gpg -O- | apt-key add -
    gpg: no valid OpenPGP data found
    I am behind a proxy; in apt.conf it is configured correctly:
    Acquire::http::Proxy "http://proxy.mycompany.de:3128";
    I also tried setting the proxy environment variables:
    export http_proxy="proxy.mycompany.de:3128"
    export https_proxy="proxy.mycompany.de:3128"


  • Book: Pro SQL Server 2008 Service Broker: Klaus Aschenbrenner

    - by Greg Low
    I've met Klaus a number of times now and attended a few of his sessions at conferences. Klaus is doing a great job of evangelising Service Broker. I wish the SQL Server team would give it as much love. Service Broker is a wonderful technology, let down by poor resourcing. Microsoft did an excellent job of building the plumbing for this product in SQL Server 2005 but then provided no management tools and no prescriptive guidance. Everyone then seemed surprised that the uptake of it was slow. I even...(read more)


  • Support for 15,000 Partitions in SQL Server 2008 SP2 and SQL Server 2008 R2 SP1

    In SQL Server 2008 and SQL Server 2008 R2, the number of partitions on tables and indexes is limited to 1,000. This paper discusses how SQL Server 2008 SP2 and SQL Server 2008 R2 SP1 address this limitation by providing an option to increase the limit to 15,000 partitions. It describes how support for 15,000 partitions can be enabled and disabled on a database. It also talks about performance characteristics, certain limitations associated with this support, known issues, and their workarounds. This support is targeted to enterprise customers and ISVs with large-scale decision support or data warehouse requirements.


  • Google I/O 2012 - What's New in the Google Drive SDK

    Josh Hudgins, John Day-Richter: In this talk, we will introduce a number of major new features and platforms to the Google Drive SDK. We will discuss what we feel is a revolution in the way developers write collaborative applications. We will also announce a new API to make managing files in Google Drive even easier for developers, replacing some legacy APIs in the process. For all I/O 2012 sessions, go to developers.google.com


  • Best stats tool for cross-domain tracking

    - by kidbrax
    We build a webapp that allows users to run the app under their own subdomain, so we run the app under search.domainX.com, search.domainY.com, and so on. Each has its own Google Analytics to track individual stats. But we also want to know the general traffic for all clients of our app -- stuff like "among all our clients we had x views." What is the best tool to track that sort of thing? We'd prefer a snippet-based solution similar to Google Analytics if possible.


  • Can't Install Tor on Ubuntu Netbook 10.10 [closed]

    - by Prateek Mishra
    Possible Duplicate: How to install tor?
    I downloaded the tar.gz file from torproject.org and unzipped it. I clicked everything inside the directories, but nothing happened. I also tried to install the add-on from here: http://bit.ly/bSSNea. The add-on is installed, but I can't see the Tor button anywhere. I checked the relevant option in the preferences section of Tools > Add-ons. How do I install it?
    EDIT -
    Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv-keys 886DDD89
    gpg: requesting key 886DDD89 from hkp server keyserver.ubuntu.com
    gpg: key 886DDD89: public key "deb.torproject.org archive signing key" imported
    gpg: no ultimately trusted keys found
    gpg: Total number processed: 1
    gpg: imported: 1 (RSA: 1)


  • Flash webcam non-responsive on Ubuntu 11.10

    - by powerbuoy
    I've got the same problem as this gentleman: https://answers.launchpad.net/ubuntu/+source/flashplugin-nonfree/+question/176541 -- the webcam settings/access do not work at all and are completely unresponsive in Ubuntu 11.10. I've tried webcam access in Facebook, Google+, my own code, and a number of tutorials/demos, and none of them work. What happens is that the settings dialogue is completely unresponsive: clicking tabs or buttons does nothing. In the question linked to, a suggested answer is to run Unity 2D. Unfortunately this does not work for me (the exact same thing happens). I've also tried GNOME 3, which also does not work. Note that it is only the webcam settings that don't work; YouTube videos and annoying banners work just fine. Does anyone know of a workaround for this (other than going back to 11.04), or whether they've fixed this in 12.04? Also, are any of you experiencing the same thing?


  • How to design an application for scaling?

    - by Muhammad
    I have an application that handles events from hardware connected to the same computer's PCIe slots. The motherboard has a maximum of two PCIe slots, and I have used both. Now, to scale the application, I need either more PCIe slots in the same computer or another computer. So assume I am using a second computer with the same application and hardware connected to its PCIe slots. My problem is that I want to design an application on top of this that can access the hardware devices on both computers and process their data. The processed data should be sent back to the respective PC's hardware. Please refer to the attached diagram for the expansion.
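
    One common shape for this is a small service per machine that owns its local PCIe hardware and exposes it over the network, with a coordinator talking to all machines. A minimal sketch in Python -- the port, wire format, and payload fields are invented for illustration:

        import json
        import socket

        def call_peer(host, request, port=9000, timeout=5.0):
            """Send one JSON request to a machine's hardware service and
            return its JSON reply (one request per connection)."""
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(json.dumps(request).encode() + b"\n")
                reply = sock.makefile().readline()
            return json.loads(reply)

        # Coordinator: fan a job out to the hardware on both machines.
        results = [call_peer(host, {"op": "read_events"})
                   for host in ("pc-a.local", "pc-b.local")]

    In practice an existing RPC or messaging layer (e.g. ZeroMQ, or plain HTTP) would replace the hand-rolled socket code.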


  • Error when setting up Piwik analytics

    - by bertran
    I've uploaded the latest version of Piwik onto my web server, which is hosted by GoDaddy.com on a Linux hosting plan. I'm setting it up (accessing it from my browser as instructed) and I have the Piwik installation page open at step 3 (database set-up) of 9. I don't know what to input in the "database server" field; the default is 127.0.0.1. When I leave that input as-is and click "Next", it gives the error:
    Error when trying to connect to database server: SQLSTATE[HY000] [2013] Lost connection to MySQL server at 'reading initial communication packet', system error: 111
    Changing that input to "localhost" gives me another error:
    Error when trying to connect to database server: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
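
    On shared hosting, the MySQL server often runs on a separate machine, so neither 127.0.0.1 nor localhost will reach it; the value for "database server" is normally the MySQL hostname shown in the hosting control panel. A quick way to check which hosts actually accept connections on the MySQL port (hostnames here are placeholders):

        import socket

        # Placeholder hostnames -- substitute the one from your control panel.
        for host in ("127.0.0.1", "localhost", "mysql123.example-host.com"):
            try:
                socket.create_connection((host, 3306), timeout=3).close()
                print(host, "accepts connections on port 3306")
            except OSError as exc:
                print(host, "failed:", exc)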


  • Using a DisplayLink USB video adapter on Ubuntu 12.10

    - by Jason R
    The line of USB video adapters made by DisplayLink has a somewhat sordid history under Linux. In past Ubuntu releases, the process of getting them to work has been somewhat difficult, inspiring a number of past questions on this site: example 1 example 2 example 3 However, there are some indications that version 3.5 of the Linux kernel (which is used by 12.10) contains better support for these adapters, which should make them easier to use. I currently have a single-GPU machine (it is an Nvidia adapter) with dual monitor outputs. I would like to add the DisplayLink adapter to drive a third external monitor. How can I set this up on Ubuntu 12.10?


  • Tuxedo 12c

    - by JuergenKress
    Tuxedo 12c (12.1.1) is now generally available. This major release includes a significant number of new features; in case you missed the launch webcast, you can watch it on demand. Key new features include:
    - Cloud-ready infrastructure: optimized for Exalogic with 8X throughput; management/monitoring integrated with Enterprise Manager 12c; support for mainframe COBOL applications running on CICS, IMS, and batch
    - New messaging solution: Tuxedo Message Queue 12c
    - Ease of application development: Solaris Studio IDE for developing Tuxedo applications; extend C, C++, and COBOL applications with Java POJOs
    - Accelerated migration of large-scale mainframe applications
    At our WebLogic Community Workspace you can get the latest ppt presentations for your customer meetings: Tux ART 12c Launch Webcast Hasan Ajay v18.pptx, Tux12claunch-techwebcast_v11.pptx, and Tuxedo_on_exalogic_external_v3.pptx. For more Tuxedo information, please visit the WebLogic Community Workspace (WebLogic Community membership required). For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.


  • Client/Server Application Using Google App Engine

    - by Kevin Zhang
    Can someone please advise me on a possible solution for using GAE to make a client/server application? As far as I know, GAE is designed for web applications. What I want to do is have a Java client (Swing-based) deployed on a number of computers and deploy the server on GAE. I found an example on the GAE website which teaches how to make a SOAP service using GAE, but I don't know whether using SOAP is a good idea for client/server applications. Can someone give me some hints about how to design this system and what technology should be used? Any advice is welcome. Many thanks.
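
    SOAP works, but a plain HTTPS+JSON endpoint is usually simpler for a desktop client (a Swing client can call it with HttpURLConnection). A minimal sketch for App Engine's Python runtime using its bundled webapp2 framework -- the URL and payload shape are invented for illustration:

        import json
        import webapp2  # bundled with the App Engine Python runtime

        class CommandHandler(webapp2.RequestHandler):
            """Hypothetical endpoint the desktop client POSTs JSON to."""
            def post(self):
                request = json.loads(self.request.body)
                result = {"status": "ok", "echo": request}  # real dispatch here
                self.response.headers["Content-Type"] = "application/json"
                self.response.write(json.dumps(result))

        app = webapp2.WSGIApplication([("/api/command", CommandHandler)])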


  • How do functional languages handle random numbers?

    - by Electric Coffee
    Nearly every tutorial I've read about functional languages says that one of the great things about functions is that if you call a function with the same parameters twice, you'll always end up with the same result. How on earth do you then make a function that takes a seed as a parameter and returns a random number based on that seed? I mean, this would seem to go against one of the things that are so good about functions, right? Or am I completely missing something here?
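
    The usual functional answer is that the generator stays pure by returning the new seed along with the value, and the caller (or, in Haskell, a monad) threads that state through. A tiny sketch in Python using a linear congruential step (the classic C-library example constants):

        def next_random(seed):
            """Pure PRNG step: the same seed always yields the same pair."""
            new_seed = (1103515245 * seed + 12345) % 2**31
            return new_seed / 2**31, new_seed  # (value in [0, 1), new seed)

        value1, seed = next_random(42)    # deterministic given 42
        value2, seed = next_random(seed)  # caller threads the seed explicitly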


  • ntfsresize volume and size information

    - by antonio
    I am going to resize my sda2 NTFS partition. When gathering info with ntfsresize, I get:
    ntfsresize --info /dev/sda2
    ntfsresize v2013.1.13 (libntfs-3g)
    Device name        : /dev/sda2
    NTFS volume version: 3.1
    Cluster size       : 4096 bytes
    Current volume size: 21999993344 bytes (22000 MB)
    Current device size: 23622320128 bytes (23623 MB)
    Checking filesystem consistency ...
    Accounting clusters ...
    Space in use       : 10673 MB (48.5%)
    Collecting resizing constraints ...
    You might resize at 10672590848 bytes or 10673 MB (freeing 11327 MB).
    Please make a test run using both the -n and -s options before real resizing!
    Can you tell me what is the difference between volume and device size? As for device size, 23622320128 bytes / 1000^2 = 23622.3 MB. Why is 23623 MB reported instead of 23622? Note that parted confirms this value:
    parted /dev/sda2 unit MB p
    Model: Unknown (unknown)
    Disk /dev/sda2: 23622MB
    Sector size (logical/physical): 512B/512B
    Partition Table: loop
    Disk Flags:
    Number  Start   End      Size     File system  Flags
     1      0.00MB  23622MB  23622MB  ntfs
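
    Not an authoritative answer, but the volume (the filesystem) is simply smaller than the device (the partition) that contains it, and the 23623 figure is consistent with ntfsresize rounding bytes up to the next whole MB where parted truncates:

        import math

        device_bytes = 23622320128
        print(device_bytes / 10**6)             # 23622.320128
        print(math.ceil(device_bytes / 10**6))  # 23623 (ntfsresize's figure)
        print(device_bytes // 10**6)            # 23622 (parted's figure)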


  • GPT not mounting using "normal" GPT mounting techniques on 12.04

    - by Roy Markham
    I've got two 2TB drives: one MBR and the other GPT. sudo blkid /dev/sdb1 returns a blank. gdisk shows:
    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present
    Found valid GPT with protective MBR; using GPT.
    Warning! Secondary partition table overlaps the last partition by 1970 blocks!
    You will need to delete this partition or resize it in another utility.
    Disk /dev/sdb: 3907027055 sectors, 1.8 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 38A1113D-B5E9-4B69-ABFF-ACB27AFB3DDD
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 3907027021
    Partitions will be aligned on 8-sector boundaries
    Total free space is 2014 sectors (1007.0 KiB)
    Number  Start (sector)  End (sector)  Size       Code  Name
       1    34              262177        128.0 MiB  0C01  Microsoft reserved part
       2    264192          3907028991    1.8 TiB    0700  Basic data partition
    Mounting via fstab or with -t gives the same "NTFS signature is missing" error whether I use ntfs or ntfs-3g. GParted says one partition is overwriting another, yet Windows shows no errors at all. The drive also mounts easily via Mac OS (triple boot).


  • Save match details to SQLite or XML?

    - by trizz
    I'm making a (conceptual) system to simulate any kind of sports match (like soccer, basketball, etc.) with actions (for example: pass, pass, out, pass, score), so it will read like a real report. The main statistics (play time, number of actions, etc.) I'm saving to a MySQL database, but the report itself can contain more than 1000 actions per match. To avoid millions of records in my database, I'm thinking of saving the detailed report in a SQLite database or an XML file. For every match played, a file will be created. When a user requests the match details, I read the file for the details. Which is the better choice for this purpose: SQLite or XML?
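
    If the reports go the SQLite route, each match becomes a single small file with one table of actions, and reading a report back is one ordered query. A minimal sketch -- the schema and file naming are invented for illustration:

        import sqlite3

        con = sqlite3.connect("match_000123.db")  # one file per match
        con.execute("""
            CREATE TABLE IF NOT EXISTS action (
                seq    INTEGER PRIMARY KEY,  -- preserves report order
                clock  INTEGER,              -- seconds of play time
                player TEXT,
                kind   TEXT                  -- 'pass', 'out', 'score', ...
            )""")
        con.executemany(
            "INSERT INTO action (clock, player, kind) VALUES (?, ?, ?)",
            [(12, "Player A", "pass"), (14, "Player B", "score")])
        con.commit()

        for row in con.execute(
                "SELECT clock, player, kind FROM action ORDER BY seq"):
            print(row)

    XML would work too, but answering any question about a match means parsing the whole file; SQLite lets you query details without loading everything.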


  • Goals for 2010 Retrospective

    - by Brian Jackett
    As we approach the end of 2010 I'd like to take a few minutes to reflect back on this past year and revisit the goals that I set for myself at the beginning of the year (click here to see those goals). I feel it is important to track your goals not only to see if you accomplished them but also to see what new directions in life you pursued. Once we enter 2011 I'll follow up with a new post on goals for the new year.
    Professional
    Blog – This year I intended to write at least 2 posts a month. Looking back, I far surpassed that goal by writing 47 posts (this one being my 48th). As with many things in life, quantity does not mean quality; a good example is the number of my posts announcing upcoming speaking engagements and providing links to presentation slides and scripts. That aside, I like to at least keep content relatively fresh on this blog, which I was able to accomplish. At the same time I've gotten much more comfortable in my blogging style and it has become much easier to write.
    Speaking – I didn't define a clear goal for speaking engagements, but had a rough idea of wanting to speak at 2-3 events. Once again I far exceeded that number by speaking at 10 separate events and delivering 12+ presentations. I'm very thankful for all of the opportunities that I was given and all of the wonderful people I have met as a result.
    Volunteering – This year I intended to help out with the COSPUG (now Buckeye SPUG) steering committee and the Stir Trek conference. I fulfilled both goals, as well as taking on lead organizer duties for the first ever SharePoint Saturday Columbus. Each of these events and groups turned out to be successful and I was glad to be a part of them all. I look forward to continuing to volunteer with each next year in some capacity.
    Android Development – My goal of getting into Android development was a late addition, and one I didn't necessarily fulfill. I spent a couple of nights downloading the tools, configuring my environment, and going through some "simple" tutorials. I say "simple" because in my opinion the tutorials were not laid out very well, took a long time to get running properly, and confused me more than they helped. After about a week I was frustrated with the process and didn't think it was a good use of my time. On a side note, I've dabbled in Windows Phone 7 development over the past few months and have been very excited by how easy and intuitive it was to get started and develop some proofs of concept.
    Personal
    Getting in Shape – I had intended to play on recreational sports leagues and work out on a semi-regular basis. For the most part I fulfilled this goal by playing on various softball and volleyball leagues as well as using the gym. At the same time I had some major setbacks: in the spring I badly sprained my ankle and got hit in the knee with a softball, which kept me inactive for almost 2 months, and more recently I broke my knuckle (click here to read about it), which I am still recovering from.
    Volunteering – On the volunteering front I kept my commitments at my parish's high school youth group. As for other volunteering opportunities, I got involved with a great organization called Columbus Gives Back (website). I've volunteered with them a few times and really enjoy their goal of providing opportunities to people with busy schedules. They offer a variety of events, typically after work hours and spread out around Columbus, with no set commitment on the time you need to put in. If you have the time or motivation I highly recommend them.
    House/Condo – I had been thinking of buying a house or condo this past summer, but decided to extend my apartment lease for another year instead. I have begun the search for a place in the past few weeks and am excited to begin the process of owning a home.
    Conclusion
    This year I was able to set and achieve many of my goals. For next year I'll try to put more specific numbers to all of my goals. If any of you readers set goals for 2011, feel free to send me a link as I'd love to see what you are aiming to accomplish. Have a great end of 2010 and best wishes for the start of 2011!
    -Frog Out


  • Automatically analyze Excel files

    - by dole doug
    I have to replicate the manual generation of a large number of Excel files. I started to manually track the relations between cells (files, formulas, etc.). I also had a talk with the person who generates those files. For now I have a general understanding of how the Excel files are generated, but "the devil is in the details". I assume that I can write a script which will generate the hierarchy between cells and files, but this might require the same effort as manually noting the relations. Also, I'm afraid that I'm not very experienced, and my automated approach may be more error-prone than a manual analysis. How should I handle this problem? Do you know of an open source project which analyzes Excel files recursively, following the formulas?
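
    For the recursive-analysis part, openpyxl can read formulas rather than cached values, which gives a starting point for building the dependency hierarchy. A rough sketch -- the reference regex is deliberately naive and misses ranges and external links:

        import re
        from openpyxl import load_workbook

        # Naive A1-style reference matcher (illustrative only).
        CELL_REF = re.compile(r"(?:'?([^'!]+)'?!)?\$?([A-Z]{1,3})\$?(\d+)")

        def formula_cells(path):
            """Yield (sheet, coordinate, formula) for every formula cell."""
            wb = load_workbook(path, data_only=False)  # keep formulas
            for ws in wb.worksheets:
                for row in ws.iter_rows():
                    for cell in row:
                        if isinstance(cell.value, str) and cell.value.startswith("="):
                            yield ws.title, cell.coordinate, cell.value

        for sheet, coord, formula in formula_cells("report.xlsx"):
            print(sheet, coord, formula, "->", CELL_REF.findall(formula))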


  • Best way to store a large amount of game objects and update the ones onscreen

    - by user3002473
    Good afternoon guys! I'm a young beginner game developer working on my first large-scale game project and I've run into a situation where I'm not quite sure what the best solution may be (if there is a lone solution). The question may be vague (if anyone can think of a better title after having read the question, please edit it) or broad, but I'm not quite sure what to do and I thought it would help just to discuss the problem with people more educated in the field.
    Before we get started, here are some of the questions I've looked at for help in the past: "Best way to keep track of game objects", "Elegant way to simulate large amounts of entities within a game world", and "What is the most efficient container to store dynamic game objects in?" I've also read articles about different data structures commonly used in games to store game objects, such as this one about slot maps, but none of them are really what I'm looking for. Also, if it helps at all, I'm using Python 3 to design the game. It has to be Python 3; if I could, I would use C++ or UnityScript or something else, but I'm restricted to Python 3.
    My game will be a form of side-scrolling shooter. In said game the player will traverse large rooms with large numbers of enemies and other game objects to update (think some of the larger areas in Cave Story or Iji). The player obviously can't see the entire room all at once, so there is a viewport that follows the player around and renders only a selection of the room and the game objects it contains. This is not a foreign concept. The part that's getting me confused has to do with how certain game objects are updated. Some of them are to be updated constantly, regardless of whether or not they can be seen. Other objects, however, are only to be updated when they are onscreen (for example, an enemy would only be updated to react to the player when it is onscreen or within a certain range of the screen).
    Another problem is that game objects have to be easily referable by other game objects; something that happens in the player's update() method may affect another object in the world. Collision detection in games is always a serious problem: I need a way of containing the game objects that minimizes the number of cases to test when checking for collisions against one another. The final problem is that of creating and destroying game objects; I think this problem is pretty self-explanatory.
    To store the game objects I've considered a number of different methods. The original method was to simply store all the objects in a hash table by an id. This method is simple and decently fast, as it allows all the objects to be looked up in O(1) complexity, and also allows them to be deleted fairly easily. Hash collisions would not be a major problem: I wasn't originally planning on using computer-generated ids to store the game objects, I was going to rely on ids given to them by the game designer (such names would be strings like 'Player' or 'EnemyWeapon4'), and even if I did use computer-generated ids, with a decent hashing algorithm the chances of collisions would be around 1 in 4 billion. The problem with using a hash table, however, is that it is inefficient at checking which objects are in range of the viewport. Considering that certain game objects move (as well as the viewport itself), the only way to update only the objects in the viewport would be to iterate through every object in the hash table and check whether it is in the viewport or not, updating only the ones in the valid area. This would be incredibly slow in scenarios where the number of game objects exceeds 500, or even 200.
    The second solution was to store everything in a 2-d list. The world is partitioned up into cells (a tilemap, essentially), where each cell or tile is the same size and is square. Each cell would contain a list of the game objects currently occupying it (each game object would be inserted into a cell depending on the center of the object's collision mask). A 2-d list would allow me to take the top-left and bottom-right corners of the viewport and easily grab a rectangular area of the grid containing only the cells with entities in valid range to be updated. This method also solves the problem of collision detection: when I take an entity, I can find the cell it is currently in, then check only against entities in its cell and the 8 cells around it. One problem with this system, however, is that it prohibits easy lookup of game objects. One solution would be to simultaneously keep a hash table containing the positions of the objects in the 2-d list, indexed by the id of said object. The major problem with a 2-d list is that it would need to be rebuilt every single game frame (along with the hash table of object positions), which may be a serious detriment to game speed.
    Both systems have ups and downs and seem to solve some of each other's problems; however, using them both together doesn't seem like the best solution either. If anyone has any thoughts, ideas, suggestions, comments, opinions or solutions on new data structures or better implementations of the existing data structures I have in mind, please post; any and all criticism and help is welcome. Thanks in advance!
    EDIT: Please don't close the question because it has a bad title, I'm just bad with names!
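
    On the "rebuilt every single game frame" worry: the grid does not have to be rebuilt, it can be updated incrementally, because most objects stay in the same cell from one frame to the next. A sketch of that idea in Python 3 (cell size and details are illustrative):

        class SpatialGrid:
            """Uniform grid keyed by cell coordinates; cells are created lazily."""
            def __init__(self, cell_size):
                self.cell_size = cell_size
                self.cells = {}  # (cx, cy) -> set of objects
                self.where = {}  # id(obj) -> (cx, cy), for O(1) updates

            def _key(self, x, y):
                return int(x // self.cell_size), int(y // self.cell_size)

            def move(self, obj, x, y):
                """Insert or update; a no-op while the object stays in its cell."""
                new = self._key(x, y)
                old = self.where.get(id(obj))
                if old == new:
                    return
                if old is not None:
                    self.cells[old].discard(obj)
                self.cells.setdefault(new, set()).add(obj)
                self.where[id(obj)] = new

            def remove(self, obj):
                old = self.where.pop(id(obj), None)
                if old is not None:
                    self.cells[old].discard(obj)

            def query(self, x0, y0, x1, y1):
                """All objects in cells overlapping a rectangle (the viewport)."""
                cx0, cy0 = self._key(x0, y0)
                cx1, cy1 = self._key(x1, y1)
                for cx in range(cx0, cx1 + 1):
                    for cy in range(cy0, cy1 + 1):
                        yield from self.cells.get((cx, cy), ())

    A separate name -> object dict kept alongside the grid preserves the easy-lookup property of the hash-table design.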


  • ban an IP temporarily after x-many incorrect password attempts

    - by sova
    My new web server got hacked (sigh). I have physical access to my machine (in the near future). It seems like the only changes were a new user account and a borked sudoers file. It seems as though the password was discovered by dictionary searching (I didn't pick it). After I fix these problems (or do a full reinstall?) I want to add a mechanism to ban an IP (for maybe 24 hours or some time limit) after it gets the password wrong x number of times, but I'm not a Unix sysadmin or anything, so I'm not really sure where to get started. The machine is running Lucid Lynx, from an Ubuntu minimal installation. Thanks, I appreciate your help, guys. Hopefully this is the right place for this question.
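
    fail2ban is the standard off-the-shelf answer here: it watches auth.log and adds temporary firewall bans, exactly as described. For what it does underneath, the core is just a sliding-window failure counter; a sketch in Python, with made-up thresholds:

        import time
        from collections import defaultdict, deque

        WINDOW, LIMIT, BAN_SECONDS = 600, 5, 24 * 3600
        failures = defaultdict(deque)  # ip -> recent failure timestamps
        banned = {}                    # ip -> time the ban expires

        def record_failure(ip, now=None):
            now = time.time() if now is None else now
            q = failures[ip]
            q.append(now)
            while q and q[0] < now - WINDOW:    # drop stale failures
                q.popleft()
            if len(q) >= LIMIT:
                banned[ip] = now + BAN_SECONDS  # here you'd add a firewall rule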


  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them.
    As for specs:
    - Any record generated must be able to be connected to any other record in any other user table (excluding itself -- the record, not the table).
    - These "connections" are directional, and the list of connections a record has is user-ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others.
    - The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended.
    - A record's field can also include aggregate information from its connections (like average, sum, etc.) that must be updated on change from another record it's connected to.
    - To conserve memory, only relevant information must be loaded at any one time (I can't load the entire database into memory at load time and go from there).
    - I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote DB.
    - Neither the user tables, connections, nor records are known at design time, as they are user-generated.
    I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt, I had one object managing all of a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became... onerous (i.e. a huge spaghettified mess). Tracing dependencies using this method became almost impossible.
    Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing its own data and connections to other records. Doing this increases my memory footprint but also lets me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: tracing dependencies, eliminating cyclic recursive updates, etc.
    My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas, so I wanted to ask and see if anybody else has ideas of how this should be structured. Connections are fairly simple; they contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far:
    1. Store all the connections in one big table. If I do this, either I load all connections at once (one big DB call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down.
    2. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB call to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it.
    3. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming and outgoing). This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal, but it does mean that when I load a user table, I only need to load one 'connection' table and have all the information I need.
    This also presents a separate problem, that of connection object creation. Since each user table has a list of all connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection (see the sketch below).
    Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
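
    For the "only one connection object per connection" requirement, a weak-value cache keyed by connection id is the usual shape of that factory. A sketch -- the field names follow the post, the classes themselves are hypothetical:

        import weakref

        class Connection:
            def __init__(self, conn_id, from_record_id, from_table_id,
                         from_record_order, to_record_id, to_table_id,
                         to_record_order):
                self.conn_id = conn_id
                self.from_record_id = from_record_id
                self.from_table_id = from_table_id
                self.from_record_order = from_record_order
                self.to_record_id = to_record_id
                self.to_table_id = to_table_id
                self.to_record_order = to_record_order

        class ConnectionFactory:
            """Hands out at most one live Connection object per DB row."""
            def __init__(self):
                self._live = weakref.WeakValueDictionary()  # conn_id -> object

            def get(self, conn_id, **fields):
                conn = self._live.get(conn_id)
                if conn is None:                # not cached, or already collected
                    conn = Connection(conn_id, **fields)
                    self._live[conn_id] = conn  # weak ref: freed when unused
                return conn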

