Search Results

Search found 38088 results on 1524 pages for 'large scale project'.

  • Zoom in Java Swing application

    - by Shirky
    Hi there, I am looking for ways to zoom in a Java Swing application. That means I would like to resize all components in a given JPanel by a given factor, as if I had taken a screenshot of the UI and applied an "image scale" operation. The font size, as well as the size of checkboxes, textboxes, cursors etc., has to be adjusted. It is possible to scale a component by applying a transform to its graphics object: protected Graphics getComponentGraphics(Graphics g) { Graphics2D g2d = (Graphics2D) g; g2d.scale(2, 2); return super.getComponentGraphics(g2d); } That works as long as you don't care about self-updating components. If you have a textbox in your application, this approach stops working, because the textbox repaints itself every second to show the blinking cursor, and since it doesn't use the modified graphics object that time, the component appears at the old location. Is there a way to change a component's graphics object permanently? There is also a problem with the mouse click event handlers. The other possibility would be to resize all child components of the JPanel (setPreferredSize) to a new size. That doesn't work for checkboxes, since the displayed picture of the checkbox doesn't change its size. I also thought of writing my own layout manager, but I don't think that will work, since layout managers only change the position (and size) of components and cannot zoom into checkboxes (see the previous point). Or am I wrong about this hypothesis? Do you have any ideas how one could achieve a zoomable Swing GUI without writing custom components? I looked at rotatable user interfaces because the problem seems related, but I didn't find any satisfying solution there either. Thanks for your help, Chris
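
    For reference, a minimal sketch of the Graphics2D approach described above: a container that paints its children scaled. This is only an illustration, not a full solution, and it shares exactly the weaknesses the question mentions (self-repainting components bypass the scaled Graphics, and mouse coordinates are not remapped).

        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import javax.swing.JPanel;

        // Sketch: paint all children of this panel scaled by a zoom factor.
        // Caret blinking and mouse events still use unscaled coordinates,
        // which is the core of the problem described in the question.
        public class ZoomPanel extends JPanel {
            private double zoom = 2.0;

            public void setZoom(double zoom) {
                this.zoom = zoom;
                revalidate();
                repaint();
            }

            @Override
            public void paint(Graphics g) {
                Graphics2D g2d = (Graphics2D) g.create();
                g2d.scale(zoom, zoom);
                super.paint(g2d); // paints this component and its children scaled
                g2d.dispose();
            }
        }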

    Read the article

  • Compress Large Video Files with DivX / Xvid and AutoGK

    - by DigitalGeekery
    Have you ever recorded home video on a camcorder only to find the video size is enormous? What if you wanted to share a video clip on YouTube or another video sharing site, but the file size was bigger than the maximum upload size? Today we'll look at a way to compress certain video files, such as MPEG and AVI, with Auto Gordian Knot (AutoGK). AutoGK is a free application that runs on Windows. It supports Mpeg1, Mpeg2, Transport Streams, Vobs, and virtually any codec used for an .AVI file. AutoGK will accept as input the following file types: MPG, MPEG, VOB, VRO, M2V, DAT, IFO, TS, TP, TRP, M2T, and AVI. Files are output as .AVI files and are converted using the DivX or XviD codecs. Installing and Using AutoGK: Download and install AutoGK (link below), then open AutoGK. You'll need to navigate a few wizard screens, but you can just accept the defaults. Choose your video file by clicking on the folder to the right of the Input file text box. Browse for and select your video file and click "Open." For this example, we'll be working with an .AVI file that's 167MB in size. The output file is copied into the same directory as the input file by default, but you can change this if you choose. If the input file is also an .AVI, AutoGK will append an _agk to the output file name so that the original is not overwritten. Next, you'll see any audio tracks listed. You can clear the check box if you'd like to remove an audio track. You can choose one of the predefined size options, or select a custom size in MB or a target quality in percent. For our example, we'll be compressing our 167MB file to 35MB. Click on Advanced Settings. Here you can choose your codec, if you have a preference, as well as the output resolution and output audio. If you'd like to use the DivX codec, you'll need to download and install it separately (see link below). Typically you'll want to keep the defaults. Click "OK." Now you're ready to add your file conversion job to the job queue. Click Add Job to add it to the queue. You can add multiple file conversions to the job queue and convert them in one batch. Click Start to begin the conversion process. You'll be able to see the progress in the log window on the bottom left. When the conversion is complete, you'll see a "Job finished" message and the total time in the log window. Check your output file to see its compressed size, and test your video just to make sure the output quality is satisfactory. Note: conversion times can vary greatly depending on the size of the file and your computer hardware. Files that are several GBs in size may take several hours to compress. AutoGK is no longer being actively developed but is still a wonderful DivX/XviD conversion tool. It can also be used to compress and convert non-copy-protected DVDs.
    Downloads: AutoGordianKnot, DivX (optional)

    Read the article

  • Large File Upload in SharePoint 2010

    Read the article

  • What is the economic rationale behind programmers who work on an open source project (free) instead of a commercial project (not free)?

    - by Kim Jong Woo
    I can't understand why some people dedicate so many hours to a completely open source project without closing it and yielding greater profit from it. I don't think profiting from your code is evil; I think it's a great motivator. Why do some people feel that commercial software and generating money from it is bad? There seems to be this black-and-white thinking that open source = good, commercial = bad. I hardly find this convincing, and often commercial companies which are supported by sales produce very good results, while open source software in the same niche can't compete against the corporation. Of course, sometimes it is completely the other way around, and private companies produce an inferior product compared to their open source counterparts. So help me understand: why do programmers open source their code when there are commercial prospects for it? Shouldn't the rational programmer, or human being, make every effort to capitalize on their opportunity cost? Why work on an open source project for months when you could have spent the same number of hours at a commodity wage or for some other monetary compensation?

    Read the article

  • AWS Large Instance: /mnt does not show all the space that should be available

    - by Emile Baizel
    I just created a Large (m1.large) 64-bit instance, which comes with 850 GB of instance storage (see the Large Instance description at http://aws.amazon.com/ec2/instance-types/). A 'df -h' from the root folder gives me the output below. /mnt is where I think the instance storage is, but here it is only showing me 414G. I have set up two servers and both are showing the same numbers.
    root@ip-11-11-11-11:/# df -h
    Filesystem  Size  Used  Avail  Use%  Mounted on
    /dev/sda1   7.9G  1.1G  6.5G   14%   /
    none        3.7G  112K  3.7G    1%   /dev
    none        3.7G     0  3.7G    0%   /dev/shm
    none        3.7G   48K  3.7G    1%   /var/run
    none        3.7G     0  3.7G    0%   /var/lock
    /dev/sdb    414G  199M  393G    1%   /mnt

    Read the article

  • Custom Profile Provider with Web Deployment Project

    - by Ben Griswold
    I wrote about implementing a custom profile provider inside your ASP.NET MVC application yesterday. If you haven't read the article, don't sweat it. Most of the stuff I write is rubbish anyway. Since you have joined me today, though, I might as well offer up a little tip: you can run into trouble, like I did, if you enable your custom profile provider inside an application which is deployed using a Web Deployment Project. Everything will run great on your local machine and you'll probably take an early lunch because you got the code running in no time flat and the build server is happy and all tests pass and, gosh, maybe you'll just cut out early because it is Friday after all. But then the first user hits the integration machine and, that's right, yellow screen of death. Lucky you, just as you're walking out the door, the user kindly sends the exception message and stack trace:
    Value cannot be null. Parameter name: type
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Stack Trace:
    [ArgumentNullException: Value cannot be null. Parameter name: type]
       System.Activator.CreateInstance(Type type, Boolean nonPublic) +2796915
       System.Web.Profile.ProfileBase.CreateMyInstance(String username, Boolean isAuthenticated) +76
       System.Web.Profile.ProfileBase.Create(String username, Boolean isAuthenticated) +312
    User error? Not this time. Damn! One hour later… you notice the harmless "Treat as library component (remove the App_Code.compiled file)" setting on the Output Assemblies tab of your Web Deployment Project. You have no idea why, but you uncheck it. You test and everything works great both locally and on the integration machine. Application users think you're the best and you're still going to catch the last half hour of happy hour. Happy Friday.

    Read the article

  • Unit testing internal methods in a strongly named assembly/project

    - by Rohit Gupta
    If you need to create unit tests for internal methods within an assembly in Visual Studio 2005 or greater, you need to add an entry to the AssemblyInfo.cs file of the assembly you are creating the unit tests for. For example, if you need to create tests for an assembly named FincadFunctions.dll, and this assembly contains internal/friend methods you need to write unit tests for, then add an entry to FincadFunctions.dll's AssemblyInfo.cs like so:
    [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests")]
    where FincadFunctionsTests is the name of the unit test project which contains the unit tests. However, if FincadFunctions.dll is a strongly named assembly, you will get the following error when compiling the FincadFunctions.dll assembly:
    Friend assembly reference "FincadFunctionsTests" is invalid. Strong-name assemblies must specify a public key in their InternalsVisibleTo declarations.
    Thus, to add a public key token to the InternalsVisibleTo declaration, do the following. You need the .snk file that was used to strong-name the FincadFunctions.dll assembly. You can extract the public key from this .snk with the sn.exe tool from the .NET SDK. First we extract just the public key from the key pair (.snk) file into another .snk file:
    sn -p test.snk test.pub
    Then we ask for the value of that public key (note we need the long hex key, not the short public key token):
    sn -tp test.pub
    We end up getting a super LONG string of hex, but that's just what we want: the public key value of this key pair. We add it to the strongly named project "FincadFunctions.dll" that we want to expose our internals from. What before looked like:
    [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests")]
    now looks like:
    [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests, PublicKey=002400000480000094000000060200000024000052534131000400000100010011fdf2e48bb")]
    And we're done. Hope this helps.

    Read the article

  • How to grow from single server setup

    - by Jenkz
    I'm looking for resources on how to grow our server setup. We currently have one dedicated server with Rackspace in the UK with the following spec: HP DL385 G2 (previous generation), single dual-core Opteron 2214 (2.2GHz), 4GB RAM, 2x 10,000rpm SCSI drives in RAID 1. Our traffic is up to 550,000 UVs per month. The site runs off a PHP and MySQL setup. The database gets an absolute hammering; we have many complex queries joining multiple tables. We are using APC for PHP caching. I'm getting to the stage where I've done as much DB and query optimisation as I can and wonder what the next step should be. I've looked at memcache, but I've got the impression that this requires a large amount of RAM and ideally a dedicated box. So is the next step to have two boxes, one for the database and one for Apache? Or is there a step I've overlooked? Our load is usually around the 2 mark, but right now it's up at 20!

    Read the article

  • Best way to convert existing project to be open source in GitHub

    - by Tom
    I've been working on a personal closed source project for some time and would like to make it open source. I've never created my own open source project before, so it will be a good learning experience. I have been using GitHub as source control, so once I've written some decent docs on how to use and develop for it etc., it should be as simple as switching the repo to be public, right? I guess my main question is around licencing. I was thinking of going with the Apache 2.0 licence just because it seems to be widely used. It requires the licence header to be attached to all the source files, but if I do that now then all the past commits will have it missing. Does that mean someone could pull an earlier version and it wouldn't have a licence? Is it best to start a new repo with the initial commit containing all the code with licence headers? Or is there some advanced Git functionality that allows me to apply the licence header to all existing commits somehow? Cheers.

    Read the article

  • How to pause/resume transfer of large files?

    - by Olivier Lalonde
    I recently had to copy about 20 GB of data split between about 20 files from my laptop to an external hard drive. Since this operation takes quite a while (at ~560kb/s), I was wondering if there was any way to pause the transfer and resume it later (in case I need to interrupt the transfer). As a side question, is there any performance difference between copying from the terminal vs copying from Nautilus?
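
    For illustration, resuming a copy by hand boils down to checking how many bytes the destination already has, skipping that many in the source, and appending the rest. A minimal, hypothetical sketch of that idea follows; this is not what Nautilus does, and tools such as rsync with --partial implement the same idea far more robustly.

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        // Hypothetical resumable copy: if a partial destination file exists,
        // skip that many bytes in the source and append the remainder.
        // Interrupt it at any point and rerun to resume where it left off.
        public class ResumableCopy {
            public static void copy(Path src, Path dst) throws IOException {
                long offset = Files.exists(dst) ? Files.size(dst) : 0;
                try (InputStream in = Files.newInputStream(src);
                     OutputStream out = Files.newOutputStream(dst,
                             StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
                    long skipped = in.skip(offset); // jump past what was already copied
                    if (skipped != offset) throw new IOException("cannot seek in source");
                    byte[] buf = new byte[1 << 16];
                    int n;
                    while ((n = in.read(buf)) > 0) {
                        out.write(buf, 0, n);
                    }
                }
            }

            public static void main(String[] args) throws IOException {
                copy(Paths.get(args[0]), Paths.get(args[1]));
            }
        }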

    Read the article

  • Designing a large database with multiple sources

    - by CatchingMonkey
    I have been tasked with redesigning, or at worst optimising, the structure of a database for a data warehouse. Currently, the database has 4 source databases (due to expand to X number of others), all of which have their own data structures, naming conventions etc. At the moment an overnight SSIS package pulls the data from the various sources and then, for each source, converts the data into a standardised, usable format. These tables are then appended to each other, creating a 60m row, 40 column beast! This table is then used in a variety of ways, from an OLAP cube to a web front end. The structure has been in place for a very long time, and I have been able to prove the advantages of normalisation, which is the way I would like to go. The problem for me is that the overnight process takes so long that I don't want to spend additional time normalising the last table into something usable. Can anyone offer any insight or ideas into the best way to restructure or optimise the database efficiently? Edit: All the databases are MS SQL Server 2008 R2. Thanks in advance, CM

    Read the article

  • Reducing brightness of large areas containing bright colours

    - by intuited
    I do most of my work in either a terminal or a web browser. I prefer my terminals to use bright colours on dark. I would really prefer that web pages tended to look this way as well, but that's not under my control. The problem is that when I switch from a light-on-dark terminal to a dark-on-light web page (like this one), my eyes have to adjust to the overall rise in screen brightness. Apparently this is bad for your eyes, in addition to being painful and annoying. It would seem to be possible for some layer of the interface to adjust the displayed colours for parts of the screen, or perhaps for particular windows, to reduce the brightness of the brighter areas of the screen. Can this be done, possibly with a Compiz extension?

    Read the article

  • How do you dive into large code bases?

    - by miku
    What tools and techniques do you use for exploring and learning an unknown code base? I am thinking of tools like grep, ctags, unit tests, functional tests, class-diagram generators, call graphs, code metrics like sloccount, and so on. I'd be interested in your experiences, the helpers you used or wrote yourself, and the size of the codebase you worked with. I realize that this is also a process (happening over time) and that learning can mean anything from "can give a ten-minute intro" to "can refactor and shrink this to 30% of the size". Let's leave that open for now.

    Read the article

  • Narrowing down my large keyword list for new PPC campaign

    - by gijoemike
    If I have a list of 100 keywords that are candidates for a PPC campaign (my list is actually 1000+), what is the best approach to narrowing this down to the top 5-10 keywords I should start with? I'm also wondering if my top chosen PPC keywords should also be my main keywords for SEO site optimization for organic traffic. I have another question on this site asking: how does one estimate where a competitor is getting most of their traffic from? The website isn't created yet, but will be up in January. Thanks.

    Read the article

  • Project collision shapes to plane for 2.5D collision detection

    - by Jkh2
    I am working on a top-down 2.5D game. In the game, anything that overlaps on the screen should be 'colliding' with each other, regardless of whether they are on the same plane in the 3D world. This is illustrated below from a sideways view: the orange and green circles are spheres floating in the 3D world. They are projected onto a plane parallel to the viewport plane (y = 0 in the image), and if they overlap there is a collision event between them. These spheres are attached to other meshes to represent the sphere bounding volumes for collisions. The way I plan to implement this at the moment is the following: (1) get the 3D world position at the center of the sphere; (2) use Camera.WorldToViewportPoint to project the point to the viewport plane; (3) move a Sphere Collider with the radius of the sphere to that point; (4) test for collisions using Unity colliders. My question is how to extend this to work for rotated cuboids. For instance, if I have two rotated cuboids and I follow the logic above, it would not work as intended: the cuboids may not collide in 3D, but they could still intersect on the view plane. An example is below. Is there a way to project a cuboid so that it would be aligned with the plane? Would it be a valid cuboid for all rotations if I did this?
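
    For what it's worth, a language-agnostic sketch of the scheme described above. The projection here is an assumed top-down orthographic one standing in for Camera.WorldToViewportPoint, and all names are hypothetical. For a rotated cuboid, one conservative option is to project all eight corners and test the 2D bounds of the result; that stays valid for every rotation, at the cost of occasionally over-reporting overlap.

        // Hypothetical sketch: project shapes onto the view plane and test
        // overlap in 2D. project() assumes a top-down orthographic camera
        // where y is the depth axis; a real engine call would replace it.
        public class PlaneCollision {
            static double[] project(double x, double y, double z) {
                return new double[] { x, z }; // drop the depth axis
            }

            // Sphere case: project centers, test circle overlap in 2D.
            static boolean circlesOverlap(double[] a, double ra, double[] b, double rb) {
                double dx = a[0] - b[0], dy = a[1] - b[1];
                double r = ra + rb;
                return dx * dx + dy * dy <= r * r;
            }

            // Cuboid case: project all 8 world-space corners, keep the 2D
            // bounds. Conservative for any rotation (may over-report).
            static double[] projectedBounds(double[][] corners) {
                double minX = Double.MAX_VALUE, minY = Double.MAX_VALUE;
                double maxX = -Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
                for (double[] c : corners) {
                    double[] p = project(c[0], c[1], c[2]);
                    minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
                    minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
                }
                return new double[] { minX, minY, maxX, maxY };
            }

            static boolean boundsOverlap(double[] a, double[] b) {
                return a[0] <= b[2] && b[0] <= a[2] && a[1] <= b[3] && b[1] <= a[3];
            }
        }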

    Read the article

  • Best method to organize/manage dependencies in the VCS within a large solution

    - by SnOrfus
    A simple scenario: 2 projects are in version control, the application and the tests. A significant number of checkins are made to the application daily. CI builds and runs all of the automation nightly. In order to write and/or run tests you need to have built the application (to reference/load instrumented assemblies). Now, consider the application to be massive, such that building it is prohibitive in time (an entire day to compile). The obvious side effect here is that once you've performed a build locally, it is immediately inconsistent with latest. For instance, if I were to sync with latest and open up one of the test projects, it would not build locally until I built the application. The same applies when syncing to another branch/build/tag. So, in order to even start working, I need to wait a day to build the application locally so that the assemblies can be loaded, and even then those assemblies wouldn't be latest. How do you organize the repository or (ideally) your development environment such that you can continually develop tests against whatever the current build is, or a given specific build, while minimizing building the application as much as possible?

    Read the article

  • Protecting CSS selectors on a large website

    - by Tim
    I have content that appears within a corporate website inside an iframe. Several departments contribute their own CSS files to manage the overall UI and design. My problem is that they may use selectors for elements like td (for instance), without notice. Of course that will affect my own content in the frame unless I add a class to every td. I'm just using td as an example: the generic style for any element could change without notice. Is there any method/convention/practice I can use to protect my own styling?

    Read the article

  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain with a lot of polygons. To render this terrain I thought of the following techniques: Using a height-map instead of raw meshes: yes, but I want to create a lot of caves and stuff that simply won't work with height-maps. Using voxels: yes, but I think that would be too much, since I don't even want to support changing terrain. Splitting into multiple chunks and doing some sort of LOD with the mesh: yes, but how would I do that? Tessellation usually creates more detail, not less. Precomputing the same mesh in lower-poly versions (like Mudbox does) and, depending on the distance, rendering one of these meshes: graphics memory is limited, and uploading only the visible chunks won't solve that problem, since the traffic would be too high. IMO the last one sounds really good, but imagine the following process: upload and render the chunks depending on the current player position [no problem]; the player walks straight forward; now we may have to swap one of the low-poly chunks for the high-poly one; so remove the low-poly chunk and load the high-poly chunk [already too much traffic here, I think]. I am not very experienced in graphics programming, and maybe the above process is totally okay, but somehow I think it is too much. And how about the disk space it would require? I think 3 levels of detail would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that this is important.)
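
    As an illustration of the last option, here is a hedged sketch of distance-based chunk LOD selection in which a chunk's mesh is only re-uploaded when its level actually changes, so per-frame traffic is limited to chunks crossing an LOD boundary. All thresholds and helper names here are hypothetical, not from the question.

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical sketch: each terrain chunk has a few precomputed
        // detail levels; pick one by camera distance and only swap the
        // uploaded mesh when the chosen level changes for that chunk.
        public class ChunkLod {
            static final double[] LOD_DISTANCES = { 100.0, 300.0, 900.0 }; // assumed thresholds

            static int levelFor(double distance) {
                for (int i = 0; i < LOD_DISTANCES.length; i++) {
                    if (distance < LOD_DISTANCES[i]) return i; // 0 = highest detail
                }
                return LOD_DISTANCES.length; // beyond last threshold: lowest detail
            }

            private final Map<Long, Integer> activeLevel = new HashMap<>();

            void update(long chunkId, double distanceToCamera) {
                int wanted = levelFor(distanceToCamera);
                Integer current = activeLevel.get(chunkId);
                if (current == null || current != wanted) {
                    // A real implementation would free the old mesh and
                    // upload the precomputed mesh for 'wanted' here.
                    activeLevel.put(chunkId, wanted);
                }
            }
        }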

    Read the article

  • How to make the run button run the project, not the file, in Eclipse

    - by Roy T.
    I'm using the Spring IDE, a variant of Eclipse, to create a Java project. One big irritation I have is that when I press the run button, Eclipse tries to run the current file, which usually fails because it doesn't have a main method. I've set up run configurations in the hope that this would make the play button default to the run configuration instead of the current file, but that doesn't work either. Now, to run my application correctly, I have to press the little arrow next to play and select my favorite run configuration; then it works. This is only two extra clicks, but it's tedious, the button is small, and I feel like I shouldn't have to perform these extra steps. I mean, what is the point of run configurations and projects if it still tries to run a file by default? Even more preferably, I wouldn't want to touch the mouse at all, but just press Ctrl+F11; this has the same behavior, though. All of the above applies to debugging as well, by the way. So my question is this: how do I make the run and debug buttons (and their shortcut keys) default to the project's run configuration instead of trying (and failing) to run only the current file, much like in Visual Studio and other IDEs?

    Read the article

  • Reading a large SQL Errorlog

    - by steveh99999
    I came across an interesting situation recently where a SQL instance had been configured with auditing of successful and failed logins written to the errorlog. This meant that every time a user or the application connected to the SQL instance, an entry was written to the errorlog, which in turn meant huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Server Management Studio, was extremely slow. Luckily, I was able to use xp_readerrorlog to work around this. Here are some example queries.
    To show errorlog entries from the currently active log, just for today:
    DECLARE @now DATETIME
    DECLARE @midnight DATETIME
    SET @now = GETDATE()
    SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
    EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now
    To find out how big the current errorlog actually is, and what the earliest and most recent entries in it are:
    CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
    INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
    SELECT COUNT(*) AS 'Number of entries in errorlog',
           MIN(logdate) AS 'ErrorLog Starts',
           MAX(logdate) AS 'ErrorLog Ends'
    FROM #temp_errorlog
    DROP TABLE #temp_errorlog
    To show just DBCC history information in the current errorlog:
    EXEC xp_readerrorlog 0, 1, 'dbcc'
    To show backup entries in the current errorlog:
    CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
    INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
    SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
    DROP TABLE #temp_errorlog
    xp_readerrorlog is an undocumented system stored procedure, so there is no official Microsoft link describing the parameters it takes; however, there's a good blog on this here. And if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do this, remember to change the number of errorlogs you retain; the default of 6 might not be sufficient for you.

    Read the article

  • How to deal with large open worlds?

    - by Mr. Beast
    In most games the whole world is small enough to fit into memory, but there are games where this is not the case. How is this achieved? How can the game still run fluidly even though the world is so big and maybe even dynamic? How does the world change in memory while the player moves? Examples of this include the TES games (Skyrim, Oblivion, Morrowind), MMORPGs (World of Warcraft), Diablo, Titan Quest, Dwarf Fortress, and Far Cry.

    Read the article

  • Large sparse (stiff) ODE system needed for testing

    - by macydanim
    I hope this is the right place for this question. I have been working on a sparse, stiff, implicit ODE solver and have finished the code so far. I have now tested the solver with the Van der Pol equation and another stiff problem, which is of dimension 4. But to perform better tests I am searching for a bigger system. I'm thinking of the order N = 100...1000, if possible stiff and sparse. Does anybody have an example I could use? I really don't know where to search.
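
    For reference, one classic family of test problems of arbitrary size is the method-of-lines discretization of a diffusion equation: the resulting system u' = A u is sparse (tridiagonal) and becomes stiffer as N grows, since the eigenvalues spread from about -pi^2 down to about -4/h^2. A minimal sketch, where the specific equation and the names are an illustrative choice rather than something from the question:

        // Sketch: u_t = u_xx on [0,1] with zero boundary values, discretized
        // on N interior points with spacing h = 1/(N+1). This gives the
        // stiff, tridiagonal test system u' = A u, A = tridiag(1,-2,1)/h^2,
        // and N is free, so it scales to the 100...1000 range asked about.
        public class HeatOde {
            final int n;
            final double invH2; // 1/h^2, the source of the stiffness

            HeatOde(int n) {
                this.n = n;
                double h = 1.0 / (n + 1);
                this.invH2 = 1.0 / (h * h);
            }

            // Right-hand side f(u) = A u; only nearest neighbors couple,
            // so the Jacobian an implicit solver needs is tridiagonal.
            double[] rhs(double[] u) {
                double[] f = new double[n];
                for (int i = 0; i < n; i++) {
                    double left = (i > 0) ? u[i - 1] : 0.0;
                    double right = (i < n - 1) ? u[i + 1] : 0.0;
                    f[i] = invH2 * (left - 2.0 * u[i] + right);
                }
                return f;
            }
        }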

    Read the article

  • New project: entire Node.js app

    - by Jared
    I have been looking into Node.js, Express and NowJS and love how easy it is to have real-time interactions between clients. My background is mostly CodeIgniter MVC using PHP and MySQL. I want to remake a current web project of mine from scratch to make everything better and more real-time with this newer technology. After researching and doing test examples, I want to use Node.js, Express and NowJS for the real-time interactions once someone connects to the socket.io socket to pull data back to clients, but use CodeIgniter for the control of the site and user management, possibly a shopping cart/store, and pretty much everything else. This is purely due to time constraints and the fact that I am already familiar with doing it that way. I have been looking at MongoDB as an alternative to MySQL. Basically the app is going to be multiple chat rooms, all on one page, with notifications and private messaging; lots of data transfer and images. Before I started piecing it together, I wanted to hear from people who have already done something similar. My model would use CodeIgniter and MySQL to render the page and then connect clients to a node.js server and broadcast using Express and NowJS. Would MongoDB be better than MySQL for storing tons of messages and data? Also, does it make sense not to build the whole site on Node.js, and to piece it together like that? I was asked to repost this here as it was not up to the format for SO; the OP is at http://stackoverflow.com/questions/12649469/new-project-need-some-start-up-advice-node-js-app#comment17062924_12649469

    Read the article
