Search Results

Search found 685 results on 28 pages for 'growth'.

Page 21 of 28

  • Managing a high-traffic media sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a fairly minimal amount of media to share, but the basic idea is that artists will set up profile pages with music they have made available, and consumers will visit those pages and listen to the music. We are starting with just a handful of artists, but I think this project will generate more and more artist pages. Eventually I'd like to let consumers create personalized playlists. How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am fairly illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn-rate estimates. I don't have a ton of capital to start with; I'm putting together a business plan and seeking investments. My game plan is to grow fast enough to be successful, and slowly enough to manage the financial growth requirements. Are there any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade later? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance. Jordan Westerman, D.B.A. Badfish Productions, LLC

    Read the article

  • Simple end-to-end load and bottleneck monitoring for DB-based web sites

    - by T.J. Crowder
    What tools do you use / would you recommend for monitoring a Linux-based, DB-based website's servers for bottlenecks and load? The obvious goal is to know when growth has reached the point where it's necessary to scale up (or out) one or more of the bits and pieces, because the current system won't manage the load if an observed trend continues. I'm looking for general recommendations based on standard Linux load metrics, disk I/O metrics, network I/O metrics, etc., but if specifics are helpful: it'll be Tomcat 6 using APR (possibly with Varnish or a similar caching and balancing front-end), MySQL, and either Ubuntu 8.04 LTS or 10.04 LTS depending on timing. I know about top, vmstat, iostat, bwmon and the like that collect and parse info from the /proc file system (et al.), and obviously MySQL provides a lot of queryable performance information. I could use those directly, probably automating periodic monitoring logs with scripts and such, but I have a suspicion that I'd be reinventing a wheel... For example, Hyperic HQ seems to be along the lines of what I'm looking for. Others? Meta: I tend to think of "recommendation" questions as needing to be CW because there's no one right answer, but I see a lot of these here that aren't CWs, so I haven't marked it as one. I'll happily do so if enough people think I should.
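    For scale, the wheel being reinvented here is small; a minimal sketch of the periodic /proc sampling described above (Linux only; the log path and 60-second interval are illustrative assumptions):

        #!/usr/bin/env python
        # Minimal sketch: sample load average and free memory from /proc
        # and append to a log. Path and interval are illustrative.
        import time

        def sample():
            with open('/proc/loadavg') as f:
                load1, load5, load15 = f.read().split()[:3]
            with open('/proc/meminfo') as f:
                meminfo = dict(line.split(':', 1) for line in f if ':' in line)
            return load1, load5, load15, meminfo['MemFree'].strip()

        with open('/var/log/simple-monitor.log', 'a') as log:
            while True:
                log.write('%d %s %s %s free=%s\n' % ((time.time(),) + sample()))
                log.flush()
                time.sleep(60)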

    Read the article

  • Is the sysadmin/netadmin the de facto project planner at your organization?

    - by gft74
    At my company it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones and timelines for deployment of developer applications. Typical scenario: my team receives a request for a new website/db combo and a date for deployment. I send back a questionnaire for the developer to fill out on all the reqs for the site (SSL? DB? growth projections? etc.). After I get back all the information, the head of development wants a well-developed document covering:

    - what servers it will live on
    - why those servers
    - the timeline for creating the resources
    - a step-by-step SOP for getting the application onto the server, with all related resources created (DNS, firewall, load balancer, etc.)

    I may just be whining, but it feels like this is something better suited to our Project Management staff (which we have) or to the developer. I understand that I need to give them a timeline for creating the resources, but this still feels like overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

    Read the article

  • Router recommendation to virtualize 800 IPs

    - by delerious010
    I've recently been looking at getting some new load balancers for our environment, as we are expecting to double our client base in the next 12 months. Currently we have 400 public IPs serving 800 clusters (2 clusters per IP due to ports) on Coyote Point balancers, distributing connections to 3 web servers serving about 6 GB outgoing and 2 GB incoming per day. If we double, this would be about 800 IPs, possibly 1,600 clusters, and about 6 servers per cluster (for a total of 9,600 so-called "real servers", to use Barracuda's lingo). Due to the number of clusters, most vendors I've looked at (Coyote, Barracuda, Loadbalancer.org) seem unsure whether they'll be able to handle our planned growth, mostly because of the health checks performed on the servers... which makes total sense when you think about it. So the fine folks at Loadbalancer.org suggested that we may be better off offloading the 400-800 public IPs, which we require for SSL eCommerce solutions, onto a forward-facing router. From that point on, the router could do some mangling to route EXT_IP:443 to INT_IP:INT_PORT, which would then allow us to reduce the load balancer configuration to 1 or 2 clusters, thus resolving the health-check problem. Does this idea make sense to you all? Or would you have other recommendations to make? Secondly, what router would you recommend for such an undertaking? I'd be looking at something that has some form of failover mechanism built in. On a totally unrelated note, I've got to admit that I'm extremely pleased with the responses I got from Loadbalancer.org. Their responses to my inquiries were surprisingly helpful (i.e. I didn't feel as if I was talking to a sales guy trying to push something). (No, I don't work for them, and sadly nor are they sending me free gear.)
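    For illustration only, the EXT_IP:443 to INT_IP:INT_PORT mangling described above is a single DNAT rule per cluster on a Linux router; the addresses and port here are placeholders:

        # Placeholder addresses: forward public HTTPS traffic for one
        # cluster to an internal address:port pair.
        iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 443 \
            -j DNAT --to-destination 10.0.0.5:8443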

    Read the article

  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method, formula, etc. that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the split and the number of nested folders? Please note that although similar, this isn't quite the same as Storing a million images in the filesystem; I'm looking for a way to make the theories outlined there more generic.

    Assumptions: I have 'some' initial number of files. This number would be arbitrary but large - say 500k to 10M+. I have considered the underlying physical hardware disk I/O requirements that would be necessary to support such an endeavor.

    Put another way: as time progresses this store will grow, and I want the best balance of current performance and headroom as my needs increase - say I double or triple my storage. I need to address both current needs and projected future growth, planning ahead without sacrificing too much current performance.

    What I've come up with: I'm already thinking about using a hash split every so many characters to spread things across multiple directories, keeping the trees even - very similar to what's outlined in the comments of the question above. It also avoids duplicate files, which would be critical over time. I'm sure the initial folder structure would differ based on what I've outlined and on the initial scale. As far as I can figure, there isn't a one-size-fits-all solution here, and it would be horrendously time-intensive to work something out experimentally.
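    For what it's worth, the hash-split idea reduces to a few lines; a sketch (md5 here, and the width/depth numbers are illustrative - they would be sized against the projected file count):

        import hashlib
        import os

        def split_path(digest, root='/data/store', width=2, depth=3):
            # Nest the file under `depth` levels of `width`-character hash
            # prefixes, e.g. ab/cd/ef/abcdef...dat. width=2, depth=3 gives
            # 256**3 (~16.7M) leaf directories; tune both to the file count.
            parts = [digest[i * width:(i + 1) * width] for i in range(depth)]
            return os.path.join(root, *(parts + [digest + '.dat']))

        # Hashing the file *contents* (not the name) is what also gives the
        # duplicate-file avoidance mentioned above: identical files collapse
        # onto one path.
        with open('incoming.dat', 'rb') as f:
            print(split_path(hashlib.md5(f.read()).hexdigest()))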

    Read the article

  • Four disks - RAID 10 or two mirrored pairs?

    - by ewwhite
    I have this discussion with developers quite often. The context is an application running on Linux with a medium amount of disk I/O. The servers are HP ProLiant DL3x0 G6 with four disks of equal size at 15k rpm, backed by a P410 controller with 512 MB of battery- or flash-backed write cache. There are two schools of thought here, and I wanted some feedback:

    1) I'm of the mind that it makes sense to create one array containing all four disks set up as RAID 10 (1+0), partitioned as necessary. This gives the greatest headroom for growth, leverages the higher spindle count, and offers better fault tolerance without degradation.

    2) The developers think it's better to have multiple RAID 1 pairs - one for the OS and one for the application data - citing that the spindle separation would reduce resource contention. However, this limits throughput by halving the number of drives, and in this case the OS doesn't really do much other than regular system logging. Additionally, the fact that we have the battery-backed RAID cache and substantial RAM seems to negate the impact of disk latency...

    What are your thoughts?

    Read the article

  • Replication keeps popping up on SharePoint databases

    - by Ddono25
    My typical discovery scenario: we receive an alert that the transaction log is growing quickly. We are in Simple Recovery, so I go check it out. The log is already sized to 100 GB and is at 80% capacity. I run the "What's using my log files" script from SQL Server Central and see that replication is enabled on the database. We did not set up replication, and I don't think replication can even be done on SharePoint content DBs, as it isn't supported (it requires a PK on all tables). This has been occurring on random servers (about 5 so far, all within the past three weeks) and only on content databases. sp_removedbreplication does not always remove the replication, either. We have found that we need to run sp_removedbreplication, change all DB owners to SA, and reset the recovery model to Simple to completely eradicate any vestiges of this bug. How would replication be enabling itself? We have never set up replication on these servers. There is no evidence of any type of replication other than the log_reuse_wait_desc from the DMV query and the log growth. Any help on this ghost would be appreciated!
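    For reference, the check and the cleanup step described above look roughly like this (the database name is a placeholder):

        -- What is holding the log back? (the DMV column mentioned above)
        SELECT name, log_reuse_wait_desc
        FROM   sys.databases
        WHERE  name = 'WSS_Content_Placeholder';

        -- The cleanup step that does not always stick:
        EXEC sp_removedbreplication 'WSS_Content_Placeholder';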

    Read the article

  • About to go live: virtual dedicated server or cloud?

    - by morpheous
    I am about to launch my startup company, and we will be going live in a few weeks' time. We have really tight budgetary constraints, since we are bootstrapping, and would prefer not to raise external capital. I can't use shared hosting because I need more control of the server machine (for technical reasons - e.g. we use proprietary extensions to PHP, Apache and the database layer as well) - but I want to control costs and don't want to go the fully private server route until we have determined the market size, etc. So the only real alternatives, AFAIK, are a virtual server and the cloud. At the moment, cloud services seem a bit "vague" to me. My understanding is that they allow an entity to outsource its IT infrastructure, which in my mind (at least) is indistinguishable from what a hosting provider provides (at least from a functional point of view) - I would like some clarification on exactly what the difference between the two is. Back to my original question, my requirements are:

    - IT infrastructure that can scale with growth
    - the ability to control the machine (e.g. to install our internally developed libraries)
    - backup software that is flexible and comprehensive enough (yet simple to use) to implement a (secured) backup strategy

    On the backup issue, I have always wondered where the backed-up data is actually stored, since the physical machines are remote and one can't get at any actual tapes or disks - I would also like some advice and recommendations in this area. Regarding data size, I expect the dataset to grow by a few megabytes every day (starting at, say, 10 MB, and in about a year's time possibly 50 MB). As an aside, I have decided to deploy on a Debian server (most of my additional libraries were compiled and built on a Debian machine). Mindful of all of the above, I would like some advice (and reasons) as to which route to take. I would also like some advice on which backup software to use, from people who have walked a similar path.

    Read the article

  • ASP.NET Session State SQL Server 2008 R2 Freezes with High CPU Usage

    - by jtseng
    Our ASP.NET website uses SQL Server as the session state provider. We currently host the database on SQL Server 2005, since it does not play well on 2008 R2. We would like to know why, and how to fix it.

    Hardware setup: our current session state server runs SQL Server 2005 with the files hosted on a single local disk. It is one of our oldest servers - it has served us well, and we never felt the need to upgrade it. The database is about 2 GB, holding 6,000 sessions. (The sessions are a little big, but we need them that way.) We have another server with SQL Server 2008 R2, a much faster CPU, much more RAM, and a much faster hard disk.

    Situation: one day, we had a huge surge in traffic. Transaction log growth froze the SQL Server for tens of seconds at a time, letting only a few requests through over several minutes. So we loaded the new server with ASPState, with very large data and log files, and pointed all of our applications to it. It chugged along fine for about 5 minutes, then CPU usage jumped to 50% of the 16 cores that Standard Edition can use, and it froze for tens of seconds at a time. The files do not record any autogrowth events, the disk queue is nice and low, and RAM usage is low. CPU usage on our old server has never been higher than 5%. What happened on the new server? Alternatively, I would like to hear success stories with the ASP.NET session state server running on SQL Server 2008 R2 with an average write load of 30 MB/sec and bursts up to 200 MB/sec.
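    When the new box next freezes, one standard first capture is what those 16 cores are actually waiting on; a stock diagnostic query, for what it's worth:

        -- Top waits since the last service restart: a routine first look
        -- when CPU spikes and requests stall.
        SELECT TOP (10) wait_type, waiting_tasks_count,
               wait_time_ms, signal_wait_time_ms
        FROM   sys.dm_os_wait_stats
        ORDER  BY wait_time_ms DESC;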

    Read the article

  • How to use SMO.Scripter to generate a "full-script" of DB?

    - by ssg
    What I'm trying to do is a very simple task: I'd like to create a script that generates a database along with its tables, SPs and UDFs. This is done with a couple of clicks in the SSMS interface. However, db.Script() only scripts CREATE DATABASE. OK, so I iterate over the objects one by one and script them individually. Now what I have is an arbitrary order of CREATEs, which naturally fails during execution because dependent objects aren't created first. OK, so I set the WithDependencies flag so dependent objects ARE scripted first. However, this produces redundant CREATE scripts for objects that have already been scripted, causing around 20x growth in the SQL file and the generation time - not to mention the errors hit during execution. I don't know if there is a way to mark objects as "already walked in the dependency tree"; it doesn't seem likely. I might be missing a bigger picture somewhere, but MSDN recommends Scripter for generating scripts like the one I want. I had used the Transfer class before to transfer table definitions, but it fails to create a failsafe script, and it doesn't make sense to use a Transfer object to generate a script anyway. I want to do this the way it should be done, without losing my faith in SMO.
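    For what it's worth, a sketch of the Scripter dependency-walk route - discover the dependency tree once, flatten it to a creation-ordered list, then script each object exactly once. Server and database names are placeholders, and I may be glossing over details of the SMO signatures:

        // using Microsoft.SqlServer.Management.Smo;
        // using Microsoft.SqlServer.Management.Sdk.Sfc;  // Urn, UrnCollection
        var server = new Server("localhost");       // placeholder server
        var db = server.Databases["MyDb"];          // placeholder database
        var scripter = new Scripter(server);

        var urns = new UrnCollection();
        foreach (Table t in db.Tables)
            if (!t.IsSystemObject) urns.Add(t.Urn);
        foreach (StoredProcedure sp in db.StoredProcedures)
            if (!sp.IsSystemObject) urns.Add(sp.Urn);
        foreach (UserDefinedFunction f in db.UserDefinedFunctions)
            if (!f.IsSystemObject) urns.Add(f.Urn);

        // One dependency discovery for the whole set, then one pass in
        // dependency order - each URN is scripted exactly once.
        DependencyTree tree = scripter.DiscoverDependencies(urns, true);
        DependencyCollection ordered = scripter.WalkDependencies(tree);
        foreach (DependencyCollectionNode node in ordered)
            foreach (string line in scripter.Script(new Urn[] { node.Urn }))
                Console.WriteLine(line);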

    Read the article

  • When am I ready for contracting work?

    - by Kirk Broadhurst
    What skills are required to work on a temp/contracting basis, and how does a developer know when they are ready to work in these circumstances? I have some colleagues who suggest contracting work is 'the way to go'; the pay is significantly better. It appears that where a permanent position pays (X times $10k) per year, the corresponding contract position pays almost $X per hour - which is close to twice as much. I look forward to doing this type of work as an experienced expert, but worry that by doing so right now I'd be turning away learning and development opportunities. My assumptions about contract work are the following: less or zero money invested in training and development; less concern for job satisfaction and learning ("they won't be here in 12 months"); and possibly less concern about the overall quality of the project ("we won't be here in 12 months"). There are similar questions on Stack Overflow at the moment, but what I'm really looking for is:

    - What does the developer give up by moving to contract work?
    - Is it preferable to have structured learning in a permanent position rather than seat-of-the-pants learning in a contract position?
    - What skill level should the contractor have before making the move?
    - Is there still the same kind of growth in contracting that there is in permanent positions?

    Rightly or wrongly, I see this as a choice between $ (contracting) and learning/development (permanent). Is this fair?

    Read the article

  • ServerIdentity memory leak with IHttpAsyncHandler

    - by Anton
    I have a .NET web application that consists of a single HTTP handler class that implements IHttpAsyncHandler. All requests to this handler are handled asynchronously; some requests are short-lived and some are long-lived (nothing over a few seconds). The problem is that memory consumption grows over time as requests are handled. All profiling results point to an unbounded growth of String objects held by instances of System.Runtime.Remoting.ServerIdentity. Every String value is different, but they all look similar to:

        /dd41c00e_1566_4702_b660_c81cdea18a43/vigefresi5pfv8n0ekddg57z_1154.rem

    Nothing in my application uses ServerIdentity directly, and unless I am mistaken, the ServerIdentity instances are proportional to the number of incoming requests. If this is an internal .NET structure, it looks like the CLR is not cleaning up after itself. What could be causing the leak?

    UPDATE: a little less than half of the String objects are held by System.Runtime.Remoting. The remaining String objects are held by System.Runtime.Serialization and look similar to:

        +1sgess5rjcrgbmp3kqr6bmv_3474.rem

    Also, the problem only seems to occur when lots of simultaneous HTTP web requests arrive.
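    One hedged guess, not a confirmed diagnosis: those *.rem URIs are .NET Remoting object identities, and they accumulate when a MarshalByRefObject is marshaled (even indirectly, e.g. across an AppDomain or async boundary) and never disconnected. If that turns out to be the case here, the cleanup call looks like this:

        // Sketch: explicitly disconnect a per-request MarshalByRefObject so
        // its ServerIdentity (and URI string) does not outlive the request.
        using System;
        using System.Runtime.Remoting;

        static class RemotingCleanup
        {
            public static void Release(MarshalByRefObject perRequestObject)
            {
                // Returns false if the object was never marshaled.
                RemotingServices.Disconnect(perRequestObject);
            }
        }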

    Read the article

  • DDD and Client/Server apps

    - by Christophe Herreman
    I was wondering if any of you have successfully implemented DDD in a client/server app and would like to share some experiences. We are currently working on a smart client in Flex and a backend in Java. On the server we have a service layer exposed to the client that offers CRUD operations among some other service methods. I understand that in DDD these services should be repositories, and services should be used to handle use cases that do not fit inside a repository. Right now we mimic these services on the client behind an interface and inject implementations (web services, RMI, etc.) via an IoC container. So some questions arise:

    - Should the server expose repositories to the client, or do we need some sort of facade (one that can handle security, for instance)?
    - Should the client implement repositories (and DDD in general?), knowing that on the client most of the logic is view-related and the real business logic lives on the server? All communication with the server happens asynchronously, and we have a single-threaded programming model on the client.
    - How about mapping client to server objects and vice versa? We tried DTOs but reverted to exposing the state of our objects and mapping directly to them. (I know this is considered bad practice, but it saves us an incredible amount of time.)

    In general I think a new generation of applications is coming with the growth of Flex, Silverlight and JavaFX, and I'm curious how DDD fits into this.
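    On the facade question, the usual shape on the Java side is a thin application-service interface that fronts the repositories and owns security and use-case logic, so only it is remoted to the Flex client. A sketch - all names here are invented:

        // Sketch; names invented. The Flex client sees only the facade;
        // repositories stay behind it on the server.
        public interface CustomerFacade {
            CustomerDto findCustomer(String id);           // query: flat DTO out
            void changeAddress(String id, String street);  // command: validated
                                                           //   and authorized here
        }

        // Minimal DTO so the sketch stands alone.
        class CustomerDto {
            public String id;
            public String name;
        }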

    Read the article

  • Memory leak of java.lang.ref.WeakReference objects inside JDK classes

    - by mandye
    The following simple code reproduces the growth of java.lang.ref.WeakReference objects in the heap:

        public static void main(String[] args) throws Exception {
            while (true) {
                java.util.logging.Logger.getAnonymousLogger();
                Thread.sleep(1);
            }
        }

    Here is the output of the jmap command within a few seconds' interval (note that jmap forces a full GC):

        user@t1007:~ jmap -d64 -histo:live 29201|grep WeakReference
          8:  22493  1079664  java.lang.ref.WeakReference
         31:      1    32144  [Ljava.lang.ref.WeakReference;
        106:     17      952  com.sun.jmx.mbeanserver.WeakIdentityHashMap$IdentityWeakReference
        user@t1007:~ jmap -d64 -histo:live 29201|grep WeakReference
          8:  23191  1113168  java.lang.ref.WeakReference
         31:      1    32144  [Ljava.lang.ref.WeakReference;
        103:     17      952  com.sun.jmx.mbeanserver.WeakIdentityHashMap$IdentityWeakReference
        user@t1007:~ jmap -d64 -histo:live 29201|grep WeakReference
          8:  23804  1142592  java.lang.ref.WeakReference
         31:      1    32144  [Ljava.lang.ref.WeakReference;
        103:     17      952  com.sun.jmx.mbeanserver.WeakIdentityHashMap$IdentityWeakReference

    JVM settings:

        export JVM_OPT="\
        -d64 \
        -Xms200m -Xmx200m \
        -XX:MaxNewSize=64m \
        -XX:NewSize=64m \
        -XX:+UseParNewGC \
        -XX:+UseConcMarkSweepGC \
        -XX:MaxTenuringThreshold=10 \
        -XX:SurvivorRatio=2 \
        -XX:CMSInitiatingOccupancyFraction=60 \
        -XX:+UseCMSInitiatingOccupancyOnly \
        -XX:+CMSParallelRemarkEnabled \
        -XX:+DisableExplicitGC \
        -XX:+CMSClassUnloadingEnabled \
        -XX:+PrintGCTimeStamps \
        -XX:+PrintGCDetails \
        -XX:+PrintTenuringDistribution \
        -XX:+PrintGCApplicationConcurrentTime \
        -XX:+PrintGCApplicationStoppedTime \
        -XX:+PrintGCApplicationStoppedTime \
        -XX:+PrintClassHistogram \
        -XX:+ParallelRefProcEnabled \
        -XX:SoftRefLRUPolicyMSPerMB=1 \
        -verbose:gc \
        -Xloggc:$GCLOGFILE"

        java version "1.6.0_18"
        Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
        Java HotSpot(TM) Server VM (build 16.0-b13, mixed mode)
        Solaris 10 / Sun Fire(TM) T1000

    Read the article

  • Trouble using LaTeX in matplotlib / SciPy etc.

    - by ajhall
    I'm having some issues with my first attempts at using matplotlib and scipy to make some scatter plots of my data (too many variables; I'm trying to see many things at once). Here's some code of mine that is working fairly well:

        import numpy
        from scipy import *
        import pylab
        from matplotlib import *
        import h5py

        FileID = h5py.File('3DiPVDplot1.mat', 'r')
        # (to view the contents: list(FileID))
        group = FileID['/']
        CurrentsArray = group['Currents'].value
        IvIIIarray = group['IvIII'].value
        PFarray = group['PF'].value
        growthTarray = group['growthT'].value

        fig = pylab.figure()
        ax = fig.add_subplot(111)
        cax = ax.scatter(IvIIIarray, growthTarray, PFarray, CurrentsArray, alpha=0.75)
        cbar = fig.colorbar(cax)
        ax.set_xlabel('Cu / III')
        ax.set_ylabel('Growth T')
        ax.grid(True)
        pylab.show()

    I tried to change the code to use LaTeX fonts and interpreting, but none of it seems to work for me. Here's an example attempt that didn't work:

        import numpy
        from scipy import *
        import pylab
        from matplotlib import *
        import h5py

        rc('text', usetex=True)
        rc('font', family='serif')

        FileID = h5py.File('3DiPVDplot1.mat', 'r')
        group = FileID['/']
        CurrentsArray = group['Currents'].value
        IvIIIarray = group['IvIII'].value
        PFarray = group['PF'].value
        growthTarray = group['growthT'].value

        fig = pylab.figure()
        ax = fig.add_subplot(111)
        cax = ax.scatter(IvIIIarray, growthTarray, PFarray, CurrentsArray, alpha=0.75)
        cbar = fig.colorbar(cax)
        ax.set_xlabel(r'Cu / III')
        ax.set_ylabel(r'Growth T')
        ax.grid(True)
        pylab.show()

    I'm using Fink-installed python26 with the corresponding scipy, matplotlib, etc. packages, and I've been working in IPython rather than from scripts. Since I'm completely new to Python and scipy, I'm sure I'm making some stupid simple mistakes. Please enlighten me - I greatly appreciate the help!
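    A note in case it helps: usetex=True shells out to an external latex/dvipng toolchain, which must be installed and on the PATH - a plausible failure point on a Fink install. matplotlib's built-in mathtext renders TeX-style strings with no external TeX at all; a sketch under that assumption, reusing the arrays loaded above:

        # Sketch: serif fonts plus internal mathtext (no external LaTeX).
        # Assumes the arrays from the h5py code above are already loaded.
        import pylab
        from matplotlib import rc

        rc('font', family='serif')

        fig = pylab.figure()
        ax = fig.add_subplot(111)
        cax = ax.scatter(IvIIIarray, growthTarray, PFarray, CurrentsArray,
                         alpha=0.75)
        fig.colorbar(cax)
        ax.set_xlabel(r'$\mathrm{Cu/III}$')  # mathtext, rendered internally
        ax.set_ylabel(r'Growth $T$')
        ax.grid(True)
        pylab.show()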

    Read the article

  • How to allow users to define financial formulas in a C# app

    - by Peter Morris
    I need to allow my users to define formulas which will calculate values based on data. For example:

        // Example 1
        return GetMonetaryAmountFromDatabase("Amount due") * 1.2;

        // Example 2
        return GetMonetaryAmountFromDatabase("Amount due") * GetFactorFromDatabase("Discount");

    I need to allow the /, *, + and - operations, and also to assign local variables and execute if statements, like so:

        var amountDue = GetMonetaryAmountFromDatabase("Amount due");
        if (amountDue > 100000)
            return amountDue * 0.75;
        if (amountDue > 50000)
            return amountDue * 0.9;
        return amountDue;

    The scenario is complicated because I have the following structure:

    - Customer (a few hundred)
    - Configuration (about 10 per customer)
    - Item (about 10,000 per customer configuration)

    So I will perform a three-level loop. At each Configuration level I will start a DB transaction and compile the formulas; each Item will then use the same transaction and the same compiled formulas (there are about 20 formulas per configuration, and each item uses all of them). This further complicates things, because I can't just use the compiler services - that would result in continued memory usage growth. And I can't use a new AppDomain per Configuration loop level, because some of the references I need to pass cannot be marshalled. Any suggestions?
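    Not the asker's code, but one route around the compiler-services leak: delegates produced by System.Linq.Expressions compile onto collectible dynamic methods, so compiling once per Configuration and reusing across Items should stay flat on memory. A toy sketch with a hard-coded tree standing in for the parsed user formula (a real solution still needs a parser that builds the tree from user text):

        // Sketch: compile a formula to a reusable, GC-collectible delegate.
        using System;
        using System.Linq.Expressions;

        class FormulaDemo
        {
            static void Main()
            {
                // amountDue => amountDue > 100000 ? amountDue * 0.75 : amountDue
                var amountDue = Expression.Parameter(typeof(decimal), "amountDue");
                var body = Expression.Condition(
                    Expression.GreaterThan(amountDue, Expression.Constant(100000m)),
                    Expression.Multiply(amountDue, Expression.Constant(0.75m)),
                    amountDue);
                Func<decimal, decimal> formula =
                    Expression.Lambda<Func<decimal, decimal>>(body, amountDue)
                              .Compile();

                // Compile once per Configuration, reuse for every Item:
                Console.WriteLine(formula(200000m));  // 150000.00
                Console.WriteLine(formula(40000m));   // 40000
            }
        }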

    Read the article

  • Does a CS PhD Help a Software Engineering Career?

    - by SiLent SoNG
    I would like to seek advice on whether or not to take a PhD offer from a good university. My only concern is that the PhD will take at least four years' commitment, during which I won't have a good monetary income. I am also concerned whether the PhD will help my future career development; my career goal is software engineering only.

    Some of the PhD info: the PhD is CS-related. The research area is Information Retrieval, Machine Learning and Natural Language Processing; more specifically, the research topic is Deep Web search.

    Some background: I worked at Oracle for 3 years in database development after obtaining a CS degree from a good university. Last year I received an email about an interesting project from a professor, and thereafter I was lured into his research team. The team consists of 4 PhD students; those students have little or no industry experience, and their coding skills are really, really bad. By bad I mean they do not know common patterns, the pitfalls of their programming languages, or the idioms for doing things right. I guess an at-least-four-year commitment is worth serious consideration. I am 27 at this moment; if I take the offer, I will be 31+ upon graduation - no longer young, so to speak. Hence I am seeking advice on whether to take the PhD offer, and whether a CS PhD is good for my future career growth as a software engineer. I do not intend to go into academia.

    Read the article

  • What should a self-taught programmer with no degree learn/read?

    - by sjbotha
    I am a self-taught programmer and I do not have any degrees. I started pretty young and I've got about 7 years of actual programming work experience. I believe I'm a pretty good programmer, but I admit that I have not played much with algorithms or delved into the really low-level aspects of programming, such as how compilers work. I have worked with other programmers with and without degrees. Some were good and some were not; having a degree didn't seem to make any difference as to which pot they fell into. Since then I've come to realize that it does depend on the school where the degree is obtained. Some people suggest that you really should get a degree - that there are things you'll learn in the process that you won't learn in the real world. Of course there is personal growth and discipline learned from completing a task of that magnitude, but let's just concentrate on the technical knowledge. What would I have been taught in a GOOD CS course that would aid me today, and what can I read to fill the gap? I've heard the book "Algorithms" mentioned and I plan on reading that. What other books would you recommend?

    Edit - clarification on 'actual work experience':

    - Have worked for 2 small companies on teams with fewer than 5 people.
    - About 2 years' experience with Perl, Python, PHP, C, C++.
    - About 5 years' experience in Java, applets, RMI, T-SQL, PL/SQL, VB6.
    - 7 years' experience in HTML, JavaScript, bash, SQL.
    - Most recently, designed and helped build an N-tier Java app with a web frontend and RMI.

    Read the article

  • Database structure for ecommerce site

    - by imanc
    Hey guys, I have been tasked with designing an ecommerce solution. The aspect causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with its own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases were merged into one, so that all tables (products, orders, customers, etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year, and a database containing 5 years of order history (archiving may be a solution here). The question is: do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure, because they believe it will allow the sites to scale, but my argument is that each shop's database probably won't get so busy over the next few years that it exceeds the capacity of a MySQL database on a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way? Does anyone have experience with websites/ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records, or too-large table files, to have a fast-loading site? Also, if anyone has any advice on sources of information - books, websites, etc. - where I can do further research, it would be highly appreciated! Cheers, imanc
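    For concreteness, the merged single-database layout being argued for is just a shop_id discriminator on every shop-scoped table, indexed so per-shop queries stay cheap. A sketch (MySQL; table and column names are illustrative):

        -- Sketch of the merged layout; names are illustrative.
        CREATE TABLE orders (
            id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            shop_id     INT UNSIGNED    NOT NULL,
            customer_id BIGINT UNSIGNED NOT NULL,
            placed_at   DATETIME        NOT NULL,
            total       DECIMAL(10,2)   NOT NULL,
            KEY idx_shop_placed (shop_id, placed_at)  -- keeps per-shop queries indexed
        ) ENGINE=InnoDB;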

    Read the article

  • SQL n:m Inheritance join

    - by Nightmares
    I want to join a table which contains n:m relationships between groups (the groups themselves are defined in a separate table). This table only has entries listing a member_group_id and a parent_group_id, giving this structure:

        id (int) | member_group_id (int) | parent_group_id (int)

    The "base" query looks like this:

        select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id
        from group_member_group as p1
        join group_member_group as p2
          on p2.member_group_id = p1.member_group_id

    The base query correctly shows all relationships (I checked by doing it manually). The problem is that when I try to apply a WHERE clause to this query, to filter for a specific group as the "point of origin" (the first group for which I want all parent groups), it returns only the closest parents:

        select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id
        from group_member_group as p1
        join group_member_group as p2
          on p2.member_group_id = p1.member_group_id
        where p1.group_id = 1

    Can anyone give me a clue how to fix this, or suggest a different approach? (I suppose I could always do this in my C++ source code on the server side, but I would have to transfer an entire table - one with high growth potential - to the application server.)

    UPDATE:

        select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id
        from group_member_group as p1
        join group_member_group as p2
          on p2.group_id = p1.member_group_id

    Typing mistake confirmed. Now I don't get past the first level of inheritance, period. Thanks to denied for pointing that out.
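    For what it's worth, a single self-join can only ever cover one generation, which is why the filtered query stops at the closest parents. Arbitrary depth needs either iteration in application code or a recursive query. If the database supports recursive CTEs (MySQL 8+, PostgreSQL, SQL Server), a sketch using the member/parent columns described above (assuming the hierarchy has no cycles):

        -- Sketch: all ancestors of group 1, however deep.
        WITH RECURSIVE ancestors AS (
            SELECT member_group_id, parent_group_id
            FROM   group_member_group
            WHERE  member_group_id = 1
            UNION ALL
            SELECT g.member_group_id, g.parent_group_id
            FROM   group_member_group g
            JOIN   ancestors a ON g.member_group_id = a.parent_group_id
        )
        SELECT DISTINCT parent_group_id FROM ancestors;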

    Read the article

  • Have you dealt with space hardening?

    - by Tim Post
    I am very eager to study best practices when it comes to space hardening. For instance, I've read (though I can't find the article any longer) that some core parts of the Mars rovers did not use dynamic memory allocation - in fact, it was forbidden. I've also read that old-fashioned core memory may be preferable in space. I was looking at some of the projects associated with the Google Lunar Challenge and wondering what it would feel like to get code on the moon, or even just into space. I know that space-hardened boards offer some sanity in such a harsh environment, but I'm wondering (as a C programmer) how I would need to adjust my thinking and my code if I were writing something that would run in space. I think the next few years may show more growth in private space companies, and I'd really like to be at least somewhat knowledgeable regarding best practices. Can anyone recommend some books, offer links to papers on the topic, or (gasp) even a simulator that shows you what happens to a program if radiation, cold or heat bombards a board that has sustained damage to its insulation? I think the goal is keeping humans inside the spacecraft (as far as fixing or swapping stuff goes) and avoiding missions to fix things. Furthermore, if the board maintains some critical system, early warnings seem paramount.
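    On the no-dynamic-allocation point: coding standards in this domain (MISRA C, the JPL coding rules) push fixed, statically sized pools in place of malloc(), so allocation failure becomes a design-time question rather than a runtime surprise. A toy sketch of the pattern:

        /* Toy sketch: a fixed message pool instead of malloc(); capacity is
           decided at compile time and exhaustion is handled deterministically. */
        #include <stddef.h>

        #define MSG_POOL_SIZE 32

        struct msg { int id; char payload[64]; int in_use; };
        static struct msg pool[MSG_POOL_SIZE];  /* static storage, never freed */

        struct msg *msg_alloc(void) {
            for (size_t i = 0; i < MSG_POOL_SIZE; i++)
                if (!pool[i].in_use) { pool[i].in_use = 1; return &pool[i]; }
            return NULL;  /* pool exhausted: a condition designed for up front */
        }

        void msg_free(struct msg *m) { m->in_use = 0; }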

    Read the article

  • How can I assign a name to a task in TPL

    - by mehrandvd
    I'm going to have lots of tasks running in my application, with each bunch of tasks running for some particular reason. I would like to name these tasks, so that when I watch the Parallel Tasks window I can recognize them easily. Looked at another way: suppose I'm using tasks at the framework level to populate a list, and a developer using my framework is also using tasks for her own work. If she looks at the Parallel Tasks window, she will find some tasks she has no idea about. I want to name tasks so she can distinguish the framework tasks from her tasks. It would be very convenient if there were an API such as:

        var task = new Task(action, "Growth calculation task");

    or maybe:

        var task = Task.Factory.StartNew(action, "Populating the datagrid");

    or even, while working with Parallel.ForEach:

        Parallel.ForEach(list, action, "Salary Calculation Task");

    Is it possible to name a task? Is it possible to give Parallel.ForEach a naming structure (maybe using a lambda) so that it creates tasks with that naming? Is there such an API somewhere that I'm missing? I've also tried using an inherited task and overriding its ToString(), but unfortunately the Parallel Tasks window doesn't use ToString():

        class NamedTask : Task
        {
            private string TaskName { get; set; }

            public NamedTask(Action action, string taskName) : base(action)
            {
                TaskName = taskName;
            }

            public override string ToString()
            {
                return TaskName;
            }
        }
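    Not a real name property, but one workaround worth noting: the state-object overloads that the TPL does have can smuggle a label in as the task's AsyncState, which travels with the task and can be inspected in the debugger (a sketch; whether your tooling surfaces it in the Parallel Tasks window is another matter):

        // Sketch: the label rides along as task.AsyncState.
        using System;
        using System.Threading.Tasks;

        class NamedTasks
        {
            static void Main()
            {
                var task = Task.Factory.StartNew(
                    state => Console.WriteLine("working..."),
                    "Growth calculation task");      // becomes task.AsyncState
                task.Wait();
                Console.WriteLine(task.AsyncState);  // "Growth calculation task"
            }
        }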

    Read the article

  • Freelance web hosting - what are good LAMP choices?

    - by tkotitan
    I think it's best if I ask this question with an example scenario. Let's say your mom-and-pop local hardware store has never had a website, and they want you, the freelance developer, to build them one. You have all the skills to run a LAMP setup and admin a system, so the difficult question you ask yourself is: where will I host it? (You aren't going to host it from the machine in your apartment.) Let's say you want to be able to customize your own system, install the version of PHP you want, and manage your own database. Perhaps the best kind of hosting for this is a virtual machine, so you can customize the system as you see fit. But this is essentially a "set it and forget it" site that you build, bill by the hour for, and then are done with - in other words, the hosting should not be an issue. Given these requirements for hosting a website:

    - unlimited growth potential, with good amounts of bandwidth to handle visitors
    - a wide range of system and programming options, keeping it portable
    - relatively cheap (not necessarily the cheapest), or reasonable scaling costs
    - reliable hosting with good support
    - hosted entirely on the host company's hardware

    who would you pick to host this website? Yes, I am asking for a business/company recommendation. Is there a clear answer for this scenario, or a good source that can reliably give the current answer? I know there are all kinds of schemes out there; I'm just wondering if any one company fills the bill for freelancers and stands out in such a crowded market.

    Read the article

  • World Economic Crisis. IT prospects

    - by Andrew Florko
    There was a similar question in 2008; two years have passed. Please share your expectations about the IT market and employment in the next year or two (or as far as you can predict).

    IMHO, Russia (my native country) fully met the crisis in spring 2008: stock markets shrank threefold(!) within half a year. Many developers were fired in those days, but I suppose that was just because business was shocked and froze some projects. Developers expected +20% salary growth per year in 2004-2007 (a developer's salary in Moscow was about $2-3K in early 2008). Then there was a 30% (very subjective) salary cut in 2008, and salaries were frozen until 2009. Now things are slowly coming back to 2008 levels. Looking to the future, I expect a pessimistic scenario and another crash. Our economy depends more and more on oil and gas every year; IT that serves industry will shrink, because we can't compete with China in real production. Due to the strong currency (the ruble is strong compared to the dollar), we can't rely on offshore programming either. Our officials talk of an innovative economic breakthrough, but in practice it is just the usual assignment of budget money. I don't believe in innovation either, because who needs innovation when you have debts and tomorrow is vapor?

    Read the article
