Search Results

Search found 416 results on 17 pages for 'redundancy'.

Page 10/17 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • Announcement: Oracle Database Appliance 2.4 patch update now available

    - by uwes
    The Oracle Database Appliance 2.4 patch is now available from My Oracle Support (MOS). If you search for the Oracle Database Appliance 2.4.0.0.0 Kit under Patches, it will display the newly uploaded bundles. The patch highlights include:
    - Normal redundancy (double-mirroring) option providing 6 TB of usable storage
    - Enhanced diagnostics - Trace File Analyzer and ODACHK
    Also, if you review the README, you may see content that says: "The grid infrastructure and database patching, both are rolling upgradable. During our patching, we patch the node 1 first and when completed, we patch the node 2." I would like to clarify that the 'infrastructure' updates (OS, firmware, ILOM, etc.) will require a short downtime of the ODA while they are applied. When you update the grid infrastructure (--gi), the appliance manager verifies that the infrastructure was updated, so you cannot patch the GI without first updating the infrastructure. The high-level update patch steps include (but are not limited to):
    - Download the patch update to your ODA
    - Update the --infra (infrastructure): the ODA databases are down and the ODA is/may be rebooted
    - The ODA and GI/databases are restarted
    - Issue the command to update the Grid Infrastructure/databases (the order of these steps is handled automatically and you cannot control when the nodes are brought up and down during the patching):
      Node 1 -- shut down databases and GI
      Node 1 -- patch GI/database
      Node 1 -- bring up databases and GI
      Node 2 -- shut down databases and GI
      Node 2 -- patch GI/database
      Node 2 -- bring up databases and GI
    A replay of Friday's session with Sohan on the 2.4 release can be found here. The PDF of the presentation is here. The Data Sheet, White Paper, and 2.4 Configurator are available on the ODA OTN site.

    Read the article

  • Unit testing statically typed functional code

    - by back2dos
    I wanted to ask you people in which cases it makes sense to unit test statically typed functional code, as written in Haskell, Scala, OCaml, Nemerle, F# or haXe (the last is what I am really interested in, but I wanted to tap into the knowledge of the bigger communities). I ask this because, from my understanding: One aspect of unit tests is to have the specs in runnable form. However, when employing a declarative style that directly maps the formalized specs to language semantics, is it even possible to express the specs in runnable form in a separate way that adds value? The more obvious aspect of unit tests is to track down errors that cannot be revealed through static analysis, and type-safe functional code is a good tool for writing code extremely close to what your static analyzer understands. However, a simple mistake like using x instead of y (both being coordinates) in your code cannot be caught that way. Then again, such a mistake could also arise while writing the test code, so I am not sure whether it's worth the effort. Unit tests do introduce redundancy, which means that when requirements change, the code implementing them and the tests covering that code must both be changed. This overhead is of course roughly constant, so one could argue that it doesn't really matter. In fact, in languages like Ruby it really doesn't compared to the benefits, but given how much of the ground unit tests are intended to cover is already covered by statically typed functional programming, it feels like a constant overhead one could simply remove without penalty. From this I'd deduce that unit tests are somewhat obsolete in this programming style. Of course such a claim can only lead to religious wars, so let me boil this down to a simple question: When you use such a programming style, to what extent do you use unit tests and why (what quality do you hope to gain for your code)? Or the other way round: do you have criteria by which you can qualify a unit of statically typed functional code as covered by the static analyzer and hence needing no unit test coverage?

    Read the article

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used not just to version the files, but also to replicate them to other machines for redundancy and transfer of IP. Let's suppose, for the sake of example, that a goal is to provide good performance if it was handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance. We're also planning to use solid-state drives exclusively. There is a reason why we don't want to use an RDBMS for primary storage, but it's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open-source preferred, since the final product is to be FOSS, too.

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. I (and my team of 3) have built a hosted web app that queues and routes customer chat requests to available customer service agents (It does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is: a single page ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail) a backend Windows service to queue and route the chat requests this service also logs the chats, calculates service levels, etc a Comet server product that routes data between the web frontend and the backend Windows service this also helps us detect which Agents are still connected (online) And our hardware architecture today is: 2 servers to host the web UI portion of the application a load balancer to route requests to the 2 different web app servers a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats So as it stands today, one of the web app servers could go down and we would be ok. However, if something would happen to the SQL Server / Windows Service server we would be boned. My question - how can I make this backend Windows service logic be able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available Agents, and route the chat to those agents. How can I make this more distributed? How can I make it so that I can distribute the work of the backend Windows service can be spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!

    Read the article

  • Review: Data Modeling 101

    I just recently read “Data Modeling 101” by Scott W. Ambler, where he gave an overview of fundamental data modeling skills. I think this article is excellent for anyone who is just starting to learn, or wants to refresh, their skills in the modeling of data. Scott defines data modeling as the act of exploring data-oriented structures. He goes on to explain how data models are actually used by defining three different types of models:
    - Conceptual Data Models
    - Logical Data Models (LDMs)
    - Physical Data Models (PDMs)
    He further expands on modeling by exploring common data modeling notations, because there are no industry standards for the practice of data modeling. Scott then describes how to actually model data by expanding on entities, attributes, identities, and relationships, which are the basic building blocks of data models. In addition he discusses the value of normalization for reducing redundancy and of denormalization for performance. Finally, he discusses ways in which developers and DBAs can become better data modelers through practice and by seeking guidance from more experienced data modelers.
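    To make the normalization-versus-denormalization trade-off Scott describes concrete, here is a minimal SQL sketch (the tables and columns are made up purely for illustration):
        -- Normalized: customer details stored once and referenced, which removes redundancy
        CREATE TABLE customers (
            customer_id   INTEGER PRIMARY KEY,
            customer_name VARCHAR(100),
            customer_city VARCHAR(100)
        );
        CREATE TABLE orders (
            order_id    INTEGER PRIMARY KEY,
            customer_id INTEGER REFERENCES customers(customer_id),
            order_total DECIMAL(10,2)
        );

        -- Denormalized: customer columns repeated on every order row,
        -- which can speed up reads but duplicates data and complicates updates
        CREATE TABLE orders_denormalized (
            order_id      INTEGER PRIMARY KEY,
            customer_id   INTEGER,
            customer_name VARCHAR(100),
            customer_city VARCHAR(100),
            order_total   DECIMAL(10,2)
        );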

    Read the article

  • What's the proper way to merge two projects in source control software

    - by Mallow
    I'm using Fossil-SCM to maintain my projects. Since I don't work in a team I usually have just a very linear branch of development: 1.0 - 1.1 - 1.2. I'm wondering what the procedure is when one project's task is about to be given to a related project, thereby rendering the first project obsolete. Although I tend to rewrite most of my code if I don't remember having already written it, I still would like to keep the code archived, and I'd rather not have a Fossil repo that is just dead. Can I merge it? Is that the proper way of handling this? For example, the code was extracting data from an Excel file in order to format an HTML page. Now, I've convinced my employer to move their Excel spreadsheet into a database to decrease redundancy, increase efficiency and yadda yadda. Since I can now make logical queries against the database that don't have to jump through hoops to perform, I won't need the extra vbs files that originally manipulated the Excel file. Technically I would be porting part of the existing code into the new project. Since it already has its own trunk, would it be advisable to combine the trunk of a different project with this one, and how would I do that exactly? So I guess my tree would look like this, and I haven't seen examples of software branching that resemble this inverted tree before, so I'm wondering what the norm is for a situation like this?

    Read the article

  • Setting up Cluster Configuration using an existing web server as a Primary Node?

    - by RapidWebs
    Thanks in advance for any help! I am having a slight issue, and need help with the decision-making process when it comes to setting up my cluster configuration, consisting of a line of Ubuntu servers (12.04). We currently have a Primary node, which resides in the US within a datacenter, but we are going to be using this for all serious bandwidth- and resource-intensive websites, and through a configuration of Virtualmin + Webmin it will be set up as a sort of pseudo-cluster, using Virtualmin's Cluster Modules. Anyways, on to the issue: We also have a business line set up locally, with three servers. Here are their specs:
    - Intel P4 2.4 GHz, 1 GB RAM, 110 GB SATA, Ubuntu 12.04
    - AMD 1.3 GHz, 512 MB RAM, 20 GB IDE
    - P3 Xeon 800 MHz (dual physical processors), 1 GB RAM, 3 * 25 GB RAID configuration (one disk in use for the host operating system)
    The first machine is currently IN USE and is serving virtual hosts off a sub-domain. My question is this: How can I integrate the secondary node (which will be the primary node, per se, in this smaller configuration), which is currently in use, into the cluster configuration with the other two servers for sharing resources, redundancy (HA?), and NFS with the two RAID disks, without having to FORMAT the secondary node, start fresh moving all my services onto a DRBD network drive or something similar, and then restore all active Virtualmin virtual hosts? The idea is that I want minimal downtime for people currently being served from server2.mywebsite.com, and from what I understand, all services need to be on an NFS share so that they can be mounted on demand and accessed from the other machine taking over (i.e. a Heartbeat + DRBD config). But my issue is that I already have all these services installed in their default directory structure: how can I most easily set up this NFS and HA system, move all my desired services to this new drive, and do it with minimal downtime, without breaking Virtualmin and everything else on my server? Even just some pointers, a thread I could read, or a step-by-step checklist or rundown of commands I could issue to get started would be great! Thanks!

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should be able to perform on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting, I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating size using the formula gl_PointSize = in_point_size * some_factor / distance_to_camera to make particle sizes proportional to the distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate. I would put all that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for all particles. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better approach to organising per-particle data? Should I use buffers or vertex arrays, or is there no difference because each time I have to update all particles' data? About the particle simulation... where should I do it? On the CPU or on the vertex processor? Something tells me that a mobile CPU would do it faster than its vertex unit (at least today in 2012 :). So, any advice on how to make a simple and efficient particle system without a particle size limitation, for mobile devices, would be appreciated. (Animation of the camera passing through the particles should look realistic.)

    Read the article

  • Recommended storage scheme for home server? (LVM/JBOD/RAID 5...)

    - by j-g-faustus
    Are there any guidelines for which storage scheme(s) makes most sense for a multiple-disk home server? I am assuming a separate boot/OS disk (so bootability is not a concern; this is for data storage only) and 4-6 storage disks of 1-2 TB each, for a total storage capacity in the range of 4-12 TB. The file system is ext4, and I expect there will be only one big partition spanning all disks. As far as I can tell, the alternatives are:
    - Individual disks. Pros: works with any combination of disk sizes; losing a disk loses only the data on that disk; no need for volume management. Cons: data management is clumsy when logical units (like a "movies" folder) are larger than the capacity of any single drive.
    - JBOD span. Pros: can merge disks of any size. Cons: losing a disk loses all data on all disks.
    - LVM. Pros: can merge disks of any size; relatively simple to add and remove disks. Cons: losing a disk loses all data on all disks.
    - RAID 0. Pros: speed. Cons: losing one drive loses all data; disks must be the same size.
    - RAID 5. Pros: data survives losing one disk. Cons: gives up one disk's worth of capacity; disks must be the same size.
    - RAID 6. Pros: data survives losing two disks. Cons: gives up two disks' worth of capacity; disks must be the same size.
    I'm primarily considering either LVM or JBOD span simply because it will let me reuse older, smaller-capacity disks when I upgrade the system. The runner-up is RAID 0 for speed. I'm planning on having full backups to a separate system, so I expect the extra redundancy from RAID levels 5 or 6 won't be important. Is this a fair representation of the alternatives? Are there other considerations or alternatives I have missed? And what would you recommend?

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. My team of three and I have built a hosted web app that queues and routes customer chat requests to available customer service agents (It does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is: a single page ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail) a backend Windows service to queue and route the chat requests this service also logs the chats, calculates service levels, etc a Comet server product that routes data between the web frontend and the backend Windows service this also helps us detect which Agents are still connected (online) And our hardware architecture today is: 2 servers to host the web UI portion of the application a load balancer to route requests to the 2 different web app servers a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats So as it stands today, one of the web app servers could go down and we would be ok. However, if something would happen to the SQL Server / Windows Service server we would be boned. My question - how can I make this backend Windows service logic be able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available Agents, and route the chat to those agents. How can I make this more distributed? How can I make it so that I can distribute the work of the backend Windows service can be spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!

    Read the article

  • Pair Programming, for or against? [on hold]

    - by user1037729
    I believe it has many advantages over individual programming:
    Pros:
    - By pairing senior with relatively junior staff, the more junior can get up to speed with both project and computing experience, and the senior will re-think the problem in order to communicate with the junior, thus re-checking his own thinking (rubber duck principle!).
    - At least 2 people will know about any single piece of work; if one person is away the other can cover, and if someone leaves the project, knowledge transfer is easier.
    - Two brains on a complex task are more effective; communication keeps the work flowing freely and provides redundancy in decision making.
    - Code is effectively reviewed as it's being written, so there is no need for a separate reviewing phase, which requires a context switch as someone who has not been working on the piece in question would be required to understand and review the related code. Reviewing code on your own which you haven't written or architected is not fun, hence counter-productive.
    Cons:
    - Less bandwidth for performing tasks: let's say we have 4 devs; pair programming requires 2 devs per task, so we would be doing 2 tasks concurrently as opposed to 4. I believe this "con" does not stand up, as the pair-programmed task would complete sooner and comes with a review built in for free! I.e. the pair-programmed task would be more efficient and thus free up resources earlier.
    - Less flexibility to chop and change tasks, as two developers are tied into a task; when flexibility is required this could be a problem.

    Read the article

  • Exadata???DiskGroup

    - by Liu Maclean(???)
    Exadata???Asm Diskgroup ???????: 1.??dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk ????active?griddisk [root@dm01db01 ~]# dcli -g /home/oracle/cell_group -l root cellcli -e list griddisk dm01cel01: DATA_DM01_CD_00_dm01cel01 active dm01cel01: DATA_DM01_CD_01_dm01cel01 active dm01cel01: DATA_DM01_CD_02_dm01cel01 active dm01cel01: DATA_DM01_CD_03_dm01cel01 active dm01cel01: DATA_DM01_CD_04_dm01cel01 active dm01cel01: DATA_DM01_CD_05_dm01cel01 active dm01cel01: DATA_DM01_CD_06_dm01cel01 active dm01cel01: DATA_DM01_CD_07_dm01cel01 active dm01cel01: DATA_DM01_CD_08_dm01cel01 active dm01cel01: DATA_DM01_CD_09_dm01cel01 active dm01cel01: DATA_DM01_CD_10_dm01cel01 active dm01cel01: DATA_DM01_CD_11_dm01cel01 active dm01cel01: DBFS_DG_CD_02_dm01cel01 active dm01cel01: DBFS_DG_CD_03_dm01cel01 active dm01cel01: DBFS_DG_CD_04_dm01cel01 active dm01cel01: DBFS_DG_CD_05_dm01cel01 active dm01cel01: DBFS_DG_CD_06_dm01cel01 active dm01cel01: DBFS_DG_CD_07_dm01cel01 active dm01cel01: DBFS_DG_CD_08_dm01cel01 active dm01cel01: DBFS_DG_CD_09_dm01cel01 active dm01cel01: DBFS_DG_CD_10_dm01cel01 active dm01cel01: DBFS_DG_CD_11_dm01cel01 active dm01cel01: RECO_DM01_CD_00_dm01cel01 active dm01cel01: RECO_DM01_CD_01_dm01cel01 active dm01cel01: RECO_DM01_CD_02_dm01cel01 active dm01cel01: RECO_DM01_CD_03_dm01cel01 active dm01cel01: RECO_DM01_CD_04_dm01cel01 active dm01cel01: RECO_DM01_CD_05_dm01cel01 active dm01cel01: RECO_DM01_CD_06_dm01cel01 active dm01cel01: RECO_DM01_CD_07_dm01cel01 active dm01cel01: RECO_DM01_CD_08_dm01cel01 active dm01cel01: RECO_DM01_CD_09_dm01cel01 active dm01cel01: RECO_DM01_CD_10_dm01cel01 active dm01cel01: RECO_DM01_CD_11_dm01cel01 active dm01cel02: DATA_DM01_CD_00_dm01cel02 active dm01cel02: DATA_DM01_CD_01_dm01cel02 active dm01cel02: DATA_DM01_CD_02_dm01cel02 active dm01cel02: DATA_DM01_CD_03_dm01cel02 active dm01cel02: DATA_DM01_CD_04_dm01cel02 active dm01cel02: DATA_DM01_CD_05_dm01cel02 active dm01cel02: DATA_DM01_CD_06_dm01cel02 active dm01cel02: DATA_DM01_CD_07_dm01cel02 active dm01cel02: DATA_DM01_CD_08_dm01cel02 active dm01cel02: DATA_DM01_CD_09_dm01cel02 active dm01cel02: DATA_DM01_CD_10_dm01cel02 active dm01cel02: DATA_DM01_CD_11_dm01cel02 active dm01cel02: DBFS_DG_CD_02_dm01cel02 active dm01cel02: DBFS_DG_CD_03_dm01cel02 active dm01cel02: DBFS_DG_CD_04_dm01cel02 active dm01cel02: DBFS_DG_CD_05_dm01cel02 active dm01cel02: DBFS_DG_CD_06_dm01cel02 active dm01cel02: DBFS_DG_CD_07_dm01cel02 active dm01cel02: DBFS_DG_CD_08_dm01cel02 active dm01cel02: DBFS_DG_CD_09_dm01cel02 active dm01cel02: DBFS_DG_CD_10_dm01cel02 active dm01cel02: DBFS_DG_CD_11_dm01cel02 active dm01cel02: RECO_DM01_CD_00_dm01cel02 active dm01cel02: RECO_DM01_CD_01_dm01cel02 active dm01cel02: RECO_DM01_CD_02_dm01cel02 active dm01cel02: RECO_DM01_CD_03_dm01cel02 active dm01cel02: RECO_DM01_CD_04_dm01cel02 active dm01cel02: RECO_DM01_CD_05_dm01cel02 active dm01cel02: RECO_DM01_CD_06_dm01cel02 active dm01cel02: RECO_DM01_CD_07_dm01cel02 active dm01cel02: RECO_DM01_CD_08_dm01cel02 active dm01cel02: RECO_DM01_CD_09_dm01cel02 active dm01cel02: RECO_DM01_CD_10_dm01cel02 active dm01cel02: RECO_DM01_CD_11_dm01cel02 active dm01cel03: DATA_DM01_CD_00_dm01cel03 active dm01cel03: DATA_DM01_CD_01_dm01cel03 active dm01cel03: DATA_DM01_CD_02_dm01cel03 active dm01cel03: DATA_DM01_CD_03_dm01cel03 active dm01cel03: DATA_DM01_CD_04_dm01cel03 active dm01cel03: DATA_DM01_CD_05_dm01cel03 active dm01cel03: DATA_DM01_CD_06_dm01cel03 active dm01cel03: DATA_DM01_CD_07_dm01cel03 active dm01cel03: DATA_DM01_CD_08_dm01cel03 
active dm01cel03: DATA_DM01_CD_09_dm01cel03 active dm01cel03: DATA_DM01_CD_10_dm01cel03 active dm01cel03: DATA_DM01_CD_11_dm01cel03 active dm01cel03: DBFS_DG_CD_02_dm01cel03 active dm01cel03: DBFS_DG_CD_03_dm01cel03 active dm01cel03: DBFS_DG_CD_04_dm01cel03 active dm01cel03: DBFS_DG_CD_05_dm01cel03 active dm01cel03: DBFS_DG_CD_06_dm01cel03 active dm01cel03: DBFS_DG_CD_07_dm01cel03 active dm01cel03: DBFS_DG_CD_08_dm01cel03 active dm01cel03: DBFS_DG_CD_09_dm01cel03 active dm01cel03: DBFS_DG_CD_10_dm01cel03 active dm01cel03: DBFS_DG_CD_11_dm01cel03 active dm01cel03: RECO_DM01_CD_00_dm01cel03 active dm01cel03: RECO_DM01_CD_01_dm01cel03 active dm01cel03: RECO_DM01_CD_02_dm01cel03 active dm01cel03: RECO_DM01_CD_03_dm01cel03 active dm01cel03: RECO_DM01_CD_04_dm01cel03 active dm01cel03: RECO_DM01_CD_05_dm01cel03 active dm01cel03: RECO_DM01_CD_06_dm01cel03 active dm01cel03: RECO_DM01_CD_07_dm01cel03 active dm01cel03: RECO_DM01_CD_08_dm01cel03 active dm01cel03: RECO_DM01_CD_09_dm01cel03 active dm01cel03: RECO_DM01_CD_10_dm01cel03 active dm01cel03: RECO_DM01_CD_11_dm01cel03 active ??????????griddisk, ?????’cellcli -e drop griddisk’ ?’cellcli -e create griddisk’????griddisk ,??????drop DBFS_DG???griddisk 2.??ASM???create disk group ?????CELL?IP,????????????? [root@dm01db02 ~]# cat /etc/oracle/cell/network-config/cellip.ora cell="192.168.64.131" cell="192.168.64.132" cell="192.168.64.133" SQL> create diskgroup DATA_MAC normal redundancy 2 DISK 3 'o/192.168.64.131/RECO_DM01_CD_*_dm01cel01' 4 ,'o/192.168.64.132/RECO_DM01_CD_*_dm01cel02' 5 ,'o/192.168.64.133/RECO_DM01_CD_*_dm01cel03' 6 attribute 7 'AU_SIZE'='4M', 8 'CELL.SMART_SCAN_CAPABLE'='TRUE', 9 'compatible.rdbms'='11.2.0.2', 10 'compatible.asm'='11.2.0.2' 11 / 3. MOUNT ???DISKGROUP ALTER DISKGROUP DATA_MAC mount ; 4.???crsctl start/stop resource ora.DATA_MAC.dg ?????

    Read the article

  • Using Multiple TinyMCE Packages in a web site asp.net

    - by ProgNet
    Hi all, I got a detailed answer to my previous question about the TinyMCE editor (link text). The answer raised a new question. In case I want to use more than one TinyMCE package in my ASP.NET project - for example, the Development Package and the .NET Package (or the Development Package, the .NET Package and the jQuery Package) - what steps do I need to take to include them in my solution? On the TinyMCE web site's Installation page ( wiki.moxiecode.com/index.php/TinyMCE:Installation ) they wrote: "You should extract TinyMCE in your wwwroot or site domain root folder." But the problem is that all the different packages unzip to the same folder name, that is, tinymce. There can't be more than one folder with the tinymce name in the site domain root folder (files will be overwritten). Of course I can rename them, for example to TinyMCEDev and TinyMceDotNet, but what about code redundancy issues etc.? Thanks

    Read the article

  • Trying to find a good strategy using Git for personal development on local/personal machine

    - by AJ
    A noob here. I have a personal MacBook and I want to use Git to track changes etc. I want to just init a repo on my MacBook and work there. Is this a good idea? What if I have a main repo somewhere on my MacBook's HD, like /Users/user/projects/project1, and clone it to another area on my MacBook where I actually perform development? But there is a lot of redundancy in this. I am a little confused and want to know what steps folks usually take in a similar personal development environment. Thanks a lot.

    Read the article

  • Normalize or Denormalize in high traffic websites

    - by Inam Jameel
    What is the best practice for database design for high-traffic websites like this one, Stack Overflow? Should one use a normalized database for record keeping, a denormalized one, or a combination of both? Is it sensible to design a normalized database as the main database for record keeping, to reduce redundancy, and at the same time maintain another, denormalized copy of the database for fast searching? Or should the main database be denormalized, with normalized views built at the application level for fast database operations? Or is there some approach besides the ones mentioned above? What is the best practice for designing high-traffic websites?
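    For what it's worth, one concrete shape of the "normalized master plus denormalized copy for searching" idea mentioned above is to keep normalized tables as the system of record and rebuild (or incrementally refresh) a flattened search table from them. A rough sketch with made-up table and column names:
        -- Normalized system of record
        CREATE TABLE users (
            user_id   INTEGER PRIMARY KEY,
            user_name VARCHAR(100)
        );
        CREATE TABLE posts (
            post_id INTEGER PRIMARY KEY,
            user_id INTEGER REFERENCES users(user_id),
            title   VARCHAR(200)
        );

        -- Denormalized read table: one wide row per post, refreshed from the
        -- normalized tables so read-heavy queries avoid the join
        CREATE TABLE post_search (
            post_id   INTEGER PRIMARY KEY,
            user_name VARCHAR(100),
            title     VARCHAR(200)
        );

        INSERT INTO post_search (post_id, user_name, title)
        SELECT p.post_id, u.user_name, p.title
        FROM posts p
        JOIN users u ON u.user_id = p.user_id;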

    Read the article

  • Client-side Replication for SQL Server?

    - by Mighty Z
    I'd like to have some degree of fault tolerance / redundancy with my SQL Server Express database. I know that if I upgrade to a pricier version of SQL Server, I can get "Replication" built in. But I'm wondering if anyone has experience in managing replication on the client side. As in, from my application:
    - Every time I need to create, update or delete records from the database, issue the statement to all n servers directly from the client side.
    - Every time I need to read, I can do so from one representative server (other schemes seem possible here, too).
    It seems like this logic could potentially be added directly to my Linq-To-SQL Data Context. Any thoughts?

    Read the article

  • Using Spotlight as the "database" of an application

    - by vicvicvic
    I'm developing an OS X application to organize "things" (as iTunes is to music and iPhoto is to photos). Instead of having my own database and index, I'm considering using Spotlight to essentially serve this purpose. Has anyone tried this? Is it wise? The main benefit, as I see it, would be simplicity and avoiding redundancy. It seems a bit wasteful to implement my own index machinery when OS X comes with one built in. I have little experience working with Spotlight, however. From a user's perspective, I do know that it has been slow and imprecise in older versions of OS X. I also have a gut feeling that since it's aimed at searching the whole filesystem, using it for "local" purposes becomes hackish. Obviously, my application's index needs to constantly be up-to-date. Can mdimport be used for this?

    Read the article

  • Controlling azure worker roles concurrency in multiple instance

    - by NER1808
    I have a simple worker role in Azure that does some data processing on a SQL Azure database. The worker basically adds data from a 3rd-party data source to my database every 2 minutes. When I have two instances of the role, this obviously doubles up unnecessarily. I would like to have 2 instances for redundancy and the 99.95% uptime, but I do not want them both processing at the same time as they will just duplicate the same job. Is there a standard pattern for this that I am missing? I know I could set flags in the database, but am hoping there is another, easier or better way to manage this. Thanks
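    Since the post already mentions setting flags in the database, here is a rough sketch of that idea as a lease row in SQL Azure (table, column and job names are hypothetical): each instance attempts the atomic UPDATE before running, and only the instance that wins the row does the import for that cycle.
        -- One row acts as the lease for the recurring import job
        CREATE TABLE job_lease (
            job_name     VARCHAR(50) PRIMARY KEY,
            lease_owner  VARCHAR(50) NULL,
            lease_expiry DATETIME    NULL
        );
        INSERT INTO job_lease (job_name) VALUES ('import-3rd-party-data');

        -- Run by every instance each cycle; at most one caller can win per expiry window
        UPDATE job_lease
           SET lease_owner  = 'instance-0',                        -- current role instance id
               lease_expiry = DATEADD(MINUTE, 2, GETUTCDATE())
         WHERE job_name = 'import-3rd-party-data'
           AND (lease_expiry IS NULL OR lease_expiry < GETUTCDATE());
        -- If @@ROWCOUNT is 1, this instance owns the next run; otherwise it skips this cycle.
    This keeps both instances running for redundancy while ensuring only one of them performs the 2-minute import at a time.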

    Read the article

  • Using xsl:key to store result of boolean expression

    - by hielsnoppe
    In my transformation there is an expression that some elements are repeatedly tested against. To reduce redundancy I'd like to encapsulate this in an xsl:key like this (not working): <xsl:key name="td-is-empty" match="td" use="not(./node()[normalize-space(.) or ./node()])" /> The expected behaviour is for the key to yield a boolean value of true when the expression evaluates successfully, and false otherwise. Then I'd like to use it as follows: <xsl:template match="td[not(key('td-is-empty', .))]" /> Is this possible, and if yes, how?

    Read the article

  • Firefox extension dev: observing preferences, avoid multiple notifications

    - by Michael
    Let's say my Firefox extension has multiple preferences, but some of them are grouped, like check interval, fail-retry interval, and destination URL. Those are used in just a single function. When I subscribe to the preference service and add an observer, the observe callback will be called for each changed preference, so if the user happens to change all of the settings in the group, I will have to run the same routine for the same subsystem as many times as there are items in that preferences group. That's a lot of redundancy! What I want is for observe to be called just once for the group of preferences, say extensions.myextension.interval1, extensions.myextension.site, extensions.myextension.retry - so that if one or all of those preferences are changed, I receive only 1 notification about it. In other words, no matter how many preferences changed in the branch, I want the observe callback to be called only once.

    Read the article

  • Multitenant shared user account?

    - by jpartogi
    Dear all, based on your experience, which is the route to go for a multi-tenant user login?
    - One user login per account, which means that if one user has access to multiple accounts, there will be redundant user records in the database.
    - One user login for all accounts that she has privileges to, which means one user record can access multiple accounts, provided she has privileges to those accounts.
    From your experience, which one is better and why? I was thinking of choosing the latter, but I don't know whether it will cause security issues or reduce flexibility. Thank you for sharing your experience.
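    If it helps to frame the second option, a minimal relational sketch (hypothetical table and column names) keeps a single record per user and models per-account access in a join table, so nothing about the user is duplicated and privileges stay scoped per account:
        CREATE TABLE users (
            user_id       INTEGER PRIMARY KEY,
            email         VARCHAR(255) UNIQUE,
            password_hash VARCHAR(255)
        );
        CREATE TABLE accounts (
            account_id INTEGER PRIMARY KEY,
            name       VARCHAR(100)
        );
        -- One row per (user, account) grant
        CREATE TABLE account_memberships (
            user_id    INTEGER REFERENCES users(user_id),
            account_id INTEGER REFERENCES accounts(account_id),
            role       VARCHAR(50),
            PRIMARY KEY (user_id, account_id)
        );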

    Read the article

  • Load Balancing and Failover for Read-Only PostgreSQL Database

    - by Eric J.
    Scenario: Multiple application servers host web services written in Java, running in SpringSource dm Server. To implement a new requirement, they will need to query a read-only PostgreSQL database. Issue: To support redundancy, at least two PostgreSQL instances will be running. Access to PostgreSQL must be load balanced and must auto-fail over to currently running instances if an instance should go down. Auto-discovery of newly running instances is desirable but not required. Research: I have reviewed the official PostgreSQL documentation on this issue. However, that focuses on the more general case of read/write access to the database. Top Google results tend to lead to older newsgroup messages or dead projects such as Sequoia or DB Balancer, as well as one active project, PG Pool II. Question: What are your real-world experiences with PG Pool II? What other simple and reliable alternatives are available?

    Read the article

  • How to preserve data integrity while minimizing the transmission size

    - by user1500578
    We have sensors in the wild that send their data to a server every day via TCP/IP, either through 3G or through satellite for the physical layer. The sensors can automatically switch from one to the other depending on their location and the quality of the signal with the local 3G operator. Given that the 3G and satellite communications are very expensive, we want to minimize the amount of data sent. But we also want to protect ourselves from lost data. What would be the best strategy to ensure, with reasonable certainty, that the integrity of our data is preserved, while minimizing the amount of redundancy, i.e. the amount of data transmitted? I've read about the zfec codec, but I'm not sure if we need to transmit all the chunks, or if we need to send a hash code along with each chunk.

    Read the article

  • foreign key constraints on primary key columns - issues ?

    - by zzzeek
    What are the pros/cons, from a performance/indexing/data management perspective, of creating a one-to-one relationship between tables using the primary key on the child as the foreign key, versus a pure surrogate primary key on the child? The first approach seems to reduce redundancy and nicely constrains the one-to-one implicitly, while the second approach seems to be favored by DBAs, even though it creates a second index:
        create table parent (
            id integer primary key,
            data varchar(50)
        )
        create table child (
            id integer primary key references parent(id),
            data varchar(50)
        )
    Pure surrogate key:
        create table parent (
            id integer primary key,
            data varchar(50)
        )
        create table child (
            id integer primary key,
            parent_id integer unique references parent(id),
            data varchar(50)
        )
    The platforms of interest here are PostgreSQL and Microsoft SQL Server.

    Read the article
