Search Results

Search found 2993 results on 120 pages for 'distributed transactions'.

Page 35/120 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed computing - and more importantly "-as-a-Service" models of computing - have a different cost model. That sounds obvious on the surface, but it's often forgotten during the design and coding phases of a project.

    In on-premises computing, we're used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or "sunk" cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you've already paid for, we don't often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better.

    In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don't buy anything at all up-front, so that sunk cost is freed up - and financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things.

    It's not just that you're now using things that cost money - the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie the cost of a series of clicks back to what a user will pay for them, you can set a profit margin that is easy to track.

    Here's a case in point. Assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don't monitor the path of the application, you may not know what you are really using - and since you're paying by the size of the instance, it's tempting to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and tied it more closely to the profit.

    The key is this: from the very outset - the design - make sure you include metrics to measure the cost/performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on premises into your new application structure.
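    To make the "tie cost back to clicks" idea concrete, here is a minimal sketch of a design-time cost metric. It is not from the original post, and the per-unit rates are invented placeholders, not real Azure pricing:

        using System;

        class CostModel
        {
            // Hypothetical per-unit rates - real pricing varies by service and region.
            const decimal StoragePerGbMonth  = 0.10m;  // per GB stored per month
            const decimal PerTenThousandHits = 0.01m;  // per 10,000 data-store hits
            const decimal EgressPerGb        = 0.12m;  // per GB sent back out

            // Estimated monthly cost of one feature, given its measured usage.
            public static decimal MonthlyCost(decimal storedGb, long hits, decimal egressGb)
            {
                return storedGb * StoragePerGbMonth
                     + (hits / 10000m) * PerTenThousandHits
                     + egressGb * EgressPerGb;
            }

            static void Main()
            {
                decimal cost = MonthlyCost(50m, 2000000, 8m);
                Console.WriteLine("Estimated monthly cost: " + cost.ToString("C"));
            }
        }

    With a number like that per feature, comparing it against what users pay for the same series of clicks gives the trackable profit margin the post describes.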

    Read the article

  • Is Ubuntu MAAS free? Will it remain like that?

    - by Bruno Pereira
    Ubuntu MAAS - very cool, awesome in fact - looks like a unique tool for several jobs. It looks free, but part of its documentation starts with clauses that would scare anyone with an interest in it: the documentation is copyrighted by Canonical; the documentation must be used only for non-commercial purposes; and if the documentation is redistributed under the non-commercial clause, you must retain the copyright notice. That sounds like a lot for a guide on how to install MAAS + Juju + OpenStack, and it scares me a bit. Under what license is Ubuntu MAAS distributed, and what would be the reasoning for copyrighting a guide like that so heavily? Is Ubuntu MAAS free? Will it continue like that?

    Read the article

  • Basic memcached question

    - by Aadith
    I have been reading up on distributed hashing. I learned that consistent hashing is used for distributing keys among cache machines, and that a key is duplicated on multiple caches to handle the failure of cache hosts. But what I have come across on memcached doesn't seem to align with any of this. I read that all cache nodes are independent of each other and that if a cache goes down, requests go to the DB. There's no mention of a cache miss on one host causing that host to direct the request to another host which either holds the key or is nearer to it. Can you please tell me how these two fit together? Is memcached a very preliminary form of distributed hashing that doesn't have much sophistication?
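    For reference, here is a minimal sketch of the consistent-hashing ring the question describes. This is illustrative Java, not memcached's actual client code (ketama-style clients implement the same idea); the class name and replica count are invented:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.SortedMap;
        import java.util.TreeMap;

        public class ConsistentHashRing {
            private final TreeMap<Integer, String> ring = new TreeMap<>();
            private static final int REPLICAS = 100; // virtual nodes per host, smooths the distribution

            public void addNode(String node) {
                for (int i = 0; i < REPLICAS; i++)
                    ring.put(hash(node + "#" + i), node);
            }

            public void removeNode(String node) {
                // Only this node's keys are remapped; everything else stays put.
                for (int i = 0; i < REPLICAS; i++)
                    ring.remove(hash(node + "#" + i));
            }

            public String nodeFor(String key) {
                if (ring.isEmpty()) return null;
                // First ring position at or after the key's hash, wrapping around.
                SortedMap<Integer, String> tail = ring.tailMap(hash(key));
                return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
            }

            private static int hash(String s) {
                try {
                    byte[] d = MessageDigest.getInstance("MD5")
                                            .digest(s.getBytes(StandardCharsets.UTF_8));
                    return ((d[0] & 0xFF) << 24) | ((d[1] & 0xFF) << 16)
                         | ((d[2] & 0xFF) << 8) | (d[3] & 0xFF);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        }

    A stock memcached deployment only uses the first half of this idea: the client hashes a key to pick exactly one node, and on a miss or a dead node it falls through to the database rather than consulting a replica.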

    Read the article

  • On what name should I claim copyright in open source software?

    - by ONOZ
    When I want to use the Apache 2.0 licence in my project, I should include this in the comments of my source code:

        Copyright [yyyy] [name of copyright owner]

        Licensed under the Apache License, Version 2.0 (the "License");
        you may not use this file except in compliance with the License.
        You may obtain a copy of the License at

            http://www.apache.org/licenses/LICENSE-2.0

        Unless required by applicable law or agreed to in writing, software
        distributed under the License is distributed on an "AS IS" BASIS,
        WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        See the License for the specific language governing permissions and
        limitations under the License.

    What name should I fill in for [name of copyright owner]? I am currently working alone on this project, but I'm going to release the source code, so there might be other contributors in the near future.

    Read the article

  • WS-Eventing for WCF (Indigo)

    This article describes the design, implementation and usage of WS-Eventing for distributed applications driven by the new Microsoft communication model, WCF (Windows Communication Foundation).

    Read the article

  • Hadoop: Only the master node does the work

    - by user287722
    I've set up a Hadoop 2.2 cluster with one master node (namenode and secondary namenode) and three slave nodes (datanode and namenode on each one). All of the machines run Linux Mint 64-bit. When I run my MapReduce program, written in Java, I can only see the master node using extra CPU and RAM; the slave nodes are not doing a thing. I've checked the logs and there is nothing wrong with the namenodes on the slave nodes. The Resource Manager is running and all of the slave nodes can see it. I used this tutorial to configure my nodes: http://n0where.net/hadoop-2-2-multi-node-cluster-setup/. The datanodes are working in terms of distributed data storage, but I can't see any indication of distributed data processing. Do I have to configure the XML configuration files in some other way so that all of the machines will process data while I'm running my MapReduce job?
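    One common cause worth ruling out - an assumption on my part, not a confirmed diagnosis for this cluster: if mapred-site.xml does not set mapreduce.framework.name to yarn, Hadoop 2.x silently runs the job with the LocalJobRunner on the submitting machine, which looks exactly like "only the master works". The relevant snippets would be:

        <!-- mapred-site.xml: without this, jobs run locally on one node -->
        <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
        </property>

        <!-- yarn-site.xml: every node must point at the same ResourceManager -->
        <property>
          <name>yarn.resourcemanager.hostname</name>
          <value>master</value> <!-- the hostname "master" is illustrative -->
        </property>
        <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
        </property>

    After changing these, the files have to be copied to every node and the daemons restarted before the behaviour can change.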

    Read the article

  • Oracle Announces Release of PeopleSoft HCM 9.1 Feature Pack 2

    - by Jay Zuckert
    Big things sometimes come in small packages. Today Oracle announced the availability of PeopleSoft HCM 9.1 Feature Pack 2, which delivers a new HR self-service user experience that fundamentally changes the way managers and employees interact with the HCM system. Earlier this year we reviewed a number of new concept designs with our Customer Advisory Boards. With the accelerated feature pack development cycle we have adopted, these innovations are now available to all 9.1 customers without the need for an upgrade, and there are no new products that need to be licensed for the capabilities below. For more details on Feature Pack 2, please see the Oracle press release.

    Included in Feature Pack 2 is a new search-based, menu-free navigation that allows managers to search for employees by name and take actions directly from the secure search results. For example, a manager can now simply type in part of an employee's first or last name and receive meaningful results from documents related to performance, compensation, learning, recruiting, career planning and more. Delivered actions can be initiated directly from these search results, and the actions are securely tied to HCM security and user role.

    The feature pack also includes new pages that will enable managers to be more productive by aggregating key employee data into a single page. The new Manager Dashboard and Talent Summary provide a consolidated view of data related to a manager's team and individual team members, respectively. The Manager Dashboard displays information relevant to their direct reports, including team learning, objective alignment, alerts, and pending approvals requiring their attention. The Talent Summary provides managers with an aggregated view of talent management-related data for an individual employee, including performance history, salary history, succession options, total rewards, and competencies. The information displayed in both the Manager Dashboard and Talent Summary is configurable by system administrators and can be personalized by each of your managers.

    Other Feature Pack 2 enhancements allow organizations to administer Matrix or Dotted-Line Relationship Management, which addresses the challenge of tracking and maintaining project-based organizations that cut across the enterprise and geographic regions. From within the Company Directory and Org Viewer organization charts, managers now have access to manager self-service transactions from related actions. More than 70 manager and employee self-service transactions have been tied into the related-action framework accessible from Org Viewer, Manager Dashboard, Talent Summary and Secure Enterprise Search (SES) results. In addition to making it easier to access manager self-service transactions, the feature pack delivers streamlined transaction pages, making everyday tasks such as promoting an employee faster and more efficient.

    With the delivery of PeopleSoft HCM 9.1 Feature Pack 2, Oracle continues to deliver on its commitment to our PeopleSoft customers. With this feature pack, HCM 9.1 customers will be able to deploy the newest functionality quickly, without a major release upgrade, and realize added value from their existing PeopleSoft investment. For customers newly deploying 9.1, a new download with all of Feature Pack 2 will be available early next year. This will also include recertified upgrade paths from 8.8, 8.9 and 9.0 for customers in the upgrade process.

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and so little time. Since our time is limited, we need to be selective about what we learn. I believe I know quite a lot about SQL, but I still do not know what is around SQL, so I have recently started to learn about NewSQL. If you wonder what NewSQL is, I encourage you to read my blog post Big Data - Buzz Words: What is NewSQL - Day 10 of 21. NewSQL databases are quickly becoming popular, providing the scale of NoSQL along with SQL features and transactions.

    As part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their announcement on Oct 31: ClustrixDB was previously available only as an appliance, but with the software release on Oct 31, everyone can use it. It is now available forever free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free tier, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database.

    Here are a few of the details I have quickly noted about ClustrixDB. It allows users to:

      - Scale by simply adding nodes to the cluster with a single command
      - Run billions of transactions a day
      - Run fast real-time analytics
      - Achieve high availability with recovery from node failure
      - Let the database manage itself
      - Easily migrate from MySQL, as it is nearly plug-and-play compatible with MySQL drivers, tools and replication

    While I was going through the documentation I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution.

    Since ClustrixDB sounds like a very promising solution, I decided to dig a bit deeper to understand who its current customers are, as the company has existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them:

      - Twoo.com - Europe's largest social discovery (dating) site runs 4.4 billion transactions a day with table sizes over a terabyte, on a 168-core cluster.
      - EngageBDR - Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
      - NoMoreRack - Top 2 fastest-growing e-commerce company in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud.
      - MakeMyTrip - India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.

    Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the topic is NewSQL, I know what I am talking about.

    Read more about why Clustrix believes ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that when we discuss the technical aspects of it in the future, we are all on the same page. The software can be downloaded here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: Clustrix

    Read the article

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I've spent the last 15+ years working on database replication products, and the last 10 years on the Oracle GoldenGate product, so most of my posts will probably be focused on OGG.

    One question that comes up all the time is around active-active replication with Oracle GoldenGate: how do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. I will delve into topology and deployment in a later blog. The two most common resolution routines are host-based resolution and timestamp-based resolution.

    Host-based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, then the record from SystemA will overwrite it; if there is a conflict on SystemA, then it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host-based resolution will work just fine.

    Timestamp-based resolution, on the other hand, is a little trickier. In this case, you decide which record is overwritten based on timestamps - for example, does the older record get overwritten with the newer record, or vice versa? This method not only requires primary keys on every table, but also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated. Most homegrown applications can be customized to include these requirements, but it's more difficult with third-party applications, and might even be impossible for large ERP-type applications.

    If your database has these features - whether it's primary keys for host-based resolution, or primary keys and timestamp columns for timestamp-based resolution - then your application could be a great candidate for active-active replication. But table structure is not the only requirement. The other consideration applies when there is a conflict: do I need to perform any notification or track down the user whose data was overwritten? In most cases I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, it can be done. Unless someone is constantly monitoring this exceptions table or has an automated process for dealing with exceptions, there will be a delay in getting a response back to the end user.

    Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that reduce, or in some cases eliminate, the potential for conflicts. This makes the whole implementation that much easier and more foolproof. I'll cover these in my next blog.
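    To illustrate what timestamp-based resolution asks of the schema, here is a hedged SQL sketch - the table, columns and trigger are hypothetical examples of the required structure, not part of any real GoldenGate configuration:

        -- Every replicated table needs a primary key plus a column that
        -- records the time of each insert or update.
        CREATE TABLE orders (
          order_id     NUMBER PRIMARY KEY,
          status       VARCHAR2(20),
          last_updated TIMESTAMP DEFAULT SYSTIMESTAMP
        );

        -- Keep the timestamp current even when the application forgets to.
        CREATE OR REPLACE TRIGGER orders_touch
        BEFORE INSERT OR UPDATE ON orders
        FOR EACH ROW
        BEGIN
          :NEW.last_updated := SYSTIMESTAMP;
        END;
        /

    With a column like last_updated maintained on both systems, the resolution routine can compare the two conflicting rows' timestamps and keep whichever one the business rule prefers.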

    Read the article

  • Problem with MS DTC on SQL2008 win server 2k8 with linked server from sql2000 win server 2k

    - by user31648
    Hi, we have migrated our DB from SQL 2000 on Windows Server 2000 to SQL 2008 on Windows Server 2008, and we have a linked server from SQL 2000 / Windows Server 2000. In our opinion the problem is with DTC; we have applied a lot of settings that we found suggested as solutions, but the problem still exists. There is no error, warning or informational message in either the SQL log or the Windows Event Viewer. The application hangs, and eventually a timeout exception is thrown.

    What we have done so far:

      - Enabled Network DTC Access with inbound and outbound communication and "No Authentication Required" on Windows Server 2008
      - Opened RPC dynamic port allocation through the registry on both 2000 and 2008
      - Added the TurnOffRpcSecurity subkey under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSDTC and enabled it on both 2000 and 2008
      - Added an exception for DTC in the firewall for all entities

    What we have noticed is that when we restart the SQL service and make the first attempt at our transaction, the following is shown: "Attempting to initialize Microsoft Distributed Transaction Coordinator (MS DTC). This is an informational message only. No user action is required." and after it: "Recovery of any in-doubt distributed transactions involving Microsoft Distributed Transaction Coordinator (MS DTC) has completed. This is an informational message only. No user action is required."

    Does someone have any idea what else can be done to solve the problem? Thanks in advance. Regards, Snezana

    Read the article

  • Why is the TransactionScope operation not valid?

    - by Cragly
    I have a routine which uses a recursive loop to insert items into a SQL Server 2005 database. The first call, which initiates the loop, is enclosed within a transaction using TransactionScope. When I first call ProcessItem, the myItem data gets inserted into the database as expected. However, when ProcessItem is called from either ProcessItemLinks or ProcessItemComments, I get the following error: "The operation is not valid for the state of the transaction."

    I am running this in debug with VS 2008 on Windows 7 and have MSDTC running to enable distributed transactions. The code below isn't my production code, but it is laid out exactly the same. AddItemToDatabase is a method on a class I cannot modify; it uses a standard ExecuteNonQuery() which creates a connection, then closes and disposes of it once completed. I have looked at other postings here and on the internet and still cannot resolve this issue. Any help would be much appreciated.

        using (TransactionScope processItem = new TransactionScope())
        {
            foreach (Item myItem in itemsList)
            {
                ProcessItem(myItem);
            }
            processItem.Complete();
        }

        private void ProcessItem(Item myItem)
        {
            AddItemToDatabase(myItem);
            ProcessItemLinks(myItem);
            ProcessItemComments(myItem);
        }

        private void ProcessItemLinks(Item myItem)
        {
            foreach (Item link in myItem.Links)
            {
                ProcessItem(link);
            }
        }

        private void ProcessItemComments(Item myItem)
        {
            foreach (Item comment in myItem.Comments)
            {
                ProcessItem(comment);
            }
        }

    Here is the top part of the stack trace. Unfortunately I can't show the build-up to this point, as it is company-sensitive information which I cannot disclose. I hope this is enough information.

        at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
        at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
        at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.SqlClient.SqlConnection.Open()
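    One variable worth eliminating - an assumption, not a confirmed diagnosis: TransactionScope defaults to a 60-second timeout, and once it expires, every later attempt to enlist a connection fails with exactly this "not valid for the state of the transaction" error. A deep recursion of row-by-row inserts can easily exceed a minute. A sketch of the same loop with explicit options:

        TransactionOptions options = new TransactionOptions
        {
            IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
            Timeout = TimeSpan.FromMinutes(10)  // generous ceiling for the recursion
        };

        using (TransactionScope processItem =
                   new TransactionScope(TransactionScopeOption.Required, options))
        {
            foreach (Item myItem in itemsList)
            {
                ProcessItem(myItem);
            }
            processItem.Complete();
        }

    If the error then appears only after a long delay, the timeout was the culprit; if it still appears immediately on the first nested call, the problem lies in the promotion to a distributed transaction shown at the top of the stack trace.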

    Read the article

  • SQL Query for generating matrix like output querying related table in SQL Server

    - by Nagesh
    I have three tables:

        Product
        ProductID   ProductName
        1           Cycle
        2           Scooter
        3           Car

        Customer
        CustomerID  CustomerName
        101         Ronald
        102         Michelle
        103         Armstrong
        104         Schmidt
        105         Peterson

        Transactions
        TID    ProductID  CustomerID  TranDate   Amount
        10001  1          101         01-Jan-11  25000.00
        10002  2          101         02-Jan-11  98547.52
        10003  1          102         03-Feb-11  15000.00
        10004  3          102         07-Jan-11  36571.85
        10005  2          105         09-Feb-11  82658.23
        10006  2          104         10-Feb-11  54000.25
        10007  3          103         20-Feb-11  80115.50
        10008  3          104         22-Feb-11  45000.65

    I have written a query to group the transactions like this:

        SELECT P.ProductName AS Product,
               C.CustomerName AS Customer,
               SUM(T.Amount) AS Amount
        FROM Transactions AS T
        INNER JOIN Product AS P ON T.ProductID = P.ProductID
        INNER JOIN Customer AS C ON T.CustomerID = C.CustomerID
        WHERE T.TranDate BETWEEN '2011-01-01' AND '2011-03-31'
        GROUP BY P.ProductName, C.CustomerName
        ORDER BY P.ProductName

    which gives a result like this:

        Product  Customer   Amount
        Car      Armstrong  80115.50
        Car      Michelle   36571.85
        Car      Schmidt    45000.65
        Cycle    Michelle   15000.00
        Cycle    Ronald     25000.00
        Scooter  Peterson   82658.23
        Scooter  Ronald     98547.52
        Scooter  Schmidt    54000.25

    I need the result of the query in matrix form, like this:

        Customer   |------------ Amounts ---------------
        Name       Car       Cycle     Scooter   Totals
        Armstrong  80115.50  0.00      0.00      80115.50
        Michelle   36571.85  15000.00  0.00      51571.85
        Ronald     0.00      25000.00  98547.52  123547.52
        Peterson   0.00      0.00      82658.23  82658.23
        Schmidt    45000.65  0.00      54000.25  99000.90

    Please help me to achieve the above result in SQL Server 2005. Using multiple views or even temporary tables is fine for me.
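    One hedged way to get that shape on SQL Server 2005 is the PIVOT operator. A sketch under the schema above (the product names are hard-coded in the IN list; a production version would typically build that list with dynamic SQL):

        SELECT CustomerName,
               ISNULL([Car], 0)     AS Car,
               ISNULL([Cycle], 0)   AS Cycle,
               ISNULL([Scooter], 0) AS Scooter,
               ISNULL([Car], 0) + ISNULL([Cycle], 0) + ISNULL([Scooter], 0) AS Totals
        FROM (
            SELECT C.CustomerName, P.ProductName, T.Amount
            FROM Transactions AS T
            INNER JOIN Product AS P ON T.ProductID = P.ProductID
            INNER JOIN Customer AS C ON T.CustomerID = C.CustomerID
            WHERE T.TranDate BETWEEN '2011-01-01' AND '2011-03-31'
        ) AS src
        PIVOT (SUM(Amount) FOR ProductName IN ([Car], [Cycle], [Scooter])) AS pvt;

    Each distinct ProductName becomes a column, SUM(Amount) fills the cells, and ISNULL turns the missing combinations into the 0.00 entries the desired matrix shows.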

    Read the article

  • Should I define a single "DataContext" and pass references to it around, or define multiple "DataContexts"?

    - by Nate Bross
    I have a Silverlight application that consists of a MainWindow and several classes which update and draw images on the MainWindow. I'm now expanding this to keep track of everything in a database. Without going into specifics, let's say I have a structure like this:

        MainWindow
          Drawing surface
          Class1 -- supports drawing
            DataContext + DataServiceCollection<T> w/events
          Class2 -- manages "transactions" (add/delete objects from drawing)
          Class3

    Each class is passed a reference to the drawing surface so they can interact with it independently. I'm starting to use WCF Data Services in Class1 and it's working well; however, the other classes are also going to need access to the WCF Data Services. (Should I define my "DataContext" in MainWindow and pass a reference to each child class?) Class1 will need READ access to the "transactions" data, and Class2 will need READ access to some of the drawing data.

    So my question is: where does it make the most sense to define my DataContext? Does it make sense to:

      1. Define a "global" WCF Data Service "Context" object and pass references to it in all of my subsequent classes?
      2. Define an instance of the "Context" for each of Class1, Class2, etc.?
      3. Have each method that requires access to data define its own instance of the "Context" and use closures to handle the async load/complete events?

    Would a structure like this make more sense?

        MainWindow
          Drawing surface
          DataContext
          Class1 -- supports drawing
            DataServiceCollection<DrawingType> w/events
          Class2 -- manages "transactions" (add/delete objects from drawing)
            DataServiceCollection<TransactionType> w/events
          Class3
            DataServiceCollection<T> w/events

    Is there any danger in keeping an active "DataContext" open for an extended period of time? A typical use of this application could range from 1 minute to 40+ minutes.
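    For what it's worth, a minimal sketch of option 1 - a single context owned by MainWindow and handed to each helper class - might look like this (all type names here are illustrative, not from the actual project):

        public partial class MainWindow
        {
            // One shared context for the lifetime of the window.
            private readonly MyServiceContext context =
                new MyServiceContext(new Uri("MyDataService.svc", UriKind.Relative));

            private DrawingManager drawing;           // "Class1"
            private TransactionManager transactions;  // "Class2"

            public MainWindow()
            {
                InitializeComponent();
                drawing = new DrawingManager(context);
                transactions = new TransactionManager(context);
            }
        }

        public class DrawingManager
        {
            private readonly MyServiceContext context;

            public DrawingManager(MyServiceContext context)
            {
                this.context = context; // READ access to transaction data uses the same context
            }
        }

    The shared-context approach keeps change tracking in one place, at the cost of every class seeing every pending change; per-class contexts isolate the classes, but cross-class reads stay stale until reloaded.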

    Read the article

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to create a replicated copy of the main databases, so that reports can point to the replicated copy instead of locking out our main databases. I have set up the 3 databases as publications, and then 3 subscribers moving the transactions over - instantaneously, I hope! What seems to be happening is that when using the "Insert Tracer" function, replication from publisher to distributor takes < 2 seconds, but replicating to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for 2 reasons:

      1. The SQL statements used to query the database are obtaining locks which stop the transactions from updating the subscribers.
      2. The subscribers are just too busy for replication to apply the changes.

    What troubles me more is that although the Replication Monitor / Insert Tracer show these statistics, if you use "View Subscription Details" and then click Start, it will sync within seconds. My goal is to have the data syncing (ideally) continuously, or every minute. Perhaps I should reduce the batch size of the transactions? What am I doing wrong? (Note that the -Continuous flag is set!)

    Read the article

  • Add JTabbedPane with buttons, labels ... in a frame with an absolute layout

    - by alibenmessaoud
    I have this code in Java:

        package interfaces;

        import java.awt.BorderLayout;
        import java.awt.Dimension;

        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.JPanel;
        import javax.swing.JTabbedPane;

        public class TabbedPaneDemo extends JPanel {

            public TabbedPaneDemo() {
                JTabbedPane pane = new JTabbedPane();

                // Add Dashboard tab
                JPanel dashboardPanel = new JPanel();
                dashboardPanel.add(new JLabel("Dashboard"));
                pane.addTab("Dashboard", dashboardPanel);

                // Add Transactions tab
                JPanel transactionPanel = new JPanel();
                transactionPanel.add(new JLabel("Transactions"));
                pane.addTab("Transactions", transactionPanel);

                // Add Account tab
                JPanel accountPanel = new JPanel();
                accountPanel.add(new JLabel("Account"));
                pane.addTab("Account", accountPanel);

                this.setLayout(new BorderLayout());
                this.setPreferredSize(new Dimension(400, 200));
                this.add(pane, BorderLayout.CENTER);
            }

            public static void main(String[] args) {
                JPanel panel = new TabbedPaneDemo();
                panel.setOpaque(true);
                // panel.setBounds(12, 12, 45, 98);

                JFrame frame = new JFrame("JTabbedPane Demo");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setContentPane(panel);
                frame.pack();
                frame.setVisible(true);
            }
        }

    This gives a plain tabbed pane as output. I want to change this code: I want to use an absolute layout (null), and I want to add a menu, buttons and labels alongside these tabs. How can I do this? Thanks in advance. Best regards, Ali
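    A minimal sketch of the absolute-layout version (the null layout the question asks for; the menu bar, the extra button and label, and all coordinates are placeholders I invented):

        import java.awt.*;
        import javax.swing.*;

        public class AbsoluteTabbedPaneDemo {
            public static void main(String[] args) {
                JFrame frame = new JFrame("JTabbedPane Demo");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.getContentPane().setLayout(null); // absolute positioning

                // A simple menu bar (managed by the frame, not by the null layout).
                JMenuBar menuBar = new JMenuBar();
                menuBar.add(new JMenu("File"));
                frame.setJMenuBar(menuBar);

                JButton refresh = new JButton("Refresh");
                refresh.setBounds(10, 10, 110, 25);     // x, y, width, height

                JLabel status = new JLabel("Ready");
                status.setBounds(130, 14, 200, 18);

                JTabbedPane pane = new JTabbedPane();
                pane.addTab("Dashboard", new JPanel());
                pane.addTab("Transactions", new JPanel());
                pane.addTab("Account", new JPanel());
                pane.setBounds(10, 45, 365, 150);

                frame.getContentPane().add(refresh);
                frame.getContentPane().add(status);
                frame.getContentPane().add(pane);

                frame.setSize(400, 260); // pack() is of no use with a null layout
                frame.setVisible(true);
            }
        }

    With a null layout, every component must be given explicit bounds, and nothing resizes when the window does - which is the usual argument against absolute positioning.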

    Read the article

  • Utilizing a Queue

    - by Nathan
    I'm trying to store records of transactions, all together and by category, for the last 1, 7, 30 or 360 days. I've tried a couple of things, but they've failed brutally. I had the idea of using a queue with 360 values, one for each day, but I don't know enough about queues to figure out how that would work. Input will be an instance of this class:

        class Transaction
        {
            public string TotalEarned { get; set; }
            public string TotalHST { get; set; }
            public string TotalCost { get; set; }
            public string Category { get; set; }
        }

    New transactions can occur at any time during the day, and there could be as many as 15 transactions in a day. My program uses a plain text file as external storage, but how I load it depends on how I decide to store this data. What would be the best way to do this?
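    For what it's worth, here is a rough sketch of how the 360-slot queue idea could work: one bucket of transactions per day, with the oldest bucket dropped as each new day starts. The class and method names are invented for illustration:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class DailyHistory
        {
            private const int MaxDays = 360;
            private readonly Queue<List<Transaction>> days = new Queue<List<Transaction>>();
            private List<Transaction> today;

            // Call at the start of each day (including the first).
            public void StartNewDay()
            {
                if (days.Count == MaxDays)
                {
                    days.Dequeue();                 // the oldest day falls off
                }
                today = new List<Transaction>();
                days.Enqueue(today);
            }

            public void Add(Transaction t)
            {
                today.Add(t);
            }

            // Total earned over the most recent n days (1, 7, 30 or 360),
            // optionally restricted to a single category (pass null for all).
            public decimal EarnedOverLast(int n, string category)
            {
                return days.Reverse()               // newest bucket first
                           .Take(n)
                           .SelectMany(day => day)
                           .Where(t => category == null || t.Category == category)
                           .Sum(t => decimal.Parse(t.TotalEarned));
            }
        }

    Persisting it is then a matter of writing one line per transaction with its date to the text file and rebuilding the buckets on load; storing the amounts as decimal rather than string would also remove the repeated parsing.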

    Read the article

  • One controller with multiple models? Am I doing this correctly?

    - by user363243
    My web app, up until this point, has been fairly straightforward. I have Users, Contacts, Appointments and a few other things to manage. All of these are easy - it's just one model per section, so I did a scaffold for each and then modified the scaffolded code to fit my needs. Pretty easy.

    Unfortunately, I'm having a problem with the next section, because I want the 'Financials' section of my app to be more in-depth than the other sections, which I simply scaffolded. For example, when the user clicks the 'Contacts' link on the navigation bar, it just shows a list of contacts - straightforward and in line with the scaffold. However, when the user clicks the 'Financials' link on the navigation bar, I want to show the bank accounts on the left of the page and a few of the transactions on the right. So the Financials tab will basically work with data from two models: transactions and bank_accounts.

    I think I should create the models (transactions and bank_accounts), then make a controller called Financials, so that I can query the models from the Financials controller and display the pages in app/views/financials/. Am I correct in this app layout? I have never worked with more than the basics of scaffolding, so I want to make sure I get this right. Thank you!
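    That layout is a common Rails pattern - controllers map to screens, not necessarily one-to-one to models. A minimal sketch of what the controller might look like (Rails 3-style syntax; the model names and query details are assumptions, not from the actual app):

        class FinancialsController < ApplicationController
          def index
            # Bank accounts for the left-hand side of the page
            @bank_accounts = BankAccount.all
            # A few recent transactions for the right-hand side
            @transactions = Transaction.order("created_at DESC").limit(10)
          end
        end

    The matching view would live at app/views/financials/index.html.erb and can read both instance variables, exactly as the question proposes.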

    Read the article

  • Why Kiln is based on Mercurial, and not other (D)VCS

    - by Jakub Narebski
    What were the reasons for choosing Mercurial as the basis of Fog Creek's Kiln, a source control management system with tightly integrated code review and FogBugz integration? Why Mercurial, and not another (distributed) version control system like Bazaar, Git or Monotone - or why not create their own version control system, as Fossil (distributed software configuration management, including bug tracking and a wiki) did? What features made Fog Creek choose Mercurial as Kiln's engine?

    Read the article

  • Collecting system properties using a web browser

    - by vishwa
    Hi, I am working on a distributed computing environment. Since the applications need to be distributed to the different clients connected to the server on the network, I would prefer to collect each client's system properties - such as the free memory available on the client's system - so that I can distribute the applications efficiently according to that. Kindly point me toward some ideas. Thanks in advance.
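    A brief sketch of the simplest case: if the client-side piece runs on a JVM (for example an applet or a Java Web Start application - an assumption, since the question doesn't name a platform), the standard library already exposes the basics:

        Runtime rt = Runtime.getRuntime();
        long freeMemory  = rt.freeMemory();           // free memory in the JVM heap, in bytes
        long totalMemory = rt.totalMemory();          // current heap size, in bytes
        int  processors  = rt.availableProcessors();  // CPU cores visible to the JVM

    The client would report these values back to the server, which can then weight its work distribution accordingly. Note that plain JavaScript in a browser deliberately cannot read the host's free memory, so some installed client component is required.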

    Read the article

  • How to generate a random BigInteger value in Java?

    - by Bill the Lizard
    I need to generate arbitrarily large random integers in the range 0 (inclusive) to n (exclusive). My initial thought was to call nextDouble and multiply by n, but once n gets to be larger than 2^53, the results would no longer be uniformly distributed. BigInteger has the following constructor available:

        public BigInteger(int numBits, Random rnd)

    It constructs a randomly generated BigInteger, uniformly distributed over the range 0 to (2^numBits - 1), inclusive. How can this be used to get a random value in the range 0 to n, where n is not a power of 2?
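    The standard approach is rejection sampling - a sketch, not tied to any particular library: draw n.bitLength()-bit values with the constructor above and redraw until one falls below n. Since n is at least half of 2^bitLength(n), each draw succeeds with probability greater than 1/2, so the expected number of iterations is below 2, and the accepted values stay uniform:

        import java.math.BigInteger;
        import java.util.Random;

        public static BigInteger nextRandomBigInteger(BigInteger n, Random rnd) {
            BigInteger result;
            do {
                // Uniform over 0 .. 2^bitLength(n) - 1
                result = new BigInteger(n.bitLength(), rnd);
            } while (result.compareTo(n) >= 0);  // reject values >= n and redraw
            return result;
        }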

    Read the article

  • Choice and setup of version control

    - by Peter M
    I am about to set up a new laptop and, in the process, transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes, it is VSS, and yes, I know all the pros and cons of that system, but as a single-user system it works well for me). I have very few requirements for a new system and am free to choose among any of the current mainstream players, but cost constraints will push me towards OSS. Some of my requirements are:

      - Runs on a single machine (i.e. the laptop in question) under Windows
      - I am not sharing things with other developers or workers - this is more for my own historical benefit
      - I want to version source code, documentation and binary files
      - I have a large hierarchy of unrelated projects (see below)
      - I have files within the hierarchy that don't need to be controlled (but could be)
      - Some projects use Visual Studio, so some integration there could be nice
      - There could be some sharing of files between jobs
      - I generally only need a small amount of branching in code files

    The directory hierarchy that I have at the moment is somewhat like this:

        Root
        |--Customer #1
        |  |--Job #1
        |  |  |--Data files received from customer for the job (not controlled)
        |  |  |--Documentation files (controlled)
        |  |  |--Project information files (not controlled - but could be)
        |  |  |--Software project files (controlled)
        |  |  |--Scratch dir for the job (not controlled)
        |  |--Job #2
        |     (same structure as above)
        |--Customer #2
        |  |..
        |--Customer #n
           |..

    Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I kept with a centralized system (i.e. SVN), I believe I should keep the same approach and continue with a single repository based at the root dir. Is this a valid approach?

    However, if I move to a distributed tool, then I am unsure of how I should handle the situation. My initial guess is that I should not have a repository based at the root of my entire directory structure - but that is a guess, so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level?

    Also, what I am not clear on with distributed tools (and perhaps with SVN as well) is whether I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I branch just the software project files? Or would all files in that Job be branched?

    Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with the idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around - however, I do have a backup strategy that I trust, so this latter feeling is pretty well unfounded. So what advice can you all give me? Thanks in advance!

    Read the article

  • Is there a way to add AD LDS users to an AD Domain Group or allow them domain security rights?

    - by Tom
    I have a web application in which our outside customers need access to run transactions (stored procs on SQL Server) on our domain. We have looked into AD LDS to keep these users separate from our domain. The problem we are having is allowing the LDS users the AD security rights to access these stored procs. For administration purposes, we would like to use an AD group for each transaction (stored proc) that has access to execute it. Is there a way to add LDS users to this AD group, or to allow them the security rights to do this? We have set up LDS and can authenticate an AD user through it to run these transactions. LDS is running on Server 08 R2; AD is also Server 08 R2. Thanks.

    Read the article

  • Do MSDTC and disaster recovery go together?

    - by DevDelivery
    Our application writes to multiple SQL Server databases within a distributed transaction. The ops guys are saying that this messes up their disaster recovery plan, because while the transactions on the live tables may commit at the same time, the log shipping on the separate databases happens at slightly different times. So in a disaster recovery situation, there will be a few partial transactions. Is there a method for maintaining separate but synced databases in DR? Or do we have to redesign towards relatively independent databases (or a single database)?

    Read the article
