Search Results

Search found 5611 results on 225 pages for 'contained databases'.


  • PASS Summit Preconference and Sessions

    - by Davide Mauri
    I'm very pleased to announce that I'll be delivering a Pre-Conference at PASS Summit 2012. I'll speak about Business Intelligence again (as I did in 2010), but this time I'll focus only on the Data Warehouse, since it's a big topic even on its own. I'll discuss not only what a Data Warehouse is and how it can be modeled and built, but also how its development can be approached using an Agile approach, bringing the experience I've gathered in this field. Building the Agile Data Warehouse with SQL Server 2012 http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=2821 I'm sure you'll like it, especially if you're starting to create a BI solution and you're wondering what a Data Warehouse is, whether it is still useful now that everyone talks about Self-Service BI and In-Memory databases, and what's the correct path to follow in order to have a successful project up and running. Besides this Preconference, I'll also deliver a regular session, this time related to database administration, monitoring and tuning: DMVs: Power in Your Hands http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=3204 Here we'll dive into the most useful DMVs, so that you'll see how they can help in everyday management to discover, understand and optimize your SQL Server installation, from the server itself down to the single query. See you there!
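    To give a flavour of the DMV session's subject, here is a minimal sketch (my own illustration, not taken from the session material) that uses sys.dm_exec_query_stats and sys.dm_exec_sql_text to list the cached statements with the highest average CPU time:

        -- Top 10 statements in the plan cache by average CPU time (microseconds).
        SELECT TOP (10)
               qs.execution_count,
               qs.total_worker_time / qs.execution_count AS avg_cpu_time_us,
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                               WHEN -1 THEN DATALENGTH(st.text)
                               ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY avg_cpu_time_us DESC;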

    Read the article

  • Adventures in Lab Management Configuration: CMMI Edition Part 1 of 3

    - by Enrique Lima
    I remember at one point someone telling me how close Migrate was to Migraine. This was a process that involved migrating an environment from TFS 2008 to TFS 2010, and the process template needed to be migrated too. Here we are talking about CMMI v4.2 to CMMI v5.0. Now, the process to migrate the TFS infrastructure is one thing; migrating the Process Template is a different deal - not hard, just involved. I followed a combination of steps that came from a blog post as the main guidance, and then MSDN (as suggested on the guidance post) to complement some tasks and steps. Again, the focus I have here is CMMI. The high-level steps taken to migrate the TFS 2008 CMMI v4.2 Process Template to TFS 2010 are: 1) Backed up the Collection, Configuration and Warehouse databases. 2) Downloaded the Process Template using Visual Studio 2010. 3) Exported, modified and imported the Bug Type Definition. 4) Exported, modified and imported the Scenario or Requirement Type Definition. 5) Created and imported the bug field mappings. Now we can attempt to connect using Test Manager, and you should be able to get this going. After that was done, it was time to enroll VMs that already existed in the environment. This was a bit more challenging, but in the end it was a matter of analyzing the changes that had been made as a temporary workaround between the time we migrated and the time we converted the Work Items, and adding fields to enable communication between the project and the Test and Lab Manager component. There are two more parts to this post: the second will describe the detailed steps taken to complete the Process Template update, and the third will talk about the gotchas and fixes for the Lab Management portion.
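    As a minimal sketch of step 1 (using the default TFS 2010 database names and a hypothetical backup path; adjust both to your own instance), the backups can be taken with plain T-SQL:

        -- Back up the collection, configuration and warehouse databases before touching the template.
        BACKUP DATABASE Tfs_Configuration     TO DISK = N'E:\Backups\Tfs_Configuration.bak'     WITH INIT;
        BACKUP DATABASE Tfs_DefaultCollection TO DISK = N'E:\Backups\Tfs_DefaultCollection.bak' WITH INIT;
        BACKUP DATABASE Tfs_Warehouse         TO DISK = N'E:\Backups\Tfs_Warehouse.bak'         WITH INIT;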

    Read the article

  • What tools and knowledge do I need to create an application which generates bespoke automated e-mails? [on hold]

    - by Seraphina
    I'd like some suggestions as to how best to go about creating an application which can generate bespoke automated e-mails, i.e. send a personalized reply to a particular individual, interpreting the context of the message as intelligently as possible... (This is perhaps too big a question to be under one title?) What would be a good starting point? What concepts do I need to know? I'd imagine that the program needs to be able to trawl through e-mails as and when they come in, and search for keywords in e-mail content, in order to write an appropriate reply. So there needs to be some form of automated response embedded in the code. Machine learning and databases come to mind here, as I'm aware that Google already incorporates machine learning in Gmail etc. It is quite tricky to google the above topic and find the perfect tutorial. But there are some interesting articles and papers out there: Machine Learning in Automated Text Categorization (2002) by Fabrizio Sebastiani, Consiglio Nazionale delle Ricerche. However, this is not exactly a quick-start guide. I intend to add to this question, and no doubt other questions will spark off this one. I look forward to suggestions.

    Read the article

  • how to architect this to make it unit testable

    - by SOfanatic
    I'm currently working on a project where I'm receiving an object via web service (WSDL). The overall process is the following: receive the object, add/delete/update parts (or all) of it, and return the object with the changes made. The thing is that sometimes these changes are complicated and there is some logic involved - other databases, other web services, etc. - so to facilitate this I'm creating a custom object that mimics the original one but has some enhanced functionality to make some things easier. So I'm trying to have this process: receive the original object, convert/copy it to the custom object, add/delete/update, convert/copy it back to the original object, return the original object. Example:

        public class Row
        {
            public List<Field> Fields { get; set; }
            public string RowId { get; set; }

            public Row()
            {
                this.Fields = new List<Field>();
            }
        }

        public class Field
        {
            public string Number { get; set; }
            public string Value { get; set; }
        }

    So for example, one of the "actions" to perform on this would be to find all Fields in a Row that have a Value equal to something, and update them with some other value. I have a CustomRow class that represents the Row class; how can I make this class unit testable? Do I have to create an interface ICustomRow to mock it in the unit test? If one of the actions is to sum all of the Values in the Fields that have a Number equal to 10, like this function, how can I design the custom class to facilitate unit tests? Sample function:

        public int Sum(FieldNumber number)
        {
            return row.Fields.Where(x => x.FieldNumber.Equals(number)).Sum(x => x.FieldValue);
        }

    Am I approaching this the wrong way?

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and I think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users, and information on the request being made. I would design this like so:

        User table:
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table:
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
            UserID  FirstName  LastName
            234     John       Doe
            516     Jane       Doe
            123     Foo        Bar

        DepartmentsTable
            DepartmentID  Name
            1             Sales
            2             HR
            3             IT

        UserDepartmentTable
            UserDepartmentID  UserID  Department
            1                 234     2
            2                 516     2
            3                 123     1

        RequestTable
            RequestID  UserID  <...>
            1          516     blah
            2          516     blah
            3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table and numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...
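    As a minimal sketch of the simpler two-table design described above (assuming SQL Server and hypothetical column types), the DDL might look like this:

        CREATE TABLE Users (
            UserID     INT IDENTITY(1,1) PRIMARY KEY,   -- no duplicates by definition
            FirstName  NVARCHAR(50) NOT NULL,
            LastName   NVARCHAR(50) NOT NULL,
            Department NVARCHAR(50) NOT NULL
        );

        CREATE TABLE Requests (
            RequestID  INT IDENTITY(1,1) PRIMARY KEY,
            -- <...> various data fields containing request details
            UserID     INT NOT NULL REFERENCES Users (UserID)   -- foreign key to the User table
        );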

    Read the article

  • SPARC SuperCluster: new Software Enhancements announced on December 4

    - by Giuseppe Facchetti
    December 4, 2012: Oracle Unveils Cloud and Consolidation Capabilities for Oracle SPARC SuperCluster. The latest SPARC SuperCluster update offers layered, zero-overhead virtualization for mission-critical applications. Oracle today announced new software enhancements to the Oracle SPARC SuperCluster engineered system which enable customers to consolidate any combination of mission-critical enterprise databases, middleware and applications on a single system and rapidly deploy secure, self-service cloud services. For all the details, click here.

    Read the article

  • Webcast: DB Enterprise User Security Integration with Oracle Directory Services

    - by B Shashikumar
    The typical enterprise has a large number of DBA (database administrator) accounts that are locally managed, which is often costly, problematic and error-prone. Databases are a crucial component of your enterprise IT infrastructure, housing sensitive corporate data as well as database user accounts and privileges. To ensure the integrity of your enterprise's data, it's imperative to have a well-managed identity management system. This begins with centralized management of user accounts and access rights. Enterprise User Security (EUS), an Oracle Database Enterprise Edition feature, combined with Oracle Identity Management, gives you the ability to manage database users and their authorizations in one central place. The cost of user provisioning and password resets is dramatically reduced. This technology is a must for new application development and should be considered for existing applications as well. Join Oracle Advisors for a live webcast on Jul 11 at 8am Pacific Time, where Oracle experts will briefly introduce EUS, followed by a detailed discussion of the various directory options that are supported, including integration with Microsoft Active Directory. We'll conclude with how to avoid common pitfalls when deploying EUS with directory services. To register for this event, click here.
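    To make the EUS idea concrete, here is a minimal sketch (hypothetical user names and directory distinguished names) of mapping database accounts to directory identities instead of creating them locally:

        -- An enterprise user mapped to a single directory entry.
        CREATE USER jsmith IDENTIFIED GLOBALLY AS 'CN=John Smith,OU=Sales,DC=example,DC=com';

        -- A shared schema: many directory users can be mapped to it, with the mapping managed centrally in the directory.
        CREATE USER app_shared IDENTIFIED GLOBALLY AS '';
        GRANT CREATE SESSION TO app_shared;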

    Read the article

  • So my employer wants me to do less programming and focus on IT support

    - by Rich
    I was hired into a non tech company's IT department as a programmer a few years back, and after several rounds of lay offs, we're down to a skeleton crew. I've saved the company hundreds of thousands of dollars with my projects and management has been happy with them (although most of the stakeholders have since left the company). Management now wants me to limit the programming that I do and spend most of my time on IT support: putting out fires, dealing with vendors, outsourced contractors, supporting company systems, managing projects, etc. I am a little burnt out on programming since I've been pushed pretty hard for the past several years. However, I'm not sure if this is a good career move in the long run. I'm a decent programmer (and also good with databases) but not obsessed with it to the point of coding outside of work. I'm approaching my mid 30s and there's potential ageism to deal with down the line. While I'm fortunate to have survived the lay offs, it sorta feels like my job is being "dumbed down". I have both good technical skills and people skills...but it doesn't take a genius to do what I'm doing now. And my success is being increasingly linked to others' performance rather than my own... Just looking for some advice. Is it time to move on? That's not really an easy thing to do since I'd likely have to move to another area to find another comparable tech job. Should I go after another pure technical role? Or should I stay and try to make this work? People say do what you "enjoy" but it doesn't really matter to me as long as I'm getting paid. Also the ageism thing is on the horizon and could be an issue eventually. I'm making a decent (but not great) salary. Should I chase money and maximize my income while I still have a chance? Or be happy with a moderate salary and 40 hour work week?

    Read the article

  • Is this table replicated?

    - by fatherjack
    Another in the potentially quite sporadic series of "I need to do ... but I can't find it on the internet". I have a table that I think might be involved in replication, but I don't know which publication it's in... We know the table name - 'MyTable'. We have replication running on our server and it's replicating our database, or part of it - 'MyDatabase'. We need to know if the table is replicated and, if so, which publication is going to need to be reviewed if we make changes to the table. How?

        USE MyDatabase
        GO

        /* Lots of info about our table, but not much that's relevant to our current requirements */
        SELECT * FROM sysobjects WHERE NAME = 'MyTable'

        -- mmmm, getting there
        /* To quote BOL - "Contains one row for each merge article defined in the local database.
           This table is stored in the publication database." The interesting column is [pubid]. */
        SELECT * FROM dbo.sysmergearticles AS s WHERE NAME = 'MyTable'

        -- really close now
        /* The sysmergepublications table - "Contains one row for each merge publication defined in
           the database. This table is stored in the publication and subscription databases."
           So this is where we get the publication details. */
        SELECT * FROM dbo.sysmergepublications AS s WHERE s.pubid = '2876BBD8-3D4E-4ED8-88F3-581A659E8144'

        -- DONE IT.
        /* Combine the two tables above and we get the information we need */
        SELECT s.[name] AS [Publication name]
        FROM dbo.sysmergepublications AS s
        INNER JOIN dbo.sysmergearticles AS s2
            ON s.pubid = s2.pubid
        WHERE s2.NAME = 'MyTable'

    So I now know which

    Read the article

  • My first blog post…

    - by steveh99999
    I've been meaning to start a blog for a while now (OK, for several years...) - finally, here it begins. First post: something really simple, but a wise man once told me the best way to improve SQL Server performance. Store Less Data. That's it... that's all there is to it... Over the years, I've seen the following:
    - a 200GB database which held 3 days' data. Once business requirements changed, we were able to hold only 1 day's data in this database.
    - a table developed by DBAs to hold application table cardinality information - that information was collected at 2-hour intervals every day for 7 years! After 7 years the DBA space-info table had become the largest table in the database - 60 million rows! It was a simple change to remove a lot of the historical intra-day data and change the schedule to run only once per evening. Suddenly that table held 6 million rows instead of 60 million...
    - lots of backup and restore history held in msdb. See this post by Brent Ozar for more details on this issue.
    Imagine how much faster the backups, DBCC checks and reindexes ran when the above 3 changes were implemented? How often do you review your big databases \ tables to see if you're actually holding only data that is really required by the business?
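    For the msdb point, a minimal sketch (assuming SQL Server and a retention window that suits your environment) of trimming the old backup and restore history:

        -- Remove backup/restore history older than three months from msdb.
        DECLARE @oldest_date datetime;
        SET @oldest_date = DATEADD(MONTH, -3, GETDATE());
        EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest_date;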

    Read the article

  • Free Webinar: A faster, cheaper, better IT Department with Azure

    - by Herve Roggero
    Join me for a free webinar on Wednesday, October 17th at 1:30 PM Eastern Time. I will discuss the benefits of cloud computing with the Azure platform. There isn't a company out there that would say "No" to reduced IT costs and unlimited scaling bandwidth. This webinar will focus on the specific benefits of the Microsoft Azure cloud platform and will convince you of the sound business rationale behind moving to the cloud. From Infrastructure as a Service (IaaS) to Platform as a Service (PaaS), Azure supports quick deployments, virtual machines, native SQL Databases and much more. Topics that will be discussed: - Why use Azure for your cloud computing needs - IaaS and PaaS offerings - Differing project approaches to cloud computing - How Azure's agility and reduced costs lead to better solutions. Attendees of this webinar will also be eligible to receive the following: a free two-hour consultation which can include: - Review of your cloud strategy - Cloud roadmap review - Review of data-mart strategies - Review of mobility strategies. Click Here to Register Now. About Herve Roggero: Hervé Roggero, Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Hervé's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Hervé holds multiple certifications, including MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Hervé is the co-author of "PRO SQL Azure" from Apress. For more information, visit www.bluesyntax.net.

    Read the article

  • How to model the components of a non Information System?

    - by Adel C Kod
    So I am working on a project that's related to kernel code (specifically the TCP/IP stack of the kernel). I need to build some models to describe the functionality and components of my system. Initially I thought about a Class Diagram; it can describe the general architecture of my system, but it doesn't make sense since my code is VERY structured (written in standard C). I also thought about DFDs: they'd describe the processes of my system and how the data is flowing. But they contain something which doesn't really fit in: data storages. I have no databases here (at all). For the functionality, other team members suggested using Activity and Sequence diagrams, which is kinda okay with me, but what about the system components? So basically my question is: I want to describe the components of my system; what do you suggest as a meaningful diagram to follow? (Again, the project is a research, low-level, systems-oriented project with almost no user interface at all.)

    Read the article

  • Should I be learning Linq, Direct SQL Commands (in .net), EF or other?

    - by Wil
    Basically, I have a very good knowledge of plain old SQL coming from Classic ASP programming. Over the past couple of months I have been learning C#, and today was my first full day with MVC 3 (Razor), which I am loving! I need to get back into databases, and I know that writing SqlCommand everywhere is obviously outdated (although it is nice that I still can!). I used to go to a great user group as an IT Pro and the developer stuff went completely over my head; however, I do remember a few things which kept coming up, such as LINQ... However, that was some time ago and now the same people on Twitter are saying how outdated it is. I have tried to do research on both and I am clueless as to what direction I should go in, or when to use one over another (or whether learning both is a good thing). I am even more confused because I thought EF was part of the .NET Framework; however, reading through the quick-start guide, I had to download a component using NuGet. ... Basically I am out of my depth here and just need some honest advice on where to go!

    Read the article

  • Using R on your Oracle Data Warehouse

    - by jean-pierre.dijcks
    Since it is Predictive Analytics World in our backyard (or are we San Francisco's backyard...?) I figured it is well worth the time to dust off some old but important news. With big data (should we start calling it "any data analytics" instead?) being the buzzword and analytics the key operative goal, not moving data around is becoming more and more critical to business users. Why? Because instead of spending time moving data into your next analytics server, you should be running analytics on those CPUs. You could always do this with Oracle Data Mining within the Oracle Database. But a lot of folks want to leverage R as their main tool. Well, this article describes how you can do this, and has since 2010... As Casimir Saternos concludes in the article: "There is a growing awareness of the need to effectively analyze astronomical amounts of data, much of which is stored in Oracle databases. Statistics and modeling techniques are used to improve a wide variety of business functions. ODM accessed using the R language increases the value of your data by uncovering additional information. RODM is a powerful tool to enable your organization to make predictions, classify data, and create visualizations that maximize effectiveness and efficiencies." Happy Analysis!

    Read the article

  • What to learn after standard C++?

    - by Luca Cerone
    I switched to C++ a few months ago, learning its syntax, the main features of the STL and what you can usually find in a "learn C++" manual. Now I would like to go further. What would be your recommendations? I would like to know what to learn next (not only about the language, but also debugging, frameworks, etc.). I know the answer probably depends on the specific needs of each user, so here is a list of mine:
    - Cross-platform development
    - Developing GUIs for my programs
    - Developing extendible software, allowing the use of plugins
    - Use of scientific libraries
    - Interacting with databases (mainly MySQL)
    - Having server/client functionality (I'd like users of my programs to interact through the internet... as you might have guessed, I am not a programmer by training, so I might have used the wrong terms; if so, I apologize for that).
    Of course I know it takes time, but I would like to have a good list of references and resources to start with (both books and websites are OK). Thanks a lot for your help!

    Read the article

  • Best way to protect website application code

    - by Gaz_Edge
    Background: I have a web application that I host on my own server. I have clients who use the application as-is, but some have asked if they can host the application on their own server. This enables them to have their own URLs rather than mine. The application only forms part of their website, so I'm assuming it will not be possible for my server to respond to a direct call to their domain, etc. To give some examples, I currently have URLs like www.mydomain.com/profile, www.mydomain.com/index.php?option=someoption&view=someview&id=1. What my clients want is www.theirdomain.com/profile, www.theirdomain.com/index.php?option=someoption&view=someview&id=1, etc. Question: my question is, what is the best way for me to allow them to use their own URLs with my application, without giving them all the backend source code and databases to install on their server? One way I thought of would be to create a router.php file that sits on their server. The router then asks my server to output the HTML. The router modifies all the links etc. in the HTML source and outputs the new HTML through the client's server. When a link is clicked on the client's site, the router receives the request and modifies the URL to get the data from my server, etc. Is this an effective way to achieve what I want, or is it way off the mark?

    Read the article

  • .NET app - Should we use SQL Server and duplicate some reference data from an external Oracle DB? Or use Oracle and have a DB link?

    - by Daventry
    We're looking to migrate some existing Excel/Access processes into a new system which will provide the users with a Silverlight frontend to run and view the reports instead of using MS Access. The initial idea was to have SQL Server 2008 as the RDBMS. The problem is that we've got some static data such as country codes, counterparties, etc. which live in an existing Oracle DB. Since we do not want to duplicate that data (if possible), we were thinking of having a DB link between SQL Server and Oracle, but our firm does not allow that. So the options are either duplicate the data or use Oracle as the RDBMS - surprise, the firm does allow DB links between Oracle databases. The initial idea was also to use WCF RIA Services, Entity Framework, etc., which we're not sure play well with Oracle; that's why it was decided to go with SQL Server in the first place. Would you advise going for Oracle so that we can just link the static data? Or using SQL Server 2008 and replicating it because it's "safer" to stay within the Microsoft land? To use or not to use Entity Framework and WCF RIA Services at all? Regards. UPDATE: Thanks everyone for your answers. Nothing is set in stone yet. We'll try to import the data instead of linking, so that if the other DB goes down, our system can still carry on. We're likely to use SQL Server just because most developers are more experienced with it. Even if we use RIA Services, we can swap out the Data Access Layer and use other frameworks such as those mentioned below.
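    As a minimal sketch of the "import instead of link" approach (assuming SQL Server and hypothetical staging and reference table names), a periodic refresh of the static data could look like this:

        -- Refresh the local copy of the Oracle reference data from a staging table
        -- that is loaded periodically (for example via SSIS or a bulk import).
        MERGE dbo.CountryCodes AS target
        USING staging.CountryCodes AS source
            ON target.CountryCode = source.CountryCode
        WHEN MATCHED THEN
            UPDATE SET target.CountryName = source.CountryName
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (CountryCode, CountryName)
            VALUES (source.CountryCode, source.CountryName)
        WHEN NOT MATCHED BY SOURCE THEN
            DELETE;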

    Read the article

  • Hostmonster can't change domains around?

    - by loneboat
    (question imported from http://superuser.com/q/204439/53847 ) Horrible title, but I couldn't think of a succinct way to summarize it to fit. I have HostMonster for my web hosting. I have several domain names under the same account (using the same web space, IP address, etc...). Every HM account has one domain set up as the "main domain", and all other domains are "secondary". The only way I have ever encountered this being an issue is in trying to use HTTPS - since (from my limited understanding) HTTPS encrypts headers, you can't route HTTPS requests to different virtual hosts on a server - only unencrypted requests, since it must look in the request to know where to route it. When I registered for my account, I only had one domain name (A). I have since added domain names (B), (C), (D), etc... At one point I switched domain name (B) to be my "main" domain name - so I could use HTTPS with it. I have since sold domain name (B), and would like to make domain name (A) my "main" one again (as it was before), but HM support says, "no, once a domain name has been a 'main' domain name on an account once, we can't set it up to be a 'main' domain name again. You're welcome to use domains (C), or (D), though.". They tell me the only way to reuse domain (A) as a "main" domain would be to set up a new account and transfer over all my files. I'm confused here. If I have domains (D), (E), and (F), they say I'm welcome to make one of them my new main domain name, just never (A) again, since I've already "used" it once. Calls to support only reveal that they can't let me do it because doing so would somehow "break" my account. Can anyone think of any good reason why this should be so? The only thing I can think is that maybe they're using the domain names as keys in some database or something? But if that's the case, that's ridiculous - they need to reorganize their databases!

    Read the article

  • Applying Service Pack 1 to Team Foundation Server 2010

    - by Enrique Lima
    Disclosure:  I performed the following activities on my Windows 7 SP1 system, Visual Studio 2010 SP1 and a local Basic installation of TFS 2010. As with any deployment of a service pack into a server environment, take your recommended precautions and be aware of the changes you are putting in.  With that said, make sure you backup your databases, and that you have an exit/rollback strategy in the event of an unexpected situation. Team Foundation Server 2010 Service Pack 1 corresponds to KB2182621.  The KB article is http://support.microsoft.com/kb/2182621 The process will be very simple to follow, you will need to execute the mu_team_foundation_server_2010_sp1_x86_x64_651711.exe file.  That will extract files needed and launch the wizard driven Installation. Once this process completes, you need to validate the changes. By looking at Team Foundation Server 2010 Administration Console, you should see the reference to the KB number and SP1. There is also a good reason to validate log locations and records. From the Team Foundation Server 2010 Administration Console. Or from Windows Explorer, go to the C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs location and review the logs referenced by the servicing references.

    Read the article

  • Should one use a separate database for application data and user data?

    - by trycatch
    I've been working on a project for a little while and I'm unsure which is the better architecture. I'm interested in the consensus. The answer seems fairly obvious to me, but something about it is digging at me and I can't pick out what. The TL;DR is: how do you handle a program with application data and user data in the same DB which needs to be able to receive updates to the application data periodically? One database for user data and one for application data, or both in one? The detailed version is: if an application has a database which needs to maintain application data AND user data, and the user data all references application data, it feels more natural to me to store them in the same database. But if there is a need to be able to update the application data within this database periodically, should this be split into two databases so that one can simply download an updated application-data database file and replace the old one? Or should they remain as one database, and the application data be updated via a script which inserts the new data into the existing database? The second sounds clearly preferable to me... but for some reason it just doesn't feel right, and I can't pick out quite why.
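    One concrete trade-off worth noting (a minimal sketch, assuming SQL Server and hypothetical table names): with two databases the user rows can still reference application data in queries, but the reference cannot be enforced with a declared foreign key, because foreign key constraints do not span databases.

        -- Works: a cross-database join from user data to application data.
        SELECT u.UserItemID, a.Name
        FROM UserDb.dbo.UserItems       AS u
        JOIN AppDb.dbo.ApplicationItems AS a
            ON a.ApplicationItemID = u.ApplicationItemID;

        -- Does not work: a foreign key cannot reference a table in another database,
        -- so declarative integrity like this is only possible with everything in one database.
        -- ALTER TABLE dbo.UserItems
        --     ADD CONSTRAINT FK_UserItems_ApplicationItems
        --     FOREIGN KEY (ApplicationItemID)
        --     REFERENCES AppDb.dbo.ApplicationItems (ApplicationItemID);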

    Read the article

  • Is Ruby on Rails supposed to have a steep learning curve or is it just me?

    - by Anita
    I'm a self-taught programmer. I've been learning RoR since October with varying intensity (sometimes all day, sometimes nothing for several weeks). Before that I knew only Java, but knew it pretty well. I've heard so much hype about RoR and how it's supposed to make you happy, productive, etc. So far it's only made me frustrated. I learned it out of the Agile book, and I suspect part of the difficulty might have to do with my not knowing JavaScript and CSS, and having only a shaky grasp of databases and HTML. But apparently it took me much longer to complete the project in the Agile book than other people, and I still don't remember much of it. There are some things about Rails that I just can't seem to get, e.g. when to use symbols and when NOT to, or how dynamic methods are called. Recently I was given a small Rails assignment where I'm asked to make a small change to the interface. It's taken me around 25 hours and although I've made some progress in understanding the code, I still have no idea how to proceed. I can't even ask Stack Overflow because there is so much code I'll have to provide to give context. So my question is in the title: is RoR supposed to take a long time to learn or am I just slow? Can it be that I've been learning from the wrong book? My learning style is such that I either understand nothing or understand everything, if that makes sense. Thanks!

    Read the article

  • What is a Relational Database Management System (RDBMS)?

    A Relational Database Management System (RDBMS), sometimes called a traditional database, uses Structured Query Language (SQL) to provide access to stored data while ensuring the integrity of the data. The data is stored in a collection of tables defined by relationships between data items; in addition, data is permitted to be joined in new relationships. Traditional databases primarily process data through transactions, known as transaction processing. Transaction processing is the methodology of grouping related business operations based on predefined business events. An example of this can be seen when a person attempts to purchase an item from an online e-tailer. The business must execute specific operations for the related business event. In this case, the business must store the following information: Customer Info, Order Info, Order Item Info, Customer Payment Data, Payment Results, and Current Order Status.
    Example: pseudo-SQL operations needed for processing an online e-tailer sale.
        Insert Customer into Customers
        Insert New Order into Orders
        Insert Each New Order Item into OrderItems
        Insert Customer Payment Info into PaymentInfo
        Insert Payment Processing Result into PaymentDetails
        Update Customer for Current Order Status
    Common Relational Database Management Systems:
        Microsoft SQL Server
        Microsoft Access
        Oracle
        MySQL
        DB2
    It is important to note that no current RDBMS has fully implemented all of the Relational Principles.
    Common RDBMS Traits:
        Volatile Data
        Supports Transaction Processing
        Optimized for Updates and Simple Queries
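    As a rough illustration of the pseudo-SQL above as a single unit of work (SQL Server syntax, hypothetical table and column names), the whole sale can be wrapped in one transaction so that either every step succeeds or none of them do:

        DECLARE @CustomerID int, @OrderID int;

        BEGIN TRANSACTION;

        INSERT INTO Customers (CustomerName, Email) VALUES ('Jane Doe', 'jane@example.com');
        SET @CustomerID = SCOPE_IDENTITY();

        INSERT INTO Orders (CustomerID, OrderDate, Status) VALUES (@CustomerID, GETDATE(), 'NEW');
        SET @OrderID = SCOPE_IDENTITY();

        INSERT INTO OrderItems (OrderID, ProductID, Quantity) VALUES (@OrderID, 42, 1);
        INSERT INTO PaymentInfo (OrderID, CardToken)          VALUES (@OrderID, 'tok_abc123');
        INSERT INTO PaymentDetails (OrderID, Result)          VALUES (@OrderID, 'APPROVED');
        UPDATE Orders SET Status = 'CONFIRMED' WHERE OrderID = @OrderID;

        COMMIT TRANSACTION;   -- ROLLBACK TRANSACTION instead if any step fails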

    Read the article

  • Amazon EC2 vs Dedicated server at Hetzner, what's the use for EC2?

    - by C-Blu
    After searching the web I still can't find the reason to use EC2. What's the point of scaling EC2? If you expect a huge burst in traffic, they say. OK, but what if you already have a couple of sites with good traffic, and for example a medium reserved EC2 instance is not enough? You are paying $36.60 (medium reserved for 1 year) in the EU (Ireland) + traffic + optional expenses for databases and S3 if you use them. Of course, at some point when you are under $56.6-$66.1 you can optimize your hosting costs with Amazon EC2. But at some point, if you purchase an EX4 server from Hetzner, it will surpass your performance needs for a long time before you get massive traffic. (Am I wrong?)
        CPU: i7-2600 Quad-core (3.4-3.8 GHz)
        RAM: 16 GB
        HDD: 2x3 TB SATA (6 Gbit/s) - I think the disk performance of a dedicated server is better than that of Amazon EBS
        Traffic: 10 TiB per month included
    This is what you get from Hetzner for $56 (minus 19% VAT), or $66 for EU residents. Please tell me, what's the reason to use Amazon? Which load won't a server from Hetzner take, but Amazon Auto Scaling will? Is the maintenance of a dedicated server vs EC2 still the same? Or won't a hardware failure at Amazon ruin your EBS storage? I'm still not at the level where I need expensive hosting, but I want to know beforehand, just to be sure whether the Amazon infrastructure is better than the pure performance of Hetzner's hardware.

    Read the article

  • Information I need to know as a Java Developer [on hold]

    - by Woy
    I'm a Java developer. I'm trying to get more knowledge to become a better programmer. I've listed a number of technologies to learn. Besides what I've listed, what technologies would you suggest a Junior Java Developer learn as well? I realize there's a lot to study.
    Java:
      - how a garbage collector works
      - resource management
      - network programming - TCP/IP, HTTP
      - transactions, consistency
      - interfaces, classes, collections, hash codes, algorithms, computational complexity
      - concurrent programming: synchronizing, semaphores
      - stream management
      - mutability: thread-safety
      - byte code manipulation, reflection, Aspect-Oriented Programming as a base to understand frameworks such as Spring etc.
    Web stack: servlets, filters, socket programming
    Libraries: JDK, GWT, Apache Commons, Joda-Time; Dependency Injection: Spring, Nano
    Tools:
      - IDE: very good knowledge
      - debugger
      - profiler
      - web analyzers: Wireshark, Firebug
      - unit testing
    SQL/Databases:
      Basics:
        - SELECTing columns from a table
        - Aggregates Part 1: COUNT, SUM, MAX/MIN
        - Aggregates Part 2: DISTINCT, GROUP BY, HAVING
      Intermediate:
        - JOINs, ANSI-89 and ANSI-92 syntax
        - UNION vs UNION ALL
        - NULL handling: COALESCE & native NULL handling
        - Subqueries: IN, EXISTS, and inline views
        - Subqueries: correlated
        - WITH syntax: Subquery Factoring/CTE (see the sketch below)
        - Views
      Advanced topics:
        - Functions, Stored Procedures, Packages
        - Pivoting data: CASE & PIVOT syntax
        - Hierarchical Queries
        - Cursors: Implicit and Explicit
        - Triggers
        - Dynamic SQL
        - Materialized Views
        - Query Optimization: Indexes
        - Query Optimization: Explain Plans
        - Query Optimization: Profiling
        - Data Modelling: Normal Forms, 1 through 3
        - Data Modelling: Primary & Foreign Keys
        - Data Modelling: Table Constraints
        - Data Modelling: Link/Corollary Tables
        - Full Text Searching
        - XML
        - Isolation Levels
        - Entity Relationship Diagrams (ERDs), Logical and Physical
        - Transactions: COMMIT, ROLLBACK, Error Handling
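    As a quick illustration of one item from the SQL list above (the WITH syntax / subquery factoring entry, using hypothetical employees and departments tables):

        -- Count employees per department with a common table expression (CTE).
        WITH dept_counts AS (
            SELECT department_id, COUNT(*) AS employee_count
            FROM employees
            GROUP BY department_id
        )
        SELECT d.department_name, c.employee_count
        FROM departments AS d
        JOIN dept_counts AS c
            ON c.department_id = d.department_id;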

    Read the article
