Search Results

Search found 1533 results on 62 pages for 'rdbms agnostic'.

Page 58/62 | < Previous Page | 54 55 56 57 58 59 60 61 62  | Next Page >

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents, but are also very attractive for their ability to scale out and query a cluster in parallel.

    Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side-by-side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change.

    On the other hand, the performance benefits of document-oriented databases mainly appear when storing deeper documents - in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added.

    It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.
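    As an illustration of the trade-off described above, here is a minimal sketch of the blog-post-with-comments shape in a document store, using MongoDB's Python driver (pymongo) with hypothetical database and collection names; whether appending a comment rewrites the whole document on disk depends on the storage engine, so treat this purely as a data-modelling example:

        from pymongo import MongoClient

        client = MongoClient()        # assumes a local MongoDB instance
        posts = client.blog.posts     # hypothetical database/collection names

        # One deep document per aggregate: the post owns its comments.
        post_id = posts.insert_one({
            "title": "Document databases and ORMs",
            "body": "Full text of the post...",
            "comments": []            # embedded, schema-free children
        }).inserted_id

        # Adding a comment is a single update against the parent document.
        posts.update_one(
            {"_id": post_id},
            {"$push": {"comments": {"author": "owen", "text": "Nice post"}}}
        )

        # Reading the whole aggregate back is one round trip, no joins.
        post = posts.find_one({"_id": post_id})

    Reading the aggregate is one fetch; the open question from the post is whether that read gain outweighs the write amplification on comment-heavy documents.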

    Read the article

  • Web-app currency input/manipulation/calculation with javascript .. there has got to be a better (fra

    - by dreftymac
    BACKGROUND: I am of the "user-input-lockdown" school of thought. Whenever possible, I try to mistrust and sanitize user input, both client side and server side, and I try to take multiple opportunities to restrict possible inputs to a known subset of possibilities; usually this means providing a lot of checkboxes and select lists. (This is from the usability side of things; I know security-wise that malicious users can easily bypass fixed user input GUI controls.)

    PROBLEM: Anyway, the problem always arises with non-fixed input of currency. Whenever I have to accept a freely-specified dollar amount as user input, I always have to confront these problems/annoyances, and it is always painful:
    1) Make sure to give the user two input boxes for each currency_datapoint, one for the whole_dollar_part and another for the fractional_pennies_part.
    2) Whenever the user changes a currency_datapoint, provide keystroke-by-keystroke GUI feedback to let them know whether the currency_datapoint is well-formed, with context-appropriate validation rules (e.g., no negatives? nonzero only? numeric only! no non-numeric punctuation! no symbols!).
    3) For display purposes, every user-provided currency_datapoint should be translated to human-readable currency formatting (dollar sign, period, commas provided by the app, where appropriate).
    4) For calculation purposes, every user-provided currency_datapoint has to be converted to an integer (all pennies, to avoid floating point errors) and summed into a grand total with zero or more subtotals.
    5) Every user-provided currency_datapoint should be displayed or displayable in a nice "tabular" format, which auto-updates as the user enters each currency_datapoint, including a balloon that warns when one or more currency_datapoints is not well-formed.
    I seem to be re-inventing this wheel every time I have to work with currency in Javascript on the client side (server side is a bit more flexible since most programming languages have higher-level currency formatting logic).

    QUESTION: Has anyone out there solved the problem of dealing with the above issues, client side, in a way that is server-side-technology-stack agnostic (preferably plain javascript or jquery)? This is getting old; there has to be a better way.
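    The integer-pennies idea in point 4 above is language-agnostic; here is a minimal sketch (shown in Python rather than client-side JavaScript, purely to illustrate the parse/validate/sum/format round trip - the regex rule and function names are made up):

        import re

        WELL_FORMED = re.compile(r"^\d{1,9}(\.\d{2})?$")   # assumed rule: non-negative, optional 2 decimals

        def to_pennies(text):
            """Parse a user-entered dollar amount into integer pennies, or raise ValueError."""
            if not WELL_FORMED.match(text):
                raise ValueError("not a well-formed amount: %r" % text)
            dollars, _, cents = text.partition(".")
            return int(dollars) * 100 + int(cents or "0")

        def to_display(pennies):
            """Format integer pennies as a human-readable dollar string."""
            return "${:,}.{:02d}".format(pennies // 100, pennies % 100)

        subtotal = sum(to_pennies(t) for t in ["19.99", "5", "1200.50"])
        print(to_display(subtotal))   # $1,225.49

    All arithmetic stays in integers, so no floating point error leaks into the grand total; the same shape carries over to a client-side implementation.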

    Read the article

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods for generating the feeds and I would like to ask which is best in terms of ability to scale. Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL.

    Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to timeout after 5 minutes. Scaling is achieved by varying this timeout.

    Hypothesised method: A task runs in the background and, for each new, unprocessed item in event_log, it creates entries in the database table user_feed, pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user.

    The problems with the standard method are well known – what if a lot of people's caches expire at the same time? The solution also does not scale well – the brief is for feeds to update as close to real-time as possible. The hypothesised solution in my eyes seems much better; all processing is done offline so no user waits for a page to generate, and there are no joins so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, then that results in inserting 2,000,000 rows into the database.

    Question: The question boils down to two points:
    1) Is the worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance, and are there any issues with this mass inserting of data for each event?
    2) Is there anything else I have missed?
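    To make the hypothesised method concrete, here is a minimal sketch of the background fan-out task, assuming the event_log, friends and user_feed tables from the question (column names are guesses) and a generic DB-API connection; batching, retries and sharding concerns are left out:

        import sqlite3   # stand-in for any DB-API driver (MySQLdb, pymysql, ...)

        def fan_out_new_events(conn):
            """Copy each unprocessed event into user_feed, one row per friend of the author."""
            cur = conn.cursor()
            cur.execute("SELECT id, user_id FROM event_log WHERE processed = 0")
            pending = cur.fetchall()
            for event_id, author_id in pending:
                cur.execute("SELECT friend_id FROM friends WHERE user_id = ?", (author_id,))
                rows = [(friend_id, event_id) for (friend_id,) in cur.fetchall()]
                cur.executemany("INSERT INTO user_feed (user_id, event_id) VALUES (?, ?)", rows)
                cur.execute("UPDATE event_log SET processed = 1 WHERE id = ?", (event_id,))
            conn.commit()

    Batching the inserts per event (executemany above) keeps the round trips down, but the 100,000-friend worst case is still 100,000 rows of write I/O for that one event.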

    Read the article

  • confused about how to use JSON in C#

    - by Josh
    The answer to just about every single question about using C# with JSON seems to be "use JSON.NET", but that's not the answer I'm looking for. The reason I say that is, from everything I've been able to read in the documentation, JSON.NET is basically just a better-performing version of the DataContractSerializer built into the .NET framework... which means if I want to deserialize a JSON string, I have to define the full, strongly-typed class for EVERY request I might have. So if I have a need to get categories, posts, authors, tags, etc., I have to define a new class for every one of these things. This is fine if I built the client and know exactly what the fields are, but I'm using someone else's API, so I have no idea what the contract is unless I download a sample response string and create the class manually from the JSON string.

    Is that the only way it's done? Is there not a way to have it create a kind of hashtable that can be read with json["propertyname"]? Finally, if I do have to build the classes myself, what happens when the API changes and they don't tell me (as Twitter seems to be notorious for doing)? I'm guessing my entire project will break until I go in and update the object properties...

    So what exactly is the general workflow when working with JSON? And by general I mean library-agnostic. I want to know how it's done in general, not specifically to a target library... I hope that made sense, this has been a very confusing area to get into... thanks!
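    To illustrate the library-agnostic idea the question is reaching for - parsing into a generic key/value structure instead of a hand-written class per response - here is what that looks like in Python's standard library; most JSON libraries in other languages expose some equivalent "parse to a dictionary/dynamic object" mode:

        import json

        payload = '{"post": {"title": "Hello", "tags": ["api", "json"], "author": {"name": "Josh"}}}'

        doc = json.loads(payload)                 # a plain dict/list tree, no classes declared up front
        print(doc["post"]["title"])               # -> Hello
        print(doc["post"]["author"]["name"])      # -> Josh

        # Unknown or renamed fields don't break parsing; you only fail where you actually read them.
        print(doc["post"].get("missing", "n/a"))  # -> n/a

    The trade-off is the usual one: you lose compile-time checking and pay with runtime key lookups, which is why the strongly-typed-class approach is still the default recommendation when you control the contract.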

    Read the article

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table as follows, with the following columns:
    * [Key]
    * [Value1]
    * ...
    * [ValueN]
    * [StartDate]
    * [ExpiryDate]
    In this example, let's suppose that [StartDate] is effectively the date on which the values for a given [Key] become known to the system. So our primary key would be composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign [ExpiryDate] to some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] that is equal to the [StartDate] of the new value. A simple update based on a join.

    So if we always wanted to get the most recent records for a given [Key], we know we could create a clustered index that is:
    * [ExpiryDate] ASC
    * [Key] ASC
    Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering them by [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage.

    However... what if we want to get a point-in-time snapshot of all [Key]s at a given time? Theoretically, the entirety of the keyspace isn't all being updated at the same time. Therefore, for a given point-in-time, the window between [StartDate] and [ExpiryDate] is variable, so ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which the [StartDate] is greater than your defined point-in-time.

    In essence, in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads to retrieve the values for all keys for a given point-in-time? I realize I can at least maximize IO by partitioning the table by [Key], however this certainly isn't ideal. Alternatively, is there a different type of slowly-changing-dimension that solves this problem in a more performant manner?
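    Whatever index is chosen, the query it has to serve is the point-in-time predicate below; this is a sketch only, run through Python's sqlite3 purely so it is self-contained (clustered indexes and partitioning are engine-specific and not modelled, and the column and index names are invented):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE dim (dim_key INTEGER, value1 TEXT, start_date TEXT, expiry_date TEXT);
            -- one candidate supporting index for point-in-time reads
            CREATE INDEX ix_dim_key_start ON dim (dim_key, start_date, expiry_date);
            INSERT INTO dim VALUES
              (1, 'a v1', '2010-01-01', '2010-02-01'),
              (1, 'a v2', '2010-02-01', '9999-12-31'),
              (2, 'b v1', '2010-01-15', '9999-12-31');
        """)

        point_in_time = '2010-01-20'
        rows = conn.execute("""
            SELECT dim_key, value1
            FROM   dim
            WHERE  start_date <= ?      -- the version had started...
              AND  expiry_date >  ?     -- ...and had not yet expired
            ORDER  BY dim_key
        """, (point_in_time, point_in_time)).fetchall()

        print(rows)   # [(1, 'a v1'), (2, 'b v1')]

    The pain point in the question is exactly that this half-open window means no single sort order makes all qualifying rows contiguous for an arbitrary point in time.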

    Read the article

  • What constitutes a development environment, and how do you document it?

    - by Joel Coehoorn
    What items go into a software shop's development environment, how do you document it, and what processes do you follow to make changes? I'm thinking about this from the standpoint where I want to make it easier to bring new hires up to speed quickly by having all this on a checklist we follow when setting them up, and then, while I'm at it, making it easier for the new hires or existing team members to bring new powerful toolkits and ideas into the environment without disrupting things. I want to keep this platform agnostic, so even though I'm currently at a Microsoft shop where Visual Studio would be assumed, I'll go ahead and list compiler/IDE as one of the items. Here are some ideas for part 1 ([edit]: I'm keeping this updated based on the better suggestions):
    * Source control access
    * Issue/bug/project tracker
    * System documentation, or references to find the system documentation in source control or in a wiki, including:
      - build document/environment covered by this question
      - design documents / technical notes
      - coding style guidelines
      - deploy for review/testing/QA/staging/production procedures
      - licensing details for your tools and your product
    * Team calendar, including the project schedule(s), deadlines, vacation time, and support/on-call schedule (if required)
    * Compiler/IDE
    * Compiler/IDE extensions (things like source control plugins or Visual Studio add-ins)
    * 3rd party SDKs/toolkits
    * Database connection and tools
    * Testing frameworks
    * Internal libraries
    * Communication tools (chat, wiki, etc.)
    * Static analysis tools (FxCop, FlawFinder, etc.)
    * Virtual machines (holding dev environment or for testing)
    * Specialized editors (modeling, XML, etc.)
    * Other tools
    What else goes in this list, and how do you document it and vet changes?

    Read the article

  • How to solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted this blog entry, "http://about.digg.com/blog/looking-future-cassandra", where he described one of the issues that were not optimally solved in MySQL. This was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB and I would like to understand how to implement the MongoDB collections for this problem. From the article, the schema for this information in MySQL is:

        CREATE TABLE Diggs (
          id      INT(11),
          itemid  INT(11),
          userid  INT(11),
          digdate DATETIME,
          PRIMARY KEY (id),
          KEY user (userid),
          KEY item (itemid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE Friends (
          id           INT(10) AUTO_INCREMENT,
          userid       INT(10),
          username     VARCHAR(15),
          friendid     INT(10),
          friendname   VARCHAR(15),
          mutual       TINYINT(1),
          date_created DATETIME,
          PRIMARY KEY (id),
          UNIQUE KEY Friend_unique (userid,friendid),
          KEY Friend_friend (friendid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    This problem is ubiquitous in social networking scenario implementations. People befriend a lot of people, and they in turn digg a lot of things. Quickly showing a user what his/her friends are up to is very critical. I understand that several blogs have since then provided a pure RDBMS solution with indexes for this issue; however, I am curious as to how this could be solved in MongoDB.
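    One possible MongoDB modelling, offered only as a hedged sketch (pymongo; the collection and field names are made up to mirror the MySQL schema, and a fan-out-on-write design is the other common answer):

        from pymongo import MongoClient, DESCENDING

        db = MongoClient().digg_demo     # hypothetical database name

        # Collections roughly mirroring the two tables, with the indexes the queries below need
        db.diggs.create_index([("userid", DESCENDING), ("digdate", DESCENDING)])
        db.friends.create_index("userid")

        db.friends.insert_one({"userid": 1, "friendid": 2, "mutual": True})
        db.diggs.insert_one({"itemid": 42, "userid": 2, "digdate": "2010-04-01T12:00:00"})

        # "What have my friends dugg lately?" - two queries instead of a join
        friend_ids = [f["friendid"] for f in db.friends.find({"userid": 1}, {"friendid": 1})]
        recent = db.diggs.find({"userid": {"$in": friend_ids}}).sort("digdate", DESCENDING).limit(20)
        for digg in recent:
            print(digg["itemid"], digg["userid"], digg["digdate"])

    The $in query still scales with the number of friends, which is exactly the pressure that pushed Digg toward precomputing per-user feeds; MongoDB does not remove that trade-off, it just changes where you pay for it.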

    Read the article

  • C# Virtual method call in constructor - how to refactor?

    - by Cristi Diaconescu
    I have an abstract class for database-agnostic cursor actions. Derived from that, there are classes that implement the abstract methods for handling database-specific stuff. The problem is, the base class ctor needs to call an abstract method - when the ctor is called, it needs to initialize the database-specific cursor. I know why this shouldn't be done, I don't need that explanation! This is my first implementation, that obviously doesn't work - it's the textbook "wrong way" of doing it. The overridden method accesses a field from the derived class, which is not yet instantiated:

        public abstract class CursorReader
        {
            private readonly int m_rowCount;

            protected CursorReader()
            {
                m_rowCount = CreateCursor(sqlCmd); //virtual call !
            }

            protected abstract int CreateCursor(string sqlCmd);
        }

        public class SqlCursorReader : CursorReader
        {
            private SqlConnection m_sqlConnection;

            public SqlCursorReader(string sqlCmd, SqlConnection sqlConnection)
            {
                m_sqlConnection = sqlConnection; //field initialized here
            }

            protected override int CreateCursor(string sqlCmd)
            {
                //uses not-yet-initialized member *m_sqlConnection*
                //so this throws a NullReferenceException
                var cursor = new CustomCursor(sqlCmd, m_sqlConnection);
                return cursor.Count();
            }
        }

    I will follow up with an answer on my attempts to fix this...

    Read the article

  • Dynamic GUI Framework Designing

    - by user575715
    There is a scenario to be developed for a 3-tier application. We need to design a framework or a utility of sorts. In the traditional approach to GUI design, we either create a static GUI page and code the elements on it along with their other properties (disabled/enabled, image source, name, id, which function to call on the onclick event), or we drag and drop the elements from the control palette provided by a variety of GUI frameworks. I need to design a POC around a few points so that we can develop this concept:
    1) There must be a utility such that, during creation of a screen layout, that screen is saved in the database (RDBMS) with a screen number.
    2) All the events related to each control should be saved in some other table, which will be dynamically mapped when the user calls up the screen number.
    3) When the user calls that screen, a generic function should be invoked which will load the screen definition from the database and apply all the properties, events, etc. at runtime, and the final output will be displayed to the user.
    This POC will help us customise the screens according to our usage. Also, all the code will be separated so it can easily be reused in other development work. Thanks, Amit Kalra
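    A toy sketch of the "generic function" in point 3 - every table, column and handler name below is invented purely for illustration, and the rendering step is left abstract since it depends on the GUI toolkit:

        import sqlite3

        # Hypothetical handler registry: event names stored in the DB map to real functions here.
        HANDLERS = {"save_form": lambda: print("saving..."),
                    "close_form": lambda: print("closing...")}

        def load_screen(conn, screen_number):
            """Build a screen definition at runtime from the stored layout and event rows."""
            controls = conn.execute(
                "SELECT control_id, control_type, label, enabled FROM screen_controls "
                "WHERE screen_number = ?", (screen_number,)).fetchall()
            events = conn.execute(
                "SELECT control_id, event_name, handler_name FROM screen_events "
                "WHERE screen_number = ?", (screen_number,)).fetchall()

            screen = {cid: {"type": ctype, "label": label, "enabled": bool(enabled), "events": {}}
                      for cid, ctype, label, enabled in controls}
            for cid, event_name, handler_name in events:
                screen[cid]["events"][event_name] = HANDLERS[handler_name]
            return screen   # hand this structure to whatever actually renders the widgets

    The appeal of the approach is that changing a screen becomes a data change rather than a code change; the cost is that the handler registry and the renderer still have to be written, tested and versioned in code.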

    Read the article

  • Argument type deduction, references and rvalues

    - by uj2
    Consider the situation where a function template needs to forward an argument while keeping its lvalue-ness in case it's a non-const lvalue, but is itself agnostic to what the argument actually is, as in:

        template <typename T> void target(T&)       { cout << "non-const lvalue"; }
        template <typename T> void target(const T&) { cout << "const lvalue or rvalue"; }

        template <typename T> void forward(T& x) { target(x); }

    When x is an rvalue, instead of T being deduced to a constant type, it gives an error:

        int x = 0;
        const int y = 0;

        forward(x);            // T = int
        forward(y);            // T = const int
        forward(0);            // Hopefully, T = const int, but actually an error
        forward<const int>(0); // Works, T = const int

    It seems that for forward to handle rvalues (without calling for explicit template arguments) there needs to be a forward(const T&) overload, even though its body would be an exact duplicate. Is there any way to avoid this duplication?

    Read the article

  • What's the best way to communicate the purpose of a string parameter in a public API?

    - by Dave
    According to the guidance published in New Recommendations for Using Strings in Microsoft .NET 2.0, the data in a string may exhibit one of the following types of behavior:
    * A non-linguistic identifier, where bytes match exactly.
    * A non-linguistic identifier, where case is irrelevant, especially a piece of data stored in most Microsoft Windows system services.
    * Culturally-agnostic data, which still is linguistically relevant.
    * Data that requires local linguistic customs.
    Given that, I'd like to know the best way to communicate which behavior is expected of a string parameter in a public API. I wasn't able to find an answer in the Framework Design Guidelines. Consider the following methods:

        f(string this_is_a_linguistic_string)
        g(string this_is_a_symbolic_identifier_so_use_ordinal_compares)

    Is variable naming and XML documentation the best I can do? Could I use attributes in some way to mark the requirements of the string? Now consider the following case:

        h(Dictionary<string, object> dictionary)

    Note that the dictionary instance is created by the caller. How do I communicate that the callee expects the IEqualityComparer<string> object held by the dictionary to perform, for example, a case-insensitive ordinal comparison?

    Read the article

  • How to write these two queries for a simple data warehouse, using ANSI SQL?

    - by morpheous
    I am writing a simple data warehouse that will allow me to query the table to observe periodic (say weekly) changes in data, as well as changes in the change of the data (e.g. week to week change in the weekly sale amount). For the purposes of simplicity, I will present very simplified (almost trivialized) versions of the tables I am using here. The sales data table is a view and has the following structure:

        CREATE TABLE sales_data (
          sales_time date   NOT NULL,
          sales_amt  double NOT NULL
        )

    For the purpose of this question, I have left out other fields you would expect to see - like product_id, sales_person_id, etc. - as they have no direct relevance to this question. AFAICT, the only fields that will be used in the query are the sales_time and the sales_amt fields (unless I am mistaken). I also have a date dimension table with the following structure:

        CREATE TABLE date_dimension (
          id         integer NOT NULL,
          datestamp  date    NOT NULL,
          day_part   integer NOT NULL,
          week_part  integer NOT NULL,
          month_part integer NOT NULL,
          qtr_part   integer NOT NULL,
          year_part  integer NOT NULL
        );

    which partitions dates into reporting ranges. I need to write queries that will allow me to do the following:
    1) Return the change in week-on-week sales_amt for a specified period. For example, the change between sales today and sales N days ago - where N is a positive integer (N == 7 in this case).
    2) Return the change in the change of sales_amt for a specified period. In (1) we calculated the week-on-week change. Now we want to know how that change differs from the (week-on-week) change calculated last week.
    I am stuck at this point, as SQL is my weakest skill. I would be grateful if an SQL master could explain how I can write these queries in a DB-agnostic way (i.e. using ANSI SQL).
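    One way to express (1) and (2) with plain scalar subqueries - a sketch only, run through Python's sqlite3 purely so it is self-contained, with the date arithmetic done in the calling code since DATE interval syntax is one of the least portable parts of SQL:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE sales_data (sales_time date NOT NULL, sales_amt double NOT NULL);
            INSERT INTO sales_data VALUES
              ('2010-03-18', 100), ('2010-03-25', 120), ('2010-04-01', 150);
        """)

        # (1) week-on-week change for a chosen day, as the difference of two scalar subqueries
        wow = """
            SELECT (SELECT SUM(sales_amt) FROM sales_data WHERE sales_time = :day)
                 - (SELECT SUM(sales_amt) FROM sales_data WHERE sales_time = :prev)
        """
        change_this_week = conn.execute(wow, {"day": "2010-04-01", "prev": "2010-03-25"}).fetchone()[0]
        change_last_week = conn.execute(wow, {"day": "2010-03-25", "prev": "2010-03-18"}).fetchone()[0]

        # (2) change in the change: compare this week's delta with last week's delta
        print(change_this_week, change_last_week, change_this_week - change_last_week)   # 30.0 20.0 10.0

    Once the reporting ranges matter, the more general form is to join sales_data to date_dimension and group by week_part instead of filtering on individual dates.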

    Read the article

  • IE8 ignores jQuery UI 'dialog' minHeight and height settings

    - by Kev
    I'm using jQuery 1.4.4 with jQuery UI 1.8.7 to display a modal dialog box. I'm having a problem where IE8 renders a scrollbar and appears to ignore the minHeight and height options in all the many different combinations I've tried them in. In Chrome 8 and Firefox 3.6 my dialogue renders as expected; in IE 8 it gets the unwanted scrollbar. The markup and script look like:

        <a id="create" href="#">Create New Thing</a>

        <div id="dlg-create-thing" title="Create new thing?">
          <form name="create-thing-form" id="dlg-create-thing-form">
            <p style="text-align:left">
              <span>Name: <input id="thingName" name="thingName" maxlength="12" size="30" /></span>
              <br /><br />
              <b>Thing options:</b><br /><br />
              <input type="radio" id="option1" name="theoptions" value="0" checked="checked" />Use this option<br />
              <input type="radio" id="option1AndMem" name="theoptions" value="1" />Use this other option
            </p>
          </form>
        </div>

        <script type="text/javascript">
          $(function () {
            $("#dlg-create-thing").dialog({
              autoOpen: false,
              resizable: false,
              width: 500,
              modal: true,
              minHeight: 280,
              buttons: {
                "Create": function () { /* do stuff */ },
                "Cancel": function () { /* do other stuff */ }
              }
            });

            $("body").delegate("a[id='create']", "click", function () {
              $("#dlg-create-thing").dialog('open');
              return false;
            });
          });
        </script>

    How can I fix this (preferably in a nice browser-agnostic way, but I'd settle for anything)?

    Read the article

  • Multithreading: Read from / write to a pipe

    - by Tero Jokinen
    I write some data to a pipe - possibly lots of data and at random intervals. How do I read the data from the pipe? Is this OK:
    * in the main thread (current process), create two more threads (2, 3)
    * the second thread sometimes writes to the pipe (and flushes the pipe?)
    * the 3rd thread has an infinite loop which reads the pipe (and then sleeps for some time)
    Is this so far correct? Now, there are a few things I don't understand:
    * Do I have to lock (mutex?) the pipe on write?
    * IIRC, when writing to a pipe and its buffer gets full, the write end will block until I read the already written data, right?
    * How do I check for readable data in the pipe, not too often, not too rarely, so that the second thread won't block? Is there something like select for pipes?
    * Is it possible to set the pipe to unbuffered mode, or do I have to flush it regularly - which one is better?
    * Should I create one more thread just for flushing the pipe after write? Because flush blocks as well when the buffer is full, right? I just don't want the 1st and 2nd thread to block....
    [Edit] Sorry, I thought the question was platform agnostic, but just in case: I'm looking at this from a Win32 perspective, possibly MinGW C...
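    A minimal sketch of the two-thread shape, in Python over an anonymous OS pipe since the question is meant to be platform-agnostic (Win32-specific answers such as PeekNamedPipe or overlapped I/O are not shown here):

        import os
        import threading
        import time

        r_fd, w_fd = os.pipe()   # unnamed OS pipe: read end, write end

        def writer():
            for i in range(5):
                # os.write goes straight to the pipe; it only blocks when the pipe buffer is full.
                os.write(w_fd, ("message %d\n" % i).encode())
                time.sleep(0.2)   # writes happen "at random intervals"
            os.close(w_fd)        # closing the write end lets the reader see EOF

        def reader():
            while True:
                chunk = os.read(r_fd, 4096)   # blocks until data arrives or the writer closes
                if not chunk:
                    break                     # EOF: writer is done
                print("got:", chunk.decode().rstrip())

        t_w = threading.Thread(target=writer)
        t_r = threading.Thread(target=reader)
        t_r.start()
        t_w.start()
        t_w.join()
        t_r.join()

    Because each end of the pipe is used by exactly one thread, no mutex around the pipe itself is needed, and a blocking read removes the "how often do I poll?" question entirely.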

    Read the article

  • Database (and ORM) choice for an small-medium size .NET Application

    - by jim
    I have a requirement to develop a .NET-based application whose data requirements are likely to exceed the 4 GB limit of SQL Server 2005 Express Edition. There may be other customers of the same application (in the future) with a requirement to use a specific DB platform (such as Oracle or SQL Server) due to in-house DBA expertise. Questions:
    1) What RDBMS would you guys recommend? From the looks of it, the major choices are PostgreSQL, MySQL or Firebird. Of these, I've only got experience of MySQL.
    2) Which ORM tool (if any) would you recommend using - ideally one that can be swapped out between DB platforms with minimal effort? I like the look of the Entity Framework but am unsure as to the degree to which platforms other than SQL Server are supported. If it helps, we'll be using the 3.5 version of the Framework. I'm open to the idea of using a tool such as NHibernate. On the other hand, if it's going to be easier, I'm happy to write my own stored procedures / DAL code - there won't be that many tables (perhaps 30-35).

    Read the article

  • Custom stream wrappers, what could they be useful for in web applications?

    - by michael
    I suppose the concept is language agnostic, but I don't know what it's called in other languages. In PHP they're Stream Wrappers: in short, a wrapper class that allows manipulation of a streamable resource (a resource that can be read from, written to, or seeked in, such as a file, a db, a url). For example, in a template engine (a view), upon including a template file such as:

        include "view.wrapper://path/to/my/template/file.phtml";

    my custom wrapper, declared elsewhere and associated with "view.wrapper", would first intercept the file to replace such things as short tags (<?=) with a more verbose counterpart (<?php echo). This allows developers to use short tags in views, even if the server isn't set to allow it. It can also be applied to the preprocessing of view pseudo-syntax such as {@myVar} (e.g. replacing it with $this->myVar). This is only one application of custom stream wrappers, but the feature seems powerful enough to make me think that there are others that could make life a lot simpler for developers. What have you built, or thought about building, custom stream wrappers for? Where have you seen some interesting implementations? I'm particularly interested in their applications in web development.

    Read the article

  • Dynamic Data Extract Tools

    - by Kevin McGovern
    I've been searching around for a few weeks now for a tool that either is fully built or a direction of something I could build for dynamically extracting data via a web interface. Basically, what I'm looking for is a way to give users a list of all available data objects from our database and then let them pick ones from the list they'd like to view and set parameters then export the results to an excel file. Right now we're doing it purely with SQL statements but we have hundreds of objects so as you might imagine, those statements are really complex and prone to errors. It would be great if there was a tool available to do this or if someone had an idea of an easy way to organize this. Any help would be greatly appreciated. We've looked at BI tools like QlikView and Tableau but that is probably overkill for what we're trying to do. The open-source BI tools we've looked at seemed really primitive in their functionality. The other thing we looked at was MSAS (our DB is SQL Server) but I'd prefer something that was more database-agnostic and lived on a web server instead of on the database.

    Read the article

  • dhcp-snooping option 82 drops valid dhcp requests on 2610 series Procurve switches

    - by kce
    We are slowly starting to implement dhcp-snooping on our HP ProCurve 2610 series switches, all running the R.11.72 firmware. I'm seeing some strange behavior where dhcp-request or dhcp-renew packets are dropped when originating from "downstream" switches due to "untrusted relay information from client". The full error:

        Received untrusted relay information from client <mac-address> on port <port-number>

    In more detail, we have a 48-port HP2610 (Switch A) and a 24-port HP2610 (Switch B). Switch B is "downstream" of Switch A by virtue of a DSL connection to one of Switch A's ports. The dhcp server is connected to Switch A. The relevant bits are as follows:

        Switch A
        dhcp-snooping
        dhcp-snooping authorized-server 192.168.0.254
        dhcp-snooping vlan 1 168
        interface 25
           name "Server"
           dhcp-snooping trust
           exit

        Switch B
        dhcp-snooping
        dhcp-snooping authorized-server 192.168.0.254
        dhcp-snooping vlan 1
        interface Trk1
           dhcp-snooping trust
           exit

    The switches are set to trust BOTH the port the authorized dhcp server is attached to and its IP address. This is all well and good for the clients attached to Switch A, but the clients attached to Switch B get denied due to the "untrusted relay information" error. This is odd for a few reasons: 1) dhcp-relay is not configured on either switch, 2) the Layer-3 network here is flat, same subnet. DHCP packets should not have a modified option 82 attribute. dhcp-relay does appear to be enabled by default, however:

        SWITCH A# show dhcp-relay

         DHCP Relay Agent        : Enabled
         Option 82               : Disabled
         Response validation     : Disabled
         Option 82 handle policy : append
         Remote ID               : mac

         Client Requests       Server Responses
         Valid      Dropped    Valid      Dropped
         ---------- ---------- ---------- ----------
         0          0          0          0

        SWITCH B# show dhcp-relay

         DHCP Relay Agent        : Enabled
         Option 82               : Disabled
         Response validation     : Disabled
         Option 82 handle policy : append
         Remote ID               : mac

         Client Requests       Server Responses
         Valid      Dropped    Valid      Dropped
         ---------- ---------- ---------- ----------
         40156      0          0          0

    And interestingly enough, the dhcp-relay agent seems very busy on Switch B, but why? As far as I can tell there is no reason why dhcp requests need a relay with this topology. And furthermore, I can't tell why the upstream switch is dropping legitimate dhcp requests for untrusted relay information when the relay agent in question (on Switch B) isn't modifying the option 82 attributes anyway. Adding the no dhcp-snooping option 82 on Switch A allows the dhcp traffic from Switch B to be approved by Switch A, by virtue of just turning off that feature. What are the repercussions of not validating option 82 modified dhcp traffic? If I disable option 82 on all my "upstream" switches, will they pass dhcp traffic from any downstream switch regardless of that traffic's legitimacy?

    This behavior is client operating system agnostic. I see it with both Windows and Linux clients. Our DHCP servers are either Windows Server 2003 or Windows Server 2008 R2 machines. I see this behavior regardless of the DHCP servers' operating system. Can anyone shed some light on what's happening here and give me some recommendations on how I should proceed with configuring the option 82 setting? I feel like I just haven't completely grokked dhcp-relaying and option 82 attributes.

    Read the article

  • Pros/Cons of switching from Exchange to GMail

    - by Brent
    We are a medium-large non-profit company, with around 1000 staff and volunteers, and have been using MS Exchange (currently 2003) for our mail system for years. I recently attended a Google conference where they were positing that "cloud computing is the way of the future", and encouraging us to switch from doing our own email with Exchange to using GMail and Google Apps for everything. Additionally, one of our departments has been pushing from inside to do this transition within their own department, if not throughout the entire organization. I can definitely see some benefits, such as:
    * Archive space - we never seem to have the space our users want, and of course, the more we get, the more we have to back up.
    * OS agnostic - Exchange is definitely built for Windows, and with Mac and Linux users on the rise, these users increasingly demand better tools / support. Google offers this.
    * Better archiving - the potential of e-discovery, which doesn't exist in a practical way with our current setup.
    Switching would relieve us of a fair bit of server administration, give more options to our end users, and free up the server resources we are now using for Exchange. Our IT department wants to be perceived as providing up-to-date solutions to technical problems, and this change would definitely provide such an image. Google's infrastructure is obviously much more robust than ours, and they employ some of the world's best security and network experts. However, there are also some serious drawbacks:
    * We would be essentially outsourcing one of our mission-critical systems to a 3rd party.
    * The switch would inevitably involve Google Apps and perhaps more as well. That means we would have a lot more at the mercy of a single (potentially weak) password. (Is there a way to make this more secure using a password plus a physical key of some sort??)
    * Our data would not remain under our roof - or even in our country (Canada). This obviously has plusses on the disaster recovery side, but I think there are potential negatives on the legal side.
    * I can't imagine that somebody as large as Google would be as responsive as we would want with regard to non-critical issues such as tracing missing emails, etc. (not sure how much access we would have to basic mail logs, for instance).
    Can anyone help me evaluate this decision? What issues am I overlooking? What experiences have you had with this transition (or the opposite - GMail to Exchange)? Can you add to the points I have already outlined?

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Organizations see the need for computing infrastructures that they can “rent” or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment: Private data (do not want to send or store sensitive data off-site) High dollar investment in current infrastructure Applications currently running well, but may need additional periodic capacity Current applications not designed in a stateless fashion In these situations, a “hybrid” approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping a majority of the computing function in an organization local while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A “hybrid” architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization. Implementation: There are multiple methods for implementing a hybrid architecture, in a spectrum from very little interaction from the local infrastructure to Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed. 1. Client-Centric Hybrid Patterns In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The “client” in this case might be a web-based codeset actually stored on another system (which acts as a client, the user’s device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it’s the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client such as in the web scenario. 2. System-Centric Hybrid Patterns Another approach is to create a distributed architecture by turning on-site systems into “services” that can be called from Windows Azure using the service Bus or the Access Control Services (ACS) capabilities. Code calls from a series of in-process client application. In this pattern you move the “client” interface into the server application logic. If you do not wish to change the application itself, you can “layer” the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Definition Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and has the advantage of de-coupling your computing architecture. If each system offers a “service” of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract. 
There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application.
    * Connection resiliency - Applications on-premise normally have low latency and good connection properties, something you're not always guaranteed in a distributed and hybrid application. Whether a centralized client or a distributed one, the code should be able to handle extended retry logic.
    * Authorization and Access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Windows Azure Application Fabric feature of ACS to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.
    * Consistency and Concurrency - When you have a Relational Database Management System (RDBMS), Consistency and Concurrency are part of the design. In a Service Architecture, you need to plan for sequential message handling and lifecycle.
    Resources:
    * How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
    * General Architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
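    The "extended retry logic" point above is technology-neutral; a minimal back-off sketch, not tied to any Azure SDK (the operation being retried is hypothetical):

        import random
        import time

        def call_with_retries(operation, attempts=5, base_delay=0.5):
            """Call a flaky remote operation, retrying with exponential back-off and jitter."""
            for attempt in range(attempts):
                try:
                    return operation()
                except (ConnectionError, TimeoutError):
                    if attempt == attempts - 1:
                        raise                                   # out of retries: surface the failure
                    sleep_for = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
                    time.sleep(sleep_for)                       # back off before the next try

        # usage: call_with_retries(lambda: client.fetch_orders())   # 'client' is a placeholder

    Which exceptions count as transient, and how long the back-off runs before giving up, are policy decisions that belong with the hybrid application rather than the library.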

    Read the article

  • Going to the Score Cards - Exceptional DBA Awards 2011

    - by Rodney
    This year marks my 4th year as a judge for the Exceptional DBA Awards, founded by Red Gate in 2008 to "recognize the essential but often overlooked contributions of DBAs, the unsung heroes of the IT community." As a professional DBA myself I have been honored to participate as a judge. It is not an easy job because there is a voluminous amount of nominees from all over the world. Each judge has to read through every word of the nominee's answers, deciding what makes each person special and stand out amongst their peers. What drives them? What single element of their submission will shine above all others? It is my hope that what I am about to divulge to you as a judge will prompt you to think about yourself or someone you know and decide that you may be the exceptional DBA who can take home the gold at this year's award ceremony in Seattle. We are more than a few weeks into the nomination process and there are quite a number of submissions already. I can not tell you how many as that would not be fair. I can say it is not 1 million or more. I can also say that it is not 100,000. But that is all I can say about that. However, I can tell you that it is enough this year that we are breaking records on the number of people who have been influenced, inspired or intrigued by the awards in the past. I remember them all like it were yesterday. fuzzy thought cloud here. It was a rainy day in Seattle (all memories for each award ceremony will start thusly) and I was in the hotel going over my notes on what I wanted to say about the winner of the 2008 Red Gate Exceptional DBA Award. The notes were on index cards that I had either bought or stolen from my wife, I do not recall, but I was nervous which was unlike me. This was, after all, a big night for the winner. Of course, we, the judges and the SQL community, had already decided the winner and now all that remained was to present the award. The room was packed. It was Casino night, sponsored by sqlservercentral.com. Money (fake), drinks (not fake) and camaraderie flowed through the room. Dan McClain won the award that year. He worked for Anheuser-Busch at the time. I promise that did not influence my decision. We presented Dan with the award. He was very proud of this achievement, rightfully so, as was the SQL community for him. I spoke with Dan throughout the conference and realized how huge this award was for him, not just personally but professionally. It was a rainy day in Seattle in 2009 and I was nervous. I was asked to speak to a group of people again as a judge for the Exceptional DBA Awards. This year, Josef Richberg would be the recipient of the award, but he would not be able to attend. We all prayed for him as he fought through an illness and congratulated him for his accomplishments as a DBA for his company. He got better and sallied forth and continued to give back to the SQL community that he saw as one big family. In 2010, and I am getting ahead of myself, he was asked to be a judge himself for the very award he had just received the year before. It was a sunny day in Seattle and I missed it, because it was in July and I was not there. It was a rainy day in Seattle and it is 2010 and Tracy Hamlin enters a submission that blows this judge away. She is managing a 50 Terabyte distributed database ("50 Gigabytes! Are you kidding me!!!", Rodney jokes.)  and loves her daily job as a DBA working with developers, mentoring them and teaching them best practices with kindness and patience. 
She is a people person who just happens to have 10+ years experience with RDBMS'. She wins the award and goes on to be recognized as famous at PASS. It will be a rainy day in Seattle this year when I sit amongst my old constituent judges and friends, Brad McGehee, http://www.simple-talk.com/books/sql-books/how-to-become-an-exceptional-dba,-2nd-edition/, Steve Jones, whom we all know and love at http://www.sqlservercentral.com and a young upstart to the SQL Community, this cat named Brent Ozar to announce the 2011 winner. I personally have not heard of Brent but I am told I have interviewed him for a DBA position several years ago and turned him down, http://www.brentozar.com/archive/2011/05/exceptional-dba-contest/ . I hope that did not jeopardize his future in the SQL world. I am a big hearted oaf and would feel horrible. Hopefully I will meet him at PASS and we can work this all out and I can help him get a DBA job. The rain has stopped and a new year is upon us. The stakes are high...the competition is fierce...the rewards are incredible. The entry form awaits you. http://www.exceptionaldba.com/ I very much look forward to meeting you and presenting the award to you in front of hundreds of your envious but proud peers as the new Exceptional DBA for 2011 at the PASS Summit. Here is what you could win: The Exceptional DBA of the Year receives full conference registration for the 2011 PSS Summit in Seattle, where the awards ceremony will take place, four nights' hotel accommodation, and $300 towards travel expenses. They will also be featured on Simple-Talk. Are you ready? Are you nervous?

    Read the article

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh sharma, Oracle Enterprise Manager Ops Center team In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controllers highly available. With EC HA if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, then the enterprise services are immediately restarted on the other standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites and then show some screen shots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in a HA environment and some of the new commands. What is EC HA? Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and to start back up on a standby node. What are the prerequisites? To install EC in a HA framework an understanding of the prerequisites are required. There are many possibilities on how these prerequisites can be installed and configured - we will not discuss these in this post. However, best practices should be applied when installing and configuring, I would suggest that you get expert help if you are not familiar with them. Lets briefly look at each of these prerequisites in turn: Hardware : Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, network cards, for example. Operating System : We can use Solaris 10 9/10 or higher, Solaris 11, OEL 5.5 or higher on x86 or Sparc Network : There are a number of requirements for network cards in clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public / private and Virtual IP's (VIP's). Storage : Shared storage will be required for the cluster voting disks, Oracle Cluster Register (OCR) and the EC's libraries. Clusterware : Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html Remote Database : Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html For detailed information on how to install EC HA , please read : http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242 For detailed instructions on installing Oracle Clusterware, please read : http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII For detailed instructions on installing the remote Oracle database have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs the Enterprise Controller resources and the VIP are relocated to one of the standby nodes. 
The standby node then becomes active and all Ops Center services are resumed. Connecting to the Enterprise Controller from your favourite browser. Let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser i.e. http://<active_node1>/, this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols, these represent that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page, indicating that you have connected to the standby node. Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA frame work. Stay Connected: Twitter |  Face book |  You Tube |  Linked in |  Newsletter

    Read the article

  • HDFC Bank's Journey to Oracle Private Database Cloud

    - by Nilesh Agrawal
    One of the key takeaways from a recent post by Sushil Kumar is the importance of business initiative that drives the transformational journey from legacy IT to enterprise private cloud. The journey that leads to a agile, self-service and efficient infrastructure with reduced complexity and enables IT to deliver services more closely aligned with business requirements. Nilanjay Bhattacharjee, AVP, IT of HDFC Bank presented a real-world case study based on one such initiative in his Oracle OpenWorld session titled "HDFC BANK Journey into Oracle Database Cloud with EM 12c DBaaS". The case study highlighted in this session is from HDFC Bank’s Lending Business Segment, which comprises roughly 50% of Bank’s top line. Bank’s Lending Business is always under pressure to launch “New Schemes” to compete and stay ahead in this segment and IT has to keep up with this challenging business requirement. Lending related applications are highly dynamic and go through constant changes and every single and minor change in each related application is required to be thoroughly UAT tested certified before they are certified for production rollout. This leads to a constant pressure in IT for rapid provisioning of UAT databases on an ongoing basis to enable faster time to market. Nilanjay joined Sushil Kumar, VP, Product Strategy, Oracle, during the Enterprise Manager general session at Oracle OpenWorld 2012. Let's watch what Nilanjay had to say about their recent Database cloud deployment. “Agility” in launching new business schemes became the key business driver for private database cloud adoption in the Bank. Nilanjay spent an hour discussing it during his session. Let's look at why Database-as-a-Service(DBaaS) model was need of the hour in this case  - Average 3 days to provision UAT Database for Loan Management Application Silo’ed UAT environment with Average 30% utilization Compliance requirement consume UAT testing resources DBA activities leads to $$ paid to SI for provisioning databases manually Overhead in managing configuration drift between production and test environments Rollout impact/delay on new business initiatives The private database cloud implementation progressed through 4 fundamental phases - Standardization, Consolidation, Automation, Optimization of UAT infrastructure. Project scoping was carried out and end users and stakeholders were engaged early on right from planning phase and including all phases of implementation. Standardization and Consolidation phase involved multiple iterations of planning to first standardize on infrastructure, db versions, patch levels, configuration, IT processes etc and with database level consolidation project onto Exadata platform. It was also decided to have existing AIX UAT DB landscape covered and EM 12c DBaaS solution being platform agnostic supported this model well. Automation and Optimization phase provided the necessary Agility, Self-Service and efficiency and this was made possible via EM 12c DBaaS. EM 12c DBaaS Self-Service/SSA Portal was setup with required zones, quotas, service templates, charge plan defined. There were 2 zones implemented - Exadata zone  primarily for UAT and benchmark testing for databases running on Exadata platform and second zone was for AIX setup to cover other databases those running on AIX. Metering and Chargeback/Showback capabilities provided business and IT the framework for cloud optimization and also visibility into cloud usage. 
More details on UAT cloud implementation, related building blocks and EM 12c DBaaS solution are covered in Nilanjay's OpenWorld session here. Some of the key Benefits achieved from UAT cloud initiative are - New business initiatives can be easily launched due to rapid provisioning of UAT Databases [ ~3 hours ] Drastically cut down $$ on SI for DBA Activities due to Self-Service Effective usage of infrastructure leading to  better ROI Empowering  consumers to provision database using Self-Service Control on project schedule with DB end date aligned to project plan submitted during provisioning Databases provisioned through Self-Service are monitored in EM and auto configured for Alerts and KPI Regulatory requirement of database does not impact existing project in queue This table below shows typical list of activities and tasks involved when a end user requests for a UAT database. EM 12c DBaaS solution helped reduce UAT database provisioning time from roughly 3 days down to 3 hours and this timing also includes provisioning time for database with production scale data (ranging from 250 G to 2 TB of data) - And it's not just about time to provision,  this initiative has enabled an agile, efficient and transparent UAT environment where end users are empowered with real control of cloud resources and IT's role is shifted as enabler of strategic services instead of being administrator of all user requests. The strong collaboration between IT and business community right from planning to implementation to go-live has played the key role in achieving this common goal of enterprise private cloud. Finally, real cloud is here and this cloud is accompanied with rain (business benefits) as well ! For more information, please go to Oracle Enterprise Manager  web page or  follow us at :  Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Online Password Security Tactics

    - by BuckWoody
    Recently two more large databases were attacked and compromised, one at the popular Gawker Media sites and the other at McDonald’s. Every time this kind of thing happens (which is FAR too often) it should remind the technical professional to ensure that they secure their systems correctly. If you write software that stores passwords, it should be heavily encrypted, and not human-readable in any storage. I advocate a different store for the login and password, so that if one is compromised, the other is not. I also advocate that you set a bit flag when a user changes their password, and send out a reminder to change passwords if that bit isn’t changed every three or six months.    But this post is about the *other* side – what to do to secure your own passwords, especially those you use online, either in a cloud service or at a provider. While you’re not in control of these breaches, there are some things you can do to help protect yourself. Most of these are obvious, but they contain a few little twists that make the process easier.   Use Complex Passwords This is easily stated, and probably one of the most un-heeded piece of advice. There are three main concepts here: ·         Don’t use a dictionary-based word ·         Use mixed case ·         Use punctuation, special characters and so on   So this: password Isn’t nearly as safe as this: P@ssw03d   Of course, this only helps if the site that stores your password encrypts it. Gawker does, so theoretically if you had the second password you’re in better shape, at least, than the first. Dictionary words are quickly broken, regardless of the encryption, so the more unusual characters you use, and the farther away from the dictionary words you get, the better.   Of course, this doesn’t help, not even a little, if the site stores the passwords in clear text, or the key to their encryption is broken. In that case…   Use a Different Password at Every Site What? I have hundreds of sites! Are you kidding me? Nope – I’m not. If you use the same password at every site, when a site gets attacked, the attacker will store your name and password value for attacks at other sites. So the only safe thing to do is to use different names or passwords (or both) at each site. Of course, most sites use your e-mail as a username, so you’re kind of hosed there. So even though you have hundreds of sites you visit, you need to have at least a different password at each site.   But it’s easier than you think – if you use an algorithm.   What I’m describing is to pick a “root” password, and then modify that based on the site or purpose. That way, if the site is compromised, you can still use that root password for the other sites.   Let’s take that second password: P@ssw03d   And now you can append, prepend or intersperse that password with other characters to make it unique to the site. That way you can easily remember the root password, but make it unique to the site. For instance, perhaps you read a lot of information on Gawker – how about these:   P@ssw03dRead ReadP@ssw03d PR@esasdw03d   If you have lots of sites, tracking even this can be difficult, so I recommend you use password software such as Password Safe or some other tool to have a secure database of your passwords at each site. DO NOT store this on the web. DO NOT use an Office document (Microsoft or otherwise) that is “encrypted” – the encryption office automation packages use is very trivial, and easily broken. A quick web search for tools to do that should show you how bad a choice this is.   
Change Your Password on a Schedule
    I know. It's a real pain. And it doesn't seem worth it... until your account gets hacked. A quick note here - whenever a site gets hacked (and I find out about it) I change the password at that site immediately (or quit doing business with them) and then change the root password on every site, as quickly as I can.

    If you follow the tip above, it's not as hard. Just add another number, year, month, day, something like that into the mix. It's not unlike making a Primary Key in an RDBMS.

        P@ssw03dRead10242010

    Change the site, and then update your password database. I do this about once a month, on the first or last day, during staff meetings. :)

    If you have other tips, post them here. We can all learn from each other on this.
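    On the server side, the post's advice that stored passwords "should be heavily encrypted, and not human-readable in any storage" is usually realized with a salted, slow hash rather than reversible encryption; a minimal sketch using only Python's standard library (parameters are illustrative, not a policy recommendation):

        import hashlib
        import hmac
        import os

        def hash_password(password: str):
            """Return (salt, derived_key) from a slow, salted KDF - never store the raw password."""
            salt = os.urandom(16)
            key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
            return salt, key

        def verify_password(password: str, salt: bytes, expected_key: bytes) -> bool:
            key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
            return hmac.compare_digest(key, expected_key)   # constant-time comparison

        salt, stored = hash_password("P@ssw03dRead")
        print(verify_password("P@ssw03dRead", salt, stored))   # True
        print(verify_password("password", salt, stored))       # False

    Keeping the login and the password hash in separate stores, as the post suggests, is orthogonal to this and can be layered on top.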

    Read the article

  • Becoming A Great Developer

    - by Lee Brandt
    Image via Wikipedia I’ve been doing the whole programming thing for awhile and reading and watching some of the best in the business. I have come to notice that the really great developers do a few things that (I think) makes them great. Now don’t get me wrong, I am not saying that I am one of these few. I still struggle with doing some of the things that makes one great at development. Coincidently, many of these things also make you a better person period. Believe That Guidance Is Better Than Answers This is one I have no problem with. I prefer guidance any time I am learning from another developer. Answers may get you going, but guidance will leave you stranded. At some point, you will come across a problem that can only be solved by thinking for yourself and this is where that guidance will really come in handy. You can use that guidance and extrapolate whatever technology to salve that problem (if it’s the right tool for solving that problem). The problem is, lots of developers simply want someone to tell them, “Do this, then this, then set that, and write this.” Favor thinking and learn the guidance of doing X and don’t ask someone to show you how to do X, if that makes sense. Read, Read and Read If you don’t like reading, you’re probably NOT going to make it into the Great Developer group. Great developers read books, they read magazines and they read code. Open source playgrounds like SourceForge, CodePlex and GitHub, have made it extremely easy to download code from developers you admire and see how they do stuff. Chances are, if you read their blog too, they’ll even explain WHY they did what they did (see “Guidance” above). MSDN and Code Magazine have not only code samples, but explanations of how to use certain technologies and sometimes even when NOT to use that same technology. Books are also out on just about every topic. I still favor the less technology centric books. For instance, I generally don’t buy books like, “Getting Started with Jiminy Jappets”. I look for titles like, “How To Write More Effective Code” (again, see guidance). The Addison-Wesley Signature Series is a great example of these types of books. They teach technology-agnostic concepts. Head First Design Patterns is another great guidance book. It teaches the "Gang Of Four" Design Patterns in a very easy-to-understand, picture-heavy way (I LIKE pictures). Hang Your Balls Out There Even though the advice came from a 3rd-shift Kinko’s attendant, doesn’t mean it’s not sound advice. Write some code and put it out for others to read, criticize and castigate you for. Understand that there are some real jerks out there who are absolute geniuses. Don’t be afraid to get some great advice wrapped in some really nasty language. Try to take what’s good about it and leave what’s not. I have a tough time with this myself. I don’t really have any code out there that is available for review (other than my demo code). It takes some guts to do, but in the end, there is no substitute for getting a community of developers to critique your code and give you ways to improve. Get Involved Speaking of community, the local and online user groups and discussion forums are a great place to hear about technologies and techniques you might never come across otherwise. Mostly because you might not know to look. But, once you sit down with a bunch of other developers and start discussing what you’re interested in, you may open up a whole new perspective on it. 
Don’t just go to the UG meetings and watch the presentations either, get out there and talk, socialize. I realize geeks weren’t meant to necessarily be social creatures, but if you’re amongst other geeks, it’s much easier. I’ve learned more in the last 3-4 years that I have been involved in the community that I did in my previous 8 years of coding without it. Socializing works, even if socialism doesn’t. Continuous Improvement Lean proponents might call this “Kaizen”, but I call it progress. We all know, especially in the technology realm, if you’re not moving ahead, you’re falling behind. It may seem like drinking from a fire hose, but step back and pick out the technologies that speak to you. The ones that may you’re little heart go pitter-patter. Concentrate on those. If you’re still overloaded, pick the best of the best. Just know that if you’re not looking at the code you wrote last week or at least last year with some embarrassment, you’re probably stagnating. That’s about all I can say about that, cause I am all out of clichés to throw at it. :0) Write Code Great painters paint, great writers write, and great developers write code. The most sure-fire way to improve your coding ability is to continue writing code. Don’t just write code that your work throws on you, pick that technology you love or are curious to know more about and walk through some blog demo examples. Take the language you use everyday and try to get it to do something crazy. Who knows, you might create the next Google search algorithm! All in all, being a great developer is about finding yourself in all this code. If it is just a job to you, you will probably never be one of the “Great Developers”, but you’re probably okay with that. If, on the other hand, you do aspire to greatness, get out there and GET it. No one’s going hand it to you.

    Read the article
