Search Results

Search found 58486 results on 2340 pages for 'data integrator'.


  • SilverlightShow for Jan 7-13, 2011

    - by Dave Campbell
    Check out the Top Five most popular news at SilverlightShow for Feb 7-13, 2011. Here's SilverlightShow's top 5 news for last week: Data Binding in Silverlight with RIA and EntityFramework - Part 1 (Displaying Data); Simple Silverlight MVVM Base Class; This Thursday: Free Windows Phone 7 Webinar by Telerik; A Circular ProgressBar Style using an Attached ViewModel; Using SilverLight 4 Line Of Business Application Development Template (LOB). Visit and bookmark SilverlightShow. Stay in the 'Light

    Read the article

  • Solving “The Select operation is not supported by .. unless the SelectMethod is specified.”

    - by anas
    In most cases, you will get that error when you are using a data source control (like ObjectDataSource) without setting its SelectMethod as the data source for the DetailsView control. If you want to display one record in the DetailsView control to allow the user to edit it, then you should set the SelectMethod for the data source control, otherwise the DetailsView control will not be able to get the record from the underlying data source. But what if you are using the DetailsView only for inserting...(read more)
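    For illustration only, here is a minimal C# sketch (every class and method name below is hypothetical, not taken from the post) of the kind of business object an ObjectDataSource's SelectMethod and InsertMethod could point to, so a DetailsView bound to it can display one record or insert a new one:

        // Hypothetical record type returned to the DetailsView.
        public class CustomerRecord
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Hypothetical business object referenced by an ObjectDataSource.
        public class CustomerDataService
        {
            // Pointed at by the ObjectDataSource's SelectMethod so the bound
            // DetailsView can render the single record it is asked to edit.
            public CustomerRecord GetCustomer(int id)
            {
                // Placeholder lookup; a real implementation would query the database.
                return new CustomerRecord { Id = id, Name = "Sample" };
            }

            // Pointed at by the ObjectDataSource's InsertMethod when the
            // DetailsView is used purely for inserting new records.
            public void InsertCustomer(CustomerRecord customer)
            {
                // Persist the new record here.
            }
        }

    In the page markup you would then point the ObjectDataSource's TypeName, SelectMethod and InsertMethod at this class.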

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed computing - and more importantly "-as-a-Service" models of computing - have a different cost model. This sounds obvious on the surface, but it's often forgotten during the design and coding phases of a project. In on-premises computing, we're used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or "sunk" cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you've already paid for, we don't often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better.

    In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don't buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things. It's not just that you're using things that now cost money - it's that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie back the cost of a series of clicks to what a user will pay to do so, you can set a profit margin that is easy to track.

    Here's a case in point: assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don't monitor the path of the application, you may not know what you are really using. Since you're paying by the size of the instance, it's best to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and tied the cost more closely to the profit.

    The key is this: from the very outset - the design - make sure you include metrics to measure the cost/performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on-premises into your new application structure.
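    As a purely generic illustration of that last point (nothing below comes from the post or from any Azure SDK; the helper and its names are made up), a C# sketch of counting and timing billable calls from the outset might look like this:

        using System;
        using System.Diagnostics;
        using System.Threading;

        // Hypothetical helper: counts billable storage calls and records their
        // latency so cost and performance can be tracked together from day one.
        public static class CostMetrics
        {
            private static long _storageCalls;
            private static long _totalMilliseconds;

            public static T MeasureStorageCall<T>(Func<T> call)
            {
                var watch = Stopwatch.StartNew();
                try
                {
                    return call();
                }
                finally
                {
                    watch.Stop();
                    Interlocked.Increment(ref _storageCalls);
                    Interlocked.Add(ref _totalMilliseconds, watch.ElapsedMilliseconds);
                }
            }

            public static string Report() =>
                $"{_storageCalls} storage calls, {_totalMilliseconds} ms total";
        }

    You would then wrap each metered call, e.g. var rows = CostMetrics.MeasureStorageCall(() => LoadFromTableStorage(key)); where LoadFromTableStorage stands in for whatever storage call you are measuring.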

    Read the article

  • Upgrading 10.04LTS -> 10.10 using custom sources

    - by Boatzart
    I'm trying to upgrade to 10.10 from 10.04 LTS using a custom sources.list file that points to an unofficial mirror*. The mirror does have maverick, but I get the following output when upgrading: boatzart@somecomputer: > sudo do-release-upgrade Checking for a new ubuntu release Done Upgrade tool signature Done Upgrade tool Done downloading extracting 'maverick.tar.gz' authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg' tar: Removing leading `/' from member names Reading cache Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Updating repository information WARNING: Failed to read mirror file No valid mirror found While scanning your repository information no mirror entry for the upgrade was found. This can happen if you run a internal mirror or if the mirror information is out of date. Do you want to rewrite your 'sources.list' file anyway? If you choose 'Yes' here it will update all 'lucid' to 'maverick' entries. If you select 'No' the upgrade will cancel. Continue [yN] y WARNING: Failed to read mirror file 96% [Working] Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Calculating the changes Calculating the changes Could not calculate the upgrade An unresolvable problem occurred while calculating the upgrade: The package 'update-manager-kde' is marked for removal but it is in the removal blacklist. This can be caused by: * Upgrading to a pre-release version of Ubuntu * Running the current pre-release version of Ubuntu * Unofficial software packages not provided by Ubuntu If none of this applies, then please report this bug against the 'update-manager' package and include the files in /var/log/dist-upgrade/ in the bug report. Restoring original system state Aborting Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Here is the relevant section from /var/log/dist-upgrade/main.log: 2010-11-18 14:05:52,117 DEBUG The package 'update-manager-kde' is marked for removal but it's in the removal blacklist 2010-11-18 14:05:52,136 ERROR Dist-upgrade failed: 'The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.' 2010-11-18 14:05:52,136 DEBUG abort called *I'm located inside of USC, and for some crazy reason any sustained downloads to anywhere outside of the University are throttled down to 5kbps inside of my lab. Because of this I need to use the following sources.list: deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-updates main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-backports main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-security main restricted universe multiverse I've tried adding four more entries to the sources.list with s/lucid/maverick/ but that didn't help. Does anyone know how to fix this? Thanks!

    Read the article

  • Is there a canonical resource on multi-tenancy web applications using ruby + rails

    - by AlexC
    Is there a canonical resource on multi-tenancy web applications using Ruby + Rails? There are a number of ways to develop Rails apps using cloud capabilities with real elastic properties, but there seems to be a lack of clarity on how to achieve multi-tenancy, specifically at the model/data level. Is there a canonical resource on options for developing multi-tenant Rails applications with the characteristics of data separation, security, concurrency and contention required by an enterprise-level cloud application?

    Read the article

  • The Buzz at the JavaOne Bookstore

    - by Janice J. Heiss
    I found my way to the JavaOne bookstore, a hub of activity. Who says brick and mortar bookstores are dead? I asked what was hot and got two answers: Hadoop in Practice by Alex Holmes was doing well. And Scala for the Impatient by noted Java Champion Cay Horstmann also seemed to be a fast seller.

    Hadoop in Practice
    Hadoop is a framework that organizes large clusters of computers around a problem. It is touted as especially effective for large amounts of data, and is used by such companies as Facebook, Yahoo, Apple, eBay and LinkedIn. Hadoop in Practice collects nearly 100 Hadoop examples and presents them in a problem/solution format with step-by-step explanations of solutions and designs. It's very much a participatory book intended to make developers more at home with Hadoop. The author, Alex Holmes, is a senior software engineer with more than 15 years of experience developing large-scale distributed Java systems. For the last four years, he has gained expertise in Hadoop solving Big Data problems across a number of projects. He has presented at JavaOne and Jazoon and is currently a technical lead at VeriSign. At this year's JavaOne, he is presenting a session with VeriSign colleague Karthik Shyamsunder called "Java: A Perfect Platform for Data Science" where they will explain how the Java platform has emerged as a perfect platform for practicing data science, and also talk about such technologies as Hadoop, Hive, Pig, HBase, Cassandra, and Mahout.

    Scala for the Impatient
    San Jose State University computer science professor and Java Champion Cay Horstmann is the principal author of the highly regarded Core Java. Scala for the Impatient is a basic, practical introduction to Scala for experienced programmers. Horstmann has a presentation summarizing the themes of his book at his website. On the final page he offers an enticing summary of his conclusions:
    * Widespread dissatisfaction with Java + XML + IDEs -- "Don't make me eat Elephant again"
    * A separate language for every problem domain is not efficient -- it takes time to master the idioms
    * "JavaScript Everywhere" isn't going to scale
    * Trend is towards languages with more expressive power, less boilerplate
    * Will Scala be the "one ring to rule them"? Maybe -- if it succeeds in industry, and if student-friendly subsets and tools are created

    The popularity of both books echoed comments by IBM Distinguished Engineer Jason McGee, who closed his part of the Sunday JavaOne keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them - JavaScript, JRuby, Scala, Python and so forth. Java developers increasingly must know the strengths and weaknesses of such languages going forward.

    Read the article

  • Oracle Magazine, July/August 2009

    Oracle Magazine July/August features articles on business efficiency with Oracle data warehousing, business intelligence and enterprise performance management; Oracle Enterprise Linux and Oracle Unbreakable Linux support, Oracle OpenWorld preview, open source, Oracle Application Development Framework, best PL/SQL practices, security for Oracle Application Express applications, Microsoft Visual Studio for .NET and Oracle Database, Oracle Data Pump, Tom Kyte answering your questions and much more.

    Read the article

  • Oracle University New Courses (Week 14)

    - by swalker
    Oracle University released the following new (versions of) courses recently: Database Oracle Data Modeling and Relational Database Design (4 days) Fusion Middleware Oracle Directory Services 11g: Administration (5 days) Oracle Unified Directory 11g: Services Deployment Essentials (2 days) Oracle GoldenGate 11g Management Pack: Overview (1 day) Business Intelligence & Datawarehousing Oracle Database 11g: Data Mining Techniques (2 days) Oracle Solaris Oracle Solaris 10 System Administration for HP-UX Administrators (5 days) E-Business Suite R12.x Oracle Time and Labor Fundamentals Get in contact with your local Oracle University team for more details and course dates. Stay Connected to Oracle University: LinkedIn OracleMix Twitter Facebook Google+

    Read the article

  • Why is Double.Parse so slow?

    - by alexhildyard
    I was recently investigating a bottleneck in one of my applications, which read a CSV file from disk using a TextReader a line at a time, split the tokens, called Double.Parse on each one, then shunted the results into an object list. I was surprised to find it was actually the Double.Parse which seemed to be taking up most of the time. Googling turned up this, which is a little unfocused in places but throws out some excellent ideas:
    * It makes more sense to work with binary format directly, rather than coerce strings into doubles
    * There is a significant performance improvement in composing doubles directly from the byte stream via long intermediaries
    * String.Split is inefficient on fixed length records
    In fact it turned out that my problem was more insidious and also more mundane -- a simple case of bad data in, bad data out. Since I had been serialising my Doubles as strings, when I inadvertently divided by zero and produced a "NaN", this of course was serialised as well without error. And because I was reading in using Double.Parse, these "NaN" fields were also (correctly) populating real Double objects without error. The issue is that Double.Parse("NaN") is incredibly slow. In fact, it is of the order of 2000x slower than parsing a valid double. For example, the code below gave me results of 357ms to parse 1000 NaNs, versus 15ms to parse 100,000 valid doubles.

        const int invalid_iterations = 1000;
        const int valid_iterations = invalid_iterations * 100;
        const string invalid_string = "NaN";
        const string valid_string = "3.14159265";

        DateTime start = DateTime.Now;

        for (int i = 0; i < invalid_iterations; i++)
        {
            double invalid_double = Double.Parse(invalid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of invalid double, time taken (ms): {1}",
            invalid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

        start = DateTime.Now;

        for (int i = 0; i < valid_iterations; i++)
        {
            double valid_double = Double.Parse(valid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of valid double, time taken (ms): {1}",
            valid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

    I think the moral is to look at the context -- specifically the data -- as well as the code itself. Once I had corrected my data, the performance of Double.Parse was perfectly acceptable, and while clearly it could have been improved, it was now sufficient to my needs.
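    As a rough C# sketch of the kind of guard the post implies (assuming the tokens come straight from String.Split on each CSV line; the helper name is made up), rejecting "NaN" fields up front keeps the bad data out and avoids Double.Parse's slow path:

        using System;
        using System.Globalization;

        static class SafeCsvNumbers
        {
            // Parse a CSV token into a double, short-circuiting the literal "NaN"
            // (and empty fields) so they never reach the slow Double.Parse path.
            public static double ParseToken(string token)
            {
                if (string.IsNullOrWhiteSpace(token) ||
                    token.Equals("NaN", StringComparison.OrdinalIgnoreCase))
                {
                    // Bad data in: surface it instead of silently propagating NaN.
                    throw new FormatException($"Invalid numeric field: '{token}'");
                }

                return double.Parse(token, NumberStyles.Float, CultureInfo.InvariantCulture);
            }
        }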

    Read the article

  • WeSquare (NL) helps major CG customer integrate Oracle Service Cloud (RightNow) with JDEdwards

    - by Richard Lefebvre
    When this well known, Italy-based CG player decided that they needed a new CRM tool, Oracle partner WeSquare had a precise idea of what would be required, knowing that the customer was using JDEdwards as an ERP: they immediately thought about a solution that would help synchronize the customer's back-end system with the new CRM interface. The customer asked for presentations from three companies, including Oracle, and eventually selected Oracle Service Cloud (RightNow) with Alfa Sistemi (Oracle Platinum Partner) as a System Integrator supported by WeSquare (Oracle Gold partner specialized in RightNow). Synchronizing an on-premises ERP with a new SaaS-based CRM platform could be seen as an uphill task, but WeSquare was determined, during the presales cycle, to prove that they had the skills and the attitude to make the difference. So, they rolled up their sleeves and got to it: five days of relentless work, missed lunches, and hours of brainstorming showed their result in the form of a new interface that works fabulously well with the JDEdwards ERP back-end and was successfully pitched by Oracle to the end customer to win the deal! WeSquare took the occasion to learn that they can integrate Oracle Service Cloud (RightNow) with practically every other solution that a customer may run. As part of the project, WeSquare was also involved in the development of different add-ons with the aim of enriching Oracle Service Cloud's functionality. WeSquare is based in The Netherlands with an in-shore practice supported by off-shore teams in India. WeSquare can integrate and synchronize any application with RightNow. For more information, visit www.wesquare.nl or contact Wiebe Blankenberg (Managing Director) at +31 (0) 6 3632 1104

    Read the article

  • Need help identifying what resources (e.g. in MIT OpenCourseWare) can help me prepare for a test [closed]

    - by jiewmeng
    I am entering uni soon. I can sit for a placement test to see if I am eligible for exemptions. The details are at http://www.comp.nus.edu.sg/undergraduates/TestScope11_12.html

    CS2100 Computer Organisation (please click title): The objective of this module is to familiarise students with the fundamentals of computing devices. Through this module students will understand the basics of data representation, and how the various parts of a computer work, separately and with each other. This allows students to understand the issues in computing devices, and how these issues affect the implementation of solutions. Topics covered include data representation systems, combinational and sequential circuit design techniques, assembly language, processor execution cycles, pipelining, memory hierarchy and input/output systems. Recommended Textbooks: Digital Design: Principles and Practices [DDPP] by John F. Wakerly, Prentice-Hall, ISBN 0-13-324500-4; Computer Organizations and Design (The hardware/software interface) by David A. Patterson and John L. Hennessy.

    CS2105 Introduction to Computer Networks (please click title): This course aims to provide a broad introduction to computer networks and some appreciation of network application programming. It covers a range of topics including basic data communication and computer network concepts, protocols, networked computing concepts and principles, network applications development and network security. The emphasis of teaching is on the working principles and application of computer networks. As an integral part of the course, tutorials and practical assignments reinforcing learning will also be given. These assignments provide an early exposure to network application programming and should be possible to complete using personal computers and the school's network facilities. Topics included: an overview of computer networks and the Internet, basic data communications, application layer, transport layer, network layer and routing, link layer and local area networks. Recommended Textbook: James F. Kurose & Keith W. Ross, Computer networking: A top-down approach featuring the Internet, Addison Wesley, 2001.

    I am wondering what resources, e.g. MIT OpenCourseWare or other universities' resources, are available to help me prepare for these particular modules. Does the networking one look like CCNA? The computer organization one - is it like electronics, assembly, etc.? I learnt some electronics in Poly but looking at the sample papers, uni looks very different... I have about 1 month to prepare if I want any chance of exempting from these modules :) Any help?

    Read the article

  • Handy Generic JQuery Functions

    - by Steve Wilkes
    I was a bit of a late-comer to the JQuery party, but now that I've been using it for a while, it's given me a host of options for adding extra flair to the client side of my applications. Here are a few generic JQuery functions I've written which can be used to add some neat little features to a page. Just call any of them from a document ready function.

    Apply JQuery Themeroller Styles to all Page Buttons
    The JQuery Themeroller is a great tool for creating a theme for a site based on colours and styles for particular page elements. The JQuery.UI library then provides a set of functions which allow you to apply styles to page elements. This function applies a JQuery Themeroller style to all the buttons on a page - as well as any elements which have a button class applied to them - and then makes the mouse pointer turn into a cursor when you mouse over them:

        function addCursorPointerToButtons() {
            $("button, input[type='submit'], input[type='button'], .button")
                .button().css("cursor", "pointer");
        }

    Automatically Remove the Default Value from a Select Box
    Required drop-down select boxes often have a default option which reads 'Please select...' (or something like that), but once someone has selected a value, there's no need to retain that. This function removes the default option from any select boxes on the page which have a data-val-remove-default attribute once one of the non-default options has been chosen:

        function removeDefaultSelectOptionOnSelect() {
            $("select[data-val-remove-default='']").change(function () {
                var sel = $(this);
                if (sel.val() != "") { sel.children("option[value='']:first").remove(); }
            });
        }

    Automatically add a Required Label and Stars to a Form
    It's pretty standard to have a little * next to required form field elements. This function adds the text * Required to the top of the first form on the page, and adds *s to any element within the form with the class editor-label and a data-val-required attribute:

        function addRequiredFieldLabels() {
            var elements = $(".editor-label[data-val-required='']");
            if (!elements.length) { return; }
            var requiredString = "<div class='editor-required-key'>* Required</div>";
            var prependString = "<span class='editor-required-label'> * </span>";
            var firstFormOnThePage = $("form:first");
            if (!firstFormOnThePage.children('div.editor-required-key').length) {
                firstFormOnThePage.prepend(requiredString);
            }
            elements.each(function (index, value) {
                var formElement = $(this);
                if (!formElement.children('span.editor-required-label').length) {
                    formElement.prepend(prependString);
                }
            });
        }

    I hope those come in handy :)

    Read the article

  • Winnipeg Code Camp EF4 Resources

    - by Aaron Kowall
    I had fun presenting “What’s new in Entity Framework 4” at the Winnipeg Code Camp today. I mentioned some resources in my deck that I thought I’d include here in my blog:
    • EF 4.0 Hands on Labs
    • EF CTP 5 (has the new DbContext and CodeFirst support)
    • MSDN Data Developer Center: MSDN.com/Data
    • ADO.NET Team Blog
    • EF Design Blog
    • How to choose an inheritance strategy
    • Programming Entity Framework, Second Edition by Julia Lerman

    Read the article

  • XNA Shader Texture Memory

    - by Alex
    I was wondering about texture optimization in XNA 4.0. Will the ContentManager send the texture data to the GPU directly when the texture gets loaded, or do I send the texture data to the GPU when I declare a texture in my shader? If that's the case, what happens if I have 5 shaders all using the same texture - does that mean that I send 5 instances of that texture data to the GPU, or am I simply telling the GPU what preloaded texture to use? Or does XNA do the heavy lifting in the background?
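    Not an authoritative answer, but a minimal C# sketch of the usual XNA 4.0 pattern the question describes (the asset name "stone" and the parameter name "DiffuseTexture" are made up): as I understand it, the ContentManager caches the loaded Texture2D, and setting it on several effects only stores references to that one resource.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class TextureSharingGame : Game
        {
            private Texture2D _stone;    // one texture object
            private Effect[] _shaders;   // assume these five effects were loaded elsewhere

            protected override void LoadContent()
            {
                // ContentManager caches by asset name, so this load happens once;
                // the pixel data lives in a single GPU-side resource.
                _stone = Content.Load<Texture2D>("stone");

                foreach (Effect effect in _shaders)
                {
                    // Setting the parameter stores a reference to the same Texture2D;
                    // it does not copy the texel data again.
                    effect.Parameters["DiffuseTexture"].SetValue(_stone);
                }
            }
        }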

    Read the article

  • Exadata X3 Expandability Update

    - by Bandari Huang
    Exadata Database Machine X3-2 Data Sheet New 10/29/2012 Up to 18 racks can be connected without requiring additional InfiniBand switches. Exadata Database Machine X3-8 Data Sheet New 10/24/2012 Scale by connecting multiple Exadata Database Machine X3-8 racks or Exadata Storage Expansion Racks. Up to 18 racks can be connected by simply connecting via InfiniBand cables. Larger configurations can be built with additional InfiniBand switches.  

    Read the article

  • Restoring databases to a set drive and directory

    - by okeofs
    Restoring databases to a set drive and directory

    Introduction
    Often people say that necessity is the mother of invention. In this case I was faced with the dilemma of having to restore several databases, with multiple 'ndf' files, and having to restore them with different physical file names, drives and directories on servers other than the servers from which they originated. As most of us would do, I went to Google to see if I could find some code to achieve this task and found some interesting snippets on Pinal Dave's website. Naturally, I had to take it further than the code snippet, HOWEVER it was a great place to start.

    Creating a temp table to hold database file details
    First off, I created a temp table which would hold the details of the individual data files within the database. Although there are a plethora of fields (within the temp table below), I utilize LogicalName only within this example. The temporary table structure may be seen below:

        create table #tmp (
            LogicalName nvarchar(128)
            ,PhysicalName nvarchar(260)
            ,Type char(1)
            ,FileGroupName nvarchar(128)
            ,Size numeric(20,0)
            ,MaxSize numeric(20,0)
            ,Fileid tinyint
            ,CreateLSN numeric(25,0)
            ,DropLSN numeric(25,0)
            ,UniqueID uniqueidentifier
            ,ReadOnlyLSN numeric(25,0)
            ,ReadWriteLSN numeric(25,0)
            ,BackupSizeInBytes bigint
            ,SourceBlocSize int
            ,FileGroupId int
            ,LogGroupGUID uniqueidentifier
            ,DifferentialBaseLSN numeric(25,0)
            ,DifferentialBaseGUID uniqueidentifier
            ,IsReadOnly bit
            ,IsPresent bit
            ,TDEThumbPrint varchar(50)
        )

    We now declare and populate a variable (@path), setting the variable to the path to our SOURCE database backup.

        declare @path varchar(50)
        set @path = 'P:\DATA\MYDATABASE.bak'

    From this point, we insert the file details of our database into the temp table. Note that we do so by utilizing a restore statement, HOWEVER doing so in 'filelistonly' mode.

        insert #tmp
        EXEC ('restore filelistonly from disk = ''' + @path + '''')

    At this point, I depart from what I gleaned from Pinal Dave. I now instantiate a few more local variables. The use of each variable will be evident within the cursor (which follows):

        Declare @RestoreString as Varchar(max)
        Declare @NRestoreString as NVarchar(max)
        Declare @LogicalName as varchar(75)
        Declare @counter as int
        Declare @rows as int
        set @counter = 1
        select @rows = COUNT(*) from #tmp -- Count the number of records in the temp table

    Declaring and populating the cursor
    At this point I do realize that many people are cringing about the use of a cursor. Being an Oracle professional as well, I have learnt that there is a time and place for cursors. I would remind the reader that the data that will be read into the cursor is from a local temp table and as such, any locking of the records (within the temp table) is not really an issue.

        DECLARE MY_CURSOR Cursor FOR
        Select LogicalName From #tmp

    Parsing the logical names from within the cursor
    A small caveat that works in our favour is that the first logical name (of our database) is the logical name of the primary data file (.mdf). Other files, except for the very last logical name, belong to secondary data files. The last logical name is that of our database log file. I now open my cursor and populate the variable @RestoreString:

        Open My_Cursor
        set @RestoreString = 'RESTORE DATABASE [MYDATABASE] FROM DISK = N''P:\DATA\MYDATABASE.bak''' + ' with '

    We now fetch the first record from the temp table.

        Fetch NEXT FROM MY_Cursor INTO @LogicalName

    While there are STILL records left within the cursor, we dynamically build our restore string. Note that we are using concatenation to create 'one big restore executable string'. Note also that the target physical file name is hardwired, as is the target directory.

        While (@@FETCH_STATUS <> -1)
        BEGIN
        IF (@@FETCH_STATUS <> -2) -- As long as there are no rows missing
        select @RestoreString = case
            when @counter = 1 then
                -- This is the mdf file
                @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.mdf' + '''' + ', '
            -- OK, if it passes through here we are dealing with an .ndf file
            -- Note that Counter must be greater than 1 and less than the number of rows.
            when @counter > 1 and @counter < @rows then
                -- These are the ndf file(s)
                @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ndf' + '''' + ', '
            -- OK, if it passes through here we are dealing with the log file
            When @LogicalName like '%log%' then
                @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ldf' + ''''
        end
        -- Increment the counter
        set @counter = @counter + 1
        FETCH NEXT FROM MY_CURSOR INTO @LogicalName
        END

    At this point we have populated the varchar(max) variable @RestoreString with a concatenation of all the necessary file names. What we now need to do is to run the sp_executesql stored procedure, to effect the restore. First, we must place our 'concatenated string' into an nvarchar based variable. Obviously this will only work as long as the length of @RestoreString is less than varchar(max) / 2.

        set @NRestoreString = @RestoreString
        EXEC sp_executesql @NRestoreString

    Upon completion of this step, the database should be restored to the server. I now close and deallocate the cursor, and to be clean, I would also drop my temp table.

        CLOSE MY_CURSOR
        DEALLOCATE MY_CURSOR
        GO

    Conclusion
    Restoration of databases on different servers with different physical names and on different drives is a fact of life. Through the use of a few variables and a simple cursor, we have an efficient and effective way to achieve this task.
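    For comparison only (this is not from the article), the same relocation can be scripted from C# with SQL Server Management Objects (SMO); the sketch below reuses the article's example names and drive letters as placeholders and mirrors its "first file is the .mdf" convention.

        using Microsoft.SqlServer.Management.Common;
        using Microsoft.SqlServer.Management.Smo;

        class RestoreToFixedDrive
        {
            static void Main()
            {
                // Placeholder server, database, backup path and target folder.
                var server  = new Server(new ServerConnection("TARGETSERVER"));
                var restore = new Restore { Database = "MYDATABASE", ReplaceDatabase = true };
                restore.Devices.AddDevice(@"P:\DATA\MYDATABASE.bak", DeviceType.File);

                // ReadFileList plays the same role as RESTORE FILELISTONLY.
                var fileList = restore.ReadFileList(server);
                for (int i = 0; i < fileList.Rows.Count; i++)
                {
                    var row = fileList.Rows[i];
                    string logicalName = (string)row["LogicalName"];

                    // Mirror the article's convention: the first file is the .mdf,
                    // "L"-type rows are logs, everything else is an .ndf.
                    string extension = row["Type"].ToString() == "L" ? ".ldf"
                                     : i == 0 ? ".mdf" : ".ndf";

                    restore.RelocateFiles.Add(
                        new RelocateFile(logicalName, @"X:\DATA1\" + logicalName + extension));
                }

                restore.SqlRestore(server);
            }
        }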

    Read the article

  • TechEd 2010 Followup

    - by AllenMWhite
    Last week I presented a couple of sessions at Tech Ed NA in New Orleans. It was a great experience, even though my demos didn't always work out as planned. Here are the sessions I presented: DAT01-INT Administrative Demo-Fest for SQL Server 2008 - SQL Server 2008 provides a wealth of features aimed at the DBA. In this demofest of features we'll see ways to make administering SQL Server easier and faster, such as Centralized Data Management, Performance Data Warehouse, Resource Governor, Backup Compression...(read more)

    Read the article

  • Report Builder 3.0: Adding Charts to Your Report

    Charts are one of the commonest ways of visualizing reports from data. Report Builder provides a way of generating charts and reports that will be intuitive to anyone who has done the same in Excel. Robert Sheldon provides a simple explanation of how to get the best from charts using Report Builder.

    Read the article

  • Java heap space

    - by java_mouse
    In Java/JVM, why do we call the memory area where Java creates objects the "heap"? Does it use the heap data structure to create/remove/maintain the objects? As I read in the documentation of the heap data structure, the algorithm compares the objects with existing nodes and places them in such a way that the parent object is "greater" than the children (or "lesser" in the case of a min-heap). So in the JVM, how are the objects compared against each other before placing them in the heap?

    Read the article

  • Sybase IQ 15.4 announced: Sybase bets on Hadoop and MapReduce, and challenges its parent company?

    Sybase IQ 15.4 announced for the end of November: Sybase wants to push the limits of Big Data with Hadoop and MapReduce. While SAP's annual flagship event, SAPPHIRE NOW, was in full swing, the German vendor's new subsidiary Sybase independently announced the release of Sybase IQ 15.4, its high-performance column-oriented analytics server for managing "big data". Meanwhile, SAP is promoting HANA, its new in-memory data caching technology ("In-Memory Computing") designed to accelerate the speed of data processing...

    Read the article

  • What actions should I not rely on the packaged functionality of my language for?

    - by David Peterman
    While talking with one of my coworkers, he brought up the issues the language we used had with encryption/decryption and said that a developer should always salt their own hashes. Another example I can think of is mysql_real_escape_string in PHP, which programmers use to sanitize input data. I've heard many times that a developer should sanitize the data themselves. My question is: what things should a developer always do on their own, for whatever reason, rather than relying on the standard libraries packaged with the language?
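    As a rough, hedged C# sketch of the "salt your own hashes" point (the sizes and iteration count are illustrative, not a vetted recommendation), a per-password random salt fed into a key-derivation function looks something like this:

        using System.Security.Cryptography;

        static class PasswordHashing
        {
            // Sketch: derive a hash from the password plus a random, per-user salt.
            // The salt is stored alongside the hash so the check can be repeated later.
            public static (byte[] Salt, byte[] Hash) HashPassword(string password)
            {
                byte[] salt = new byte[16];
                using (var rng = RandomNumberGenerator.Create())
                {
                    rng.GetBytes(salt);
                }

                using (var kdf = new Rfc2898DeriveBytes(password, salt, 100_000))
                {
                    return (salt, kdf.GetBytes(32));
                }
            }
        }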

    Read the article

  • Speaking About SQL Server

    - by AllenMWhite
    There's a lot of excitement in the SQL Server world right now, with the RTM (Release to Manufacturing) release of SQL Server 2012 , and the availability of SQL Server Data Tools (SSDT) . My personal speaking schedule has exploded as well. Just this past Saturday I presented a session called Gather SQL Server Performance Data with PowerShell . There are a lot of events coming up, and I hope to see you at one or more of them. Here's a list of what's scheduled so far: First, I'll be presenting a session...(read more)

    Read the article

  • What is the canonical resource on multi-tenancy web applications using ruby + rails

    - by AlexC
    What is the canonical resource on multi-tenancy web applications using Ruby + Rails? There are a number of ways to develop Rails apps using cloud capabilities with real elastic properties, but there seems to be a lack of clarity on how to achieve multi-tenancy, specifically at the model/data level. Is there a canonical resource on options for developing multi-tenant Rails applications with the characteristics of data separation, security, concurrency and contention required by an enterprise-level cloud application?

    Read the article

  • What is the best way to achieve an RPO of zero and the lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which is processing high volumes of transactions 24/7, and we are currently investigating a better way of doing DB replication to our disaster recovery site. Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor, however their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them, so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write to the DR site, but this is a bad solution, and it gets the software vendor hot and bothered whenever we have support issues, as it has a tendency to interfere with the payment application, which is very DB intensive.

    We recently upgraded the whole system to run on SQL 2008 R2 Enterprise, which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives. RPO: Zero loss - this is transactional data with financial impact, so we can't lose anything. RTO: Tending to zero - the business depends on our ability to process transactions; every minute we are down, we are losing money.

    I have looked at a few of the other questions/answers but none meet our case exactly: SQL Server 2008 failover strategy - Log shipping or replication? How to achieve the following RTO & RPO with logshipping only using SQL Server? What is the best of two approaches to achieve DB Replication? My current thinking is that we should use mirroring, but I am concerned that for RPO: 0 we will need to do delayed commits, and this could impact the performance of the primary DB, which is not an option.

    Our current DR process is to: stop incoming traffic to the primary site and allow all in-flight transactions to complete; allow the replication to DR to complete; change network routing to route to the DR site; start all applications and services on the secondary site (ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions). In other words, the DR database needs to, as quickly as possible, catch up with the primary and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log-shipping too), and can anyone suggest other considerations that we should keep in mind?

    Read the article

  • How to model a many to many from a DDD perspective in UML?

    - by JD01
    I have two entity objects, Site and Customer, where there is a many-to-many relationship. I have read that you try not to model this in DDD as it is in the data model, and instead go for a unidirectional flow. If I wanted to show this in UML, would I show it as it is in the data model: Site * -----> * Customer, where the direction arrow gives the flow? Or as follows: Site -----> * Customer? But then this would imply that a Customer can only go in one Site.
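    For what it's worth, here is a small C# sketch of the unidirectional option being described (all names are illustrative): Site holds the collection and Customer knows nothing about Site, which is what the Site -----> * Customer arrow expresses while still allowing the same Customer to be added to many Sites.

        using System.Collections.Generic;

        // Unidirectional association: only Site knows about the relationship.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class Site
        {
            private readonly List<Customer> _customers = new List<Customer>();

            public int Id { get; set; }
            public IReadOnlyCollection<Customer> Customers => _customers;

            // The same Customer instance can be added to many Sites;
            // nothing on Customer points back to Site.
            public void AddCustomer(Customer customer) => _customers.Add(customer);
        }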

    Read the article
